Since childhood, Govindaiah Simuni has been fond of computers, and he learned programming as an adolescent. He has always been a highly motivated, focused, and enthusiastic individual and recognized his skills even as a child. To make the most of those skills, he began his technology journey in 2003, working with fintech companies. He moved to the US in 2007 to pursue his American dream and continued working with fintech companies.
Govindaiah is a Batch and Data Solution Architect specializing in architecting complex solutions for batch, data, and AWS cloud workloads, as well as on-premises products such as Active Directory, Exchange, and Windows Server, alongside other AWS technologies and services.
The role of a Data Solution Architect includes maintaining relationships with customers and ensuring a smooth deployment of services to AWS. This includes designing and implementing proofs of concept, helping customers with templates and recommended architectures, and translating business and technical requirements into solid cloud-ready deployments.
He is passionate about solving enterprise data architecture problems and ensuring customers achieve their desired business outcomes. His recent research on solving issues in current data processing models is particularly notable. It addresses synchronization issues in large production environments by providing an alternative environment in which a fix can be tested and then released to production with minimal latency. It explains how to identify and synchronize a minimal data set from a large production environment to a smaller alternative environment, and it also offers data migration solutions between heterogeneous environments with minimal oversight, resources, and cost.
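As a rough illustration of the subsetting idea only, the sketch below (using a hypothetical customers/orders schema, not taken from his work) copies a small, referentially consistent slice of a production database into a smaller test database:

```python
import sqlite3

def build_demo_source() -> sqlite3.Connection:
    """Stand-in for the large production database (hypothetical schema)."""
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             customer_id INTEGER REFERENCES customers(id),
                             amount REAL);
    """)
    db.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(i, f"customer-{i}") for i in range(1, 1001)])
    db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                   [(i, (i % 1000) + 1, i * 1.5) for i in range(1, 5001)])
    return db

def subset_copy(source: sqlite3.Connection, sample_size: int = 10) -> sqlite3.Connection:
    """Copy a small, referentially consistent slice into a test database."""
    target = sqlite3.connect(":memory:")
    target.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             customer_id INTEGER REFERENCES customers(id),
                             amount REAL);
    """)
    # Pick a minimal set of parent rows from production.
    parents = source.execute("SELECT id, name FROM customers LIMIT ?",
                             (sample_size,)).fetchall()
    target.executemany("INSERT INTO customers VALUES (?, ?)", parents)

    # Pull only the child rows whose parent made it into the subset,
    # so referential integrity still holds in the smaller environment.
    ids = [p[0] for p in parents]
    marks = ",".join("?" * len(ids))
    children = source.execute(
        f"SELECT id, customer_id, amount FROM orders WHERE customer_id IN ({marks})", ids
    ).fetchall()
    target.executemany("INSERT INTO orders VALUES (?, ?, ?)", children)
    target.commit()
    return target

if __name__ == "__main__":
    test_db = subset_copy(build_demo_source())
    print(test_db.execute("SELECT COUNT(*) FROM orders").fetchone()[0])
```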
In big data warehouses, thousands of jobs run in nightly batches under strict SLAs that require continuous monitoring. Currently, teams of support personnel monitor these jobs manually, often watching more than one application at a time, and react to any issues. Manual monitoring is monotonous, routine, laborious, and resource intensive. The invention uses machine learning to build a framework for monitoring end-to-end batch process systems that learn, heal, and improve themselves. The AI/ML system learns the structure, schedules, pace of the run, errors, and fixes of a batch process as it monitors.
Over time, the system improves its own efficiency as well as its reliability, stability, and scalability. Decisions are made almost instantly through a decentralized autonomous organization (DAO), minimizing or eliminating the need for complex schedules of global teams across multiple time zones to monitor systems.
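The framework itself is not published here; the minimal sketch below only illustrates the general idea of a monitor that learns each job's typical runtime and flags outliers, using invented job names and thresholds:

```python
from statistics import mean, stdev
from collections import defaultdict

class BatchMonitor:
    """Illustrative monitor: learns each job's typical runtime and flags outliers."""

    def __init__(self, min_history: int = 5, threshold: float = 3.0):
        self.history = defaultdict(list)   # job name -> observed runtimes (minutes)
        self.min_history = min_history     # runs needed before judging
        self.threshold = threshold         # z-score cutoff for an anomaly

    def record(self, job: str, runtime: float) -> None:
        """Learn from every completed run, so the baseline improves over time."""
        self.history[job].append(runtime)

    def is_anomalous(self, job: str, runtime: float) -> bool:
        """Flag a run that is far outside the learned baseline."""
        runs = self.history[job]
        if len(runs) < self.min_history:
            return False                   # not enough history to judge yet
        mu, sigma = mean(runs), stdev(runs)
        return sigma > 0 and (runtime - mu) / sigma > self.threshold

monitor = BatchMonitor()
for r in (42, 45, 40, 44, 43, 41):
    monitor.record("load_customer_dim", r)
print(monitor.is_anomalous("load_customer_dim", 95))   # True: likely SLA risk
print(monitor.is_anomalous("load_customer_dim", 44))   # False: within normal pace
```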
In organizations, data is ingested daily into relational databases by batch and/or application processes. Relational data models enforce referential integrity to maintain data consistency, which is typically accomplished by setting dependencies on the relevant jobs or processes. However, dependencies enforced by referential integrity force batch jobs and applications to ingest data serially, and referential integrity can cause process or job errors when a parent-child link is missing. Thus, these dependencies lead to frequent SLA breaches, as the example after this paragraph illustrates.
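A short example makes the serial-ingestion constraint concrete. The sketch below uses SQLite and a hypothetical customers/orders schema: the child load fails when its parent row has not been ingested yet, so the loads must run in order:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")      # enforce referential integrity
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER NOT NULL REFERENCES customers(id));
""")

# The child load must wait for the parent load: inserting an order whose
# customer has not been ingested yet fails with an integrity error.
try:
    db.execute("INSERT INTO orders VALUES (1, 42)")   # customer 42 is missing
except sqlite3.IntegrityError as e:
    print("Job fails:", e)

# The serial order works: parent first, then child.
db.execute("INSERT INTO customers VALUES (42, 'Acme')")
db.execute("INSERT INTO orders VALUES (1, 42)")
```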
Data integration is the process of combining and harmonizing data from multiple systems belonging to different companies involved in a merger, essentially unifying all the disparate data sources into a single, cohesive system to enable efficient operations post-merger. Key aspects of data integration during a systems merger include data mapping, data transformation, and data consolidation. Post-integration, the systems continue to maintain data sustainability.
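As a rough illustration of mapping, transformation, and consolidation (with invented field names from two hypothetical source systems, not drawn from any real merger):

```python
# Hypothetical field names: map customer records from two merged companies
# into one unified schema, normalize values, and de-duplicate.

FIELD_MAP_A = {"cust_id": "customer_id", "full_name": "name", "bal": "balance"}
FIELD_MAP_B = {"id": "customer_id", "customer_name": "name", "acct_balance": "balance"}

def transform(record: dict, field_map: dict, currency_factor: float = 1.0) -> dict:
    """Map source fields to the unified schema and normalize the balance."""
    unified = {target: record[source] for source, target in field_map.items()}
    unified["balance"] = round(float(unified["balance"]) * currency_factor, 2)
    return unified

def consolidate(records_a: list, records_b: list) -> list:
    """Merge both sources, de-duplicating on the unified customer_id."""
    merged = {}
    for rec in ([transform(r, FIELD_MAP_A) for r in records_a]
                + [transform(r, FIELD_MAP_B) for r in records_b]):
        merged.setdefault(rec["customer_id"], rec)
    return list(merged.values())

print(consolidate(
    [{"cust_id": 1, "full_name": "Ada", "bal": "100.5"}],
    [{"id": 2, "customer_name": "Lin", "acct_balance": "75"}],
))
```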
The SQL query interpreter focuses on predicting query performance based on the plans and performance statistics available in databases. Machine learning techniques predict a query's performance in isolation: the features that affect run time are evaluated against static and dynamic factors, ranked, bucketed by percentage of similarity, and used to build a tool or interface that acts as an interpreter for the user. The framework also dynamically organizes data (automatic normalization, de-normalization, purge, partitioning, etc.) based on table and column usage and the time of day that users or applications access it, grouping data by frequency of execution, and it dynamically moves data out into separate files or merges them (directories in the case of list bucketing).
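As a rough sketch of the similarity-bucketing idea only, the example below uses invented plan features and runtimes to estimate a new query's performance from its most similar historical queries:

```python
# Hypothetical plan features and runtimes; bucket past queries by percent
# similarity to a new query's plan and average their runtimes as an estimate.

history = [
    # (plan features: table scans, joins, sort steps, estimated rows) -> runtime (s)
    {"features": (1, 0, 0, 1_000),     "runtime": 0.2},
    {"features": (2, 1, 1, 500_000),   "runtime": 8.5},
    {"features": (3, 2, 1, 2_000_000), "runtime": 41.0},
]

def similarity(a: tuple, b: tuple) -> float:
    """Percent similarity between two plan-feature vectors."""
    per_dim = [1 - abs(x - y) / max(x, y, 1) for x, y in zip(a, b)]
    return 100 * sum(per_dim) / len(per_dim)

def estimate_runtime(new_features: tuple, bucket_threshold: float = 70) -> float | None:
    """Average the runtimes of historical queries in the same similarity bucket."""
    bucket = [h["runtime"] for h in history
              if similarity(new_features, h["features"]) >= bucket_threshold]
    return sum(bucket) / len(bucket) if bucket else None

print(estimate_runtime((2, 1, 1, 450_000)))   # falls in the bucket of the 8.5 s query
```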
Govindaiah Simuni has a strong background in IoT, telecommunications, and advanced technologies, with nearly two decades of experience in the field. He currently works as a Data Solution Architect at Bank of America, where he contributes to various initiatives in the FinTech and technology sectors, focusing on the application of AI, machine learning, and augmented intelligence. He holds a Master of Engineering degree in Electronics and Communications from the University of Madras and has contributed to the field of Data Architecture with several professional writings and research contributions. Simuni remains dedicated to exploring and developing solutions to address complex challenges in data and technology.
Published by: Gracia M.