Generative AI and the Future of Software Engineering: Insights from Anish Cheriyan at MLDS 2025

At the Machine Learning Developers Summit (MLDS) 2025, hosted by Analytics India Magazine, technology leaders, data scientists, and AI researchers gathered to explore the transformative power of artificial intelligence across industries. One of the most insightful and forward-looking talks came from Dr. Anish Cheriyan, Vice President of Project Test Domains, Intelligent Cockpit at HARMAN Automotive. His session shed light on how Generative AI and advanced machine learning (ML) techniques are fundamentally redefining the software engineering landscape.

Dr. Cheriyan’s extensive experience in software quality, cybersecurity, and agile development positioned him well to deliver a talk that bridged theory with practical implementation. His presentation, deeply technical yet highly accessible, focused on how AI technologies—especially generative AI—are revolutionizing the way software is developed, tested, deployed, and maintained.

The Rise of Generative AI in Software Engineering

Generative AI has emerged as a groundbreaking technology capable of synthesizing content, generating code, simulating environments, and even optimizing complex systems. Unlike traditional rule-based software systems, generative models can learn patterns from large datasets and create novel outputs—whether that be text, images, or even lines of functional code.

According to Dr. Cheriyan, the significance of generative AI in software engineering lies in its ability to accelerate development, improve code quality, and reduce human error. Tools powered by large language models (LLMs) and transformer-based architectures can assist developers by auto-generating boilerplate code, refactoring legacy systems, or even writing entire test suites based on functional specifications.

He emphasized that this shift doesn’t just automate repetitive tasks; it frees up engineers to focus on high-value, creative problem-solving, fundamentally enhancing productivity and innovation.

Key Technologies Driving the Transformation

Dr. Cheriyan highlighted several key ML techniques that are making significant contributions to modern software engineering practices:

1. Embeddings

Embeddings provide numerical representations of code, text, or other data types, enabling machines to understand contextual relationships. In software development, embeddings can be used to detect similar functions, recommend fixes, or flag anomalies by comparing code snippets across large repositories.
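
As a minimal sketch of the idea, the snippet below embeds a few toy functions with a general-purpose model from the sentence-transformers library and compares them by cosine similarity; a production pipeline would use a code-specific embedding model and a vector index, but the mechanics are the same.

# Sketch: flagging similar functions by comparing embedding vectors.
# Assumes the sentence-transformers package; a real pipeline would use a
# code-specific embedding model rather than this general-purpose one.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

snippets = [
    "def add(a, b): return a + b",
    "def sum_two(x, y): return x + y",
    "def read_file(path): return open(path).read()",
]

vectors = model.encode(snippets)  # one vector per snippet

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pairs with high similarity are candidates for duplicate or related code.
for i in range(len(snippets)):
    for j in range(i + 1, len(snippets)):
        print(i, j, round(cosine(vectors[i], vectors[j]), 3))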

2. Transformers

The transformer architecture, the backbone of models like GPT and BERT, enables deep contextual understanding of sequences. In code generation and analysis, transformers can predict the next line of code, suggest documentation, and identify logical inconsistencies. Dr. Cheriyan noted how this has dramatically improved developer productivity and reduced debugging time.
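
To make this concrete, here is a small completion sketch using the Hugging Face transformers library; GPT-2 stands in for a code-trained model purely to keep the example self-contained, and real coding assistants rely on far larger code LLMs.

# Sketch: asking a transformer language model to continue a partial function.
# GPT-2 is used only because it is small and freely available; an actual
# assistant would use a model trained specifically on source code.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "def fibonacci(n):\n    if n < 2:\n        return n\n    return "
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])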

3. Reinforcement Learning

In quality assurance (QA) and test automation, reinforcement learning (RL) can train agents to optimize code execution paths, improve test coverage, and dynamically adjust test cases based on system behavior. RL can even be used to optimize CI/CD pipelines, making software deployment more robust and efficient.
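
A toy illustration of the principle, using nothing beyond the standard library: an epsilon-greedy bandit (a very simple form of reinforcement learning) learns which hypothetical test suite tends to catch failures, the same idea that underlies RL-driven test prioritization. The suite names and failure rates below are invented.

# Sketch: an epsilon-greedy bandit that learns which test suite to run first.
import random

suites = ["unit", "integration", "ui"]
true_failure_rate = {"unit": 0.05, "integration": 0.20, "ui": 0.10}  # made up
value = {s: 0.0 for s in suites}   # estimated payoff of running each suite first
counts = {s: 0 for s in suites}
epsilon = 0.1

for episode in range(1000):
    # Explore occasionally; otherwise pick the suite most likely to catch a bug.
    if random.random() < epsilon:
        choice = random.choice(suites)
    else:
        choice = max(suites, key=value.get)
    reward = 1.0 if random.random() < true_failure_rate[choice] else 0.0
    counts[choice] += 1
    value[choice] += (reward - value[choice]) / counts[choice]  # running average

print({s: round(v, 3) for s, v in value.items()})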

4. Graph Neural Networks (GNNs)

Software systems are inherently graph-structured—functions call other functions, modules depend on one another, and APIs connect systems. GNNs are uniquely suited to model these relationships. Dr. Cheriyan explained how GNNs can be used to predict component failures, analyze dependency risks, and enhance code maintainability.
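
The sketch below hand-rolls a single message-passing step over a toy dependency graph, which is the core operation a GNN layer performs; the module names and risk scores are invented for illustration, and a real system would use a GNN library with learned weights rather than a fixed mixing rule.

# Sketch: one message-passing round over a toy module-dependency graph,
# showing how a GNN-style model propagates risk along dependency edges.
modules = ["auth", "billing", "api", "ui"]
edges = [("api", "auth"), ("api", "billing"), ("ui", "api")]  # (caller, callee)

# Initial per-module feature, e.g. recent defect density (hypothetical numbers).
risk = {"auth": 0.8, "billing": 0.3, "api": 0.1, "ui": 0.05}

def propagate(risk, edges):
    """Mix each module's own risk with the average risk of what it depends on."""
    updated = {}
    for m in risk:
        deps = [callee for caller, callee in edges if caller == m]
        if deps:
            neighbour = sum(risk[d] for d in deps) / len(deps)
            updated[m] = 0.5 * risk[m] + 0.5 * neighbour
        else:
            updated[m] = risk[m]  # no dependencies: keep its own score
    return updated

# 'api' now reflects the risk of the modules it calls; repeating the step
# would propagate that risk onward to 'ui'.
print(propagate(risk, edges))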


AI-Powered Decision-Making in Software Projects

One of the more strategic applications discussed by Dr. Cheriyan was AI-driven decision-making. With vast amounts of historical code, testing logs, and bug reports available, machine learning models can help project managers and architects make smarter decisions about resource allocation, risk assessment, and timeline estimation.

For instance, AI can:

Predict the likelihood of bugs in a new codebase based on past commit patterns (a rough sketch of this follows the list).

Identify modules likely to break when changes are introduced.

Recommend optimal developer-task matches based on skills and code familiarity.
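
As a rough sketch of the first point above, the snippet below fits a simple classifier on invented per-file commit features; the feature set, numbers, and labels are all hypothetical, and a real model would be trained on a project's actual commit and defect history.

# Sketch: predicting bug-prone files from past commit features.
from sklearn.linear_model import LogisticRegression

# Features per file: [commits_last_month, distinct_authors, lines_changed]
X = [
    [12, 4, 900],
    [2, 1, 40],
    [25, 7, 2100],
    [5, 2, 150],
]
y = [1, 0, 1, 0]  # 1 = a bug was later reported against the file (toy labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

new_file = [[18, 5, 1200]]
print("bug probability:", round(model.predict_proba(new_file)[0][1], 2))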


These insights can fundamentally shift how software projects are planned and executed—turning development into a data-driven discipline.


Automating Quality Assurance

QA is a critical but traditionally labor-intensive phase of software development. Dr. Cheriyan highlighted how generative AI is transforming it by automating test case generation, bug detection, and defect prediction.

AI models trained on historical bugs and test cases can:

Auto-generate edge case tests from requirements documents (sketched after this list).

Detect vulnerabilities during static code analysis.

Predict the impact of code changes on overall system stability.
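
A minimal sketch of the first capability above, assuming an OpenAI-compatible client and API key; the model name is a placeholder, and any generated tests would still need human review before entering the suite.

# Sketch: asking an LLM to draft edge-case tests from a requirement statement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirement = (
    "discount(price, percent) applies a percentage discount; "
    "percent must be between 0 and 100, and price must be non-negative."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You write pytest test functions."},
        {"role": "user", "content": f"Write edge-case tests for: {requirement}"},
    ],
)
print(response.choices[0].message.content)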


By integrating these tools into the DevOps pipeline, companies can drastically reduce time-to-market while maintaining or even enhancing software reliability.

Shifting Engineering Roles and Skillsets

One of the most thought-provoking points in Dr. Cheriyan’s talk was the changing nature of engineering roles in the age of AI. Engineers are no longer just coders—they are becoming AI orchestrators, responsible for integrating intelligent systems, ensuring model reliability, and interpreting AI-driven insights.

As AI becomes a co-developer, the traditional “write and debug” paradigm is being replaced by a collaborative human-AI workflow. Engineers must now possess a hybrid skillset that includes:

Understanding ML model behavior and limitations.

Evaluating AI-generated code for context and correctness.

Continuously monitoring AI tools to avoid technical debt.


Organizations that invest in continuous learning and reskilling will be better positioned to thrive in this new environment.

Building an AI-Ready Organization

Dr. Cheriyan concluded his session by emphasizing the need to build learning organizations—companies that embrace experimentation, foster knowledge-sharing, and evolve with technological trends.

To become AI-ready, organizations must:

Establish AI literacy across engineering teams.

Promote cross-functional collaboration between data scientists, developers, and QA professionals.

Create feedback loops where AI outputs are reviewed, improved, and fed back into training systems.


At HARMAN, these principles are being actively pursued as part of its intelligent cockpit initiatives, where AI is used not only to power user experiences but also to optimize the software development lifecycle itself.


The Road Ahead

Dr. Cheriyan’s talk at MLDS 2025 underscored a key message: Generative AI is not just a tool—it’s a paradigm shift in how software is conceived, built, and delivered. From automating mundane tasks to enabling intelligent decision-making, from transforming QA to redefining developer roles, AI is ushering in a new era of software engineering.

As companies navigate this transition, the winners will be those that adapt quickly, invest in AI-driven innovation, and cultivate teams that are agile, skilled, and open to change.

In this new world, code doesn’t just come from developers—it is co-created by machines and humans, working together to build faster, smarter, and more resilient software systems.
