What It Takes to Make Agentic AI Work in Retail
Summary
An Infosys podcast features Prasad Banala explaining how his team uses agentic AI for software development, from requirements to issue resolution, with human oversight.
Retailers are deploying agentic AI now
Prasad Banala, director of software engineering at a large US retailer, is moving agentic AI into his team's daily workflow. He detailed his strategy for embedding autonomous agents into the software development lifecycle (SDLC) during a recent discussion with the Infosys Knowledge Institute. The move shifts AI from a simple chatbot interface to a system that can execute multi-step technical tasks.
Banala’s team focuses on three specific areas of software production. They use AI to validate requirements, generate complex test cases, and speed up the resolution of software bugs. This approach aims to reduce the manual labor that typically slows down large-scale enterprise deployments.
The implementation relies on agentic workflows rather than simple prompt-and-response interactions. These agents do not just suggest code; they reason through business logic to ensure the software meets the original project goals. This transition marks a shift in how large corporations handle the "middle" steps of software engineering that often require the most human oversight.
Building agents that can reason
Most companies use Generative AI to write snippets of code or summarize documents. Banala is pushing further by using agents that can perform requirements validation. This process involves the AI reviewing business documents to ensure they are technically sound before developers write a single line of code.
The goal is to stop errors before they reach the development phase. If a business requirement is vague or contradictory, the agent flags it for a human reviewer. This pre-emptive strike on technical debt saves engineering hours that would otherwise be spent fixing fundamental logic flaws later in the cycle.
These agents operate with a level of autonomy that standard AI models lack. They are programmed to follow specific rules and governance frameworks established by the retail organization. By automating the verification of business logic, the team can move from the planning phase to the coding phase significantly faster.
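The article does not describe the team's implementation, but the flagging behavior it outlines can be sketched as a minimal rule-based check: scan a business requirement for vague wording or missing measurable targets and route anything suspect to a human reviewer. All names and heuristics below are illustrative assumptions, not Banala's actual system.

```python
# Minimal requirements-validation sketch: flag vague or unmeasurable
# requirements for human review before development begins.
# The vague-term list and the "no numbers" heuristic are illustrative.

VAGUE_TERMS = {"fast", "user-friendly", "robust", "scalable", "intuitive"}

def validate_requirement(text: str) -> dict:
    """Return a verdict plus the reasons a human should review it."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    issues = []
    found = sorted(words & VAGUE_TERMS)
    if found:
        issues.append(f"vague terms: {', '.join(found)}")
    if not any(ch.isdigit() for ch in text):
        issues.append("no measurable target (no numbers found)")
    return {"approved": not issues, "issues": issues}

if __name__ == "__main__":
    print(validate_requirement("Checkout must be fast and robust."))
    print(validate_requirement("Checkout must complete in under 2 seconds."))
```

A production agent would use an LLM to reason about contradictions across documents, but the control flow stays the same: the agent only flags; a human decides.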
Automating the software testing phase
Testing remains one of the most time-consuming parts of the software lifecycle. Banala’s team is using AI to generate and analyze test cases automatically. The system looks at the code and the requirements simultaneously to determine what needs to be tested.
The AI does not just create simple "pass or fail" scripts. It identifies edge cases that human testers might overlook, such as how a retail system handles simultaneous high-volume transactions during a holiday sale. This level of automated scrutiny improves the overall stability of the retail platform.
The benefits of this automated testing include:
- Reduced manual testing time by identifying redundant test scripts.
- Higher code coverage through AI-generated edge case scenarios.
- Faster bug detection during the initial build phase.
- Better resource allocation by letting humans focus on complex UX testing.
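One way an agent can go beyond "pass or fail" scripts is boundary-value analysis: deriving test inputs at and just outside the limits a requirement specifies. The sketch below is a generic illustration of that idea, with an assumed order-quantity field; it is not the team's actual generator.

```python
# Hedged sketch: derive edge-case test inputs from a numeric
# constraint instead of testing only the happy path.
# The quantity field and its range are illustrative assumptions.

def edge_case_inputs(min_qty: int, max_qty: int) -> list[int]:
    """Boundary-value analysis for a numeric field."""
    return [
        min_qty - 1,  # just below the valid range (should be rejected)
        min_qty,      # lower boundary
        min_qty + 1,  # just inside
        max_qty - 1,  # just inside the top
        max_qty,      # upper boundary
        max_qty + 1,  # just above (should be rejected)
    ]

def is_valid_quantity(qty: int, min_qty: int = 1, max_qty: int = 99) -> bool:
    """Example system-under-test: validate an order quantity."""
    return min_qty <= qty <= max_qty

if __name__ == "__main__":
    for qty in edge_case_inputs(1, 99):
        print(qty, is_valid_quantity(qty))
```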
When a bug is found, the agentic AI assists in issue resolution. It analyzes the error logs, identifies the likely source of the failure, and suggests a specific fix. This reduces the "mean time to recovery" (MTTR) for critical retail systems that must stay online 24/7.
Keeping humans in the loop
Despite the autonomy of these agents, Banala emphasizes that human-in-the-loop review is mandatory. No AI-generated code or test case goes into production without a human engineer signing off on the result. This creates a safety net that prevents the AI from "hallucinating" or introducing security vulnerabilities.
Governance is the primary focus for the team. They have built strict guardrails around how the AI accesses internal data and what changes it can propose. These guardrails ensure that the AI follows the company's existing security protocols and coding standards.
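Guardrails of this kind are often implemented as an allow-list policy checked before the agent acts: which data sources it may read and which actions it may propose. The policy shape below is an assumption for the sketch, not the company's real framework.

```python
# Illustrative guardrail check: verify a proposed agent action against
# an allow-list policy before executing it. Policy contents are
# assumptions for this sketch.

POLICY = {
    "readable_data": {"error_logs", "requirements", "test_results"},
    "allowed_actions": {"propose_fix", "generate_tests", "flag_requirement"},
}

def is_permitted(action: str, data_source: str) -> bool:
    """Deny by default: only explicitly listed combinations pass."""
    return (action in POLICY["allowed_actions"]
            and data_source in POLICY["readable_data"])

if __name__ == "__main__":
    print(is_permitted("propose_fix", "error_logs"))     # True
    print(is_permitted("deploy_to_prod", "error_logs"))  # False: humans only
```

The deny-by-default design matters: anything not explicitly granted, such as deploying to production, stays with humans.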
The retail organization measures quality outcomes to track the success of these AI agents. They aren't just looking at how much faster they can code. They are looking at whether the software has fewer bugs in production and whether the end-user experience is more reliable.
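One simple way to frame "quality, not just speed" is a defect rate normalized by output, compared across periods. The figures below are placeholders, not results reported by the retailer.

```python
# Hedged sketch: track production defects per feature shipped, so a
# speed gain that increases bugs shows up as a regression.
# All numbers are placeholder data, not reported results.

def defect_rate(defects: int, features_shipped: int) -> float:
    """Production defects per feature shipped (0.0 if nothing shipped)."""
    return defects / features_shipped if features_shipped else 0.0

if __name__ == "__main__":
    baseline = defect_rate(defects=12, features_shipped=40)  # pre-agent period
    current = defect_rate(defects=6, features_shipped=48)    # with agents
    print(f"baseline={baseline:.3f} current={current:.3f} "
          f"improved={current < baseline}")
```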
The future of retail engineering
The scale of a large US retailer requires software that can handle millions of transactions across thousands of locations. Banala’s work suggests that agentic AI is no longer a theoretical concept for these types of organizations. It is becoming a necessary tool for managing the complexity of modern retail infrastructure.
The shift toward agentic AI represents a change in the engineering mindset. Developers are moving from being "coders" to being "orchestrators" of AI systems. They manage the agents, set the parameters for success, and verify the final output.
Banala identifies several key factors for successfully operationalizing AI at this scale:
- Data quality must be high to ensure the AI agents have accurate context.
- Governance models must be established before the AI is deployed.
- Measurable metrics are required to prove the value of the AI investment.
- Continuous training for the human staff is essential to keep up with the technology.
As retail organizations continue to face pressure to digitize, the use of autonomous agents will likely expand. Banala’s team is providing a blueprint for how to balance the speed of AI with the safety requirements of a multi-billion dollar enterprise. The focus remains on measurable quality rather than just following the latest technology trends.
This approach treats AI as a productivity multiplier for the existing engineering team. By removing the repetitive tasks of requirements checking and test generation, the retailer can ship features faster. The real test will be how these systems perform during peak retail seasons when the pressure on software stability is at its highest.