Build What’s Next: Digital Product Perspectives

AI in Software Development: Designing & Delivering Real ROI

Method

Forget the AI hype and focus on real ROI in the Software Development Lifecycle (SDLC). This episode features Method's Jason Rome and Raj Sethi with ISG experts Ashwin Gaidani and Tapati Bandopadhya, who trace a clear path from AI tools to measurable outcomes. They argue that coding speed isn't the bottleneck—specs, testing, pipelines, and change management are.

We break down the mechanics of ROI: how specification elaboration unlocks downstream gains, the decision between human-in-the-loop vs. agent-in-the-loop, and integrating GenAI into CI/CD. We also discuss cost, risk-adjusted ROI (F1 score plus Sharpe ratio), and practical wins for legacy modernization, like AI-driven requirement discovery and service-oriented modernization. The conversation also introduces 'stability lanes' and covers what leaders get wrong (tooling without process change, microservices by default), advocating instead for platform thinking and a conductor's mindset to orchestrate micro-tasks for real lift.

Episode Resources:

Jason Rome on LinkedIn: /jason-rom-275b2014

Raj Sethi on LinkedIn: in/rajsethi

Ashwin Gaidani on LinkedIn: in/ashwin-gaidhani

Tapati Bandopadhya on LinkedIn: in/tapatibandopadhyay

Method Website: method.com

GlobalLogic Website: globallogic.com

ISG Website: isg-one.com

Jason Rome:

Welcome back, everyone, to another episode of Build What's Next. I'm your host, Jason Rome. I lead digital strategy and solutions here at Method, and I'm joined by an esteemed panel today. First, my co-host here with me in Charlotte, Raj Sethi, who I've worked with and learned so much from over the years. He is our SVP and go-to-market leader for AI and SDLC, and someone who teaches me something new every time we talk, and I'm excited for you all to learn from him today. We've also brought in two of our partners from ISG who have been writing and learning a lot about the ecosystem and trends in AI, cloud, and adoption in the SDLC. Ashwin Gaidani is a research partner, lead analyst, and subject matter expert when it comes to enterprise services, based in Hyderabad. And Tapati Bandopadhya is an expert leader based in California. She's been working in AI since 1997, and we're going to get great views from her on the past, present, and future of AI. So stick around. We'll be talking about fact versus fiction when it comes to ROI in the AI-enabled SDLC and PDLC. Where is there a return? Where is there not? What do we expect to happen in terms of costs, and where is the tooling going in that space? We're going to talk about adoption and change management: how do you curate an environment at your company for your employees to work with these tools? And then we'll get into some specific use cases around legacy technology modernization, testing, and elaboration, with the goal of you walking away with a couple of tactical things that you can take back to your organization to start improving now, as well as building toward the future of what's next. So enjoy the show.

Josh Lucas:

You are listening to Method's Build What's Next: Digital Product Perspectives, presented by GlobalLogic. At Method, we aim to bridge the gap between technology and humanity for a more seamless digital future. Join us as we uncover insights, best practices, and cutting-edge technologies with top industry leaders that can help you and your organization craft better digital products and experiences.

Jason Rome:

Well, welcome in, everybody. I'm really excited for the conversation we're going to have today. I think one of the leading indicators of a good podcast is when you go to prep for it and realize you should have just recorded the prep, because you had the whole conversation right there. When we got this group together, that was the conversation we had. So again, I'm joined by my colleague Raj Sethi in the room here, who leads our go-to-market for AI and SDLC at GlobalLogic, along with Dr. Tapati Bandopadhya, an expert leader at ISG, and Ashwin Gaidani, an ISG analyst. We're going to be talking about end-to-end adoption of AI in the SDLC. Right now, it feels like not a day goes by without some prestigious university releasing a report that says we're seeing really high ROI, or we're seeing no ROI. Our goal today is to find the path in between and separate the truths from the lies. We're going to focus on three things. One is this concept of ROI, and I think we're going to hit on both the R and the I today. What is value? How should companies be thinking about value now versus in the future? And what does investment look like? There have been some interesting articles on "less is more" that we'll talk about with some of these smaller models, and what the expense looks like going forward. The second thing we want to talk about is some observations on what's blocking adoption and hurting companies that want to use these tools, from a process perspective: are we evolving our operating models to be able to adopt these new tools? And then finally, change management and the human side of things. What have we seen? What have we learned about getting employee buy-in and creating the right environment around employees, in these economic times, for GenAI adoption in the PDLC? So, Raj, I'll turn to you first. We've had a lot of conversation about this, and we did a lot of work with companies, even before AI, on their operating model end to end. What have you seen so far? What are some of the big observations about the roadblocks for companies that want to take advantage of these tools in the market right now?

Raj Sethi:

Well, I think there are a couple of areas where adoption comes into play. We work with lots of companies that have their own processes, and we typically adopt them; there's not much wiggle room for us to redesign their processes. And often that's one of the problem areas when it comes to adoption. The understanding a lot of organizations have, and I came up with another analogy for it, is that it's like buying the Peloton and thinking that once you've bought it, you're going to be fit. That's not how it works. It becomes an expensive coat hanger eventually, because unless and until you really redesign your processes and start looking at where you bring the tool in, it's really more about tool augmentation than anything else. Organizations often really struggle with that. We've seen stories where enterprises have adopted the top tools out there, and the claim is no ROI. It's almost like they give the tool to the same team running the processes the same way, and they don't get any ROI. The key aspect, though, is that if you've got a road with a lot of stop signs on it, you're just not going to be able to accelerate. And these stop signs are really human-orchestration stop signs, whether it's PI planning, how you groom your backlog, or how you do your release management. All of these are areas where human orchestration doesn't bode well for AI adoption. So if you start looking at it from that perspective and change your processes, you will see ROI.

Jason Rome:

Yeah. I did a workshop on this with a bunch of executives in Atlanta, and when I was down there, I told them: if I gave you a car that was three times as fast, you're not going to get to work any faster in Atlanta traffic. I think that's a lot of what you're saying. Yes, it's a faster car, but have we redesigned the roads to get where we're going and take advantage of that speed? Tapati and Ashwin, I'll turn to you both, because I'd be curious, from what you've seen in the market, especially on this topic of return and value: is it that there's not value? Is it how we're measuring value? What are you seeing? What's the actual truth, and where is there variability in what you've seen?

Ashwin Gaidani:

I think I'll take the first jab at it, and I'll start with a very fundamental and foundational approach here. Let's look at three very critical words: architecture, engineering, and innovation. Architecture, not far in the past, started with the digital aspect and has now slowly transformed into the agentic aspect. Take engineering: it started with code and is now here with vibe, so we're dealing with vibe coding as a matter of fact. And then innovation, which used to be considered a differentiator, is now a core capability. Historically, we saw these three as completely siloed, independent streams. But if you look at the current complexities and developments in the tech space, all three have converged into a very intelligent enterprise fabric. Now, for us to extract the value out of it, I think we need a very strong consulting foundation. And with that consulting approach, I believe we need to find the new human-agent equilibrium, because that's how we're going to get the maximum out of our investment; it's not just going to be human, and it's not just going to be agent. Let me take the analogy a step further: you have a three-times-faster car, and even better, wider roads, but without the proper skill, direction, or navigation, you're still not going to reach your destination. So all these attributes play a very crucial role. But over to you, Tapati, to add your view.

Jason Rome:

Tapati, I'd love to hear from you, especially because you've been working in AI since, I think, 1997. You've ridden a few waves of hype cycles and seen what's worked and what hasn't, not just in the SDLC, obviously, but across all aspects of AI. As you talk about this, maybe give us the historical context, because a question I hear a lot is: is it different this time? And what's different about it from some of the previous hype cycles in AI?

Tapati Bandopadhya:

Absolutely. In fact, when I started working on AI in those days, we used to call it expert systems. That's actually what has come back now in mixture-of-experts models, in agentic AI, in context engineering, in generative AI. So the fundamentals have pretty much remained, but things got added, as Ashwin mentioned, in terms of the options available for different kinds of architecture in deep learning. For instance, the transformer, the core architecture that made the recent AI wave, the ChatGPT moment, possible, is essentially a deep learning neural network architecture, the attention network. Technically we are moving quite fast, and we're trying to solve the problems that remain unsolved: the problem of less data, the problem of efficient, energy-efficient algorithms, algorithms that are not the gas guzzlers, the equivalents of the cars we've been talking about. Those are certain areas we're working on, and there are waves coming, essentially quantum computing waves, where certain kinds of workloads will benefit tremendously from the quantum computing landscape. That's how I have seen AI evolve over three decades. And from an application standpoint, I think we are currently overfocusing on language-related knowledge automation with AI, which is what the LLMs do. The other school of thought is the robotic AI, physical AI space, where we talk about physical AI as agents with physical embodiment, the androids and humanoids of the world that can actually negotiate an industrial environment or a world-model kind of context. Those are the extreme use cases being tried and tested.

Jason Rome:

Yeah. Very different, and I think very exciting for where companies are now. You know, Raj, we're recording this at the end of the year, and maybe you can share a couple of the conversations you've had with folks, because I remember a CTO said to us at GlobalLogic: my developers are all saying they're 20% more efficient, but my costs aren't down 20%, and I'm not shipping 20% more. You and I have had some interesting conversations, and one of the tooling partners we work with said to me: we have analytics on hundreds of organizations, and developer velocity is almost never the problem. I think that's the first problem we maybe tried to solve with AI: coding faster. Talk to me, though, about where the actual bottleneck is in terms of end-to-end value creation versus where we're locally optimizing with AI, and how you're seeing companies, and GlobalLogic, start to think about that differently.

Raj Sethi:

So I think two things. One, of course, we kind of joined the bandwagon with the rest of the industry, where the whole focus was around code generation. It's our understanding now, and we have statistics to back this, that code generation comprises no more than about 15% of a developer's actual time. The majority of a developer's time actually goes into planning and thinking. What we also found internally is that a lot of chattiness goes on between the developers, the product owners, the business analysts, and the architects. And the question is, what is the nature of this chattiness? It's actually the spec; there's no clarity on it. The question is, how well have you elaborated your spec? So it boils down exactly to the problem at the core: the spec. If we were to focus entirely on the spec, I think we've got generation technologies good enough to produce a lot of code, as well as configuration, design, and architecture. And that would actually yield the ROI. So the question really is, where are we spending our time? It's a hard problem, but I think the nexus of all the problems we see in the SDLC is the spec and how well it's elaborated.

Jason Rome:

Yeah. And even AI aside, if we look at a lot of the modern product assessments that you and I have done with these companies, we often come in and hear, "Hey, my developers are slow, and that's my problem." That's never actually been the problem we found. But there are a lot of really poor requirements, and in some of these organizations that are traditionally very engineering- and tech-heavy, you might have one product person for 30 to 40 tech folks. The other one we've seen a lot is the other end of the pipeline, which is the other place developers have told me they love not to spend their time: I don't have a good, accurate QA environment. It takes forever for me to get work merged back to master and committed. The pipelines are broken, I can't get an accurate build out, I can't do root cause analysis. So one of the interesting things I've been observing is that where we started with AI, we were fixing something that wasn't actually our problem. One of the things we say is that these AI tools will magnify your biggest inefficiency as an organization; they're going to shine a spotlight on it. If you're an organization that is prone to tech debt or code that's not maintainable, AI would love to contribute to that. If you're an organization that doesn't have strong security practices, AI would love to contribute to that problem. And if you're an organization that doesn't elaborate really good requirements, AI would love to highlight that fact for you. So it comes back to the introspective nature of needing to know: this is what's not been working for us, and this is where AI is going to hit us. Ashwin, I want to pick up that thread with you, because you talked about this interesting intersection of architecture, engineering, and innovation, and how those used to be separate things and are now all on top of each other. Even further than that, there was a time not too long ago when digital, IT, and systems integration were separate enough domains, but now every project is a little bit of agile and product, a little bit of systems integration, a little bit of platform integration. Talk to me about what you've seen companies do or change to deal with the fact that these disciplines are so entangled, so they can use these tools while making sure those systems work together.

Ashwin Gaidani:

Absolutely. I think it's very critical to understand the role each of these attributes plays. For example, let's look at augmentation versus automation. The thought process is that we need to define a human-in-the-loop versus agent-in-the-loop approach explicitly for each stage of the SDLC, because our focus on those attributes keeps shifting as the problems change. Every conversation started with, let's have a human in the loop to take the decision, but now we're shifting that to an agent in the loop taking the decision. So that's very critical. The second piece is experience, which leads to interaction design. With changes in systems and architecture, one of the pivotal points is experience, and that experience component is associated with each and every interacting party: it can be a developer, a consumer, or a system. Every attribute needs to be designed and defined through the lens of experience and interaction design, where the focus is usually to minimize cognitive overload by ensuring agents provide summaries and recommendations to our human developers that help them make quick decisions. And in the end, the most important aspect is organizational change management. We need to start addressing the fear and resistance around adopting these agents and infusing them into the normal workflow value chain, because adopting the agents in the right construct plays a very crucial role in the success of the entire value chain. That's exactly where the ROI can see the light of day. So that's my perspective.
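For illustration, here is a minimal sketch of what an explicit per-stage human-in-the-loop versus agent-in-the-loop policy could look like. The stage names, the policy shape, and the Python helper are illustrative assumptions, not a real framework from the conversation.

    # Hypothetical sketch: declare human-in-the-loop vs. agent-in-the-loop
    # explicitly for each SDLC stage, as described above. Stage names and the
    # policy itself are illustrative assumptions, not a shipped framework.
    from enum import Enum

    class Loop(Enum):
        HUMAN_IN_THE_LOOP = "a human approves every output"
        AGENT_IN_THE_LOOP = "the agent decides; humans audit samples"

    SDLC_POLICY = {
        "spec_elaboration":   Loop.HUMAN_IN_THE_LOOP,  # ambiguity is highest here
        "architecture":       Loop.HUMAN_IN_THE_LOOP,
        "code_generation":    Loop.AGENT_IN_THE_LOOP,
        "test_generation":    Loop.AGENT_IN_THE_LOOP,
        "release_management": Loop.HUMAN_IN_THE_LOOP,  # compliance gate
    }

    def requires_human_signoff(stage: str) -> bool:
        """True when a human must approve before this stage's output moves on."""
        return SDLC_POLICY[stage] is Loop.HUMAN_IN_THE_LOOP

The point is less the code than making the decision explicit for every stage, instead of leaving it implicit in each team's habits.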

Jason Rome:

Yeah, that's really interesting, and I definitely want to come back to that; I think we're going to spend some time on adoption and setting up the environment. So if we look at the R of ROI, which is the only letter we've gotten to so far in this podcast, I think what we're saying is we haven't spent enough time on adoption and change management, and where companies are struggling is whether they have the introspection to really understand how they need to evolve their process to get that return. Now, the other concern people have is the cost. I've already heard of use cases where people get the bill and say: we used how many tokens on that? You sent some really interesting articles on "less is more" that start to ask the question: is the future of AI about scaling laws, or is it about curation laws? We're pre-training to learn knowledge, but more of where companies are going to win is in the fine-tuning that changes the behavior of these models. Can you talk about what you're seeing and what people should anticipate? We've given these tools the history of the internet and everything we know, but is that always how we're going to train these things in the future, and how much data do we actually need to be able to beat some of these models?

Tapati Bandopadhya:

Absolutely. Another discussion we're hearing increasingly here in the US is about the technology debt we are eventually creating, probably without analyzing it extensively. If you look at the kind of processing and data center infrastructure being built, the typical life cycle of these technology stacks is about two to three years. Now we are planning to stretch that to at least six to eight years, and we don't really know what the state of the art will be, either from an algorithmic point of view or from an infrastructure point of view. I think there is a danger of overinvesting in the current state and creating a tech debt that will be a problem of the future. And what you mentioned is a very critical thing: how do we measure the cost in the first place? Are we looking at the future value? Are we really using the depreciation models that we typically use for any kind of asset? Those are areas where we need to create AI infrastructure costing models in a very comprehensive manner. And I keep telling our clients that we should look at a unique combination for AI ROI: F1 score plus Sharpe ratio. We keep talking about the F1 score, the accuracy and precision of the AI models, but we don't talk about the Sharpe ratio, the risk-adjusted returns. The risk of failure, the risk of error in AI infrastructure, is going to be huge. We saw it two weeks back: AWS went down for a few hours and created havoc in most of the organizations and applications that run on AWS cloud. Now think of the remediation process for that, an N-plus-N architecture, the typical BCP/DRP on another cloud. Who's going to pay for all that? As we've been saying, a Ferrari needs a road that deserves a Ferrari. Who builds that road? Who pays the price for that road? That's where we take a lot of things for granted. Just imagine you're driving on the interstate and Google Maps stops functioning; it creates utter chaos, and nobody reaches anywhere. So that's the kind of cost-and-risk-versus-value trade-off that needs to show up in all enterprise AI use cases, be it generative design, customer service, or any kind of agentic software development. The cost of error, the cost of bugs, and the risk of running up technology debt are areas we need to talk about more frequently.
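As a back-of-the-envelope sketch of that F1-plus-Sharpe idea: combining the two as a simple product is our illustrative assumption, not Tapati's stated model, and the numbers below are made up.

    # Rough sketch of "F1 score plus Sharpe ratio" as a risk-adjusted AI ROI
    # measure. Combining them as a product is an illustrative assumption.
    from statistics import mean, stdev

    def f1_score(precision: float, recall: float) -> float:
        return 2 * precision * recall / (precision + recall)

    def sharpe_like_ratio(period_returns: list[float], risk_free_rate: float = 0.0) -> float:
        """Excess return per unit of volatility across an initiative's reporting periods."""
        excess = [r - risk_free_rate for r in period_returns]
        return mean(excess) / stdev(excess)

    def risk_adjusted_ai_roi(precision, recall, period_returns):
        # A highly accurate model whose returns swing wildly (outages, rework,
        # cost of error) scores lower than a slightly weaker but steadier one.
        return f1_score(precision, recall) * sharpe_like_ratio(period_returns)

    # Example: strong model quality, but volatile quarter-to-quarter returns.
    print(risk_adjusted_ai_roi(0.92, 0.88, [0.30, -0.10, 0.40, 0.05]))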

Jason Rome:

Yeah, and it's interesting. This morning I opened up the Wall Street Journal and saw this headline: Big Tech's soaring profits have an ugly underside, OpenAI's losses. So the other side of this is that a lot of people have put a lot of venture capital into these companies and are going to look for monetization. That's going to change the price, which changes the ROI formula, and I think that's coming eventually. And Raj, it reminds me a lot of five, six, seven years ago; we still see this. It was a lot of "get me to the cloud," and now it's "optimize my cloud costs," because there were a lot of credits given out, and then somebody started getting the bill at some point and said: hey, did we do this right? I think we'll probably see a similar pattern. Depending on how much OpenAI charges me, I might not ask ChatGPT some of the questions I ask it a couple of years from now; I might have to go back to Googling it.

Raj Sethi:

So this is not unusual, right? We've seen this with the cloud, we've seen it with data. There is a fear of missing out; it's there for most, whether you like it or not. And a lot of organizations just follow suit: if my neighbor has something, then I've got to have it too. A lot of that thinking comes much later. So there are two aspects that I see, at least on the AI side. One, of course, is a lack of understanding of what the current technology is capable of doing. People are trying to stretch it beyond what it's capable of. Do you have an example of that? Well, we know what the transformer technology is based on. The key aspect is that there are lots of people, and I've had conversations with very experienced folks, who think the realm of reasoning in the current technology goes beyond what it actually offers. If you stick to the semantic side of it, it does a wonderful job. But if you get into the higher-end inferences, which is where the human mind excels, it fails, and it fails miserably. The question is, how much are you ready to push? Our sense and our recommendation, when I look primarily at the SDLC, is to focus on the areas where generative technologies are best suited, which is elaboration. They're wonderful at that, and using them in that capacity alone can bring significant value to the organization, without getting into what vibe coding can do and so on. There are lots of areas where people get seduced by vibe coding. I don't think that's suitable for most of the enterprises we work with.

Jason Rome:

Yeah. I talk to a lot of CTOs who are being asked by their CFOs how much money they're going to save next year with GenAI, and I say that's a pretty scary question. Again, looking at the human element, that productivity gain trickles down to somebody, and you have a lot of folks being asked to do a lot more with a lot, a lot less. So it'll be really interesting to watch, and we already saw this in some cases. I think we're eight months into the prediction that there'll be no more software engineers in six months, and obviously there are still a couple of software engineers out there. But one of the hard things is: are we giving ourselves time to get good at these tools and adopt them, compared to when we're expecting the gains to come through? One of the things we're finding and learning with a lot of our clients is that these stacks, especially for elaboration and product, look different. For a developer, you've got your developer interface, a command-line interface, you're working in your development environment; your tooling is pretty consistent across an organization. So when you build Cursor or Copilot or something into that, it's in the tool. With product and design, we've seen other tools that aren't integrated into Jira or Productboard or Aha!. So now we're introducing a lot more tools, and we're actually proliferating information. I had a conversation with one of our senior designers, and we were talking about, if we look at Figma Make versus UX Pilot versus Lovable, which of those becomes a tweener solution in terms of the PDLC? And how many new tools can the PDLC take? With a lot of these tools, to your point around vibe coding, a lot of that is a very greenfield aspiration. And if there's one thing we see less of than we used to, it's greenfield for a lot of enterprises right now. So I don't know, any thoughts there, especially on the code side, about where you're seeing it work versus where it hasn't?

Raj Sethi:

So I want to give you an analogy here. When you look at an orchestra, the conductor who conducts actually knows every nuance of the music; he's not there simply to orchestrate. When it comes to vibe coding, the person who drives all of it, whether it's with agents or humans, in the loop or on the loop, needs to understand that nuance. The expertise doesn't go away. We're not there yet, where the conductor is the GenAI. Not there yet. But there is a presumption that that's where things are for every role, and hence there's either an overburden on, or an overexpectation of, the technology, which never materializes. The key aspect I see is that if you look into specific areas within the SDLC and break it down, you will find very specific micro-roles or micro-tasks that can be done by GenAI with very high efficacy, but they need orchestration. Test automation is a classic example. I'll give you another: let's say you are elaborating a backlog. A backlog has a functional side to it, and that can be addressed by a PO, maybe with some assistance from a business analyst, or maybe just an agent could do it. That's one area where you could see a significant lift. But you can also take the same functional aspect and pass it to an agent to get the architectural design done, or test generation from there. So if your specs are done well, and when I mention the spec, there's a multidimensional aspect to it, there are areas where you will get that lift. When you aggregate all of that and orchestrate it well, without these roadblocks and wait times, with stories getting generated and groomed in real time by AI, you will see that you can actually move toward continuous delivery, which has been the dream for most software companies.
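A minimal sketch of that conductor pattern follows, with hypothetical placeholder functions standing in for the agent calls; none of this is a real SDK, just the shape of fanning micro-tasks out from a well-elaborated spec.

    # Sketch of orchestrating micro-tasks off a well-elaborated spec: elaborate
    # the backlog item, then fan out to narrow agents for architecture and test
    # generation. All agent functions are hypothetical placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class Spec:
        story: str
        acceptance_criteria: list[str]
        artifacts: dict = field(default_factory=dict)

    def elaborate_backlog(raw_need: str) -> Spec:
        # e.g. an LLM call that turns a one-line business need into a testable spec
        return Spec(story=raw_need, acceptance_criteria=[f"criterion for: {raw_need}"])

    def design_architecture(spec: Spec) -> None:
        spec.artifacts["architecture"] = f"proposed design for: {spec.story}"

    def generate_tests(spec: Spec) -> None:
        spec.artifacts["tests"] = [f"test covering: {c}" for c in spec.acceptance_criteria]

    def conduct(raw_need: str) -> Spec:
        spec = elaborate_backlog(raw_need)   # the spec is the nexus
        design_architecture(spec)            # micro-task handed to one agent
        generate_tests(spec)                 # micro-task handed to another
        return spec                          # the human conductor reviews the whole score

    print(conduct("Let customers export invoices as PDF").artifacts.keys())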

Jason Rome:

Yeah. It's so fascinating, because one of the conversations we have with folks is: are you going to put AI into how you work today, or are you going to reinvent how you work because AI exists? It's a tough question to answer. But I think about what you just said, and I think about writing a good spec. One of the things Agile does on purpose is preserve ambiguity; in some cases you iterate and you learn as you go along. That's not how every organization does it, but sometimes Agile hides things and enables teams not to think through a spec really well. Sometimes that leads to iteration, and sometimes that leads to the technical debt that was mentioned earlier. So it changes some of the skills you need: if you're an organization, do you have strong conductors in your orchestra? Or do you have the woodwind section and the string section all doing what they want, and does it sound beautiful or not? It'll be interesting for organizations to think about that, because one thing I hear from a lot of folks is: we're really good at writing user stories and requirements, but can we write a really good, well-thought-out PRD that's going to hold up most of the way over more than 12 weeks? And one thing I see over and over again is that a lot of organizations don't know what they're going to be doing 12 weeks from now; it's let's get through this next increment, and then they struggle from there. I want to shift the conversation to where we are seeing more use cases and adoption. Ashwin, you mentioned change management earlier. What have you seen in terms of what companies should expect to invest, the time it takes to drive change, and how they should be measuring adoption and thinking about this in their organization?

Ashwin Gaidani:

So, especially when it comes to change management, I think we need to understand its various aspects. It starts with the technical aspect, and then there's the organizational aspect of change management. There is a lot to do with how we handle technical change management, especially when it comes to AI-powered tools and AI solutions. That's a very critical aspect, because we're now handling a curated stack of solutions that needs a completely different type of change management, especially to maintain, manage, and improve those solutions from a technical standpoint. From an organizational standpoint, I believe it's very important for us to understand the implications of those changes on the organization, on the business, and on the customer, and how the outcome of a failed organizational change management process will land, whereas the outcome of a failed technical change management process can look completely different but be equally impactful. This is where I believe the focus on strategic partnership is extremely important. Everything now is a multi-party ecosystem, as Raj just said. It's not just AWS or just any other hyperscaler; we are dealing with more complex generative AI environments where a hyperscaler, a platform, and an engineering partner all play equally crucial roles. And that's how change management, which branches out into technical and organizational change management, plays a very crucial role.

Jason Rome:

Yeah. And Tapati, I'd love to get your thoughts here. I think we introduced two topics there. When you start talking about an ecosystem, one of the things that comes with it is the importance of going from product thinking to platform thinking and what organizations need to do there. So if you have thoughts on that one, and then the other topic that comes up a lot is data and security concerns and what providers and companies can do in that space in this ecosystem. I'd love your thoughts on one or both of those areas as they relate to AI in the SDLC.

Tapati Bandopadhya:

So essentially, from the use case adaptability standpoint, let's focus on the SDLC first, because that's the domain we're all most comfortable with. As Raj was mentioning, I think AI also has an opportunity to be used for identifying the opportunities within the SDLC itself. Requirement and specification discovery, for instance, from legacy code modernization projects is a very viable use case for AI, for agents, for multiple agents. The other aspect of the SDLC that benefits tremendously from AI is testing, where we generate test cases, we generate very inclusive synthetic data, and we do exhaustive testing, so that 100% of the bugs get tested at runtime, in the production environment as well. There are in fact big benefits to be achieved by applying checker agents and maker agents, and discoverer agents for code discovery, requirement and specification discovery, and functionality discovery from legacy code. Look at it this way: people like us are the dinosaurs of the AI and IT industry now, thirty years in. When we came into the industry, we worked on COBOL. Now, do you get any software engineer at entry level who is comfortable with COBOL? You don't. If you give a youngster, 22 or 23 years old, straight out of college, a huge bunch of COBOL code and tell them to discover the requirements and specifications and reverse engineer it, you will hardly find those skills. Those are the areas where SDLC automation with agents makes a lot of sense. And we do have curated data, curated knowledge bases, and curated patterns available to train these models and agents; the context engineering piece is pretty advanced and mature there. So I think those will be the starting points, once we can actually deliver good results in terms of not just productivity but improved code quality, because as we were saying, if we automate a mess, we end up in a bigger mess. So first we declutter the mess, we discover the real valuable code inside the legacy, and then we use a translator kind of mechanism that can produce optimized code to run on today's infrastructure.
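As an illustrative sketch of that discoverer, maker, and checker split, assuming hypothetical stand-ins for the agent calls (this is not a real toolchain from the conversation):

    # Illustrative discoverer / maker / checker split for legacy modernization.
    # The functions below are hypothetical stand-ins for agent (LLM) calls; the
    # structural point is that nothing the maker produces ships unless the
    # checker validates it against the discovered requirement.
    def discover_requirements(legacy_source: str) -> list[str]:
        # e.g. reverse-engineer candidate requirements from COBOL paragraphs
        return [f"requirement derived from: {line.strip()}"
                for line in legacy_source.splitlines() if line.strip()]

    def make_modern_code(requirement: str) -> str:
        return f"// generated implementation for: {requirement}"

    def check(requirement: str, code: str) -> bool:
        # in practice: generated tests plus static analysis; trivial placeholder here
        return requirement in code

    def modernize(legacy_source: str) -> list[str]:
        accepted = []
        for req in discover_requirements(legacy_source):
            candidate = make_modern_code(req)
            if check(req, candidate):        # checker agent gates the maker agent
                accepted.append(candidate)
        return accepted

    print(len(modernize("PERFORM CALC-INTEREST.\nMOVE TOTAL TO REPORT-LINE.")))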

Jason Rome:

Yeah, I love that last point. And Raj, this is one area you and I haven't talked a ton about, but I'm genuinely curious, because I know we've tried some of these projects. One of my questions, and maybe one of the audience's questions, is: how far can GenAI take it when it comes to code modernization? Is it great for just understanding current state? Is it suggesting a new architecture or what the code could be? Is it writing the code? Where do you put a human back in that loop, and what's real and what's not? And does it change given the context of the language?

Raj Sethi:

So I'm going to give you a case in point, and I think it's not the traditional example you'd find. We recently wrapped up a significant part of a project; it's actually still going on, but a big piece of it is done. The customer had to release a product they'd been promising their customer base for about four years. The language is a 4GL called Progress; if you look it up, you'll find out a little bit about it. The goal was really not to convert it into any modern language; it was to go from a procedural 4GL to an object-oriented version. So we were going from legacy to legacy. There's not enough data out there on Progress 4GL, so none of these LLMs were trained on that data. But what we found is that LLMs have these emergent properties: they're good at even writing programming languages like this. We were able to pull enough information out of the code base for a product owner to design a high-quality backlog without significant contributions from SMEs. We saved about 900 hours of work. And using that same backlog, we were able to generate test cases. We brought the project back online and actually did the release in early November. So here's a classic example that's not your standard Java or .NET world; you're talking about emergent properties. We did it for another customer that had a variant of BASIC they had written on their own. We're running into those kinds of scenarios again and again, where customers ask: how can you use GenAI to do something like this? For all the other cases, the semantic side of it, where known languages and known frameworks exist, you'll find a lot of examples, and the majority of the work we do is in that space. But some of these examples truly show you the promise of using GenAI's basic ability to understand language, its linguistic models.

Jason Rome:

I mean, that's a strong engineering perspective there. I know we talked about architecture earlier, but are you finding, for companies that are trying to do a monolith decomposition as well, that we're starting to be able to use these tools to help us understand that? Or is that still an art, where the architect is going to have to take it further?

Raj Sethi:

So here's the pragmatic choice I give a lot of customers: why do you want to go there? Eight out of ten don't want to do that. Eight out of ten are good enough with service-oriented architectures. Don't touch my database; just give me modernization, open up the APIs so I can extend my capability, I can modernize, I can use these capabilities as tools within my agentic frameworks, and so on. They're not in the least interested in saying, let's rewrite everything. There are some cases where the runtime changes, where we want to move away from COBOL completely because there's just no skill there, and you have no choice but to rewrite. And even in those cases, sometimes a service-oriented architecture works just fine. You really don't have to go domain-driven, because that's a significant cost you would have to pay. And it's still an art, because two humans, given the same task, don't decompose the domain the same way.

Jason Rome:

Yeah. I think the two examples you gave and the one Tapati gave earlier are the three we see people struggling with the most. One: I feel like all these coding languages from the 90s are having a comeback, and people are realizing they still exist. COBOL gets the most screen time, but there are others you've never heard of until you learn they exist. Two: how do I manage my monolith, to your point? And you hit an important point about the broader technology vernacular: somewhere along the line we stopped saying service-oriented architecture and just started saying microservices, and we started treating those two things like they were the same. To your point, the cost structure is vastly different, and you and I have both seen companies that went too far the other way and now have too modern an architecture, which is a type of tech debt in itself. And then the last one is cloud modernization, and companies thinking about how to de-risk their tech and allow for future innovation with their architecture, to Ashwin's point, while continuing to operate. Because there's nothing people love less than parity: spending two years doing a bunch of work to get the same thing on a new architecture is very unsatisfying. So, Ashwin, I'm curious about your thoughts on this code modernization topic and what you've seen from an industry perspective.

Ashwin Gaidani:

Sure. Let's start this thought with a shift from a product mindset to a platform mindset. That's the core idea we need to live with, and it's where we've seen API-first: now that we have a platform, we start focusing on how the API drives the revolution. And let's also think through CI/CD with GenAI augmentation. That's how I believe we're going to improve the entire life cycle, because continuous integration and continuous delivery of GenAI solutions and applications into your SDLC stages and phases is going to iteratively improve how you define, design, and deploy your solutions. So that's a very straightforward approach to how we can start improving this entire offering.
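One way to picture CI/CD with GenAI augmentation is a pipeline gate like the sketch below; the review_with_llm call is a hypothetical placeholder rather than a real API, and the rules are illustrative assumptions.

    # Hedged sketch of a CI gate with GenAI augmentation: block the merge when a
    # change carries no test updates or when an LLM review returns blocking
    # findings. review_with_llm is a hypothetical placeholder, not a real API.
    import subprocess
    import sys

    def changed_files() -> list[str]:
        out = subprocess.run(["git", "diff", "--name-only", "origin/main...HEAD"],
                             capture_output=True, text=True, check=True).stdout
        return [f for f in out.splitlines() if f]

    def review_with_llm(files: list[str]) -> list[str]:
        # placeholder: would send the diff plus the linked spec to a model and
        # return blocking findings (missing acceptance criteria, untested paths, ...)
        return []

    def main() -> int:
        files = changed_files()
        if files and not any(f.startswith("tests/") for f in files):
            print("No test changes accompany this diff; regenerate or update tests.")
            return 1
        findings = review_with_llm(files)
        for finding in findings:
            print(f"Blocking finding: {finding}")
        return 1 if findings else 0

    if __name__ == "__main__":
        sys.exit(main())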

Jason Rome:

Yeah. Tapati, you started us on this track, which was not planned for our conversation today. Any other thoughts, or any other examples you've seen, of both where the tools really help and where people have tried to put too much on the tool and we're just not there yet in terms of legacy modernization?

Tapati Bandopadhya:

I really love the example Raj mentioned, because it takes us back to the fundamentals of computer science in the first place: ultimately, every piece of code, and even infrastructure as code, comes down to the finite state automata that Turing developed. That foundational thinking is something we have trained our machines on, but the machines are still not there. As in Raj's example, 4GL existed, so the machine could discover it through logic. But if there is a programming language yet to be discovered that would actually best fit a particular type of workload, for example quantum computing workloads, or very complex protein molecule synthesis workloads, what would be the ultimate language? Even that discovery, we are not there yet, partly because there are not enough examples. Zero-shot is fine, but zero-shot also requires some kind of pre-trained knowledge so that at the inference stage it can learn with no examples and discover on its own. In an SDLC context, I slightly differ from the service-oriented versus microservices kind of boxing of the architecture, because I think the architecture also needs to be very agile. I really like this concept of stability lanes that some of our peers are talking about, where there are essentially two swim lanes for all kinds of workloads. Some require stability first. In BFSI, for instance, if you just tell them, whatever the risk, keep running the transactions at the speed of light, that's not a valid approach for them. So for swim lane one, stability, we have to look at legacy code modernization use cases focused on quality and risk. For the agility swim lane, swim lane two, we talk about innovation, new use cases, new kinds of code to be discovered, code that has not been written yet, programming languages that have not been developed yet by humans in the first place. For those kinds of things, we need to look at throwing more compute at discovery, opportunity assessment, and identification of those problems in the first place, where a lot of new generative AI applications need to come in. We are still overfocused on swim lane one, but swim lane two holds the harder problems, the ones we do not really know yet, and those are the ones that need focus. So we should be looking at a stability-lane architecture for AI going forward, not putting everything in the same box. For certain use cases, for instance legacy modernization, the swim lane one stability focus will be the fit. For use cases where new algorithms and new programming models are to be discovered through generative AI itself, we need a new kind of discovery-oriented architecture; attention models were actually the starting point of it, but there's a long way to go. And that's where I think a lot of R&D is going on at these hardcore algorithmic companies.
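A tiny sketch of routing work into those two lanes follows; the routing signals and thresholds are illustrative assumptions, not part of the stability-lane concept as described.

    # Sketch of the two "stability lanes": lane one optimizes for quality and
    # risk (e.g. BFSI legacy modernization), lane two for agility and discovery.
    # The routing signals and thresholds are illustrative assumptions.
    STABILITY_LANE = {"lane": 1, "max_change_failure_rate": 0.01, "human_review": "mandatory"}
    AGILITY_LANE   = {"lane": 2, "max_change_failure_rate": 0.15, "human_review": "sampled"}

    def route_workload(regulated: bool, problem_is_well_understood: bool) -> dict:
        """Pick a lane from two coarse signals: regulatory exposure and novelty."""
        if regulated or problem_is_well_understood:
            return STABILITY_LANE   # lane one: stability first
        return AGILITY_LANE         # lane two: innovation and discovery

    print(route_workload(regulated=False, problem_is_well_understood=False))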

Jason Rome:

I love the word stability. That is definitely going to be my word of the day, and I'm going to use it in a sales call next week, so thank you. I think that actually brings us to our close-out, which is: what do we think the future looks like? I talked to a group of our interns over the summer, and they were asking me for career advice, especially in the age of AI. I told them: don't let the job titles and the career paths of my generation define yours, because I'm already seeing a lot of these tools enabling designers to do more in engineering and engineers to get involved in writing PRDs; we're seeing folks flex. One of the things I'm excited about in the SDLC is how we communicate with each other and how we share knowledge. When I do process optimization, I think about three flows in the SDLC: the decision-making flow, the information flow, and the work flow of how requirements become code. Right now we're really focused on the work side: the code, the user stories, the testing. But how we share information with each other and how we make decisions is a really interesting area that we'll have to rethink if we truly want to optimize the work side of things. So, Raj, I'm curious about your thoughts: two or three years from now, or five years from now, however far you can see into the future, how do you see the SDLC being different than it is today?

Raj Sethi:

To be really serious, I can't see beyond one year. And there we are, done. But I do think there is one thing. I've been part of a couple of other conversations where people have asked me about the future of the junior developer, and my point is, it's great, because if future generations can't find a way to excel, humans are going to die out as a species. So I'm never worried about the future, because we'll adapt to it. The key aspect I see, and it's something I pass on to everybody, is that there couldn't be a better time to be alive. You have information and knowledge democratized at levels that are unparalleled. You can bring ideas from other fields into your work, ideas that aren't related to your field today, and use those opportunities to create new products, new offerings, new ways of thinking, new frameworks. That has not existed prior to this. So I see the future as bright. In the short term there may be some pain, because people will have to change their roles and what they do. But if you have an open mind and the ability to learn constantly, this is a great time. I wouldn't be worried about the future at all.

Jason Rome:

We'll adapt. Yeah, I love a hopeful message there, and I would agree. You talked about where people spend their time being a key thing, and I think if all of us stepped back and looked at our days, how many of our meetings, and how much of our correspondence and time, is just overcoming the fact that I speak a specific language and I have ideas, or I was in a meeting and saw something and I'm trying to share it with you? That's what these LLMs are good at: seeing language, learning from language, and sharing it. I don't think we've adjusted how we work yet to move away from this more bureaucratic, heavily synchronous type of work. Empowerment is the other big buzzword, obviously, and I think these tools do open up true empowerment. So I really appreciate everyone's time today. Summing up: the first thing Ashwin talked about is navigating how you're architecting for the future and how you're doing your end-to-end engineering and innovation, not seeing those as three separate things but as an integrated system, and having the right people or the right partner who can be honest with you about what's working and what's not. We talked about the car analogy a lot: don't just look at the car, think about the road. If you're just throwing tool licenses at the wall and hoping something sticks, that's not going to work. Be able to have an honest, introspective conversation about where you're getting blocked, or where the money stops: if we were to inject capital into our SDLC, where would it get stuck? The second thing, and the rule of thumb I've been telling people, is to spend $2 on change for every $1 you spend on a license, and play the long game. Raj, I love what you said about junior talent, because that's something I've heard a lot. To give others hope, one of the things I've heard from some clients is that they're hoping to do more with junior talent because of these tools, and they're trying to shift more of that work. So there are companies out there that are just giving these tools to their senior talent, and there are companies working with more junior talent to do more. We're going to see both strategies, and both sides will benefit. Will there be some short-term pain? Yes, likely. We're going to get through it like we have before. And the technology is still evolving: we talked about small language models, we talked about quantum computing, and there are problems ahead of us that Tapati mentioned that we haven't solved yet. So the cost side of this is going to continue to change. Figure out where the return is going to be, adjust what you use and how you use it, and think long term about where it's worth investing, where it's really solving a problem you have, and where you need someone playing an instrument versus a conductor for your orchestra. I really appreciate everybody's time, and thank you all so much for being on this episode of Build What's Next.

Josh Lucas:

Thank you for joining us on Build What's Next Digital Product Perspectives. If you would like to know more about how Method can partner with you and your organization, you can find more information at method.com. Also, don't forget to follow us on social and be sure to check out our monthly tech talks. You can find those on our website, and finally, make sure to subscribe to the podcast so you don't miss out on any future episodes. We'll see you next time.