Build What’s Next: Digital Product Perspectives
The process of developing digital products and experiences can be daunting. Organizations often find themselves wondering whether they are solving the right problems the right way, hoping the result is what the end user needs. That’s why our team at Method has decided to launch Build What’s Next: Digital Product Perspectives.
Every week, we’ll explore ways to connect technology with humanity for a simpler digital future. Together, we’ll examine digital products and experiences, strategic design and product development strategies to help us challenge our ideas and move forward.
Navigating the AI Landscape: Insights from Jon Webster
Jon Webster of CPP Investments discusses how AI, particularly large language models (LLMs), is changing organizational decision-making. He emphasizes the practical usefulness of LLMs over their "intelligence" and introduces the "generator-verifier gap." Webster also explores how LLMs can break down linguistic barriers in organizations and make complex thinking more accessible, and he highlights that in the age of AI, human differentiation will increasingly come from emotional intelligence and relationship skills.
Jason Rome on LinkedIn: /jason-rom-275b2014
Jon Webster on LinkedIn: /jrwebster/
Method Website: method.com
CPP Investments: cppinvestments.com
You are listening to Method's Build What's Next: Digital Product Perspectives, presented by GlobalLogic. At Method, we aim to bridge the gap between technology and humanity for a more seamless digital future. Join us as we uncover insights, best practices, and cutting-edge technologies with top industry leaders who can help you and your organization craft better digital products and experiences.
Jason Rome: Welcome back, everybody, to another episode of Build What's Next. I'm your host today, Jason Rome, head of solutions for Method. I'm really excited about my guest today and the conversation we're about to have. Jon Webster is joining us; I'll have him introduce himself in a second. Jon and I have gotten to know each other over the past couple of years, and we've had some of the most mentally gymnastic, challenging, thought-provoking conversations of my entire career. I think this may be the first podcast with not only a bibliography but potentially a footnotes section, based on the amount of academic research and conversation Jon brings. So I'm really excited about what we're going to tackle today.
Jason Rome: Everyone's excited about AI. Everyone's excited about LLMs. I think Jon and I are going to tackle the forefront of where that is, looking at the intersection of LLMs, organizational design, and complex thinking. We both spend time in this space, Jon from an investment management perspective and myself from thinking about how product teams can change how they make decisions about company strategy. So I'm really excited to look at the limitations of the current systems and how organizations can leverage them. Jon, welcome. If you don't mind, maybe give a little bit of background on you and the topic we're going to talk about.
Jon Webster: Yeah, thank you so much, Jason. That's quite an intro, footnotes and academics. So, Jon Webster, I'm the Chief Operating Officer of CPP Investments. Maybe I'll just introduce the Canada Pension Plan: it really is, I think, one of Canada's great success stories, with $114 billion invested at home in Canada and over $714 billion AUM globally. We're a global investment organisation, and we invest the CPP Fund for the next quarter century, not the next quarter, to help create retirement security for future generations of Canadians. What really sets us apart is our long-term investment horizon, our certainty of assets, and our singular mission. In terms of the topic today, we view artificial intelligence as a really dynamic and emerging business opportunity and risk, and as an organization it's really important to explore how we can leverage generative AI and broader AI capabilities to deliver the best outcomes for the CPP Fund. So at the heart of everything we'll talk about today is almost a singular question: will it make us a better investor?
Jason Rome: I'm really looking forward to the conversation today. And with CPP, obviously, I'm grateful, because as part of our company journey, we at GlobalLogic are a success story from the portfolio as well. So, jumping in: some of the research and debate right now is around how intelligent LLMs actually are. And does it matter? LLMs can achieve these amazing feats and seem extremely intelligent, but there's a debate about whether we're going to achieve general intelligence and how intelligent they really are. So, a two-part question for your thoughts here, Jon: does it matter how intelligent they are compared to the outcomes you're seeing? And as you've been exploring the space, what's surprised you about where they're able to help? Then we'll start to drift toward the limitations we see, especially in a complex space like the one you're in.
Jon Webster: Yeah, I think it's a great place to start. I have this presentation where I start with a few pictures. I start with a picture of Yann LeCun, probably back in 2024, saying LLMs are about as intelligent as a cat. Then Geoffrey Hinton, maybe a month or so ago, saying you're going to have to be super smart for them not to be able to do what you do. And then you've got two economists, one a Nobel Prize winner, one from Stanford, at completely opposite ends of the spectrum: one saying a huge amount of money is going to be wasted and only 5% of what people do is going to be impacted by this, and another saying it's the most impactful thing we've ever seen.
Jon Webster: I think, at this point in time, it's just the wrong question. It's not a question of whether it's intelligent; it's whether it's useful. And then it's a question of: can it do what you do? Can you make it do what you want it to do? How far and fast will it go, and how is it going to impact your value chain? So I would come back to the real essence of any decision about deploying technology, and I would accept that this is a different technology. It will have agency, in the sense of being able to do things and make choices in a way technology hasn't before. But at the heart of it, you've got to understand what you do and how this applies to what you do. A lot of the conversations are getting tied up in the existential question of whether it's intelligent or not, rather than getting back to whether it's fundamentally useful. And then what do you do? Well, then you have to study the nature of the technology and dig into it, and I'm sure we'll talk about this more as we go through, given the conversations we've had.
Jon Webster: Jason, you go back to what the fundamental technology underneath this was, right? The Transformer was about language and the translation of languages. And I come back to a very simple perspective, from an economic and, I guess, technological and sociological point of view: language is what holds a lot of value chains together. The specialization of language, the generalization of language, and the impact that has on communication costs in the value chain, much like the internet changed communication costs between parts of the value chain. So does language have a specific effect on that? If you really want to dig into what this might do to value chains and industries, you start there. And then, of course, there's been a huge amount of progress in just the last two or three years, layering capabilities on top of that. But get back to the question: is it useful?
Jason Rome: Yeah, I like that a lot, and we're definitely going to talk value chains today: where are we putting AI into our value chains, versus where do we need to reconfigure them? But also, to your point, LLMs sometimes force us to question how strong a grasp we have on our value chain in the first place. Just thinking about some of the conversations I'm having right now, more on the use of LLMs for the software development lifecycle, some with PEs and their portcos, they're saying: hey, we're seeing developers become more efficient, but our costs haven't gone down and we're not shipping any more software.
Jason Rome: So what gives here?
Jason Rome: So I think the important thing there, to your point, and something you like to think about, is that LLMs are going to help us remove our mundane tasks.
Jason Rome: But just because we have a mundane task doesn't mean removing it impacts our entire value stream.
Jason Rome: Being able to think about that is key, and one of the early things I'm seeing is that a lot of our AI projects are business process reengineering and cultural reexaminations hiding in the guise of an AI automation project. So, getting into some of those limitations, where it's useful and where it's not: I think we can all say at this point that there are a lot of tasks that can be automated. Where we have highly repetitive, content-rich tasks, AI is going to outperform us pretty easily, just like the calculator outperforms us at math. But then we get into this other space, and one of the things you talk about is the concept of the generator-verifier gap as one of the fundamental limitations on how LLMs will be applied in a complex, theory-less domain, if you will. So talk a little bit about theories within domains, and also what the generator-verifier gap is and how you got to that point.
Jon Webster: Yeah, I guess some of this comes from your practical, everyday usage of it. I remember the first time, well, saying "first time" makes it sound like it was eons ago, the first time I ran a deep research report. Maybe that was four months ago or something like that.
Jon Webster: It feels like forever. And you get an output and you're like, huh, that actually looks pretty good. Give me a scenario on something unfolding from an economic perspective, maybe over the next five or ten years, and you get a range of scenarios from the plausible to the possible. You get quite a rich perspective on all the factors that are unfolding. And then you look at some of your own work and thinking around it, and you're like, huh, I'm not actually sure I can tell the difference between these two things. And that, for me, is where the verification part comes in. Some things have no answer at this point in time; some things will unfold over time and are hard to verify.
Jon Webster: And typically in investing, many of the things we deal with unfold over time. From the time you start the idea to the time you see the results, all sorts of things have happened in the world, in the investment, in the company, and what turns out in reality might have been very different. That verification stage, from your initial idea to the results, is genuinely complex; things unfold very differently than you thought. So that's one thing to hold. At the other end, you've got the classics of reinforcement learning with a verified signal: if I've got some genuine way of guiding the reinforcement learning with a verified outcome, I can get machines to do a bunch of stuff really, really reliably. And that's not the domain investing lives in to begin with. We're back in this hard-to-verify space.
Jon Webster: So then you've really got to think about the nature of the work we're trying to do, and a lot of research, or investment, is exploration to begin with. You're going to explore many ideas, and part of being a great investor is, by definition, having a contrarian perspective on the market, in order to do something other than what the market would naturally have done. So you need to hold heterodox beliefs, as opposed to just the orthodox belief, and that's a place where generative AI in particular can really bring something different from what you're able to do today. The ability to explore many parallel perspectives, that ensemble effect of looking at many alternative theories about how things might unfold, is a beautiful place for generative AI to play. You don't need to worry about the precise facts and figures, the 3.4% versus the 5%. You're dealing with broad possibilities that really inform. So that's one space where I think it plays brilliantly.
Jon Webster: Then the other part I think about is probably less to do with verifiability and more to do with how every value chain evolves. Every value chain is, in a sense, a gigantic computation: making a pencil is a gigantic computation across a huge supply chain.
Jon Webster: And many of those supply chains are not just things that go on in our heads; they're real physical things. You dig big stuff out of the ground, you have factories, you move things around. But investing is a lot of research, a lot of information processing, a lot of thinking, and a lot of how our processes have evolved comes down to the computational constraints of us as human beings. If those computational constraints are really changed by working together with the machine, I think we can imagine very different ways of doing our work. So I hold those two things: how can we use the technology to reduce the computational constraints, and how do we think about what is and isn't verifiable, and use those to guide where we use the technology?
Jason Rome: Yeah, it's a really interesting examination, and there are a lot of things to unpack there. Even going back to knowledge, and thinking about the prompt you mentioned, show me all these different scenarios, I think it illustrates something you and I have talked about: it's no longer about having the answers, it's about asking the right questions. One of the quotes I've stolen from you and repeated many times is the importance of being able to separate your ego from accuracy, because there's a certain intellectual maturity in that exploration, in being able to hold multiple potential beliefs in your mind at once without committing to one needing to be true, versus going and asking the LLM for the most likely scenario or the answer to a question, rather than: show me alternatives, show me things that are counterintuitive to my thinking. Personally, when I'm using the tool, I do two things. I give it something, a conclusion or something I've written, and I say: tell me what might be wrong about this, critique it, show me what's wrong. Or, second, I give it something and say: interview me where my ideas are sparse and force me to go deeper, force me to continue doing the critical thinking.
Jason Rome: And one of the key things for me, looking at some data this week in prep for this, is that there are some early studies in the product development space, for example, where senior product managers gain a productivity increase and maintain the quality of their deliverables, while junior team members gain productivity but lose quality in their deliverables. And another study, whose results I think are still emerging, to your point, there's everyone on every point of the spectrum here, suggests the use of LLMs can inhibit creative thinking for certain individuals. So there's this real delicate balance: LLMs are an all-purpose tool, but there is going to be so much difference in empowerment depending on how strong you are at thinking about thinking, bringing those skills in, and asking the right question, which isn't something we always build into our value chains very well, because we're trying to move things forward.
Jason Rome: So I think in a thought space like yours, where you're constantly doing something new, it reminds me of how a lot of backtesting models don't work, because as soon as you put a new model out into the universe, it starts changing the ecosystem it's working within. So that fundamental challenge, how you use LLMs when there's potentially no right answer, and you're trying to use them to help you get the right answer while they change how you're thinking and how that impacts your value chain, is really interesting. With that, I want to go back to something else you said about language, because I think that's another really key thing. Can you unpack how you see LLMs changing that dependency on language in our value chains? I think that's one of the fundamental points that got us to this point in the conversation.
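(For listeners who want to try the critique and interview prompt patterns Jason describes above, here is a minimal sketch, assuming the OpenAI Python SDK; the model name, draft text, and prompt wording are illustrative assumptions rather than material from the episode.)

```python
# A sketch of the two prompt patterns described above: (1) ask the model to
# critique a draft, and (2) ask it to interview you where your ideas are thin.
# The model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(instruction: str, draft: str) -> str:
    """Send an instruction plus a draft and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whichever model you use
        messages=[{"role": "user", "content": f"{instruction}\n\n{draft}"}],
    )
    return response.choices[0].message.content

draft = "Our conclusion: shipping feature X will lift retention by 10%."

# Pattern 1: critique. Ask what might be wrong, not for agreement.
print(ask("Tell me what might be wrong about this. Critique it and show me "
          "the weakest points:", draft))

# Pattern 2: interview. Ask the model to probe where the thinking is sparse.
print(ask("Interview me where my ideas are sparse. Ask questions that force "
          "me to go deeper and keep doing the critical thinking:", draft))
```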
Jon Webster: Yeah. So I think this could go on a bit; you said it was going to have footnotes and a sort of theory warning, so let's go, right. When LLMs came out, I was genuinely amazed. I had some proper aha moments. I thought, this is just transformative, and I remain of that conviction today, recognizing there are places where it's still not appropriate to use, but many places where it is. So let's go back.
Jon Webster: There are lots of theories about the emergence of language, but one, and maybe there's a bias in here, is that it was all about trying to keep things secret. If you could just physically copy what I did, and I'd found something really cool, then you could just watch me and copy it; there's no advantage for me. But if there were subtle parts of what I did that needed communication to transmit, so you could learn how to do it, I could suddenly hold on to my advantage. So if you buy the idea that part of the selection pressure on language was keeping things secret, you get to a view of: huh, isn't that why legal people speak legalese, technology people speak technology, different flavors of technology people speak different flavors of technology, tax people speak tax? Because it's part of keeping our part of the value chain to us, part of that specialism. Now, there's knowledge in there as well; you can't just be an empty content box that has a language but nothing inside. But when you look at how value chains form, they do form quite a lot around specialized and generalized knowledge. So you can start from this perspective: if there is a machine that can help you understand those business dialects, or any form of dialect, differently, you might legitimately say, well, I can imagine how that would start to weaken the forces in the value chain. For the theorists: if you're in software, you'll talk about Conway's law a lot, and there's an extension of it called the mirroring hypothesis, which runs across many scales. In effect, we end up with this very niche supplier-provider ecosystem: people who do credit buy credit systems, people who do private equity buy private equity systems, people who do real assets want to work with other people who know real assets in operations. That's a large part of what keeps the value chain together. So an important part of what's happening, and I think you're already seeing it with LLMs, is that they're starting to weaken some of those forces in the value chain. That mirroring effect is really strong, so it won't dissolve overnight. But that's one part of what you should think about as you're considering these things.
Jon Webster: I think the second part is: go back to what LLMs have been trained on, right? You can run experiments. So I've done this. Say I want some great questions. I'm in a board discussion, I know there are going to be genuinely testing questions, and I want a view of what those sorts of testing questions will be on the topic I'm presenting.
Jon Webster: You run that in almost any LLM and you'll get a good, high-quality set of questions out of it. If you happen to know who the board members are, and you put in, I've got board member X, he or she, this person, and run the same query, you'll likely get a much worse set of questions. Then you think, well, what's going on? And you've got an intuition, right, which is that the training data for board questions is high quality. You can go to the Institute of Directors, you can go to people who advise boards, and they'll publish the sorts of questions board members will ask. So there's high-quality training data in there, and what's reflected back at you is probably a reasonably high-quality reflection of it. Do it with a specific named board member and there's only a weak representation, a news article or something else the model found, so you get a less good perspective.
Jon Webster: Then you go further and you think, well, aren't there many great ways of thinking about problems that I don't know? There are many great thinking frameworks out there. You may have read, I mean, I read a lot of books. I like Roger Martin; I like his where-to-play, how-to-win book. I read it probably for the first time 10 years ago.
Jon Webster: I was like, I'm absolutely going to use this in my strategic thinking. And then the number of times I used the where-to-play, how-to-win framework after that was probably broadly zero, until LLMs came along and I thought, huh, I wonder if it knows where to play, how to win. Turns out it does. Now, it doesn't know it like Roger Martin knows it, I'm sure, but it knows a reasonably good version. And if I ask it, I've been thinking about this strategic question, could you help me use this thinking framework, I actually start to get something pretty interesting.
Jon Webster: So I talk a lot about heuristics, because our heuristics are the human version of the algorithms we use in all thinking problems. Sometimes people have fantastically well-developed heuristics that they've built over time through expertise; sometimes we have weak heuristics in domains we don't know very well. And you can see that people who use LLMs really well think hard about how they dial into a way of thinking, a frame of reference, that is additive to what they already do. Then you have the bonus, of course, that there's just a great big knowledge base, and with some of the tooling that's coming along, like deep research and web search, you're now able to bring those thinking frameworks, yours and the machine's, together with a big knowledge base. Then you start to open up a whole bunch of possibilities. That's stuff you can, in a sense, reason out about how you should use it today, and then you can start to think about what's going to come next.
Jason Rome: I think that hits on a couple of key beliefs you and I had when we started exploring this space. One was that we can create a better data set to get a better average: there's what any one person knows, there's what the internet knows, which is what these bots have been trained on, and then there's what the experts know, which is probably better than the average. The second was that once we understand what this looks like, we can probably start to extend it and train people to think better, not just to know the answers, but to shape their thinking a little bit better. And the third was that this probably starts to unlock the ability to reduce waste and accelerate value in a complex, thought-oriented value chain. Is that a fair way to summarize how to think about it?
Jon Webster: I think that's the right way. If you start with the example I used, lots of great thinking frameworks that are well researched: there's a reason they're hard. We read the books and then we don't use them, because they're cognitively hard to use. They require us to rewire our practices; we have to go and practice using them for 12 or 18 months, and often the value is not just the content, it's the framework itself, because by definition a thinking framework has been constructed to guide you through a problem-solving process in the right way. So in some sense there are many above-average thinking processes out there already. They're in every library, they're on Amazon. You can buy the books.
Jon Webster: They're well researched, and Roger Martin's, the one I just used as an example, is my favorite. I think you can bring those sorts of things, which the machine already understands, to bear. But then, to your point, you can go better, right? You can start to say, well, I wonder who's already got really good, well-developed heuristics. There's probably a set of people who do. I wonder how we can get them to describe those heuristics in a way that can be brought to bear alongside what I know and what I understand. And that's why you start to sit down with the people in your organization and ask: the tacit knowledge that's in your head, how can you articulate it in a way that can be brought to bear through this sort of technology and made accessible for others to use?
Jon Webster: And who doesn't want to use the great heuristic a colleague has, one that was previously hard to express and put somewhere, and equally have them use your really good heuristics as well? Now, there's a caveat to that, right? It forces you to ask yourself: are your heuristics above average? Which is where I think the performance-bar-raising, exciting part of this is. It forces us all to really think about what we know well and what we don't. It also forces us to think: if the machine can do something as adequately as I can, great, let's have the machine do it, and I'll spend my time on things the machine can't do, whether that's thinking, engaging in relationship discussions, or the other things that are still uniquely the purview of humans. And I think that's really exciting.
Jason Rome:What does the AI enabled product development lifecycle look like? And there's a lot of tools out there that I view as people having the LLM do their paperwork for them, if you will. I think some of the way we work has evolved, where it's not sharpening our thinking and it is, you know, creating paperwork that might serve a purpose, but the way that those documents or artifacts have been structured. It's not forcing people to think differently. You know, an example for me has been product requirements documents, business requirements documents, you know, prds, brds. I work with a fair number of organizations where that becomes a box that gets checked, where people are confusing activity with progress. And so, you know, we started writing LLMs. You know, have LLMs write our PRDs for us? I'm not sure if you've been on the receiving end of some of those, but I have. They're longer than they need to be and they are just full of words. They're full of way too many features, and what I see in a lot of these documents, whether it's LLM generated or not, is, you know, here's a whole bunch of business benefits, here's a whole bunch of features with no clear causal link or set of hypotheses or beliefs linking those. That is going to drive to a delivery plan or set of hypotheses or beliefs linking those that is going to drive to a delivery plan, and people read that and they they kind of confuse being aligned in clarity of thought with exhaustive documentation, versus, you know, for me in the age of ai, like, what are we actually trying to accomplish in the requirements definition phase and how are we trying to sharpen someone's thinking and what's the mental model we need someone to work through that comes out with a better outcome? And how can an LLM, you know, kind of provide an input to that which I think is very similar, you know, over the course of a diligence process, for example, of where early on you're really trying to sharpen someone's thought with these heuristics and if you don't, you know the LLM is going to produce something very average.
Jason Rome:And one thing I tell people is you know, remember the LLMs have been trained to make us happy, not to be right. You know it wants to give us an answer where it can go on and do the next thing. You know it's not agonizing over like, is this perfect? They're asking is it done? And I think that's a key difference as people think about the potential limitations, think that's a key difference as people think about the potential limitations. And, to your point, if you don't have that kind of senior person, that human in the loop, with a discerning eye, using it properly, it's very easy for bad information to now enter your value chain at the wrong point and continue being passed on. If you're not able to see that and it's not even a hallucination, it's not even wrong information, it's just the wrong way of looking at the information. That I think is really key as people think about you know, to your point.
Jason Rome:I want to move this to the next conversation, which is what are the implications on these LLMs and on AI, human collaboration and what that looks like. So you know, we've talked a little bit about. You know what are heuristics, how do we get to that? What is the limitations of data? How can we verify? You know, turning this to a practical perspective, you know what have you seen? What have you learned? What's working, what's not working in terms of you know places where that human AI relationship is evolving and it's driving true value versus a local optimization, if you will.
Jon Webster: I would start with: to begin with, local optimization is fine. I think we're still in the realm of people exploring how it helps them every day with the five-minute tasks, the fifteen-minute tasks. That genuinely is great, and it's an important part of getting used to the technology. And remember, the capabilities are moving so quickly that there's no time for practices to form. What I used to do with o1 pro in December 2024 became obsolete when deep research came out, which became obsolete after o3 came out. So I do think it's really important to get that individual productivity engagement. But, as you said, in the long run that leads to a set of local optimizations, whereas this sort of technology can cause you to completely rethink. Back to that computational constraint: a lot of our workflows and the ways we engage are functions of time and space, where people have to be inconvenienced at the same point in time. This technology allows you to get much more object-centered. I've got an investment thesis, or a model, or a portfolio; I can think about that as the thing I want to continue to improve, and bring expertise and different perspectives into it. Because the bottleneck in almost every human collaboration has always been the ability to do structured communication. Back to the earlier point: you're talking one language, I'm talking another; you're talking one dialect of technology, I'm talking another dialect of technology. So I'm seeing the most promising results when we genuinely take a blank sheet of paper and think: look, what is it we're trying to achieve? How would we do this differently, given what this technology can do, now that we're not subject to the same computational constraints as before? Which is why some of the debates, will there be software engineers in the future, what's the role of vibe coding, the debates that have been running very publicly for the last three to six months, are a bit like the is-it-intelligent conversation. It's the wrong conversation. It's: is it useful? Can it change the way you work together? Is it going to improve your collaboration or not? That, I think, is where you get into this.
Jon Webster: So I'll give you a really concrete example, one of my favorite prompts. I made it two years ago now. It's a prompt that just extracts the beliefs from a document, nothing more than that. You put in a document of any length, and it turns out LLMs are really good at working out that "should," "could," "would," "I believe," and "I think" are basically all expressions of a belief.
Jon Webster: I want to find the seven things a document is expressing in terms of its beliefs. I put a document into this and I get seven crisp things, plus a little view of the logical contradictions in the document and what evidence it's presenting. That is a superpower for many conversations. Thank you so much for sending me a 30-page document.
Jon Webster: There are obviously a lot of things in there, but I think there are seven things you are saying that underpin the beliefs in this document. So let's talk about the evidence, let's talk about the contradictions, let's talk about the logic. Because the way you improve your logic is by making it clear, so other people can debate it with you and you can collectively improve it. And that's where you really need to go with this: rethink, ground up, how you are collaborating and communicating and how these tools can help you improve that. Otherwise you're just going to get an LLM-written 30-page document that's more verbose, harder to read, and less clear than it would otherwise have been. The people who are going to win are the people who are best at expressing intent. That's always been true, and you can use these tools to do that, or not, and then you've either got a superpower or you haven't.
Jason Rome:not do that, and then you've either got a superpower or not a superpower. Yeah, and I think this is the point that lands. And, specifically as people think about what you just said, the question I'd have people ask themselves is you know in a week in their life, in a month in their life, in their role, you know, especially for those in positions that you know, have a lot of strategy work or making decisions.
Jason Rome:You know how much of their time do they spend really examining their underlying beliefs and how open are they to challenging those. You know, I think there's an aspect too of psychological safety and culture here of, and specifically the utmost of safety is challenger safety, where people can really challenge beliefs, and that's why I think it goes back to this concept of separating ego from accuracy, and we have to. There's so much pride of authorship, of intent, there's such a you know, we get so caught up in the sunk cost fallacy, and so there's all these anti-patterns of heuristics that we have right now. That prevents us from taking advantage of these tools that can contradict us, can present logical fallacies in our thinking, can challenge those. But if I take things one further from what you said as well, it's not only being able to look at what are our existing beliefs, but being able to articulate what would change them, and I think that's where a lot of people and a lot of companies struggle. I see it in my world in product and design and technology, where people go out to do a discovery engagement or you know right now, if we're staying on topic, people go and do a AI POC, right? Hey, we're going to test this new AI concept and they finish it and then they're trying to make a decision. Do we go to production? And they realized they had absolutely no frame of reference for how they were going to make that decision when they started. So things just get stalled out.
Jason Rome:Or Four out of 10 people like it. Was that enough? Was that what we were shooting for? We were shooting for six. Were we shooting for seven?
Jason Rome:And I think you and I have talked about this before. A lot of people have a hard time articulating what evidence would change my mind, what evidence would change this belief. And this gets a little bit into the conversation around the value of minimizing surprise versus information gain and navigating ambiguity, and we start to get into the concept of a free energy and risk here, which really going to need some footnotes now. But you know, I think that's the other thing is like as an organization if you take one thing away from this today, like how well are articulated and how honest are you about your beliefs and where the uncertainty is in those and how open are you to being wrong and changing your mind, and are you searching for that evidence or are you kind of hunkered down in your turtle shell, relying on it? I'm curious your thoughts on that. Just from a change management and psychology perspective. What have you seen so far in terms of people being able to adjust to operate?
Jon Webster: Yeah, I can imagine people listening are like, how did you two get to this from large language models, right? Maybe for me, just to re-emphasize the link: the most transformative technology, the one that really unlocked civilization, is language, but more importantly writing, because writing is what continues to allow us to evolve. There's a category theorist, David Spivak, who talks about writing as stabilizing engagement. You have a conversation, we form some beliefs out of it, we write them down, and they're there. At some point later, you and I come back to it, or somebody else comes back to it, and they go, wow, that's really interesting, we should talk about that, and they engage with this stable, written-down thing. That's the way new stuff gets created, these stepping stones we've recorded through history as we've learned to write things down. But it's also the way we formalize our beliefs and express our beliefs, and then we come back to them and can test them. So there is something genuinely deep between this new technology and how it changes our epistemic rhythm: our ability to describe our beliefs, stabilize them, come back, and engage with them from different perspectives. I do think this is probably, in some sense, the most profound technology when it comes to how we form knowledge over time. That's partly because it knows a lot itself, but it can also transform how we engage with one another to do that knowledge-building and belief definition, if you accept that that's how we do this stuff. But then you've got to come with an open mind, and let's be frank, that's hard, right? Even before this technology, we would come into meetings with a well-polished deck that laid out our theories, but it was quite hard to see the logic of it. You couldn't quite see what the beliefs really were, you couldn't see what would have to be true. And then we'd debate something, and would we get to a conclusion on it? I think you can really leverage this technology to bring much, much greater clarity to how you're expressing yourself. You can look at things from many different perspectives, as a team or as an individual; you can ask for many different perspectives on something; you can test your own thinking in ways you couldn't before. And that's why I think it's so transformative.
Jon Webster: But in a sense, we are back now to surprise minimization, right? We are surprise-minimizing machines ourselves. That's what our brain seeks to do: it wants to make sure our mental model of the world is as close to the way the world works as it can be, so we are not surprised by anything.
Jon Webster: We have two ways of doing that. We can see the world and update our model to look like the world, or we can go out into the world, take action, and try to change the world to look like what our mental model says. You need to do both, but we'll do the minimum-energy version, and often that is just updating our model to fit the way we think the world currently works, as opposed to taking the action side. So it's a challenge, because we're wired for surprise minimization, and we're wired to do it in a minimum-energy way. But nothing great was created by sitting on your hands, right? Nothing great was created by just reading the world and not going and acting in it. So, thinking about where this technology helps and takes us: I actually think it gives us much more agency to go and do things we haven't done before, if we choose to use it that way.
Jason Rome: It's one of the things where, I think, you've used the term AI helps amplify expertise, and that's letting us navigate what you call the transparency-uncertainty trade-off. Because in all of this, analysis paralysis is still possible. In anything we do, whether it's investment management or product development, you can spend unlimited time trying to diligence every company out there. You can spend unlimited time trying to talk to every single one of your users, get 100% of the data back on your survey, ask them all the questions, and completely de-risk something before you act. But there's a cost to that. The last podcast I did talked about discovery as a risk-reducing exercise, and discovery has a cost. So I think that's one of the places where humans have their greatest value here and need to leverage this: deciding when enough is enough, when I have sufficient evidence to act, and what the next point is at which I need to check in on the hypothesis that might change my mind. That's what helps bridge the generator-verifier gap you talked about, especially when you're looking at very long-term investments.
Jason Rome: There are so many things that will impact the original structural thesis you had, the beliefs you had, and how you act on them. It's the same in product development: whether a feature was a good or a bad idea depends a lot on how you continue to iterate on it and change it. So you make sure you build in those points where you go back, re-examine the underlying beliefs, and figure things out. I might not know whether the whole thing I believed was right, but I can figure out whether parts of it are, and update that mental model. You're still minimizing surprise, but there are a bunch of little surprises instead of one big surprise. One of the terms I've always hated is the concept of failing fast.
Jason Rome: I like to think about small, calculated mistakes, or, in today's parlance: where are you placing bets, and how are you placing them?
Jason Rome: So thinking in terms of the things you don't know, placing a bet even in the face of uncertainty, is how you move forward. Then look at the LLM: how do you leverage it to maximize the effectiveness of the bet you want to place and overcome that uncertainty while still navigating the trade-off? I like where we ended up on this topic, because it gives people a practical way to look at it. Hey, in my day-to-day, am I being honest about my beliefs? Do I understand the ways I'm making decisions? Am I relying on my own individual weaknesses, or am I leveraging the expertise out there and maximizing it? Am I open to changing my mind? Am I seeking ways to update those beliefs over time, to navigate ambiguity, investing in information gain without being paralyzed by uncertainty? And how can an LLM be worked into my value chain to reorient around that? How does that sound as a summary takeaway for the audience today?
Jon Webster: I think it's perfect. I mean, you nailed it. I can't help myself, though; as you were talking, things were forming in my head. In some sense, we're all going to be able to read the world, or look at the world, in ways we couldn't before, and if we all look at the world in the same way, we won't get any value out of it. So the way I look at it, LLMs, this technology, are going to help us look at the world in many different ways. If I had one way of looking at something before, I can now afford five or six ways of looking at it. That gives me new perspectives on how I might think. That's creative and generative.
Jon Webster: Then the other half of the equation is that I need to act in the world based on my interpretation. My belief system might be different; we run heterodox beliefs, right, and that's why we're going to bet against one another. But these technologies give me ways of acting that I didn't have before. They expand the possibilities of things I could do. If I couldn't code before, now I can code; I can express my intent.
Jon Webster: You see that with the small startup teams, very different shapes. So these tools help us look at the world in ways we were computationally unable to before. But we've got to choose to do it, and we've got to choose to use it to update our beliefs. Having done that, we can act in the world in ways we couldn't have before, and for me that's the most fabulous part of this, the empowering part of the technology. I can see the world in many different ways, and I can choose to act in the world in ways I never could have before. Why wouldn't we want that?
Jason Rome: Yeah, and I can't help but respond, because I think the other key thing underlying that is: if technology lets us see something five or ten ways where we could only see it one way before, what separates the people who will have the greatest advantage in this space from those who won't, going back to the concept of quality, is the ability to really get to the crux of what matters within those different views. And that applies across all domains. For example, in product development or risk management, if I see a risk log that's 25 items long, I can probably look at it and point to the five that are going to sink your ship. It's probably going to be a data issue or an auth issue; those are the usual candidates for sinking a tech project. Same thing: a client team shared a large PRD or BRD with me recently, and looking at the feature list, I was able to find the two or three that are never going to make it to production. They're going to exceed the cost and under-deliver the value. I could just tell they were concepts that were bolted on, pork-barreled in, if you will, and under-researched. So I think that's still one of the big parts of the value: being able to parse. Because I'm seeing this already: LLMs like to write longer responses than we might like.
Jason Rome: It goes back to the quote, I think it was Twain: I would have written you a shorter letter, but I ran out of time. And I'm already frustrated. I don't know if this happens to you, but when people send out a copy-and-paste of the Copilot meeting notes, nobody has time to read that. It's really long, longer than any summary email about a meeting that people would actually read. So I'm starting to see this excessive documentation and over-reliance on it. That's the other important thing: making sure, as you're developing these tools, that they're parsing the information too and not just overwhelming people. And that's where we still come in: bringing that expertise and that craft. So, I know we're coming up on time here. Any other parting thoughts?
Jon Webster: Yeah, maybe two. One thing I say is: if above-average expertise and more or less the world's knowledge are at your fingertips, and fabulous capabilities, the ability to code, develop, and do other things, are at everybody's fingertips, then, back to what I said about the bar raising, there's never been a worse time to be average. And I think that's great. It means we're all going to have to up our game, and I think that's fabulous for everybody.
Jon Webster: The second thing, tied to some things we talked about, is where value migrates in the future, which I think is also pretty exciting. It's a little bit back to the accuracy and the ego; maybe there's an IQ-and-ego version of that, right? If you want to separate your ego and your IQ, maybe you really need to double down on the EQ side of things: the ability to have great conversations, to engage with people in a very relationship-focused way, to really get to the heart of the intent of what's being done, to move people to do things they otherwise wouldn't have done, and to think about the specifics of your stakeholder reality and your institutional reality and how it all plugs in.
Jon Webster: These are really human things. They've always been important, but sometimes we get lost in the analysis and the intellectual part of it. I'm not saying intellect is going away; there's still an important part for it. But the ability to work in the world, to move things, to make things happen that otherwise wouldn't have happened, given that you've set the direction and understood where you want to go, I think that's very exciting, because it means we have to double down on the human, the things we do brilliantly, the stuff that's unique to us.
Jason Rome: Thank you for the conversation and thank you for the time today. I appreciate it.
Jon Webster: Great. Thanks, Jason. It's great to be here.
Josh Lucas: Thank you for joining us on Build What's Next: Digital Product Perspectives. If you would like to know more about how Method can partner with you and your organization, you can find more information at method.com. Also, don't forget to follow us on social, and be sure to check out our monthly tech talks; you can find those on our website. And finally, make sure to subscribe to the podcast so you don't miss out on any future episodes. We'll see you next time.