Build What’s Next: Digital Product Perspectives

Confidence in Planning: Addressing Uncertainty and Risk

Method

In this podcast episode, host Jason Rome and guest David Brown discuss the crucial role of "confidence" in effective planning. They explore how organizations can honestly assess certainty and uncertainty in project estimates, distinguishing between confidence in "what it'll take to do" and "what this thing will do." The discussion highlights the importance of early, honest conversations about risks and major assumptions, emphasizing that overconfidence can be detrimental. The conversation provides valuable insights into how to build a culture of honesty and clear communication in planning, ultimately aiming to mitigate risks and ensure that projects are not only feasible but also truly desirable for users.


Speaker 1:

You are listening to Method's Build What's Next: Digital Product Perspectives, presented by GlobalLogic. At Method, we aim to bridge the gap between technology and humanity for a more seamless digital future. Join us as we uncover insights, best practices and cutting-edge technologies with top industry leaders that can help you and your organization craft better digital products and experiences.

Speaker 2:

Next question: confidence. Confidence.

Speaker 3:

In planning? Yes, a fan.

Speaker 2:

You're very confident. I am pro. Pro-confidence.

Speaker 3:

How do you weigh confidence levels into the planning sequence to get a feel? You had mentioned earlier, you know, maybe there needs to be some sort of balance between easy stuff and hard stuff, but that's subjective, or maybe it's a little bit associated with whether we've done something like it before. But how do you build that into the human side of planning, to where you have a good feel of which high-confidence and which low-confidence things are there and how to mitigate some of them?

Speaker 2:

Yeah, yeah. I mean, confidence, or, you know, call it certainty, call it uncertainty, to me is one of the most important things, and it's why I like ICE as a prioritization framework, because I think a lot of other frameworks don't capture confidence. And there is a lot of inherent uncertainty built into estimates that is hiding a lot of the time, and, you know, something that we're estimating at a five with a 5% error bar versus a 30% error bar, we need to capture that. So I think it comes into play in three or four spots. One, I think you have to articulate what it is that you are not certain or not confident about with the thing that's being prioritized. Because there is confidence in terms of do we know what it'll take to do this thing, and there's confidence in terms of do we know what this thing will do. And I think it's really key to articulate that, because one of those is a solution design problem of, hey, how do we reduce and manage our costs and make sure that we've accurately estimated and we understand what it's going to take to accomplish this? And one is more of a market and user discovery problem of, if we do this thing, is it going to actually have the impact we expect? And you have some initiatives that have error bars on both of those, hey, there's a wide cost range and there's a wide benefit range, which we talked about on our last podcast.
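As a rough illustration of what it can look like to make confidence and error bars explicit in an ICE-style score, here is a minimal Python sketch. The scoring weights, field names, and sample initiatives are assumptions for illustration only, not a formula the speakers prescribe.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    impact: float        # expected benefit, 1-10
    ease: float          # inverse of effort, 1-10
    confidence: float    # 0.0-1.0: how sure are we about impact AND effort?
    cost_estimate: float # e.g. person-weeks
    cost_error: float    # relative error bar, e.g. 0.05 or 0.30

    def ice_score(self) -> float:
        # Confidence scales the whole score, so an uncertain "big win"
        # does not automatically beat a well-understood smaller one.
        return self.impact * self.confidence * self.ease

    def cost_range(self) -> tuple[float, float]:
        # Surface the hidden uncertainty: a 5 with a 30% error bar is a very
        # different commitment than a 5 with a 5% error bar.
        return (self.cost_estimate * (1 - self.cost_error),
                self.cost_estimate * (1 + self.cost_error))

backlog = [
    Initiative("In-house rules engine", impact=8, ease=4, confidence=0.5,
               cost_estimate=5, cost_error=0.30),
    Initiative("Checkout copy tweak", impact=3, ease=9, confidence=0.9,
               cost_estimate=5, cost_error=0.05),
]
for item in sorted(backlog, key=lambda i: i.ice_score(), reverse=True):
    low, high = item.cost_range()
    print(f"{item.name}: ICE={item.ice_score():.1f}, cost {low:.1f}-{high:.1f} weeks")
```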

Speaker 2:

You know, discovery is a risk management tool, and that's why discovery is so important. So I think the first step is having a conversation about confidence, or certainty, on both of those, and, I think, being extremely honest about that as early as possible. But the mistake I see teams make is they confuse the long tail of things they don't know with real uncertainty. What I mean by that is, there's uncertainty that is going to change the outcome of how much something costs, call it long poles in the tent, call it critical path, or the impact of it, call it major assumptions in the value chain, and then there's just stuff that we don't know, that we have to figure out, but we're going to figure it out because we have before. And I think teams confuse those things, and I see teams end up with just a laundry list of risks and uncertainty, but no one has said, hey, these are the three to five that are going to sink or swim this thing and really swing this interval. The second piece of confidence you mentioned: I like to do an exercise called low-fidelity alignment, which is just, early on, before people have written a ton of documentation, having a conversation around what are the biggest discovery risks that would make this thing fail, and running a sort of pre-mortem.

Speaker 2:

One of my favorite questions to ask engineers and architects is not how long will this take, but what would make this take longer or shorter, and help me find those rocks, and we can choose to steer into them or away from them. What are going to be the hardest things about rolling this out and the change management effort? Let's just get all the big risks on the table. And I like to do that earlier, because I find that once there's a PowerPoint presentation, once there's a BRD or a PRD, if teams are doing that, and definitely once there's user stories, the downstream teams implementing it assume all of those upstream things have been thought out fully and they all have to be done, when in fact that's not necessarily the case, and you end up with really big things. So I like that uncertainty to be passed to my implementation team in a very honest way, if it's something coming from the business, about what we know and what we don't know. And I think that's a sign of a really good culture: people can be honest about what they know and what they don't know versus having to pretend they know everything.

Speaker 2:

So maybe I'm against confidence; I don't like teams to be overconfident, I like them to be very honest. And so, capturing those two things, you have a really good action plan coming out of planning on what we are going to do with this thing, where the risk is, are we OK taking that risk, and are we all aligned that it's worth the risk? And that's where you can bring in things like assumption mapping, and I love Amazon's framework of the one-way door versus the two-way door. So you can look at, hey, based on this thing that we're doing, this decision we're making, is it reversible? Is it irreversible? If we make this decision and we're wrong about our assumptions, what's the business impact? And so there's a lot of things that let you really dive down into the confidence once you know where your big areas are. So one of my favorite topics. Thank you, that was a great question.
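To make the reversibility-and-impact lens concrete, here is a minimal sketch in the spirit of the one-way/two-way door idea and assumption mapping described above. The triage rules, labels, and sample assumptions are invented for illustration, not a definitive method.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    reversible: bool       # two-way door (True) vs one-way door (False)
    business_impact: str   # "low" | "medium" | "high" if we turn out to be wrong
    evidence: str          # what we actually know today

def triage(a: Assumption) -> str:
    """Decide how much discovery an assumption deserves before we commit."""
    if not a.reversible and a.business_impact == "high":
        return "de-risk first: run discovery before committing"
    if a.reversible and a.business_impact == "low":
        return "just build it: learning in prod is cheaper than studying it"
    return "time-box a spike or pilot, then decide"

plan = [
    Assumption("Users will adopt the new self-serve flow",
               reversible=True, business_impact="medium", evidence="2 interviews"),
    Assumption("We can exit the vendor contract this year",
               reversible=False, business_impact="high", evidence="none yet"),
]
for a in plan:
    print(f"- {a.statement}: {triage(a)}")
```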

Speaker 3:

Thanks for responding, yeah.

Speaker 2:

Any other ones? I saw you writing.

Speaker 3:

No, I mean, when you talk, I listen, and you get my wheels turning on a few different things. One of the ideas, and maybe you've implemented or used it, is looking at the three to four risks, depending on who you follow, whether it's Marty Cagan or IDEO, with confidence tied to those things. So if you broke confidence out by desirability: how sure are we that our consumer base wants this and is willing to choose it, if that's value or desirability? If it's feasibility: how confident are we in solving this problem? Or viability: how aligned to the strategy is this? Or usability. So have you broken confidence down, not by roles per se, but by the risks or lenses we're taking to deliver it?

Speaker 2:

Yeah, DVF is interesting, right, because I find that for a lot of stuff going through the backlog and through the pipeline, viability is almost always the product of feasibility and desirability crossed together. Sometimes there are situations where it's not, like, hey, this thing isn't actually viable for a compliance reason, but a lot of the time things aren't viable because the money doesn't make sense. You know, the desirability upside is not there based on the feasibility downside of what the cost might be. Now, there are bigger things in terms of business innovation, does it fit with our core? But for a lot of what a team interacts with on a day-to-day basis, that is core to their product or adjacent to it, the viability questions may be a tiebreaker or a peripheral thing, but they don't impact it as much.

Speaker 2:

In terms of desirability and feasibility: for feasibility, again, I think one of the mistakes people make, and one of the anti-patterns of behaving like an IT organization versus more of a modern product organization, is asking engineers and architects early, how long is this going to take? Because they're going to turn around and ask for detailed requirements, and you're going to go write detailed requirements, and they're going to be wrong, and you're going to miss a chance to have a conversation with an engineer or architect, especially with how complex architectures and AI and the feasibility of things are, and how many companies are going through, hey, we're redoing our service layer, we're moving to, you know, a new data lake, fabric, mesh, lakehouse, whatever it might be, we're modernizing our UI up front. And so there are all these parallel things that might impact the thing that you're wanting to do.

Speaker 2:

And so not taking the opportunity to shift the technology folks left and make them solutioning partners early, especially with how fast technology is changing, is a huge missed opportunity, because they might see something you can't: there might be a huge thing where they say, hey, actually, this is the crux of the problem, you could take it down to a much smaller thing using this technology or by aligning to this adjacent thing. So don't ask how long something will take; ask them what will make this take longer. Now, the flip side is that the technical folks have to be okay leaning into the ambiguity and laying out options, and we can't punish them by holding them accountable to an early thought that they had, because that's what they're worried about. They're worried about saying a number, giving a range, and that goes on a piece of paper somewhere and now it's out there. But we need that early view, because we need to be able to say, hey, this is way more expensive than we thought it was going to be.

Speaker 2:

So, a great example of this: I was working with a company and they had an initiative where they wanted to replace an external vendor system and bring it in-house, because, they said, they'd customized it to the nines over the years. And this company had a pretty robust process for the business to even talk to me, I was playing a product role at the time, and, you know, they had to go ask for eight weeks of funding to do a discovery on this thing. And the head of product at the time was like, hey, grab time with Jason, walk through the business case, and then I'm going to have him talk to the head of architecture and we're going to see if we can do this in two hours instead. And so I got on with the head of architecture and said, hey, the business case is clear to me; there's money here to be saved based on what we're paying. And I said, where's the "but"? What's going to be hard about this? Who owns the APIs for integrating this? He's like, actually, we built all the APIs to them, so the API code is all ours already; we already have the services. I said, okay, well, that's great. And he's like, the problem is the logic. It's a Gordian knot of if statements, and you're going to have to have a BA reverse-engineer all of the business logic we've added over the years to the data that we're pulling in and transforming, because we've just never had a good rules engine. So I was like, OK, so it's BA work for the most part, the front end is already there, and we were worried about all the custom APIs that were going to have to be built. And so we were able to take something that was being requested for eight weeks of funding, have two hours of conversations, and go back and say, yeah, this is definitely feasible, we should move forward. So I think, to your point earlier on, what decision are we trying to make? That's a great example of having a better conversation about feasibility.

Speaker 2:

The other thing that I've pushed a lot of organizations towards is architectural decision records, where technology presents alternative approaches to a solution, laying out the trade-offs of what something would cost against, hey, do we need real-time data or do we need near-real-time data? You right-size services; again, I'm not an architect, but it's what architects tell me: over-engineering, getting something ready to scale when it's only going to have a thousand concurrent users at any time. There are some things where it doesn't make sense to do a big, complex pattern; just keep it basic, monolithic and easy. And so having the ability for architects and product and the business to present those trade-offs, or how to accomplish that, and say, hey, do you really need the scale, do you really need the speed, here's how it changes the cost, I think that's a really rich feasibility conversation. Because very rarely today is something actually not feasible. You know, you might not have the data, but you can figure out how to get it, and so it usually comes down to how much something will cost. So that's the feasibility side.
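Architecture decision records are normally plain documents; as a hedged sketch of the trade-off conversation described here, the structure below captures the same information programmatically. The options, costs, and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    cost_weeks: float
    supports_realtime: bool
    max_concurrent_users: int
    notes: str = ""

@dataclass
class DecisionRecord:
    title: str
    context: str
    options: list[Option] = field(default_factory=list)
    decision: str = "undecided"

adr = DecisionRecord(
    title="Data freshness for the reporting dashboard",
    context="~1,000 concurrent users; business asked for 'real-time' numbers",
    options=[
        Option("Streaming pipeline", cost_weeks=12, supports_realtime=True,
               max_concurrent_users=100_000, notes="complex pattern, new infra"),
        Option("15-minute batch refresh", cost_weeks=3, supports_realtime=False,
               max_concurrent_users=5_000, notes="basic, monolithic, easy"),
    ],
)

# The rich feasibility question is rarely "can we?" but "is the extra cost worth it?"
for opt in adr.options:
    print(f"{opt.name}: {opt.cost_weeks} weeks, real-time={opt.supports_realtime}, {opt.notes}")
```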

Speaker 2:

And then on the desirability side, it's: what are all the things that have to go right for this to work? And then, specifically if I'm looking at a major initiative, the question I try to get to is, are all these features independently trying to address the problem, or do they all build together towards a larger solution? Because then we can break this apart into six smaller things. So if I read a big, long BRD, I always try to say, is this one thing, or is this five things? Is it ten things? I'm really trying to decompose it. I think decomposition is a lost art sometimes, and things end up too big. That's the first thing for desirability: hey, can I de-risk this by breaking it into smaller pieces? Second, where are the assumptions in my value chain, what are all the things that have to go right, so I understand what the critical path is? And then we can get into assumption mapping and figuring out our discovery plan.

Speaker 2:

And the last thing I look at is, you know, is there anything I can hear from a user that is going to change my mind, or do I have to put this in prod and see how it goes? There are a lot of things like that. I was talking to someone about this last night: financial health. You go talk to any user about what they want in their banking app, they'll say financial health stuff; you build it in your app, and nobody uses it, and it's been that way for a long time. So there are a lot of things people will say and not actually use, and sometimes you only know when it's in prod. And then let's just minimize our cost, get it in prod, do a pilot and see how it goes.

Speaker 2:

And so the question I love is: what would have to be true for us to change our mind if we do a user test? Because, you've done a lot of user testing, right, you ask 10 people something and six out of 10 people liked it. Is that good? Does that change our mind? You know, what would change our mind?

Speaker 3:

So we need to put that out front, saying what's going to change our mind.

Speaker 2:

So that was a long answer on desirability and feasibility, but I think those are two techniques people can use.

Speaker 3:

Well, I think you distilled it down right there in the middle. You said that if somebody wants eight weeks, do it in two hours instead. Yeah, that's the takeaway.

Speaker 2:

No, I'll get kicked out of the company for that. Somewhere a head of sales just perked his head up and he's very angry with me. But, you know, I do think having good, honest conversations matters. In all seriousness, we've changed our approach to some of these discoveries where, you know, week one, we pull everyone into a room for a day and a half and figure out the truth of where things are, what people actually believe, what the tension points are, and what decisions have to be made. And, honestly, sometimes working with an outside firm is just having permission to think about something deeply and spend time on it for a day and a half, versus in scattered, distracted windows, and to have someone ask questions with all the right people in the room. Sometimes that's just a really valuable starting point: someone asking you the right questions, and, I've talked to a lot of my clients about this, someone asking questions who has no political skin in the game, who can ask those questions.

Speaker 2:

That is a really key thing. And so, you know, some of the clients joke, hey, the first two weeks are really valuable. I mean, all the hard work has to happen after that, because often there's 20% left and that last 20% is as hard as figuring out the first 80% and the solution details. But, yeah, that first investment of time, really nailing what is the question we have to answer and what is the decision we have to make

Speaker 2:

for planning, is key. Yeah, all right, I'm taking back over here, hopefully. I mean, you do Brazilian jiu-jitsu; you can ask as many questions as you want, because you're going to win that battle. So let's look downstream, let's look at roadmapping, let's look at sequencing. We've got our portfolio, we've got our priorities. How do we start to make sense of the roadmap, and how does that look in a mature organization? And then, what do you see as roadmap anti-patterns versus things you've rolled out to help teams clean that up and have that view?

Speaker 3:

Yeah, I do have a love-hate relationship with roadmaps.

Speaker 2:

Okay, have you been hurt by a roadmap in the past?

Speaker 3:

I've been hurt by being on the road and having a map. Some of the anti-patterns: first, it's a visibility tool, it's an alignment tool, and if it's used for that, that's great. It's not a contract. And, you know, I can't remember who said this, but they said a roadmap is a prototype for your strategy or your implementation. When you say words like prototype, it implies back-of-the-napkin fidelity, like things can move. Now, that doesn't translate well to how roadmaps are actually used, where there's spatial awareness; where things sit really matters, because if not, you'll get lost. And so where I try to decouple those is: what is your roadmap for, for you and your organization, and let's build for that. So there's almost a jobs-to-be-done approach to why we are roadmapping and what we're using it for.

Speaker 2:

Yeah, and that second type of roadmap, I guess, you know, there's a difference between planning and a plan. I think it's maybe an Eisenhower quote, plans are useless but planning is essential, right? It's kind of what you're saying, it is. Now, when it comes to a plan versus a roadmap, what does that look like? Is that a Gantt chart? Is that a sprint plan? Is it all of the above? Does it depend? What's the right level of fidelity that provides that visibility and comfort while enabling flexibility? How do you balance those things?

Speaker 3:

In my experience, I've had a bias toward using that now/next/later, or now/next/future, style. Sometimes that gets more complicated with how we want to do sequencing or show dependencies across them, but often I've seen more benefit from using that sort of structure for, again, teams or teams of teams to be able to coordinate and say, this is what we need to do to get things across the finish line. What I don't like about that structure is the done column; it kind of falls off. And so I like to have some sort of extension where, when we're looking ahead, we see now, next and future to stay focused. It should be driving focus on the now things and creating space to say not now for everything after, because there are types of work that have to occur over time, but really all of our focus is on the things that are in the now category, and hopefully there's not too many.
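A minimal sketch of the now/next/later structure with the kind of "done" extension described above; the bucket names, the "proven" extension, and the sample items are assumptions for illustration.

```python
# Ordered buckets: focus lives in "now"; everything else is explicitly "not now".
# "proven" extends the board past a plain done column so finished work
# doesn't just fall off before we've measured it.
roadmap = {
    "now": ["Checkout redesign pilot"],
    "next": ["Payments provider migration"],
    "later": ["Loyalty program exploration"],
    "not_now": ["Dark mode - revisit after current goals"],
    "proven": ["Search relevance v2 - shipped and measured"],
}

def focus() -> list[str]:
    """The only items the teams should actively be working on."""
    return roadmap["now"]

for bucket, items in roadmap.items():
    print(f"{bucket}: {items}")
```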

Speaker 2:

Now, I was talking about this the other day with a head of product, we were talking about planning process, and she was mentioning how much she desperately needs a highly visible parking lot so she doesn't have to answer the same question over and over again about why we're not doing this thing, for the same ideas that keep getting generated and coming up, and how much time goes into that. And so that concept of: not right now, we've heard you, here's where it is, here's why we're not doing it, it's in the parking lot, and having that rationale somewhere, because so much time is spent rehashing those concepts and the same ideas and ideating on them and all that stuff.

Speaker 2:

So I love that one. I think that's a really important call-out. And it's interesting, the now/next/later or now/near/future. You know, if Dave on my team is listening to this, he likes to debate now/near/next versus now/next/later. I have moved to his side of now/next/later. All right, so what are we saying? I think we're saying now/next/later. Yeah. You know, those views, because you're kind of also hitting on Kanban here, I think. So, having that roadmap view versus how a team uses Kanban: thoughts on teams using Kanbans as well? How have you used those to be successful?

Speaker 3:

I mean, my baseline is use what works and throw away what doesn't. But I've hardly ever seen a team where using some sort of visual indicator of where our focus is and what needs our attention doesn't support them, and usually that's some type of Kanban. Everybody's Kanban looks a little bit different, maybe in how they use colors or when they block something. I think there are some things that are probably universal, like don't confuse state and status, but essentially it's: keep it as simple as possible for as long as possible, and only add complexity if you need to. Not all work is the same; it's different and it's ever-changing.

Speaker 3:

I've used features on a Kanban board and thrown them away saying I'll never do that again, and then something happened and it made sense to do it again, absolutely. So sometimes I've seen, or I've worked with, teams that will want to create a lane for, let's say, blocked. Well, blocked doesn't always happen there. And, especially if this is in a digital format, the metadata is really important.

Speaker 3:

And so let's say you have the statuses of, you know, active versus inactive versus canceled versus rejected. And then you have stages like, let's say, to do or prioritized, and then a couple of different versions of in progress, and then done, and done is never really done, because you then have to communicate, demo, integrate and everything else. You'd want to know if something got rejected in the middle. But if you combine state and status, you may just move something all the way to the very end and call it rejected, and you may lose valuable information about how healthy our funnels are for moving things, how we measure flow, and what the on-ramps or off-ramps are that we need to consider, because it may change how we operate.
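A hedged sketch of what "don't confuse state and status" can look like in a digital board's data model; the enum values and field names here are assumptions, not any particular tool's schema.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):          # where the card sits in the flow ("state")
    TO_DO = "to do"
    IN_PROGRESS = "in progress"
    IN_REVIEW = "in review"
    DONE = "done"

class Status(Enum):         # the health of the card, independent of stage
    ACTIVE = "active"
    BLOCKED = "blocked"
    REJECTED = "rejected"
    CANCELED = "canceled"

@dataclass
class Card:
    title: str
    stage: Stage
    status: Status

# Because stage and status are separate, a card rejected mid-flow keeps the
# information about where it was rejected, instead of being dragged to the end
# of the board and losing what we'd want for flow metrics.
card = Card("Vendor data import", stage=Stage.IN_REVIEW, status=Status.REJECTED)
print(f"{card.title}: rejected at stage '{card.stage.value}'")
```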

Speaker 2:

Yeah, and, you know, that can end up with a pretty full board. What's your cadence for generally doing a, hey, let's look at where things got rejected and it's time to clean them up, so we're not cluttering things? How do you manage that cadence, like a rhythm-of-business kind of thing?

Speaker 3:

Most things in life, I try not to start with a time base. So I won't say annually or I won't say quarterly.

Speaker 3:

I'll try to say, you know, what are the behaviors or the big events that might make sense here. So it may be, you know, team composition changes; that's a great time to go look at the board, because you want to bring a new voice into what goes on it. It may not change anything, but that's a good point to look. I think of these kind of like stoplights on the road. Sometimes when I'm driving to work, I may stop at 10; sometimes I may stop at 20. But those stops help make sure that everybody on the road is moving in a very progressive manner and they all get to their destinations. So that's how I like to look at internal operating changes: when should we stop, because it makes sense for all of us to move together?

Speaker 2:

Yeah. I don't have a great segue to this, but you talk a lot about starting versus finishing work, and that being a cultural thing, part of planning being limiting the ability to start. Can you talk about what you've rolled out there, and just the behavioral change that you've had to put in place, and why it's so valuable?

Speaker 3:

We hit on a few of the elements, but I think often, when a group lacks transparency or doesn't have a portfolio view into things, then when a business case comes up and they say, hey, should we do this, you have a shark tank without anybody understanding what the balance of their book looks like or what their capacity is. And so if a good idea sounds like a good idea and it's pitched really well, you'll say yes all day. But you are also now distracting from yesterday's promises, and I don't like that, I don't enjoy it, the teams don't enjoy it. And so I like to create an environment, or make sure that environment is there, so that anytime we say yes, all of yesterday's yeses are also right there in front of us as well. And we sometimes have to make that trade-off decision of, I may be saying yes, but that doesn't mean you get to go tomorrow.
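As a small illustration of "anytime we say yes, all of yesterday's yeses are right there in front of us," here is a hypothetical capacity check; the limit, the initiative names, and the function are invented for the example.

```python
team_capacity = 3   # concurrent initiatives this group can honestly carry
committed = ["Billing migration", "Mobile onboarding revamp", "SOC 2 work"]

def can_start(new_initiative: str) -> bool:
    """Saying yes is allowed; starting is only allowed if something else finishes."""
    if len(committed) >= team_capacity:
        print(f"Yes to '{new_initiative}', but it waits: "
              f"{len(committed)}/{team_capacity} slots already filled by {committed}")
        return False
    committed.append(new_initiative)
    return True

can_start("Partner API integration")   # accepted into the portfolio, not started tomorrow
```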

Speaker 3:

Creating space in between, whether it's phases or whatever you want to call it, or portions of the life cycle, is really important. You've just finished discovery; we can't assume that the same team that took you through discovery is now going to go into the next thing, whether that's delivery or a pilot or an MVP or whatever it is. And if it's not the same group of people, it's also not safe to assume that you finish discovery on Friday and you start a form of delivery on Monday.

Speaker 3:

There's a form of reorganization and calibration, or collaboration, that needs to occur, and we talk about frameworks and different methods all the time. I really enjoy Klaus Leopold's flight levels, because of the idea that you either invest in collaboration or it becomes a cost to the organization. Taking it back to planning: if we are not building in, maybe, cycle time or capacity, whatever it is, for groups to work together, and we just assume that when one group is done the next group picks it up and runs with it, we are missing a big element, and we're going to spend the money either way. We might as well treat it like a spend that we're able to forecast and plan for, or it'll be a hidden cost or a tax on us anyway.

Speaker 2:

Yeah, I love that. I mean, I think, you know, it's better to be 100% of the way there with 60% of the things than 60% of the way there with 100% of the things, especially because we're not talking as much about delivery today, but just the cognitive load and context switching that goes into having multiple things in flight. And I think, too, you talked about done as a column that you're not necessarily pro, and I think that's one of the problems: people think about done in terms of deployed or delivered or released, not done in terms of measured. And, you know, I forget what the stat is, but it's something like 80% of the cost of a feature comes after it's released. And especially when you have a team doing something new: hey, once you have something in pilot, some of that team's capacity is going to go towards managing the pilot, listening to users, working with users, triaging bugs, managing bugs. So you lose some of your capacity to do these things, and then you go to production. Okay, now how much of the team's capacity goes to managing that release, collecting feedback, taking things into the backlog? And so I think that's one of the key assumptions, and one of the tough conversations, is, hey, the done column has a ton of stuff that's not yet at the point where we can say, all right, we're comfortable with this, we're really done with it. And I think that's where you can say, okay, we accomplished this goal. Yes, there's a V2 of it, but let's put that V2 back at the beginning and decide whether we start working on it, because there's a momentum to just keep taking user feedback, versus saying, hey, we've satisfied the brief.

Speaker 2:

I run into this a lot with private equity companies, looking at their investment profile with their product teams, and I think that's one of the things that product mindset talks about a lot, which is, a product's never done, but we can finish with a set of goals and get something to stable before we move on. And that's why you can still have milestones, even within a product. I think it's a really important thing for a team to be able to decompose, have that break, have that thought and say, hey, this is in a good spot. Yes, there are going to be parts of it that we're continuously tweaking and optimizing and improving and testing. And again, it varies by the type of product development. You know, I think especially with employee experiences, and I know in the work that you do a lot of it is employee-facing, you've got to take into account continually messing with something and the change management around that.

Speaker 2:

And I worked with one company, and we did a bunch of user interviews, and a user said, hey, I appreciate how much tech you guys are building, but if I don't check my email for a day, I don't know how to do my job when I log in in the morning.

Speaker 2:

That's the sentiment, versus, you know, Amazon's going to do whatever the heck they want, and you're going to buy something the next day and you're probably not going to notice a lot of what they change. So I think that concept of really finishing something, and giving teams the time it takes to complete the golf swing and make sure it meets the original intent, not just the stories that have been accepted, is really key.

Speaker 3:

Yeah, I think at the team level, measured, or some sort of bucket to talk about whether we know this worked or didn't. So proven, or proof, is another one that I've seen. A lot of times when I'm working with leadership teams, executive teams, I'll put communicated at the very end, because we'll be in a meeting, we'll talk about a very hard thing, maybe it happens several meetings in a row, and then finally a decision is made, and in their internal leadership Kanban they'll move it over to done. Are we done talking about this? Yep, yep, all of us agreed, great. But the issue is, who then owns that? And it really should be all of that team. So if there's a column after that saying communicated, and then adopted, like two more columns, then usually there's awareness that it's not done. We call it done for this conversation, but we still have to align ownership or accountability to getting it all the way to the very end.

Speaker 2:

And how do we all support that as well? When you see that, it's almost like you reserve capacity to make sure it gets all the way there, as opposed to just shifting to the next thing. I did an assessment with a health tech company, and one of the things I found was that their teams were using the word done three different ways. There was feature complete at the dev team, there was SIT tested and merged back into the master branch, and then there was FDA certified. So every team had a different definition of done. And as they were finishing their backlog in their project, they were communicating things up to executives on the percentage done, but there was a big backlog of risk of, hey, the features are complete, but it's not hardened, it's not SIT tested, it's not end-to-end tested.
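A minimal sketch of making the three meanings of "done" from that story explicit, so a progress report can't silently mix them; the level names mirror the anecdote, but the structure, feature names, and numbers are assumptions.

```python
from enum import IntEnum

class DoneLevel(IntEnum):
    FEATURE_COMPLETE = 1     # the dev team's "done"
    SIT_TESTED_MERGED = 2    # integration-tested and merged back to the main branch
    CERTIFIED = 3            # e.g. FDA-certified, truly releasable

features = {
    "Dose calculator": DoneLevel.FEATURE_COMPLETE,
    "Audit logging": DoneLevel.SIT_TESTED_MERGED,
    "Patient export": DoneLevel.CERTIFIED,
}

def percent_done(required: DoneLevel) -> float:
    """Report 'percent done' against an explicit definition, not a vague one."""
    done = sum(1 for level in features.values() if level >= required)
    return 100 * done / len(features)

# The same backlog looks 100% "done" to the dev team and 33% done for release.
print(f"Feature complete: {percent_done(DoneLevel.FEATURE_COMPLETE):.0f}%")
print(f"Certified:        {percent_done(DoneLevel.CERTIFIED):.0f}%")
```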

Speaker 2:

And so I think having clarity around what done means, and I think it's okay to have different levels of done for different teams, because some of this stuff is going to be complex and it's not going to take the whole team focusing on it, but that is a complex thing. To your point around communicated and adopted, I couldn't agree more. But, you know, I don't feel it's fair to do a podcast on change without Sarah and Britney, so we'll have to do part two of that when we can invite the two of them back. I think so.

Speaker 3:

I'll go back to one point you had mentioned earlier, around the pilot and how we get better at planning. It's to understand that we don't scale pilots, and so how often is that ever built into planning? Like, hey, we did a pilot, it worked, we validated it. It's like, hooray, pop the confetti, but great, how much work do we have to undo now? Because we can't just build off the top of this, and that needs to go into the planning cycles as well.

Speaker 1:

Thank you for joining us on Build What's Next: Digital Product Perspectives. If you would like to know more about how Method can partner with you and your organization, you can find more information at method.com. Also, don't forget to follow us on social and be sure to check out our monthly tech talks. You can find those on our website. And finally, make sure to subscribe to the podcast so you don't miss out on any future episodes. We'll see you next time.