Sometimes we can’t see the tree for the forest


This time of year many of us are making resolutions. Try something new. Make some improvement. Of course, the biggest challenge is often sticking with the something new long enough to see the improvement.

Take your team, for instance. Over the past year, how many new things did they try? How many stuck? Did any make a positive difference?

If you're struggling with these questions, here's an idea. Simplify the picture. Sometimes we can’t see the tree for the forest. So rather than attempt to understand the impact of multiple new things, try focusing on just one thing for the next month or so.

Have your team pick one agile practice. It can be one that they are already doing or something new. Have everyone pay attention to what difference that one practice makes. What is the team learning through its use? Does it provide new insight into their process?... their product?... the organization?... themselves?

Above all, don't give up too soon. Give the practice a chance... perhaps 2 to 3 Sprints. What changes are happening Sprint over Sprint? The team might be surprised at the difference even one agile practice can make. By focusing on the change, the team is more likely to see the benefit of that one practice and more likely to continue it. If you find this approach effective, add another practice and try that for a few Sprints. Once you understand the trees, the forest will emerge.


The Cone of Uncertainty

I first encountered the phrase “cone of uncertainty” in the mid-1980s while reading Barry Boehm’s book, Software Engineering Economics (1981). In a traditional software project context, the “cone of uncertainty” model showed that the amount of uncertainty in a software project is greatest at the beginning, ultimately converging to zero (0) at project end. In his book, Boehm reveals the magnitude of the uncertainty through research showing that estimates provided at project start are generally off by a factor of four (4). That’s right, a factor of four… and you thought your estimates were bad!

[Image: the cone of uncertainty]
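To make that factor of four concrete, here is a minimal Python sketch. The 100-day figure is a hypothetical number for illustration, not from Boehm's data:

    # A factor-of-4 error band around an initial estimate (hypothetical numbers).
    # Per Boehm's research, an estimate E made at project start may correspond
    # to an actual effort anywhere between E/4 and 4*E.

    initial_estimate_days = 100  # hypothetical initial estimate

    low = initial_estimate_days / 4    # optimistic bound: 25 days
    high = initial_estimate_days * 4   # pessimistic bound: 400 days

    print(f"Estimate: {initial_estimate_days} days; "
          f"plausible actual effort at project start: {low:.0f} to {high:.0f} days")

As the project progresses and more becomes known, that band narrows; the narrowing is what gives the cone its shape.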

This “cone of uncertainty” made sense in a traditional project context. That is, it makes sense if you assume that the scope of what is needed can be determined and fixed upfront. But we have learned that software isn’t actually like that. Let me explain.

How many times have you built exactly what the customer asked for only to hear on delivery that despite the product being what was asked for, it doesn’t meet the customer’s need? The cone of uncertainty described above is premised on the customer being right in their initial ask. Once you deliver what they asked for, you’re done. But what if they’re not right? Pause for a moment and think… in the case of a new product, has the customer ever known exactly what was needed before it was built? Is it even reasonable to expect them to know?

What I’m suggesting is that it is not estimation, nor our inability to estimate accurately, that is the issue. In fact, our focus on estimates and estimation has distracted us from what is more important — that is, building the right thing.

The Cone of Possibilities

If we’ve never built this exact product before, we have no hard data proving that it is the right fit for the customer need. Before something is built, the idea of the thing to be built is simply a value hypothesis. We won’t know if it’s right until the product is built and in use by the customer. The sooner we build something and start generating feedback, the sooner we will have data on whether or not what we are building is right, and, if it’s not, how to change it so that it is more likely to be right.

Building software is analogous to building a new product. Building a new product is a journey from the known to the unknown. Instead of a cone starting wide and narrowing towards a point, building a new product is more like a cone starting at a point and widening out to a myriad of possibilities. A more appropriate model to describe the product evolution is a hurricane forecast cone. In a hurricane forecast graphic, “X” marks the spot where the hurricane is now. That spot, plus where the hurricane has been in the past, is what is known. Where it is going is uncertain. We can model a likely path based on current information, but our ability to accurately predict beyond the immediate future is limited. And the farther out we try to predict, the more uncertainty we encounter. Beyond a certain point, it’s not even worth predicting.

For a new product, we always have a current state of the product — what it is currently — even if that state is that it doesn’t yet exist! That current state is what is known. We won’t know what the next iteration of the product will be until it is actually built. Any number of unexpected things might happen in the interim. For example, the customer changes their mind; the infrastructure on which a new feature depends turns out not to support the feature; a key team member departs; a new law or regulation limits or blocks the deployment of the feature; and so on.

Hurricane forecast models are updated frequently based on new data to improve our ability to react and prepare for the effects of the hurricane. The potential value in saving lives and property and minimizing damage to infrastructure warrants the investment in keeping the model current and useful.

Optimizing to Maximize Customer Value

A product backlog represents a product’s potential future state. Like a hurricane forecast model, the product backlog represents possibilities. The actual track of the product is known only after the fact. The product owner’s decisions about product backlog content and order influence the product’s future direction. With maximizing customer value as the primary optimization goal, the factors influencing the product owner’s decisions in reordering the product backlog and changing its content can be as complex as the factors influencing a hurricane’s direction.

Just like the hurricane forecast model, the product forecast model (the product backlog) must be updated frequently. Interestingly, the myriad possibilities of the future backlog also mean that there is no upper bound to customer value creation. The main part of the product owner’s job is to manage the backlog for this purpose. Unlike in a project, the goal is not to be done, but rather to maximize value to the customer. If we are to focus on building the right thing, the more appropriate model is an open-ended cone rather than a closed one.
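As a playful illustration, here is a minimal Python sketch of one reordering move a product owner might make. The item names, the numbers, and the value-per-effort heuristic are all hypothetical; real product owners weigh many more factors (risk, dependencies, learning value):

    from dataclasses import dataclass

    @dataclass
    class BacklogItem:
        name: str
        est_value: float   # hypothesized customer value (a guess, not a measurement)
        est_effort: float  # rough relative effort

    # A hypothetical backlog; every number here is a value hypothesis.
    backlog = [
        BacklogItem("export-to-csv", est_value=3, est_effort=2),
        BacklogItem("single-sign-on", est_value=8, est_effort=5),
        BacklogItem("dark-mode", est_value=2, est_effort=3),
    ]

    # One possible ordering heuristic: highest hypothesized value per unit of
    # effort first. As feedback arrives, the estimates change and the backlog
    # gets reordered, just like a forecast model being updated with new data.
    backlog.sort(key=lambda item: item.est_value / item.est_effort, reverse=True)

    for item in backlog:
        print(f"{item.name}: value/effort = {item.est_value / item.est_effort:.2f}")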

Have you considered your future product potential, what influences its direction, and how to effectively forecast, model, and communicate its possibilities?


Did you know?
In 1973, Barry Boehm shocked the computer industry by predicting that software costs would outstrip hardware costs.

Agile n+1

My Agile is better than your Agile.

Nah nah, no it's not.

Yes it is!

Is not!

Is too!

Et cetera...

To the point that eventually someone says, "Well, we're beyond Agile anyway. It's so yesterday."

To which I ask, "What part of Agile are we beyond?" If it's dogma, immutable bias, or canned solutions, I say good riddance for a' that! (with apologies to Robert Burns). They weren't consistent with Agile in any case.

Agile, in the Manifesto for Agile Software Development sense, is and always has been about a mindset. That mindset is expressed through 4 value statements and 12 principles. If you haven't checked them out in a while, go ahead and refresh your memory (http://agilemanifesto.org). I'll wait...

Good, you're back. As you may have noted, as expressed in the "Manifesto", Agile is not a methodology or a specific set of practices. It's not that simple, and we haven't begun to get it even slightly right in most implementations. Face it. There's a lot of bad Agile out there.

So let's stop these Agile 2.0, Agile 3.0, Agile n+1.0 escalations and brash, sword-rattling posturing about abandoning Agile for the next bright shiny object and get on with it. There's still a lot of work to do.

Agile:

    think,
    understand,
    act;
    quickly;
    repeat.

An argument for a look ahead


Retrospectives are great, no doubt about it. Retrospectives, along with the experiences that we are retrospecting, are the fuel for our future improvement. But what about when we are just starting out? Is there a place for a retrospective before we begin?

By definition, retrospectives are a look back. When we are just starting out, we don't have a back to look at. We only have our way forward. But which way forward... and how forward? Even with agile and its short iterations, we typically wait to retrospect until the end of an iteration. (I know we don't have to wait until the end, but that is the way it is normally played.)

What if we started with something like a retrospective at the beginning? Only we couldn't call it a retrospective because it wouldn't be a look back now, would it? What if we started with a "prespective", a before look, a sort of "look before you leap" or "begin with the end in mind"? What difference might a prespective make?

Before diving into the possibilities, let's define prespective.

Prespective |prē-spĕk'tĭv|
adj.
1. Looking ahead
noun
1. A forward-looking activity in which future possibilities are considered with the intent to influence focus or direction

Sounds a bit like a kickoff or a launch, doesn't it? However, kickoffs and launches tend to focus on the thing that we're creating rather than the process by which we will create it. Instead of looking at the what, a prespective would look exclusively at the how, and as we do with the what, we would also ask the question "why?" In fact, the question "why" is at the root of both what and how, if you think about it.

Now, there might be an objection here that good kickoffs do include the consideration of how and why. But seriously, when was the last time you experienced a truly good kickoff?... and one where there was an intentional pause to consider improvements to the current process before the start? This part is key. It must be intentional, and it must be a pause, not just lip service or jumping on the bandwagon of the latest improvement fad.

We are talking about the gap between where you are and where you want to be with your process (your how). The prespective is a serious consideration of the improvements that you wish to see and why those improvements are important.

Knowing why facilitates buy-in

Now, to what difference a prespective might make... if you know why the improvements are important, it becomes easier to get buy-in from stakeholders. Stakeholder buy-in to the reason for a change is key to successful change. 

Vision creates focus

Knowing the reason for a change also makes it possible to paint a vision of a better future that will result from the change. This vision creates focus for the team.

Defining success helps us to know when we’ve reached the goal

A vision also helps in defining success. The definition of success answers the question, "How will we know when we've got there?"

Metrics facilitate inspection

A definition of success also helps in identifying meaningful metrics to measure progress toward, and the ultimate realization of, the vision.

At its core, a prespective simplifies the problem space. By identifying a strategic improvement target, the prespective narrows the possible solution sets. These constraints on the problem space actually make it easier for the team to construct experiments for improvement. In other words, guided by the prespective, the team designs experiments intended to move the needle in the right direction.

Hypotheses can be wrong!

Now for a word of caution: always remember that a prespective results in a hypothesis based on a perceived improvement opportunity. The perception of the "right" improvement target is colored by past experience, beliefs, bias, preferences... after all, the hypothesis is created by people, who, by their nature, are imperfect. If we were perfect, we would already have an optimal process. So, the hypothesis might be... wrong!

Learning loops enable new hypotheses

Given that we might have picked the wrong target, we now need to make sure our process includes a couple of learning loops — one that assesses the impact of our process-improvement experiment on the intended target... and a second loop to reconsider the validity of the target hypothesis. These looks at the results of our experiments?... yes, you've got it, these looks are what we call... retrospectives. And so we begin. Are you ready?
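For the mechanically minded, here is a toy Python sketch of those two loops. Everything in it (the target, the candidate changes, the impact numbers) is hypothetical; it is a picture of the idea, not a process to follow:

    import random

    def run_experiment(change):
        """Inner loop: try one process change and observe its effect (simulated)."""
        return random.uniform(-1.0, 1.0)  # simulated movement of the target metric

    target = "reduce cycle time"  # the improvement hypothesis from the prespective
    experiments = ["limit WIP", "smaller stories", "pair on code reviews"]
    misses = 0

    for sprint, change in enumerate(experiments, start=1):
        impact = run_experiment(change)

        # Loop 1: retrospective on the experiment. Did it move the needle?
        print(f"Sprint {sprint}: '{change}' -> impact {impact:+.2f} on '{target}'")
        misses = misses + 1 if impact <= 0 else 0

        # Loop 2: retrospective on the hypothesis. Is the target itself still right?
        if misses >= 2:
            print(f"Repeated misses... maybe '{target}' is the wrong target.")
            break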