Thursday 3 August 2017

Escaping the Velocity Trap



I was speaking at a conference recently, and during the Q&A an audience member asked a question that I hear a lot: what should an Agile organization measure, and where should they start? My answer: start by measuring customer outcomes. I could tell this wasn’t quite what he expected, because he said, “Well, sure, that’s nice, but that’s really hard. What about stuff like velocity?” My answer was still the same, but to shed a different light on the problem I asked: “What’s better? A hundred story points in the wrong direction, or one story point in the right direction?”


Output matters, but only when delivered outcomes are right


The truth is, worrying about velocity is a trap: it says “we don’t care where we end up, so long as we get there fast.” That’s just wrong. Teams who measure their velocity but don’t or can’t measure customer outcomes may, quite simply, be driving in the wrong direction. When I talk to teams about this, they have a lot of reasons why measuring customer outcomes is very hard, and they are right – but if you can’t tell whether you’re delivering something valuable, you might be wasting your time.


The root of the problem is that most requirements are wrong


Measuring velocity would be the right thing to do if you could be sure that you’re building the right thing. Most teams think they have sidestepped the problem by claiming that the Product Owner decides whether a Product Backlog Item is correct or not. And this is true – except that Product Owners are not somehow magically omniscient; they have the same confirmation biases the rest of us have.


The problem has been researched in a number of studies by Ronny Kohavi and his colleagues.[1] In their long-term study of ideas and their impact on business results, they found that only a third of ideas produced positive results, another third resulted in no change, and the final third actually made things worse. Features get implemented but are never used, or, when they are used, they require substantial rework to get right.


One of the philosophical ancestors of Agile delivery approaches was the Toyota Way, which identifies 8 types of waste; at the top of the list is overproduction, producing items for which there are no orders.[2] Requirements that are never used, or that don’t deliver a desirable result, are a normally invisible form of unsellable inventory: a form of waste.


Focus on Outcomes, not Outputs


Velocity measures output: how much work a team produced. Except that it doesn’t really measure useful work, just that the team did something. Relying on the Product Owner or stakeholders to tell the Development Team that the work was useful might seem like a solution, but they are usually the source of the PBIs, and they wouldn’t have proposed them if they didn’t think they were useful. Sprint Reviews are necessary, but not sufficient.


Years ago I was in charge of a product development group consisting of many teams and several products and Product Owners. The teams worked hard on a release, and we aggressively pushed to deliver what we thought customers wanted. Shortly after the release I was meeting with some of our larger customers, and I asked them what they thought of it. The answers varied, but the general theme was that they didn’t think we had done very much.


I was stunned, more than a little confused, and frustrated. We thought we had listened to our customers and delivered what they wanted. My experience in this is not unique – nearly everyone I know who has had responsibility for delivering products has had a similar experience. I thought about this a lot, and I finally realized what had happened – we had delivered a lot of features to those customers, lots of output, but we hadn’t really improved the outcomes that they experienced.


Making hypotheses explicit helps


As I learned when I talked to customers, you can’t see any of this until you start to measure customer experiences. When you do start to measure experiences, the light bulb goes on, and it changes the way you look at the world. Every Product Backlog Item is really just a theory about how you are going to make someone’s life better, and your life gets easier when you state the PBI as a hypothesis, not a statement of fact.


Jeff Gothelf and Josh Seiden’s book Lean UX is a great resource for shifting your mindset to think about outcomes and hypotheses.[3] Among other insights, they offer a different format from the typical user story for capturing PBIs:


We believe that we will achieve [business outcome] if this user [persona] can achieve [user outcome] with this feature [feature].


The important difference from the typical user story is that the belief is made explicit, as is the business outcome that will be achieved when the user has a particular experience. One thing this statement doesn’t include, but which should be captured along with it, is how you will measure the user outcome.
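To make this concrete, here is a hypothetical example for an onboarding feature (the persona, outcomes, and success measure are invented for illustration): We believe that we will achieve [reduced support call volume] if this user [a first-time account administrator] can achieve [completing account setup without assistance] with this feature [a guided onboarding checklist]. The measure, captured alongside the hypothesis, might be the percentage of new accounts that complete setup without opening a support ticket in their first week.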


“Done” really has to mean “in use”


It’s not enough to say that something is “done” because it’s been tested, or because the Product Owner has agreed that it’s done, or even because stakeholders agree that it’s done: everyone may be mistaken in their assumptions about what customers really need. It’s not even enough to say “done” means “deployed to production,” because we can’t test our assumptions until someone actually uses the product increment; no one receives any value until it is being used. This is a very high standard for most teams, and it will take a lot of work to get there, but until everyone understands that only outcomes matter, nothing will change.


Why Lead Time matters


When Development Teams, Product Owners, and stakeholders start seeing what’s being used and how, their behavior changes. They start focusing on forming and testing hypotheses and learning fast. I talked to a lot of organizations when I was an analyst with Forrester Research, and the pattern that emerged was that once lead time (the time between when an idea is conceived and when you can measure its results) drops below 3-4 weeks, people start working more experimentally. With anything longer, they never get around to working differently; by the time they get the answers, they have forgotten the question.
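Lead time in this sense is easy to track once you record two timestamps per idea. Here is a minimal sketch in Python (the idea names and dates are invented for illustration):

    # Lead time: elapsed time from when an idea is conceived to when
    # its results can first be measured. All data here is hypothetical.
    from datetime import datetime

    ideas = [
        {"name": "guided onboarding", "conceived": datetime(2017, 6, 1),
         "results_measured": datetime(2017, 6, 26)},
        {"name": "bulk export", "conceived": datetime(2017, 3, 10),
         "results_measured": datetime(2017, 7, 14)},
    ]

    for idea in ideas:
        weeks = (idea["results_measured"] - idea["conceived"]).days / 7
        verdict = "short enough to experiment" if weeks <= 4 else "too long"
        print(f"{idea['name']}: {weeks:.1f} weeks ({verdict})")

Once this number is visible per idea, teams can watch it fall as they shrink their batch size.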


The smallest possible release: one outcome, one persona


Using Lean UX vernacular, the smallest release that is worth measuring, or even releasing for that matter, is a single improved outcome for a single cohesive set of users (represented by a persona). Anything more than this increases your lead time by delaying the benefits that a more narrowly focused set of users could already be experiencing. And anything less than this is simply waste: a bunch of features that don’t improve anyone’s outcomes, and a lot of effort for nothing.


A lot of leaders in product delivery organizations want faster delivery, but they are still stuck in a “big release” mindset, and they spend a lot of time and money trying to speed up the delivery of big releases. The fastest way to improve lead time is to reduce the size of the release, or the batch size, in Lean vernacular.


The smallest meaningful batch is one outcome for one persona. The sooner organizations shrink their releases to this size, and the sooner they start measuring outcomes, the sooner they find out that most of what they thought they needed to deliver isn’t needed, while some things they didn’t even think of are really essential.


Their situation is much like the one faced by John Wanamaker (1838-1922), a successful merchant, who observed:


“Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”[4]


Reducing Lead Time and release size by focusing on one outcome for one persona gives them the rapid visibility that they need. One outcome, one persona is really the minimum viable release.


If you can’t maximize Outcomes, maximize Learning


Sometimes you can’t tell what the desired outcome is, but you can form hypotheses about what you think customers want and then test them. The faster you can do this, the faster you will learn and the sooner you will be on your way to delivering something of value.
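In practice, testing a hypothesis often comes down to a simple controlled experiment, as in the research cited above.[1] Here is a minimal sketch of evaluating one in Python, using a two-proportion z-test on invented conversion counts (a real experiment also needs up-front sample-size planning):

    # Compare conversion between the existing experience (control) and a
    # hypothesized improvement (variant). All counts are hypothetical.
    from math import sqrt
    from statistics import NormalDist

    control_users, control_successes = 5000, 400
    variant_users, variant_successes = 5000, 460

    p1 = control_successes / control_users
    p2 = variant_successes / variant_users
    pooled = (control_successes + variant_successes) / (control_users + variant_users)
    se = sqrt(pooled * (1 - pooled) * (1 / control_users + 1 / variant_users))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    print(f"control {p1:.1%}, variant {p2:.1%}, p-value {p_value:.4f}")

A p-value below your chosen threshold suggests the variant genuinely changed the outcome; otherwise, you have learned that the idea probably didn’t help, which is valuable too.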


In reality, we should worry less about continuous delivery and more about continuous learning. Delivery is essential to learning quickly, but learning is the real goal. Customers’ needs are always changing, and the need to learn, to inspect and adapt, never really goes away.


How to get started


It’s easy to say “measure outcomes, not outputs”, but doing so means lots of small changes, and even some dramatic ones, by lots of different people.


  • Product Owners must state PBIs in terms of outcomes and success measures. This helps everyone, development team and stakeholders, because it more clearly describes the goals that everyone needs to achieve. The discussions that result are healthy and invigorating, and they help everyone be more creative in coming up with solutions.

  • Product Owners must set Sprint Goals in terms of outcomes and learning. Achieving a particular customer outcome is best, but when you don’t have enough information for that, making the goal to learn specific things about the customer will help you to converge on delivering the right solution.

  • Product Owners must scope releases in terms of outcomes and personas, the fewer the better. This will reduce the complexity of releases, give everyone clear targets, and reduce batch size which reduces lead time.

  • The Development Team and Operations must work together to create a reliable delivery pipeline. Smaller and more targeted releases mean that organizations will need to release products more frequently. Developers and Ops need to get really good at releasing reliably all the time.

  • Developers need to create reliable automated regression tests for legacy applications. Every application today is connected to lots of other applications, and fear of breaking something often prevents organizations from releasing quickly because they know they can’t adequately test every application manually. Automating regression tests takes time, but measuring outcomes helps: if you instrument your code to identify dead code that never runs, you can identify a lot of regression testing that doesn’t need to be done (see the sketch below).
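That last point deserves a concrete illustration. Here is a minimal sketch of the instrumentation idea in Python; a real system would use a production-grade coverage or telemetry tool, and the function names here are invented:

    # Count production invocations per function so that code which never
    # runs can be identified later. Everything here is illustrative.
    from functools import wraps

    call_counts = {}

    def instrumented(func):
        """Register a function and count how often it is invoked."""
        call_counts[func.__qualname__] = 0
        @wraps(func)
        def wrapper(*args, **kwargs):
            call_counts[func.__qualname__] += 1
            return func(*args, **kwargs)
        return wrapper

    @instrumented
    def export_report():   # imagine a legacy feature of unknown value
        return "report"

    @instrumented
    def import_report():   # imagine a feature still in daily use
        return "imported"

    import_report()

    # After running in production for a while, zero-count functions are
    # dead-code candidates whose regression tests may be unnecessary.
    dead = [name for name, count in call_counts.items() if count == 0]
    print("never run:", dead)  # -> ['export_report']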


[1] Ron Kohavi, Roger Longbotham, Dan Sommerfield, and Randal M. Henne, “Controlled Experiments on the Web: Survey and Practical Guide,” Data Mining and Knowledge Discovery, 2009. https://ai.stanford.edu/~ronnyk/2009controlledExperimentsOnTheWebSurvey.pdf


[2] Jeffrey Liker, The Toyota Way, McGraw-Hill, 2004, pp. 28-29.


[3] Jeff Gothelf and Josh Seiden, Lean UX, O’Reilly Media, 2016.


[4] “Half the money I spend on advertising is wasted; the trouble is I don’t know which half,” B2B Marketing. https://www.b2bmarketing.net/en/resources/blog/half-money-i-spend-advertising-wasted-trouble-i-dont-know-which-half


