Sprint KPIs


Here’s the Jeff Sutherland and Scott Downey document on which we based our Backcountry.com version of Scrum KPIs (I worked on this with the PMO at Backcountry, including Michael Burleson and other project managers and scrum masters):

Overview

A set of Key Performance Indicators (KPIs) provides Product Teams with a standardized set of metrics to help gauge their success at delivering value and meeting commitments each Sprint.
What are the metrics?
  1. Focus Factor - Ratio of Story Points accepted to the team's work capacity in the Sprint
  2. Adopted Work % - Ratio of Story Points brought into the Sprint after the original planning session to the Story Points committed to in the original planning session.
  3. Found Work % - Ratio of incremental Story Points discovered through learning during the Sprint to the original commitment for the Sprint.
  4. Commitment Accuracy % - Ratio of the original commitment to the total commitment (the original commitment plus Adopted and Found Work).
  5. Work Accepted % - Ratio of accepted Story Points at the end of the Sprint to the original commitment at the beginning of the Sprint (see the calculation sketch after this list).
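To make the arithmetic behind these ratios concrete, here is a minimal Python sketch of how a team might compute them for a single Sprint. The function name, parameter names, and the choice to express work capacity in Story Points are illustrative assumptions, not definitions taken from the Sutherland/Downey document.

    def sprint_kpis(original_commitment, adopted_points, found_points,
                    accepted_points, work_capacity):
        """Compute the five Sprint KPIs (as percentages) for one Sprint.

        All inputs are Story Point totals; work_capacity is the team's
        capacity for the Sprint, expressed in Story Points (an assumption
        made for this sketch).
        """
        total_commitment = original_commitment + adopted_points + found_points
        return {
            # Accepted Story Points vs. the team's work capacity
            "Focus Factor %": 100 * accepted_points / work_capacity,
            # Work brought in after planning vs. the original commitment
            "Adopted Work %": 100 * adopted_points / original_commitment,
            # Growth of committed Stories during the Sprint vs. the original commitment
            "Found Work %": 100 * found_points / original_commitment,
            # Original commitment vs. total commitment (original + adopted + found)
            "Commitment Accuracy %": 100 * original_commitment / total_commitment,
            # Accepted Story Points vs. the original commitment
            "Work Accepted %": 100 * accepted_points / original_commitment,
        }
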
What’s the purpose?
  • Contribute to the team’s continuous improvement efforts to achieve higher levels of productivity
  • Provide a normalized set of metrics for assessment of enterprise Agile performance
How can a team use them?
  • At the beginning of the Sprint Retrospective, summarize team performance and highlight areas of strong and weak performance. Specifically:
    1. Enter the value of work completed on User Stories In Progress and Completed (but not Accepted) at the end of the Sprint, in the Effort Not Accepted field
    2. Review Story Point estimates for all work (Accepted, Complete, In Progress) and re-estimate (revise up only, not down), if necessary, based on improved knowledge.
  • Provide input to the discussion portion of the Sprint Retrospective.
How should they not be used?
  • Any way that violates Agile Principles and Scrum Values (top-down mandates; focus on individual instead of team performance; manipulating data, etc.)

How can these metrics facilitate discussion in the Retrospective?


First and foremost, these metrics are best used by looking at them over time.
  1. Is the team trending in a certain direction over time for each metric?
  2. Are there good explanations for abnormal "spikes" for a given metric?
Some value can be derived by discussing the "whys" behind a given iteration's numbers, but the aggregate of a number of iterations will tell a more complete story.
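A lightweight way to surface those spikes is sketched below; the 20-point threshold and the list-of-values representation are assumptions for illustration, not anything prescribed by the KPI document.

    def flag_spikes(history, threshold=20.0):
        """Flag Sprints whose KPI value deviates from the running average
        of all prior Sprints by more than `threshold` percentage points.

        `history` holds per-Sprint values for one KPI, oldest first.
        Returns (sprint_index, value, running_average) tuples for review.
        """
        spikes = []
        for i, value in enumerate(history[1:], start=1):
            running_avg = sum(history[:i]) / i
            if abs(value - running_avg) > threshold:
                spikes.append((i, value, running_avg))
        return spikes

    # Example: Work Accepted % over six Sprints; the fourth Sprint is flagged.
    print(flag_spikes([85, 90, 88, 55, 87, 91]))

The point is not the math itself but prompting the "why" conversation about the flagged Sprints in the Retrospective.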
Focus Factor: If this ratio is low (below 80%), it indicates that the team has work capacity that, for some reason, is not being translated into accepted Stories.
  1. Are Stories not granular enough? How might they be broken down into smaller, but still testable, increments?
  2. Is the team trying to work on too many separate things at once? Are there opportunities to "swarm" Stories with more of the team's resources at once?
Adopted Work %: If this ratio is consistently above 20%, the team is struggling with changing requirements during the Sprint.
  1. Are items coming into the Sprint after the planning session agreed to by the entire team?
  2. Can the items coming into the Sprint after the planning session legitimately not wait until the next Sprint?
  3. For items that come into the Sprint after the planning session, are an equal number of Story Points removed from the Sprint to compensate for the added work?
  4. Is there anything that can be done to limit these disruptions?
Found Work %: If this ratio is consistently above 20%, it is an indication that the team is having issues with reliably assigning accurate Story Point estimates.
  1. Do the Stories that "grow" tend to be of a certain size (typically on the large end of the spectrum)? If so, maybe they are too complex and need to be broken down into sub-Stories.
  2. Are the Stories appropriately groomed? That is, does the team really take the time to think through the requirements and the high-level steps necessary to properly assign a relative Story Point estimate?
  3. Is it your Done Criteria, or lack thereof, that is causing the team to underestimate what it will really take to complete a Story?
  4. Are you trying to equate story points to anticipated hours of effort rather than evaluating a story's complexity holistically against other "like" Stories?
Commitment Accuracy %: A target for this ratio is around 80%. You want your original commitments to routinely be representative of the total commitment for the iteration. However, if it is close to 100% on a consistent basis, it could indicate that the team is over-investing in grooming/estimation (suppressing "Found Work") or being too rigid in not allowing an iteration to "bend" based on emergent business needs (Adopted Work). Typically, this is not a problem.
  1. Is it Found Work, Adopted Work, or both that is throwing off your Commitment accuracy?
Work Accepted %: This is the primary measure on which most teams base the "success" of their sprint. Did you complete all (or most) of the Story Points that you committed to for the Sprint?
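As a purely hypothetical example, with numbers invented for illustration and reusing the sprint_kpis sketch from earlier: suppose a team with a work capacity of 45 points commits to 40 points at planning, adopts 6 points and finds 4 points during the Sprint, and ends with 38 points accepted.

    kpis = sprint_kpis(
        original_commitment=40,
        adopted_points=6,
        found_points=4,
        accepted_points=38,
        work_capacity=45,
    )
    for name, value in kpis.items():
        print(f"{name}: {value:.0f}%")
    # Focus Factor %: 84%
    # Adopted Work %: 15%
    # Found Work %: 10%
    # Commitment Accuracy %: 80%
    # Work Accepted %: 95%

With these invented numbers the team lands near the targets discussed above: Focus Factor above 80%, Adopted and Found Work below 20%, and Commitment Accuracy right at 80%.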

Here’s a snapshot of a Google doc used to keep track of a team’s performance over time:
