Technical Debt Ceiling

With all the yammering from politicians about the debt ceiling, this seems like a good time to talk about technical debt.
Technical debt is a metaphor.  Just as each of us incurs financial debt in life (sometimes recklessly, other times prudently), software gets released to production with some corners cut.
The idea that no shortcuts will ever be taken is impractical.  Scrum teams own their solution decisions, including feature coverage and code quality.  Rather than arguing semantics or dogma, teams use tools, apply standards, and work together (through acceptance criteria, pair programming, and code reviews) to release the best quality at the right point in time.  Here's a matrix to help frame a view of technical debt:
[Image missing — the matrix appears to be Martin Fowler's technical debt quadrant:]

                 Reckless                           Prudent
  Deliberate     "We don't have time for design."   "We must ship now and deal with the consequences."
  Inadvertent    "What's layering?"                 "Now we know how we should have done it."

We make deliberate, prudent decisions to ship software because we know it will provide value to the business and we want to deliver value as soon as possible. We introduce technical debt inadvertently because sometimes we just don't know what we don't know. (There's no excuse for recklessness; people who don't understand layering die miserable deaths in the cold and get eaten by wolves.)
The key to the metaphor is understanding that, just as financial debt burdens, limits and controls options in life, technical debt burdens, limits, and controls options for product managers and software engineers.  No responsible product manager, engineer, or business process owner would allow a fragile solution to persist for long knowing the associated risks.  A warehouse worker may use duct tape or wire to hold together a shelf or part of a conveyor for a short time (it's cheap and allows work to go on for a short time) but those are only stopgap solutions.

I grabbed this helpful graphic from a post on feature flag management that mentions technical debt in that context:
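The feature-flag connection is worth a moment: a flag left in the codebase after its feature is fully rolled out is a textbook piece of deliberate, prudent debt. Here's a minimal sketch (flag names and helpers are hypothetical, not a real flag-management library):

```python
# Minimal feature-flag helper. Every flag that is permanently "on" is
# technical debt: dead branches that must be maintained until the flag
# and its old code path are removed.

FLAGS = {
    "new_checkout_flow": True,   # fully rolled out; the flag is now debt to pay down
    "beta_search": False,        # still gating unreleased work
}

def is_enabled(flag_name: str) -> bool:
    """Return the flag state, defaulting to off for unknown flags."""
    return FLAGS.get(flag_name, False)

def stale_flags() -> list[str]:
    """Flags that are permanently on are candidates for removal."""
    return [name for name, on in FLAGS.items() if on]
```

Scheduling periodic cleanup of whatever `stale_flags()` reports is one small, concrete way a team pays down debt before it compounds.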

Rally's help section offers this advice on completing work in an iteration:
Why should all defects be closed before a story is accepted?
When creating a definition of Done for user stories, it is recommended that teams require all defects on those stories be closed before accepting them. Since an accepted story represents a completed slice of functionality, any defects attached to that story represent technical debt and should be resolved during the iteration.
It is good practice to test user stories as soon as they are ready. If additional time is needed to resolve a found defect, the team should consider doing so before starting other stories in the iteration. If the resolution of the defect requires a significant amount of time, the team should not release the associated user story, and consider the necessary work as part of the next iteration’s estimation. Overall, it is better to deliver a single completed and fully tested story, as opposed to delivering several incomplete and untested stories after an iteration.
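The acceptance rule above is simple enough to express directly. This sketch shows the Definition-of-Done check — a story may be accepted only when every attached defect is closed (class and field names are illustrative, not a real Rally API):

```python
# Sketch of the rule: all defects on a story must be closed
# before the story can be accepted.

from dataclasses import dataclass, field

@dataclass
class Defect:
    id: str
    state: str  # e.g. "Open", "Closed"

@dataclass
class Story:
    id: str
    defects: list[Defect] = field(default_factory=list)

def can_accept(story: Story) -> bool:
    """Definition of Done: every defect attached to the story is closed."""
    return all(d.state == "Closed" for d in story.defects)
```

A story with even one open defect fails the check — which is exactly the point: an accepted story with an open defect would be technical debt hiding behind a green status.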
Why should all tests pass in an iteration?
In addition to acceptance criteria, test cases represent the overall quality and acceptability of a completed user story or iteration. You can create many types of tests: acceptance, performance, regression, functional, usability, and user interface. All of these types of testing are highly valuable in defining Done for any agile development team. As your team integrates these different tests into its daily routine, it becomes increasingly important that all tests are passing before the end of the iteration. In addition, as many tests as possible should be passing when each story is completed as part of the acceptance process.
As you get closer to the end of an iteration, the fewer tests you have completed, the higher the risk that your commitments will not be met. Incomplete tests represent the risk of hidden defects, which lead to technical debt and can disrupt future iterations.
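That risk signal can be made explicit. Here's a sketch that classifies iteration risk from the share of tests passing — the thresholds are arbitrary illustrations, not a standard:

```python
# Sketch: the fewer tests completed as the iteration closes, the higher
# the risk of hidden defects (and therefore of new technical debt).
# The 0.9 / 0.6 cutoffs below are illustrative assumptions.

def iteration_risk(tests_passed: int, tests_total: int) -> str:
    """Classify iteration risk from the fraction of tests passing."""
    if tests_total == 0:
        return "unknown"
    ratio = tests_passed / tests_total
    if ratio >= 0.9:
        return "low"
    if ratio >= 0.6:
        return "moderate"
    return "high"
```

Even a crude indicator like this, reviewed daily, makes the "we're behind on testing" conversation happen mid-iteration instead of at the demo.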
At the end of the Sprint, as the team reviews the latest work it has completed, it notes areas of the code or product where risk persists or has been introduced, and discusses strategies for reducing risk and managing the overall defect and technical debt level of the product.
If the team seems to be satisfying its current Definition of Done but still finds the defect level climbing after each iteration, perhaps it's time to review and update the Definition of Done in the next retrospective. Agile supports the evolution of the Definition of Done to address the latest learnings and support continuous improvement.
