In this first of two articles exploring the link between quality and collaboration, we will look at how we define quality and explore its relationship with value. To start off, if we’re going to discuss quality, we’d do well to agree on what it is, why we care about it, how we measure it and, finally, how we assure it.
So, let’s look at how we define quality. One definition might be “conformance to requirements”; another, “fit for purpose”. But conformance to which requirements? And fit for whose purpose?
Quality and value
Maybe there is a requirement that a product must be robust. Clearly, the context will determine the degree of robustness that counts as high enough quality - crockery can probably afford to be more fragile than a pacemaker.
This brings us to who defines the requirement and to whom the quality matters. In software development, quality will matter to the business, the business’s customers, system users within the business, legislators and probably others. That’s quite a variety of quality definitions, but it’s important to meet the criteria demanded by each of these stakeholders because, in the end, it comes down to value.
It may be that high quality to technical people, for example, means that the product costs less of their time and budget to fix. For the business, it may be about preserving reputation or avoiding fines or building customer loyalty. In one way or another, all of this can normally be expressed in dollars. In other words, quality is connected in some way to value.
When it comes to perspectives on quality and value, the business will be driven by business goals and strategy while developers may look to the quality of the software against criteria determined by ISO (see ISO 25010). Meanwhile, customers are seeking fit for purpose at the right price (which again is an issue of value) and business system users want something that is correct, usable and timely.
The ISO product quality measures come under a number of different headings, such as functional suitability, compatibility, reliability and many more. But, you could meet all of these criteria and still not have a product that is of value to someone.
A quality product
To illustrate this dynamic between a product’s quality and its value, let’s take a look at the yodelling pickle. The yodelling pickle is:
- Fit for purpose
- Easy to use
- Ergonomic design
- Beautifully crafted
- Best in class
- Highest safety standards
- Built to last
- Guaranteed for life
It seems to meet many common quality standards. It’s certainly fit for purpose, assuming the purpose is to look like a pickle and yodel. It does look easy to use, with a comfortable-grip shape and an easy-access switch, which I suppose is part of its ergonomic design, and I wouldn’t argue that it isn’t beautifully crafted and portable.
Best in class? More likely, it’s the only one in its class. I don’t know if there are any other yodelling pickles - or similarly vocal vegetables, for that matter.
Built to last? I’m not sure how long you’d want it to last, but it is guaranteed for life. (I’ve noticed a lot of products carry that guarantee nowadays, but I’m never really sure whose life.) And finally, this pickle has been authenticated and has a certificate to prove it.
It does seem to be a high-quality product when evaluated by many common criteria, but I don’t know how much you’d be willing to pay for, or how much you’d value, a yodelling pickle. Maybe you’re a fan.
Either way, I hope I’ve made a convincing case for the link between what we call quality and value.
The cost of quality
Quality can be measured in a number of different ways and that can make it hard to define. But value may provide a more usable definition. Here’s a way of measuring value that I’m fond of:
Value equals the perceived cost of the problem (or the value of the opportunity) minus the perceived cost of the solution. For example, if the cost of the problem is $1,000 and the cost of the solution is $500, then there is $500 of value for the organisation, or whoever is trying to solve the problem.
Note that the calculation says “perceived cost” - it doesn’t have to be money. It’s my perception of those two things that determines the value. I may value how (I think) I’ll look in a new pair of jeans far higher than the $100 they cost.
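As a minimal sketch, the value calculation above can be written in a couple of lines of Python (the function name is mine, not from any standard library, and the figures are the ones used in the example):

```python
def perceived_value(cost_of_problem: float, cost_of_solution: float) -> float:
    """Value = perceived cost of the problem (or value of the
    opportunity) minus the perceived cost of the solution.
    Both inputs are perceptions, so they needn't be literal dollars."""
    return cost_of_problem - cost_of_solution

# The example from the text: a $1,000 problem and a $500 solution
# leave $500 of value for whoever solves the problem.
print(perceived_value(1000, 500))  # 500
```

The point of keeping it this simple is that either input can be swapped for a non-monetary perception (reputation, loyalty, how I think I look in jeans) without changing the shape of the calculation.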
We can also measure the value of quality by the cost of not paying attention to it. This can be demonstrated by a couple of real-world examples: the Intelligent Deposit Machines (IDMs), which allowed customers to deposit cash into a bank account, and the cost of a slow response.
Example 1: IDMs
In Australia, financial legislation says that any cash deposit of $10k or more must be reported to an organisation called Austrac – it’s a part of the Anti-Money Laundering and Counter-Terrorism Financing laws.
However, the IDMs, introduced in 2012 by a leading bank, allowed deposits of up to 200 notes of any denomination - 200 $50 notes total $10k and 200 $100 notes total $20k. Moreover, if you were into money laundering, you could have done this a number of times, at different machines, on the same day. And you didn’t even need to identify yourself to deposit the money into the account of your choice.
Between 2012 and 2015, there were 23,506 notifiable deposits ($10k or more), yet no one was notified because the IDM wasn’t intelligent when it came to informing Austrac. The fine per infraction is $18m. Multiply that by the 23,506 times it happened and you have a $423 billion (plus) cost of inadequate quality. Unsurprisingly, the bank that operated the IDMs was pretty happy when it managed to negotiate a final settlement of $700 million plus costs.
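For anyone who wants to check the back-of-the-envelope exposure figure, the multiplication is straightforward (variable names are mine; the two inputs are the figures quoted above):

```python
# Theoretical exposure from the unreported deposits described above.
infractions = 23_506                # notifiable deposits, 2012-2015
fine_per_infraction = 18_000_000    # $18m maximum fine per infraction

exposure = infractions * fine_per_infraction
print(f"${exposure:,}")  # $423,108,000,000 - just over $423 billion
```

Set against the eventual $700 million settlement, the gap between theoretical exposure and the negotiated outcome is enormous, but even the settlement dwarfs what it would have cost to get the reporting feature right.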
Example 2: Slow response
The second example of the cost of quality is online shopping. Kissmetrics found that 47 per cent of people expect a site to load in under two seconds, and 40 per cent will abandon it entirely if it takes more than three seconds. Eighty-five per cent of internet users expect a mobile site to load as fast as, or faster than, on their desktop, which is quite an ask.
It is also interesting that over 70 per cent of people who do get fed up with waiting and leave a site are likely to tell their friends.
Hence, companies like Amazon pay attention to this. Amazon calculated that it would lose $1.6 billion in revenue per year for every extra second a page takes to load.
What this shows, of course, is that quality, or the lack of it, can have a very direct relationship to costs.
Adjusting the scope
Which is why, in Agile, quality is fixed. Traditional Waterfall projects, which run a Specify, Design, Build, Test and Deliver lifecycle, fix the scope at the beginning. That is to say, specification - the elicitation of requirements - continues until we have gathered all of the requirements and everyone has agreed that those are the things they want. That spec becomes fixed, so later, if anyone wants to change their mind, they have to raise change requests and pay extra for the variation.
What is variable are the things estimated against that fixed scope, and this includes quality. If we’re running out of time, we’ll speed up a bit (and reduce the quality) to get the scope done.
Agile turns that on its head. The fixed items are the time we’re going to take, the quality criteria we’re going to meet and the money we’ll spend. These are stable, so it’s the scope that will vary. If the business changes its mind about the requirements, we don’t cut the quality to deliver the scope; we de-scope items to keep the quality. That is, we have a discussion with the business about removing some less important requirements to meet their changing needs.
We’re performing the same tasks in Agile that we were doing in the Waterfall approach, but they’re happening at a different time and cadence. We still have to define what’s required, design it, build it and test it, and we still have the business accept it.
The traditional method is to do each of those things once, as a single step in the process. In the Agile world, each of those things is done in short iterations - in Scrum they’re called Sprints. Coders and testers, product owners (POs), business analysts (BAs) and the business get together to understand and define what the solution should look like, and then the development team (coders, testers, POs and BAs) meets to agree on the details and design.
When we start the build (producing small increments of software which slowly become integrated into the bigger product) everyone comes together again to look at the way it’s being built and decide if it’s doing what they want it to do.
Testers get involved in all of these conversations, which has the effect of shifting quality earlier in the project and, because testers demand specificity, getting people to think (unambiguously) about what they want to build well before building it. This is known as “shift left”.
Finally, the business and POs accept what comes out at the end. Because they’ve been involved throughout the process, there shouldn’t be any of the big (unpleasant) surprises like those that (in my experience) often happen at the end of a 12-month traditional project, when the business sees what it’s getting and decides it doesn’t like it.
In Agile, the business stakeholders get the things they want most first, so they’re able to try them out and respond. If necessary, they can change their minds in collaboration and agreement with all the other stakeholders.
In part two of this article, we’ll have a look at that collaboration, what it is, what it isn’t and why it will impact quality and value in such a positive way.