
Quality Doesn’t Just Happen

5 Feb 2017
Abstract

All projects work within the confines of schedule and budget, but too many sacrifice quality practices on the assumption that the shortcuts will allow them to release sooner and at less cost. This article discusses why that assumption is untrue. As software testers, we know there are many good and bad practices in software projects. This piece looks at these practices and boils them down to the do’s and don’ts of software development, with a particular focus on developing a quality product. Through the list of 13 don’ts, common project problems are examined and their downstream effects explained. It’s not all negative though: for each don’t, there’s at least one do. Good project management means limiting the don’ts and concentrating on the do’s. The article also includes a cost of quality example that demonstrates the value of early, effective testing and the considerable savings of building a quality product right from the start.

We all know there are lots of do’s and don’ts in software development projects. Yet, despite knowing this, we still find projects running out of time and over budget, releasing with poor quality and not meeting the needs of the users. Why does this continue to happen? One reason is that there is still a large community of people who believe that quality will just occur as a natural by-product of the analysis, development and testing process, or that switching to a different software lifecycle methodology will produce it automatically. But it doesn’t. Quality has to be consciously built in and maintained throughout the project, and by everyone involved in it.

Some practices are known to lead to higher quality. Some are known to lead to quality shortcuts that ultimately result in a poor quality product that is difficult to maintain. Let’s take a closer look at the good and the bad practices, also known as the quality do’s and don’ts.

It’s always easier to identify poor practices because their results are usually obvious. Good practices are subtler and often go unnoticed because of the expectation that quality will just automatically happen. It’s an old joke among software testers that a good release is always credited to the developers, whereas a bad release is the fault of the testers. Why is this? It’s because quality is “expected”. If it’s there, then the developers obviously did a good job and everyone else was probably unnecessary. If it’s not there, it’s the fault of the gatekeepers: the testers, who let the poor quality escape to production. This is patently unfair, yet a common perception.

It's true that quality in software development projects doesn't just happen on its own. Quality usually doesn't happen when the project depends on a small group of heroes to ride in on their white horses and wave their shiny swords to vanquish the problems. Quality happens only when careful planning is done, when the entire project team maintains a quality-conscious approach every step of the way, and when problems don't escape from the phase in which they were introduced. A quality product is a team effort. It's planned and predictable. It's without heroes, and it's faster and cheaper than a low-quality effort. So why is it so hard to produce a quality product?

Let’s look at the problems (the “don’ts”) that can put a project into a quality crisis and the lessons (the “do’s”) that should be learned from these bad practices.

Don’t #1 - Don’t reward for shipping on schedule. Anyone can ship garbage. It is common to rush a project, to work Herculean hours and to push to ship on time. But shipping a product before it’s ready just defers the quality problem to the maintenance phases, where it becomes more expensive and more time consuming to rectify the issues. In fact, it often happens that the project team has disbanded and the maintenance work is pushed onto an unsuspecting business-as-usual team.

Do - Base rewards on quality measures. Reward the team based on how the product performs in production. Are the customers happy with it? Does it deliver the proper functionality in the proper way? Are defects found in production that should have been caught during the development phases? Use realistic measures to determine the quality, and any subsequent team rewards.

Don’t #2 - Don’t require and certainly don’t reward heroes. This may sound counter-intuitive, but well planned and well executed projects should not need heroes. Heroes are needed when something has gone horribly wrong and a last-minute save is required. Just as a well-played rugby game should not depend on the last 30 seconds, a good software project should not require Herculean efforts to get it over the line and ready for release. Needing heroes is a bad thing. Rewarding heroes encourages the wrong behaviour.

Do - Fix problems as they occur and work to a realistic schedule. Don’t expect heroes to step in and finally get the test environment configured – that configuration should have been planned and executed as part of normal work, not handled as a crisis when everyone is blocked. Plan ahead, allocate resources as needed, and execute to plan.

Don’t #3 - Don’t recover time by skipping important steps. If a project is running late, skipping the requirements analysis is not going to help. In fact, it’s going to hurt. A lot. The analysis steps in a project are there for a reason. Most projects still fail due to poor requirements (see the 2016 Planit Index), yet this step is still commonly cut short.

Do - Perform good analysis, gain a solid understanding of the customer’s needs and ensure good documentation for every project. If you’re using an Agile model, fine, but you still need to understand what the customer wants and you still need to document epics and user stories, so that the team understands the goals and the product owner knows what to evaluate. Starting without a plan and without an understanding is likely to result in a meandering schedule that burns time and money until someone pulls the plug. Time spent on effective analysis will save time for the overall project and will result in a better, more maintainable product in the end. This has been proven over and over again, yet we still keep assuming our project is different and we can jump straight to coding.

Don’t #4 - Don’t limit stakeholder participation in reviews. Interactive reviews are critical in creating a mutual understanding of the requirements – in whatever form they are presented – whether individual user stories or a monolithic document. When requirements are sent out via email and questions/responses are requested, this chance for interaction and discussion is lost. It’s even worse if the development team has already started coding. It’s a lot easier to code the right thing from the start than to change code that was never designed to fulfil the user’s needs.

Do - Have an interactive session to review the requirements and solicit input. The test team needs to understand the requirements in order to plan the testing. The developers need to understand what the customer really wants. The customer or product owner needs to understand what they will be getting. Above all, they all need to talk. It's easy to ignore documents that are sent by email for approval. Not responding doesn’t equal approval; it equals "I didn't have time to read it."

Do - Conduct a cross-functional requirements review. It will ALWAYS save more money by preventing defects than it costs in time and manpower. Think about project failures. How many were due to requirements issues? Most of them? That would be normal, sadly. Put the time into the reviews and end up with good, solid, understood requirements and project goals. Ensure the design will support the requirements. Plan up front for a project that will be successful for a long time. In addition to developers validating the design, let the testers try to develop their tests from the design. If they can’t, something is unclear or incorrect. Use the testers to do what they do best: test! But testing isn’t limited to software; it includes documentation too.

Don’t #5 - Don’t rush to code. Coding should start when the stakeholders have agreed on what is to be developed. This doesn’t mean that a giant requirements document is needed that defines all possible nuances, but it does mean that a common agreement is set and the code that is being developed is based on an agreed approach. Starting coding from user stories lacking acceptance criteria means the developer will be defining what the software will do, not just how it will do it. Defining the requirements, understanding the interfaces, and identifying the unknowns will save developer rework and test blockers.

Do - Wait to start coding until the requirements are stable and understood, or else budget time for subsequent rework. The refactoring that is common in Agile projects should be scheduled, not assumed away. It’s realistic to expect that refactoring will be needed if there are unknowns in the requirements. Schedule realistically, not optimistically.

Don’t #6 - Don’t assume the coding is done just because the developers have run out of time.

Do - Label code as "complete" only when it works. Good unit testing is part of the development effort, not an optional item to be jettisoned when the schedule is tight. This is a critical step in the quality process – ensuring that the code works and meets, at least, the developer’s expectations. This practice is at the root of DevOps and the “shift left” concept, where code is continuously integrated, deployed and tested. This provides prompt feedback to the developer and ensures a continuous level of quality in all components. It also paves the way to smooth maintenance releases with minimal regressions.

Don’t #7 - Don’t wait to start testing until the code is written. If the testers are only engaged after the coding is complete, all they can do is assess the quality (or lack thereof); they can’t contribute to building a better product.

Do - Engage the testers early in the project. They need to see and understand the requirements and participate in the stakeholder reviews and discussions. Test analysis and design should occur very early in the project. As the testers investigate how to test the software, they delve into understanding what the software should do. This analysis and design phase is one of the most effective ways to build better code – through better requirements! Testable requirements help ensure a testable product by flushing out ambiguous or missing requirements.

Do - Ensure requirements are testable. Testable requirements provide enough detail for the developers to accurately implement the functionality. It doesn’t matter what form the requirements take – monolithic documents, user stories, emails… if they are testable, they can be developed and understood. Customers and product owners can review and understand them. Developers can proceed quickly and confidently. Understanding the goal helps steer the team toward the best path.

Don’t #8 - Don’t block testing. To maximise team efficiency, the project plan needs to consider testing efficiency as well. This may determine feature implementation order. If the code isn’t given to the testers in an order that allows testing, the test team may be unnecessarily blocked. 

Do - Make sure the project team works as a team. The software should be designed, developed and tested in an order that lets everyone work efficiently, which requires an honest discussion early in the design phases about the order of implementation. By coordinating the implementation order, the test team can work effectively on both manual testing and test automation. Performance and security testing can also be conducted earlier in the testing cycles with a well-planned implementation order.

Don’t #9 - Don’t give in to schedule pressures and cut corners. It’s easy to feel the pressure and discard tests or minimise coverage in some areas. That practice leaves the project open to unidentified risk.

Do - Use risk identification and analysis to prioritise and combat schedule pressure. For each identified feature implementation item, the cross-functional team should assign two numerical risk ratings: technical risk and business risk. Technical risk rates the risk inherent in the code or implementation due to complexity, a history of instability, difficulty in creating test data, and any other technical risk factor. Business risk assigns a value to the impact on the customer if the item doesn't work correctly. Generally, the developers supply or review the technical risk rating. The analysts and the customer do the same for the business risk rating.

The test team multiplies the risk values together to get one risk number for each testable component of the software. Testing (and sometimes development) can then be prioritised based on the risk factor, which allows the team to mitigate the highest risk items first. In the event that there isn't sufficient time at the end of the schedule, the test team can talk about risk mitigation achieved versus risk still known to exist in the product as input into the business decision of releasing the software.
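To make the mechanics concrete, here is a minimal sketch of that prioritisation in Python. The 1–5 rating scale, the example features and the data layout are hypothetical illustrations, not a prescribed format:

```python
# A minimal sketch of risk-based test prioritisation (illustrative only;
# the 1-5 rating scale and the example features are assumptions).

features = [
    # (feature, technical_risk, business_risk) - ratings agreed by the team
    ("payment processing", 4, 5),
    ("report export", 2, 3),
    ("user preferences", 1, 2),
    ("login / session", 3, 5),
]

# Multiply the two ratings into a single risk number per component,
# then test the highest-risk items first.
prioritised = sorted(features, key=lambda f: f[1] * f[2], reverse=True)

for name, tech, biz in prioritised:
    print(f"{name:20s} technical={tech} business={biz} priority={tech * biz}")
```

If the schedule is cut short, whatever remains untested sits at the bottom of this list, so the residual risk can be stated explicitly rather than discovered in production.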

Don’t #10 - Don’t allow the project to wallow in defects. Test team efficiency is severely impacted when they have to test around areas that aren’t working correctly. Fixes need to actually fix something, not break something else. Unfixes (as they are sometimes called) do more damage than good.

Do - Ensure defects are processed efficiently throughout the development and testing cycles. Buggy software takes longer to release, and backlogs of defects only slow progress for the overall project. Ignoring defect fixing until all the features are implemented is the equivalent of building on an unstable foundation. The more you build, the more difficult and time consuming the fixes become. Time is lost as people work around the known issues, report the backlog, review the backlog and grumble about the backlog. To keep the workflow efficient, defects need fast turnaround – this is easier for the developers and the testers and makes everyone more effective in their work. Pushing defects to maintenance releases or “refactoring” efforts only prolongs the pain. Rip the plaster off, fix the problem and move on.

Don’t #11 - Don’t release the software before it’s ready. Common characteristics of projects that have released the software before it was really ready include:

  • Management didn't recognise that "on time" didn't equal "satisfied customers."
  • The entire project team was driven by schedule. Every decision showed schedule-consciousness rather than quality-consciousness.
  • The shortcuts taken to improve schedule time (unfinished requirements, insufficient system design, no unit test) actually made the project take longer to complete.
  • The maintenance release was, in reality, still the primary release, but now the unhappy customer was involved too.

Do - Make sure every decision targets quality, not schedule. Then, all the answers become obvious.

Should you do a requirements review? Absolutely.

Should you resolve requirements issues as early as possible? Yes.

Should the developers take the time to do good, automated unit testing? Of course.

Should there be a continuous integration/deployment environment with test automation? Definitely, particularly if the software will be supported for more than a year.

Should testing proceed according to the risk analysis? Yes, particularly if we think there will be schedule pressure.

Should we delay the release if the defect count is higher than we want? Seems obvious, doesn’t it? But it’s amazing how many teams ignore the count and push forward anyway.

Don’t #12 - Don’t ignore quality costs. Quality is expensive. But lack of quality is a lot more expensive. 

Do - Use cost of quality calculations to determine “how much testing” is enough. Historical data from your own organisation on the exact cost of quality is always more credible, but if we use sample numbers drawn from a multitude of large projects, we get the following:

  • On average, 50% of the defects are introduced in the requirements. These are due to unclear and vague requirements, as well as functionality that was not defined and had to be introduced in a maintenance release. This also includes data issues and equipment issues where the test team didn't have the right data or equipment to reflect the customer's environment. Additionally, all defects associated with the unwanted features are counted here, since those defects would not have occurred if the features hadn't been implemented.
  • On average, 15% of the defects are due to design issues - particularly interfaces between code modules and other systems.
  • 25% of the problems are coding errors, both in new code and regressions introduced in the fixes.
  • 10% of the problems are system integration issues that are only visible in the fully integrated environment.

Of course, individual project numbers may differ, but probably not by much. This is easy data to track by recording the “phase introduced” and the “phase found” for each defect that gets reported. Across projects that do insufficient testing, or rely only on user acceptance testing (UAT) for the system test, it’s not uncommon to see that 50% of the defects are caught in system test and the other 50% are caught in production with real usage.
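As a minimal sketch of that tracking, assuming a simple in-memory defect log; the IDs, phase names and the "caught at source" mapping are hypothetical, chosen only to illustrate the idea:

```python
from collections import Counter

# Hypothetical defect log: (defect id, phase introduced, phase found).
defects = [
    (101, "requirements", "system test"),
    (102, "coding", "unit test"),
    (103, "requirements", "production"),
    (104, "design", "system test"),
]

# Where a defect counts as caught "at source" for each introduction phase
# (this mapping is an assumption for illustration).
caught_at_source = {
    "requirements": "requirements review",
    "design": "design review",
    "coding": "unit test",
}

introduced = Counter(intro for _, intro, _ in defects)
found = Counter(fnd for _, _, fnd in defects)
escapes = sum(1 for _, intro, fnd in defects
              if caught_at_source.get(intro) != fnd)

print("introduced:", dict(introduced))
print("found:", dict(found))
print(f"escaped their phase: {escapes} of {len(defects)}")
```

A high escape count is the early warning: it means defects are routinely surviving into later, more expensive phases.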

Employing widely used cost numbers, different values are assigned to a defect depending on where it was found. 

  • $1 for each bug found in the requirements review
  • $5 for each bug found in the design review
  • $10 for each bug found in unit test
  • $100 for each bug found in system test
  • $1000 for each bug found by the customer

It’s important to remember that no defect is free. The cheapest way to deal with a defect is to find it in the same phase where it was introduced. For example, a defect that was introduced during coding and found in unit testing should cost about $10. For a defect that was introduced in the requirements, but not found until production, the cost is $1000 but it should have been only $1.

With the previous example, if there were 1000 defects in the product and 50% of those were found in system testing (by the poor UAT testers!) and the other 50% were found in production, here’s what we see for the costs:

Found in requirements: 0
Found in design: 0
Found in unit test: 0
Found in system test: 500 x $100 = $50,000
Found by the customer: 500 x $1000 = $500,000
Total cost of quality: $550,000

If good testing had been done throughout the lifecycle, and defects had been caught in the same phase where they were introduced, then the cost of quality would be quite different:

Found in requirements: 500 x $1 = $500
Found in design: 150 x $5 = $750
Found in unit test: 250 x $10 = $2,500
Found in system test: 100 x $100 = $10,000
Found by the customer: 0
Total cost of quality: $13,750

Doing the proper quality assurance, building quality in, and verifying at each phase that there are no escapes can reduce the cost of quality by more than $500,000. This is an ideal model, of course, but even if it’s only half as good an effort, the reduction in cost is still staggering.
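The arithmetic above is easy to reproduce. Here is a short sketch that recomputes both scenarios from the per-defect costs listed earlier; the figures are this article’s sample numbers, not data from a real project:

```python
# Cost per defect by the phase in which it is found
# (the sample figures quoted above).
cost_per_phase = {
    "requirements review": 1,
    "design review": 5,
    "unit test": 10,
    "system test": 100,
    "production": 1000,
}

def cost_of_quality(defects_found):
    """Total cost given a {phase: defect count} distribution."""
    return sum(count * cost_per_phase[phase]
               for phase, count in defects_found.items())

# Scenario 1: late testing - everything caught in system test or production.
late = {"system test": 500, "production": 500}

# Scenario 2: defects caught in the phase where they were introduced.
early = {"requirements review": 500, "design review": 150,
         "unit test": 250, "system test": 100}

print(cost_of_quality(late))   # 550000
print(cost_of_quality(early))  # 13750
```

Plugging in your own defect distribution and cost figures makes the same case with numbers your stakeholders will recognise.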

Don’t #13 - Don’t try to force the round peg into the square hole. Developers need to understand the need for quality and buy into the value of building it in. Project managers need to put quality first, with the understanding that schedule and budget outcomes will also improve. Testers need to always drive for quality.

Do - Hire the right people. Build a strong test team from the start in order to create and maintain a strong quality consciousness in your organisation. Encourage the developers to write good unit tests, and provide them with the tools they need for an automated test framework. Engage the test team early to analyse the requirements for testability, define the test approach, apply the best testing techniques and be actively involved in the project. This quality orientation will be infectious as the team sees the returns on their efforts. With a rewards system that supports the quality-first objective, the team will quickly learn to follow good practices for the best results.

Conclusion

A quality-focused team will produce a better product in a shorter amount of time, every time, but you have to have the right people to make it happen. You don’t need heroes to ride in at the end to save the project if it's never in distress. A well-planned project with a quality focus won't be in crisis. There may still be trade-off decisions, which is why we still use risk-based testing to be sure we mitigate the highest risk first, but these can be informed decisions with measurable consequences.

People make our projects happen. Regular people who are doing their jobs, not heroes. The test team must have the skills, personalities, and capabilities to perform a high-quality function throughout the project’s lifecycle. It is important to get that team together, give them responsibilities, and integrate them into the project from the beginning. Building a quality team, like building a quality product, takes effort. Quality doesn't just happen; the right people make it happen.
