Traditional vs. Agile (Part 2)

 26 May 2014 

This paper discusses a study comparing Traditional and Agile methodologies, helping the reader pick the most appropriate methodology for their projects. Two different toolsets were used in this study, IBM Rational Quality Manager and HP Agile Manager, building on the findings of the original study.

This whitepaper builds on the one published by Leanne Howard, Traditional vs. Agile, in Testing Experience, March 2012. It continues the comparison, adding more tools to the conversation between Traditional and Agile methodologies, and suggests some conclusions based on the results.

The timing of this follow-up study is appropriate, as industry research in the intervening two years illustrates the growing presence and maturity of Agile processes. 30% more companies now utilise Agile in some capacity, taking adoption to 80% (Planit Testing Index 2013). In fact, Agile now holds the equal top spot with Waterfall, each methodology accounting for 33% of current software development projects.

Software Development Projects by Methodology

Starting Point

Like the initial study, this second part was conducted in a controlled environment using the same test input documentation and similarly qualified professional testers.
The environment designed as the starting point for the study was:

1. Business requirements document *
2. Team of professional testers *
3. Mybank – Banking application *
4. Tools: HP Agile Manager and IBM Rational Quality Manager
5. Teams:
  • Team 6: IBM Quality Manager (Traditional)
  • Team 7: IBM Quality Manager (Agile)
  • Team 8: HP Agile Manager

* These were the same as in the initial study

Staff were restricted to membership of one team only, with tool and project data access available exclusively to that team's members. Discussion of their experiences and results was prohibited, including how many defects they found or the amount of coverage they planned and were able to achieve.

Metrics

There were a number of metrics gathered:

  • The productivity of each team, based on the effort days taken and the number of defects raised per day
  • Defects split by total number and severity
  • Coverage of requirements

Observation techniques were used to gather data, and quantitative techniques were also used to record how the teams felt about the tools they were using.
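
To make these measures concrete, the sketch below shows how defects per day, the severity split and component coverage can be derived for each team. The team labels follow the study, but every number here is a placeholder for illustration, not the study's recorded data.

```python
# Minimal sketch of the metrics gathered in the study.
# All figures below are placeholders, not the actual results.

TOTAL_COMPONENTS = 12  # major components identified in the application (Figure 5)

teams = {
    "Team 6 (IBM, Traditional)": {"effort_days": 10, "defects": 20, "high_severity": 5, "components": 11},
    "Team 7 (IBM, Agile)":       {"effort_days": 7,  "defects": 15, "high_severity": 6, "components": 10},
    "Team 8 (HP, Agile)":        {"effort_days": 7,  "defects": 25, "high_severity": 6, "components": 12},
}

for name, t in teams.items():
    defects_per_day = t["defects"] / t["effort_days"]          # productivity
    high_sev_pct = 100 * t["high_severity"] / t["defects"]     # severity split
    coverage_pct = 100 * t["components"] / TOTAL_COMPONENTS    # coverage
    print(f"{name}: {defects_per_day:.1f} defects/day, "
          f"{high_sev_pct:.0f}% high severity, {coverage_pct:.0f}% coverage")
```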

Findings

There were a number of interesting findings which have been highlighted in the sections below.

Effort Days and Test Efficiency

Note: In calculating the effort days per team, the number of test days and the number of team members were taken into account. A standard working day of 8 hours' effort has been taken to represent one man day.
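
As a simple illustration of this conversion (a sketch only; the hours below are assumed values, not the recorded figures), a team's effort in man days is the total hours logged by its members divided by eight:

```python
# One man day = 8 hours of effort, per the study's convention.
# The logged hours below are illustrative assumptions only.

HOURS_PER_MAN_DAY = 8

def effort_man_days(hours_logged_per_member):
    """Total team effort in man days from each member's logged hours."""
    return sum(hours_logged_per_member) / HOURS_PER_MAN_DAY

# e.g. three testers logging 32, 28 and 36 hours over the test days -> 12.0 man days
print(effort_man_days([32, 28, 36]))
```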

Figure 1: Effort in Man Days

Agile Team 7 and Team 8 took the least number of man days' effort. These teams followed the Agile principle of working software over comprehensive documentation and did not write detailed test cases. Their test planning and coverage were, however, audited via session sheets detailing charters and test ideas. Having just enough documentation helped them save time and focus the testing effort more towards execution.

In the previous study the Agile and Traditional teams were not given any set timescales to finish the project. Through self-management and planning, the Agile team completed their testing before the Traditional team, similar to what was found in this study even though the tools and the people within the teams were different.

Figure 2: Defects per day

Traditional Team 6 identified slightly more high severity defects than Agile Team 7. However, taking into account the total number of effort days and the total number of defects raised by each team (Figure 3), Agile Team 8 had been the more efficient.

In the previous study, more defects per day were found by the Agile teams. They had a more lightweight process and documentation but did not decrease coverage, spending more time on test execution and therefore on finding defects.

Defects

Figure 3: Defect Total and High Severity

The highest number of defects was found by Agile Team 8, while the highest percentage of high severity defects was raised by Agile Team 7. It is noteworthy that although Agile Team 7 raised the fewest defects, they raised the highest percentage of high severity defects. Agile Team 7 and Team 8 also spent fewer man days (Figure 1) than Traditional Team 6. This suggests that Agile teams are more efficient at finding the highest severity defects.

This relates back to the cost of quality being lower for the Agile teams, supporting the faster feedback loops expected within Agile teams.

In the previous study, both Agile teams scored a higher percentage of high severity defects, relative to their total defects, than the Traditional teams, again indicating that Agile finds a higher percentage of high severity defects. This could be an indicator of the quality of testing under an Agile methodology, as the teams have measurable objectives in the form of acceptance criteria on which to focus their testing. In addition, the faster feedback loops with Agile and finding defects earlier mean that defects are cheaper to fix.

Figure 4: Number of Tests

There was a high variance in the number of tests that each team executed. The teams were not given any boundaries for the tasks they decided to perform; each team defined the scope of its own testing. There were 12 major components to cover for full coverage, and most of the teams did recognise these, although they defined them slightly differently (see Figure 5).

Figure 5: Components Tested by Teams

The Traditional Team 6 performed system and usability testing.

The IBM Agile Team 7 performed system testing only.

The HP Agile Team 8 performed system, usability and exploratory testing. They used a collaborative approach, and the inter-team discussions identified the need for exploratory testing to maximise defect detection. This team ran double the number of tests compared to the next best, Traditional Team 6 (Figure 4). This was possible because the team produced just enough documentation and focussed the testing time on execution.

The team started testing based on session sheets created in Word templates attached within the tool. In the daily stand-up meeting, it was noted that this limitation of the HP Agile Manager tool added overhead while creating and updating session sheets, reducing the team's productivity. The team decided to base further testing on acceptance criteria (named acceptance tests in the tool). The team's Agile approach of responding to change over following a plan helped them to use the test session time effectively.

Collaboration

It was observed that the teams that collaborated well together, and with the Product Owner, produced the highest quality of testing in terms of both coverage and the number of defects found. Interestingly, this seems to be irrespective of the methodology. Some teams did this better than others, and it seemed to have more to do with the individuals than with the methodology they were using.

In the previous study the two Agile teams sat in two differently configured rooms. One team had a set of desks in the middle of the room facing each other, while the other team had their desks in a "U" shape pushed against the walls, so in effect they had their backs to each other when working. This second environment did not promote constant interaction, and this was reflected in the results that the team achieved, which were not as good as those of the other Agile team.

Tools Used in the Study

The teams' high-level impressions of the tools were captured as follows:

IBM Rational Quality Manager

IBM Rational Quality Manager has a comprehensive suite of test artefacts that may be used for test planning, preparation, execution, and reporting. Traditional Team 6 used the IBM tool configured for Traditional, and Agile Team 7 used the IBM tool configured for Agile; in both cases the original out-of-the-box configuration was used for this study. A new user could spend considerable time reading the user manual and then exploring the tool, but for this project the teams grasped the basic functionality quickly without using the manual, and further developed their skills while using the tool on the project.

The features the team liked about the tool were:

  • The visual representation of the task board view
  • Ability to query work items
  • Drag and drop features
  • Reporting capabilities

The teams noted that they would like to see improvements in the following areas:

  • A manual refresh was required to view changes; updates were not shown in real time
  • There was no visual indication that a task had not loaded correctly, so any changes made to it were not saved
  • The Save functionality was not consistent across the application. When AutoSave was enabled, the system would save a comment while the user was still in the middle of writing it, leaving a half-written comment, and the comments could not be edited afterwards
  • The Discussion field was not available as an attribute when selecting the required fields for a query
  • AutoSave was problematic; the setting did not persist after navigating away from the main page

HP Agile Manager

Agile Team 8 used HP Agile Manager to organise, plan, and execute their Agile project. The team took half a day to familiarise themselves with the basic functionality of the tool, and working with the tool gave them further insights. The tool is highly configurable, but no configuration was done for this project; the team worked within the limitations of the out-of-the-box functionality.

The features the team liked about the tool were:

  • The tool gave great visibility of the project work with real-time analytics and dashboards
  • All task estimates were summed up and presented at the user story level
  • Work items (tasks) flow from left to right according to their completion state
  • Once all tasks of a user story are completed, the user story is automatically marked as done
  • User stories can be added to the product backlog and subsequently associated with a specific release and sprint
  • Defects and testing can be added to the backlog as well
  • User stories can be linked to defects; as long as the corresponding defect is not fixed, the user story cannot be set to done, which represents good Agile practice (see the sketch after this list)
  • The sprint closure page gives an overall picture of what has been done
  • Retrospective tasks and information are easy to record
  • There were views reflecting taskboards that you might see drawn on whiteboards, with columns and sticky notes that could be dragged and dropped
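
As a simple illustration of the defect-linking rule above (this is a generic sketch, not HP Agile Manager's actual data model or API), a user story can only move to done once every linked defect is fixed:

```python
# Generic sketch of the rule: a user story cannot be set to done
# while any linked defect remains open. Not the tool's real API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Defect:
    summary: str
    fixed: bool = False

@dataclass
class UserStory:
    title: str
    linked_defects: List[Defect] = field(default_factory=list)
    status: str = "In Progress"

    def mark_done(self) -> bool:
        """Allow 'Done' only when every linked defect is fixed."""
        if all(d.fixed for d in self.linked_defects):
            self.status = "Done"
            return True
        return False

story = UserStory("Transfer funds between accounts", [Defect("Balance not updated")])
print(story.mark_done())            # False - the linked defect is still open
story.linked_defects[0].fixed = True
print(story.mark_done())            # True - the story can now be completed
```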

The teams noted that they would like to see improvements in the following areas:

  • The major drawbacks were in defect management and session sheet creation
  • While creating a new defect the user could enter only a limited set of fields; to add more details, or even to assign it to someone, the user had to edit the defect after creating it
  • There was no option to create session sheets
  • The session sheets were created as Word documents and uploaded. Maintaining them generated overhead, as the existing session sheet had to be downloaded, updated and then uploaded again; this also made it difficult to monitor the progress of individual tests
  • There were no out-of-the-box dashboards for the testers
  • Defects cannot be linked to acceptance criteria
  • The tool is very reflective of Scrum terminology; it would be better to use generic terms such as Iteration rather than Sprint

Conclusion

The Agile teams found the most defects, including the highest number of high severity defects. The Agile teams certainly represented value for money, with much shorter timeframes and therefore correspondingly lower costs. Their focus was on more test coverage with less documentation, with testing based on acceptance criteria rather than on writing detailed steps. This helped them reduce the man hours required for testing by focusing the available time on exploring the product, getting to know how it worked, and executing tests rather than on detailed documentation.

The Traditional team raised a high number of defects, but spent more effort days to raise a similar number of defects than the Agile teams. This would seem to imply that the Agile teams were more efficient.

Irrespective of the methodology and tool used, the success of a project will depend on the team's willingness to work collaboratively and on the skill level of the team members. There should be frequent team communication and transparency of the work done by the team. These factors depend on the way the team organises itself, and are embraced within Agile as part of disciplines such as the use of the task board, daily stand-ups and retrospectives. If the team is using a new tool or methodology, it is recommended to factor the learning time into the estimates; this is again catered for by the use of Iteration Zero in Agile projects.

Hence, the Agile Manifesto's core values of individuals and interactions over processes and tools, working software over comprehensive documentation, responding to change over following a plan, and customer collaboration over contract negotiation were shown in this study to be effective and efficient ways to work. Whilst this study was based primarily on quality and testing activities, as these are now conducted by the whole team in an Agile environment, the conclusions can be seen as indicative of product delivery under the different tools and methodologies.

If anyone has conducted similar studies, it would be interesting to hear from you. Please contact Leanne Howard.

A big thank you to all the Planit teams that took part in this study; without you it would not have been possible. A particular thank you to Princy Ninan and Cinthia Gongupalli who, as well as being on Team 8, helped me enormously in crunching the numbers, identifying the trends and putting this document together.


Leanne Howard

Business Agility Practice Director
