Water Agile Fall in Practice

 1 Jun 2016 

Planit’s Agile Community has recently discussed the topic of combining Agile and Waterfall processes, in line with the delivery methods that some of Planit’s client projects employ. The focus of this discussion was the pros and cons of merging Waterfall and Agile into one hybrid model: Water Agile Fall.

This case study, written from a test manager’s point of view, highlights how one telecommunications company achieved its delivery objectives using the Water Agile Fall model for a program of work.

Figure 1: Water Agile Fall diagram. The first column, ‘Water’, contains the business case, functional specs, technical specs, multiple requirements and upfront architecture. The centre column, ‘Agile’, shows the product backlog feeding an iteration backlog worked in 2-4 week iterations with daily cycles. The third column, ‘Fall’, contains operations (BAU), system integration testing, user acceptance testing and large regression packs. An arrow labelled ‘add value’ runs across all three columns.

Background

The program in this case study was delivering the company’s new networking infrastructure and IT technology as part of a large network upgrade. The scope was complex, with multiple vendor teams, technologies, systems and work streams utilising a mixture of delivery methods across different locations.

This meant that some pieces of work were delivered using Agile and some were being delivered using Waterfall. There were a number of challenges which arose from having multiple, distributed teams building and testing using different methodologies for a common integrated solution.

This case study describes how the program was able to overcome challenges and succeed with a hybrid approach.

The Objectives

The objectives of the program were to:

  • deliver multiple systems into an end-to-end solution irrespective of development lifecycles
  • ensure that all ‘new’ technology in the network/IT (OSS/BSS) stack functioned as expected
  • ensure that ‘existing’ legacy technology was not adversely impacted
  • demonstrate end-to-end traceability from business requirements to test results
  • deliver test artefacts to ensure vendor compliance
How the Program was planned out

Business requirements were written by a business analyst at the commencement of each program release. These requirements were provided to each software or network vendor to deliver upon. In line with the test strategy, requirement verification workshops were held up front with domain subject matter experts to determine how, and in which project phases, each requirement would be verified. Each vendor used a delivery model suited to their scope and client stakeholders to develop their system.

The IT system teams preferred Agile, because it enabled the teams to keep the product owners closely engaged and deliver incrementally. A series of sprints were planned out with a backlog, and product demonstrations were provided either per sprint, or at the end of all sprints. The customer’s product owner was involved throughout to provide clarifications and sign off on delivered use cases as part of product demonstrations. Any significant scope items or changes were fed back into the business requirements.

The network teams preferred Waterfall, as the product houses would provide network components and devices in line with global releases. The client’s local teams would then customise the solution to meet their needs. Technical specifications were provided by each vendor, usually in draft form at the end of the development lifecycle, and were signed off at implementation into Production with any changes that came from testing. The independent test team for each release would create test plans and cases for the E2E solution based on designs and specifications.

The program used a dependencies register to track dependencies between teams or systems, identifying when test environments were required by systems and when testing teams needed functionality provided by upstream or downstream system interfaces.
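The article does not show the register’s actual format, but a dependencies register of this kind can be kept as a simple list of structured entries recording which team needs what, from whom, and by when. A minimal sketch, with entirely hypothetical field names and example rows:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Dependency:
    """One row of a hypothetical dependencies register."""
    needed_by: str        # team whose testing is blocked without this item
    provided_by: str      # team or system expected to provide it
    item: str             # e.g. a test environment, or an upstream/downstream interface
    required_from: date   # date the dependency must be available
    status: str = "open"  # open / at risk / met

register = [
    Dependency("E2E test team", "inventory system vendor",
               "SIT environment with inventory interface connected", date(2016, 3, 1)),
    Dependency("network test team", "alarm management vendor",
               "northbound alarm interface available end to end", date(2016, 3, 14)),
]

# Flag anything still open that is needed within the next test window.
window_start = date(2016, 3, 7)
for d in register:
    if d.status != "met" and d.required_from <= window_start:
        print(f"{d.needed_by} needs '{d.item}' from {d.provided_by} by {d.required_from}")
```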

Figure 2: Overview of an example Network Stack, from the Service Management Layer (SML) down to the Network Management Layer (NML) and the Element Management Layer (EML).*

How the Middle Stage was completed

All development teams were responsible for their own unit and system testing in order to meet the exit criteria of the Development phases. System tests were executed for each system, either with N+1 systems connected or with the system in isolation where no other systems were connected. Test summary reports were provided by vendor teams to the client at the conclusion of testing to demonstrate how the exit criteria were met.

Program stakeholders were regularly informed when each system ‘exited’ System Testing on the schedule, regardless of whether the development model consisted of Agile Sprints or a technical network product development phase. This kept stakeholders aligned on dependencies and schedules.

Test Readiness Reviews were held prior to commencement of System Integration Testing (SIT) windows. This ensured that any systems starting SIT had met their entry criteria. Entry criteria covered items such as test summary reports with no critical or high defects, identification of requirements that could not be met in previous phases, and schedules showing when combined integration tests were planned. If any systems were not scheduled to start on Day 1 of SIT, mini test checkpoints were run later in the test windows.

The end output of a Test Readiness Review was a checklist document, with either sign-off from project stakeholders to proceed, or a recommendation to close actions prior to proceeding.
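The checklist itself is not reproduced in the article, but the entry criteria named above lend themselves to a simple pass/fail evaluation whose outcome is either a sign-off or a list of actions to close. A small sketch under that assumption:

```python
# Hypothetical Test Readiness Review evaluation; the criteria wording is taken
# from the paragraph above, the structure and outcome strings are assumptions.
entry_criteria = {
    "Test summary report delivered with no critical or high defects": True,
    "Requirements that cannot be met in previous phases identified": True,
    "Schedule showing when combined integration tests are planned": False,
}

open_actions = [item for item, met in entry_criteria.items() if not met]

if open_actions:
    print("Recommendation: close the following actions prior to proceeding:")
    for action in open_actions:
        print(f"  - {action}")
else:
    print("Sign-off from project stakeholders to proceed into SIT.")
```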

How the Fall stage was achieved

SIT was executed against the solution interfaces, with testing targeting integration, functional and non-functional defects. The testing was performed by the IT vendors and by independent network test teams.

Figure 3: Integration Test schedule. The network build is followed by shakeout and topology provisioning testing, traffic and service testing, network failure mode testing alongside alarm testing, inventory testing, network configuration testing, and E2E regression testing alongside other systems testing, ending with data warehouse testing. The schedule runs left to right over time.

New topologies were built using logical provisioning systems, followed by traffic generation and service tests executed at the network layer to ensure the topologies were functioning well. Tests were then performed against the topology ‘services’, such as confirming that alarms were correctly generated in the network assurance alarm management systems, which in turn allowed the inventory management and network configuration management systems to be tested. Once all system integration tests had been completed for each system on the topologies, the team performed true E2E tests with the full OSS/BSS stack, such as activating a new service from ‘front of house’ systems, or raising trouble tickets against the topologies. These E2E tests demonstrated that the systems at the top of the OSS/BSS stack integrated well with those which were new or changed in the program solution.

Examples of Test Cases (one of these checks is sketched as code after the list):

  • Can a virtual circuit be activated?
  • Was an alarm generated on the device’s network management system and received in the alarm management system?
  • Could the user take an extract of the currently configured network topologies?
  • Can the ‘new’ modem connect into the topology and generate traffic correctly at a specified transaction rate?
  • Does the operator’s line test for the telephone number pass or fail?
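As a purely illustrative sketch of the alarm check above: the client functions, URLs and field names below are invented placeholders for whatever the device NMS and the alarm management system actually expose, not APIs from the program.

```python
import time

def fetch_alarms(system_url: str, circuit_id: str) -> list:
    """Placeholder: query one system's alarm list for a given circuit.
    In practice this would wrap the real NMS or alarm manager interface."""
    raise NotImplementedError

def alarm_propagated(nms_url: str, alarm_mgr_url: str, circuit_id: str,
                     timeout_s: int = 300, poll_s: int = 30) -> bool:
    """Pass when an alarm raised on the device NMS also appears in the alarm manager."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if fetch_alarms(nms_url, circuit_id) and fetch_alarms(alarm_mgr_url, circuit_id):
            return True
        time.sleep(poll_s)
    return False
```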

Once each system had passed integration testing and any dependent testing had successfully completed, each was permitted to exit SIT. The key stakeholders for each system were invited to perform User Acceptance Testing.

The Challenges and Solutions

Challenge #1: At times, a lack of understanding across the program of Agile-developed system functionality led to escalations from the Waterfall-centric independent test teams.

Even though there were business requirements and design documents, the independent test teams had difficulty visualising how a system would work, as the details they wanted were almost always planned to be provided later than the date they had scheduled their work to start. In addition, there was less documentation than the independent test teams were used to, as the IT systems were often developed using Agile. The Agile teams were often asked to provide documents not specified in their statements of work, even though these documents were standard for the Waterfall-focused independent test teams. These requests added burden and confusion to the Agile development teams’ work, and the independent test teams could not write their test cases as early as their managers expected them to.

An example comment was: “if I was to do a line test, how would I write steps for that? I want the answer now, but you can’t tell me how it will work because you are still in Agile Sprints! I therefore have no choice but to raise an issue with my manager that the system is not ready.”

The solution: Involving independent test teams early, with expectations clarified between team members and managers.

The independent test teams were asked what they really needed to know, and by when, for each of the systems, including what was merely nice to have. For example, did they need to build a topology themselves, or did they simply want to understand how the software generally worked? The independent test teams were provided with a report listing, at dot-point level, the application functionality that each system would deliver as ‘done’ and make available for testing in each Agile Sprint. They were invited to any Agile product demos being held so they could see screen layouts, absorb information and ask questions alongside the Business Product Owner in a shared forum.

The Business Analyst was made available for the independent test team to ask a reasonable number of questions either in meetings or via email about how designs and requirements would be implemented in each system. This allowed the Agile system teams to execute Sprint work with less external ‘noise’.

The independent test team managers were provided with sufficient information, so escalations were raised far less frequently and were far more valid when they were. This led to significantly less friction between teams, as they knew how to work together.

Challenge #2: Overlapping areas of test responsibility between IT vendor and independent test teams.

Disagreement frequently arose as the network team had an independent test team of their own to ‘prove’ each critical IT system worked as per Network E2E requirements. The IT system team view was that as the IT vendor team had already performed N+1 testing during the Sprints, no other team needed to duplicate their testing. The view of the Network test team was that ‘their’ tests were the only ones that would demonstrate correctness of the solution in the E2E environment. In reality, both test areas provided useful information on the status of the solution from different perspectives.

In addition to this potential for scope overlap, the network test team felt it was their role to perform testing as soon as the systems were available so they could identify priority defects to report to program management. Defects such as ‘system is not working’ were often raised by the network test team on Day 1 of Sprint delivery into SIT, when the IT system teams were still performing shakeout. These defects were often logged without confirmation of validity, to the frustration of the system vendor teams.

The solution: Network test and IT system teams agreed to compromise with an accepted protocol on how to test in a shared environment.

Both teams discussed up front how to collaboratively approach test types, such as Shakeout. The independent test teams were asked to respect the IT team by adhering to agreed test release dates. The independent network test teams became more empowered to raise defects as they had more buy-in to the process.

The network team would initially check by email or phone with an onshore IT Test Lead whether a system had been successfully shaken out or had any critical bugs. The IT Test Lead had an informal Service Level Agreement (SLA) of confirming defects between 2-5pm onshore time. If the independent test team waited more than 2-4 hours with no response, they were free to raise any defects.

Any blocking defects, such as those which impacted the network test services the team were using, were escalated for resolution as soon as possible. Defects were reviewed in detail by the offshore teams, who provided a response on defects by close of business offshore time. If all else failed, the teams could still agree to disagree, with defects logged according to this process without too much debate.
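Paraphrased as decision logic (the wait window and outcomes are taken from the description above; the function and parameter names are invented for illustration):

```python
from datetime import datetime, timedelta
from typing import Optional

SLA_WAIT = timedelta(hours=4)  # upper end of the informal 2-4 hour wait described above

def defect_handling(blocking: bool,
                    confirmed_by_it_lead: Optional[bool],
                    query_sent_at: datetime,
                    now: datetime) -> str:
    """Decide how a defect found during shakeout should be handled.

    confirmed_by_it_lead: True = confirmed, False = rejected, None = no response yet.
    """
    if blocking:
        return "raise and escalate for resolution as soon as possible"
    if confirmed_by_it_lead is True:
        return "raise defect"
    if confirmed_by_it_lead is False:
        return "hold; clarify with the IT Test Lead before raising"
    if now - query_sent_at >= SLA_WAIT:
        return "raise defect (no response within the informal SLA window)"
    return "wait for the IT Test Lead before raising"
```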

Defects were raised in integration testing in a much smoother fashion, IT Agile Sprint teams provided fixes or clarifications in a timely manner, and the overall program had more reliable defect information for morning status calls.

Challenge #3: It was difficult to perform non-functional testing during the Agile Sprints.

Development teams worked in 1-2 week Agile Sprints, which made lengthy, complex, integrated technical testing infeasible. In addition, the system test environments were often not connected end to end, and hence not valid for most non-functional testing. There was usually a significant time gap between when this non-functional requirement testing could take place and when the defect fix team was available.

The solution: Early identification and static assessment of non-functional requirements where formal testing was not feasible.

Non-functional requirements were identified during requirement testability reviews in the Water stage. Static reviews of designs were performed, including assessments of whether the end-to-end integrated solution could meet the requirements. Technical analysis documents were produced with detailed explanations and numbers. Where formal testing was possible, it was scheduled and performed. An example of a high-level requirement was, ‘would the network equipment function end to end with 99.999% availability?’
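For context, a ‘five nines’ target leaves a very small annual downtime budget, which is part of why static analysis had to stand in where end-to-end measurement was not practical. A quick back-of-the-envelope check:

```python
# Downtime budget implied by a 99.999% ("five nines") availability requirement.
availability = 0.99999
minutes_per_year = 365.25 * 24 * 60                 # ~525,960 minutes
downtime_budget = minutes_per_year * (1 - availability)
print(f"Allowed downtime: about {downtime_budget:.2f} minutes per year")  # ~5.26 minutes
```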

Challenge #4: Independent test teams did not understand how the IT teams were using Agile processes.

Agile teams are, by design, lean and fit for purpose. The independent test teams were not part of the Agile teams, so they had many questions about progress and processes. This created collective noise in the project, not all of it good.

The solution: Independent test teams were educated on the Agile systems development process.

Along the lines of Challenge #1, the independent test teams were provided with a schedule of the functionality being delivered by each system. Managers of the respective teams communicated frequently on progress, and the independent test teams were walked through the IT systems development process at a high level so they were more familiar with the terminology. This led to improved relationships and trust between the Agile development teams and the independent test teams: the independent teams understood the Agile team roles and vice versa. In retrospect, the independent test teams would have benefited from training in Agile fundamentals to complement their on-project knowledge.

Challenge #5: Configuration management was performed differently across the teams, resulting in regression testing that was too general, with not enough focus on areas of risk.

Network teams tracked the configuration of topologies and versions of software as part of their standard process. The Agile teams, on the other hand, did not see value in providing detailed version control information, such as the dates on which software versions changed, as it was additional documentation from their perspective. The independent test team therefore had a configuration management report which was incomplete, as it only showed a view of the network information. This meant that, in an environment where different systems entered testing and made changes at different times, it became difficult to identify what regression testing should be performed against which system at any point in time. The independent test teams therefore performed full, exhaustive regression by default and raised risks around the possibility of defects in systems where insufficient configuration information was known.

The solution: Tracking of code drops, significant defect fixes and change requests in a commonly available, common format IT configuration management log.

The program test manager and workstream test managers trialled an IT systems configuration management log which mirrored the format used by the independent network test teams. After the concept was introduced to the IT systems teams, each system test lead or manager was briefed on how to maintain their defect fixes and change requests, with dates, in the log on the project SharePoint site. Initially, where teams objected to the new process, the value of logging configuration changes was discussed until the process was accepted.

When the IT change management process became more mature, the IT change log information was used as entry criteria for implementing new and updated software. This reliable source of information provided the independent network test teams with information they could use to plan their regression suites at the end of release windows. In turn, the network test managers had confidence in what was being changed, which led to a smoother regression test phase for the IT systems themselves. The result of providing ‘just enough’ configuration documentation was a reduced number of regression tests and a cleaner exit, with fewer gaps in the overall network configuration report.
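The common format itself is not shown in the article; as an illustrative assumption, each log entry needs to capture little more than the system, the type of change (code drop, defect fix or change request), the version and the date, which is enough to scope the regression testing described above:

```python
import csv, io

# Hypothetical entries for the shared IT configuration management log;
# the column names and rows are assumptions, not the program's actual template.
log_csv = """system,change_type,version,date,description
Inventory Manager,code drop,2.4.1,2016-04-04,Sprint delivery into SIT
Alarm Manager,defect fix,3.1.0-p2,2016-04-06,Fix for alarm de-duplication defect
Provisioning,change request,1.9.0,2016-04-08,New topology template
"""

entries = list(csv.DictReader(io.StringIO(log_csv)))

# Scope regression to the systems that actually changed since the last test pass.
changed_since = "2016-04-05"   # ISO dates compare correctly as strings
changed_systems = sorted({e["system"] for e in entries if e["date"] >= changed_since})
print("Regression focus:", ", ".join(changed_systems))
```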

Conclusion

While there were many challenges in using different development methodologies in the same test window, they could be overcome when teams approached testing with the right attitude. The key to making Water Agile Fall work was having project leaders who acted as ‘bridges’ between different stakeholders while understanding which activities were critical to the success of the build and test effort. Regular communication of the ‘right’ information was critical so that teams from different ‘worlds’, such as those used to highly engineered Waterfall approaches versus those working Agile every day, could co-exist and achieve a common goal.

References

* Figure 2 in “Network Management Systems Architectural Leading Practice” from http://www.cisco.com/en/US/technologies/tk869/tk769/technologies_white_paper0900aecd806bfb4c.html
