INSIGHTS / Articles

TMMi in the Agile World – PART II: Non-Functional and Peer Reviews

 8 Aug 2018 

In this second of two articles on how TMMi Level 3 assessment can be achieved in Agile projects, we look at the role of the Non-Functional Testing and Peer Reviews process areas within TMMi.

3.4 Process Area 3.4 Non-Functional Testing

The purpose of the Non-Functional Testing process area is to improve test process capabilities to include non-functional testing during test planning, test design and execution. The product being developed will dictate which non-functional aspects are relevant for testing.

A different approach to testing non-functional attributes is often needed in order to fit at least some non-functional testing within the iteration. This is one of the fundamental changes for the better with Agile: quality attributes are tested early and throughout iterations, not left until the end. Not all of the components/functions may be available from iteration one to perform a full non-functional test; however, the results of this testing can be used to give early indications of quality problems.

The user stories should address both functional and non-functional elements to ensure that the right product is developed for users and customers. ISO 25010 quality characteristics can help structure the requirements and testers should consider these non-functional elements [ISO25010].

3.4.1 SG1 Perform a Non-Functional Product Risk Assessment

The product risk session(s) for Agile projects, as identified and described at the specific goal SG1 Perform Risk Assessment as part of the Test Planning process area, will now explicitly be extended to also include non-functional aspects and risks. At release level, the risk assessment, now also for non-functional testing, can be performed based on the product vision.

At iteration level, this is performed using the user stories that define non-functional requirements as a major input. Preferably all team members, including the product owner, and possibly some other stakeholders should participate in the product risk sessions.

For some non-functional areas, specialists may be needed to assist. The product risk sessions will typically result in a documented list of prioritised (non-)functional product risks.

3.4.2 SG 2 Establish a Non-Functional Test Approach

A test approach for the relevant non-functional quality characteristics is defined to mitigate the identified and prioritised non-functional product risks. For a specific iteration, the non-functional features to be tested are identified during iteration planning.

New non-functional product risks requiring additional testing may become apparent during the iteration; issues like these are typically discussed at the daily stand-up meetings.

The non-functional test approach will typically cover the identification of appropriate non-functional test methods and test technique(s) based on the level and type of non-functional risks. Usually it will also address the usage of supporting tools. The non-functional test approaches at both release and iteration level are part of the overall test approach and will be held or displayed on the team/project wiki.

Non-functional exit criteria are part of the Definition of Done (DoD). It is important that the DoD has specific criteria related to non-functional testing, e.g. Mean-Time-Between-Failures (MTBF) targets or "front-end web pages have been tested against the OWASP Top 10 risk list".

The iteration should result in the implementation of the agreed set of non-functional user stories (or acceptance criteria) and meet the non-functional (test) exit criteria as defined in the DoD. Note that non-functional attributes can also be part of the acceptance criteria for functional user stories and do not necessarily need to be specified as separate non-functional user stories.
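
Where non-functional DoD criteria like those above are measurable, they can also be expressed as automated checks the team runs before declaring an item "done". The sketch below is purely illustrative and not part of TMMi or this article's guidance: the URL, response-time threshold and header list are assumptions.

```python
# Illustrative sketch only: two hypothetical non-functional DoD criteria
# expressed as automated checks. The URL, threshold and headers are assumptions.
import time

import requests  # third-party HTTP client, assumed to be available

APP_URL = "https://example.test/app"   # hypothetical system under test
MAX_RESPONSE_SECONDS = 2.0             # hypothetical DoD performance criterion


def test_home_page_responds_within_dod_threshold():
    """DoD example: key pages respond within the agreed time budget."""
    start = time.monotonic()
    response = requests.get(APP_URL, timeout=10)
    elapsed = time.monotonic() - start
    assert response.status_code == 200
    assert elapsed <= MAX_RESPONSE_SECONDS, f"Page took {elapsed:.2f}s"


def test_basic_security_headers_present():
    """DoD example: a simple security smoke check inspired by OWASP guidance."""
    response = requests.get(APP_URL, timeout=10)
    for header in ("Content-Security-Policy", "X-Content-Type-Options"):
        assert header in response.headers, f"Missing security header: {header}"
```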

3.4.3 SG 3 Perform Non-functional Test Analysis and Design

This specific goal largely follows the same practices, but now from a non-functional perspective, as with the specific goal SG1 Perform Test Analysis and Design using Test Design Techniques from the Test Design and Execution process area. During test analysis and design, the test approach for non-functional testing is translated into test ideas.

In Agile, test analysis and design, and test execution are mutually supporting activities that typically run in parallel throughout an iteration. Non-functional test analysis is thereby not an explicit separate activity, but rather an implicit activity that testers perform as part of their role within collaborative user story development. The acceptance criteria are subsequently translated into non-functional tests.

With non-functional testing, it is often beneficial to perform test analysis at a higher level than individual user stories, for example analysing a feature, an epic or a collection of stories to identify non-functional tests that are more abstract than those at user story level and that span multiple user stories.

With the test-first principle being applied in Agile, non-functional tests will be identified (and possibly automated) prior to, or at least in parallel with, the development of the code. For most manual non-functional testing, tests will be identified or refined as the team progresses with non-functional test execution.

Tests are most often documented as test ideas when using exploratory testing. The prioritisation of non-functional tests typically follows the prioritisation of the user story they are covering.

However, the prioritisation may also be driven by the time needed to prepare and perform a certain non-functional test. Specific test data needed to support the execution of non-functional tests is most often created just in time, to allow a prompt start of the execution of non-functional manual tests.

Traceability between the non-functional requirements and tests needs to be established and maintained. Teams need to make clear that they have covered the various non-functional user stories and acceptance criteria as part of their testing.
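
One lightweight way to make that coverage visible, if the non-functional tests are automated, is to tag each test with the backlog item it covers. The following sketch is an assumption-based illustration using pytest markers; the marker name and story IDs are hypothetical, not something TMMi prescribes.

```python
# Minimal sketch of requirement-to-test traceability using pytest markers.
# The "story" marker name and the story IDs are hypothetical examples.
import pytest

# In a real project, register the custom marker (e.g. in pytest.ini):
#   markers =
#       story(id): backlog item covered by this test


@pytest.mark.story("US-123")   # e.g. "Search results return within 2 seconds"
def test_search_response_time():
    ...  # non-functional test body goes here


@pytest.mark.story("US-124")   # e.g. "Login page sets required security headers"
def test_login_security_headers():
    ...
```

A simple report over these markers can then list, per story, which non-functional tests exist and whether they passed, which is often sufficient traceability evidence for the team and its stakeholders.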

3.4.4 SG 4 Perform Non-functional Test Implementation

Test implementation is about getting everything in place that is needed to start the execution of tests. Typically, the development of test documentation, e.g. test procedures to support test execution, is minimised.

Rather, automated (regression) test scripts are developed and prioritised. Non-functional test implementation will follow many of the practices already described at the specific goal SG2 Perform Test Implementation as part of the Test Design and Execution process area.

What specifically needs to be done, and how non-functional test implementation is done, largely depends on the approach defined, techniques being used, and which non-functional characteristics need to be tested and to what level. For some non-functional quality characteristics, the availability of test data is essential and needs to be created during test implementation.

Of course, test implementation and preparation will start as soon as possible, in parallel with other (testing) activities. It is not a separate phase, but rather a set of activities (listed on the task board) that need to be performed to allow for efficient and effective test execution.
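
As an illustration of the test data point above, the following sketch (not from the article; the file name, fields and volume are assumptions) generates synthetic user records during test implementation that a load-test tool could later consume.

```python
# Illustrative sketch: creating synthetic test data during test implementation
# for a hypothetical load test. Field names, file name and volume are assumptions.
import csv
import random
import string


def random_user(i: int) -> dict:
    """Build one synthetic user record."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {"id": i, "username": name, "email": f"{name}@example.test"}


def write_load_test_users(path: str = "load_test_users.csv", count: int = 10_000) -> None:
    """Write `count` synthetic users to a CSV file for a load-test tool to consume."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "username", "email"])
        writer.writeheader()
        for i in range(count):
            writer.writerow(random_user(i))


if __name__ == "__main__":
    write_load_test_users()
```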

3.4.5 SG 5 Perform Non-functional Test Execution

As with the previous goal in this process area, we will largely refer back to the related specific goal SG 3 Perform Test Execution as part of the Test Design and Execution process area. The practices for executing non-functional tests, reporting test incidents and writing test logs in an Agile environment are basically the same as for functional testing in an Agile environment.

It will typically be done with much less documentation. Often, no detailed test procedures and test logs are produced.

Non-functional test execution is done in line with the priorities defined during iteration planning. Typically, many non-functional tests will be executed using exploratory and session-based testing as their framework.

With iterative development, there is an increased need to organise and structure regression testing. This is preferably done using automated regression tests and supporting tools.

Regression testing, of course, also applies to the non-functional aspects of the system that have been identified as important to test. The non-functional incidents that are found during testing may be logged and reported by the team.

There is typically a discussion within Agile projects about whether all incidents found should indeed be logged. Some teams only log incidents that escape iterations.

It is considered a good practice also in Agile teams to log data during non-functional test execution to determine whether the item(s) tested meet their defined acceptance criteria and can indeed be labelled as “done”.

The information logged should be captured and/or summarised into some form of status management tool, in a way that makes it easy for the team and stakeholders to understand the current status for all testing that was performed.
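
As a hedged illustration of such a summary (the entry structure, criterion names and output format are assumptions, not a specific tool's API), the sketch below aggregates logged non-functional results into a per-criterion status that could be pushed to the team's status tool.

```python
# Illustrative sketch: summarising logged non-functional test results into a
# simple status view. The entry structure and criterion names are assumptions.
import json
from collections import defaultdict

# Hypothetical entries captured during non-functional test execution.
results = [
    {"criterion": "response_time", "test": "home_page", "passed": True},
    {"criterion": "response_time", "test": "search", "passed": False},
    {"criterion": "security_headers", "test": "home_page", "passed": True},
]


def summarise(entries):
    """Count passes/failures per acceptance criterion and flag which are 'done'."""
    counts = defaultdict(lambda: {"passed": 0, "failed": 0})
    for entry in entries:
        counts[entry["criterion"]]["passed" if entry["passed"] else "failed"] += 1
    return {c: {**v, "done": v["failed"] == 0} for c, v in counts.items()}


if __name__ == "__main__":
    # In practice this summary would be pushed to the team's status/ALM tool.
    print(json.dumps(summarise(results), indent=2))
```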

3.5 Process Area 3.5 Peer Reviews

The purpose of the Peer Review process area is to verify that work products meet their specified requirements and to remove defects from selected work products early and efficiently. An important corollary effect is to develop a better understanding of the work products and of defects that might be prevented.

3.5.1 SG 1 Establish a Peer Review Approach

Agile teams typically do not do formal peer reviews in the sense that they usually don't have a single defined time when people meet up to provide feedback on a product. They do, however, achieve the intent of Peer Reviews through continual, less formal peer reviews throughout development.

There still needs to be discipline when conducting these activities. This TMMi goal covers the practices for establishing a peer review approach within a project.

A review approach defines how, where, and when review activities should take place, and whether those activities are formal or informal. Establishing a peer review approach is also applicable to Agile projects; however, the review techniques applied and the way reviews are organised are typically very different.

Examples of peer reviews typically performed within Agile projects:

  • having refinement / grooming sessions on the specifications (e.g. user stories) with the team and business stakeholders on a regular basis throughout an iteration
  • daily meetings with other team members to discuss openly the work products, e.g., code or tests, being developed and providing feedback
  • the demonstration of products early and often to customers, at least at the end of an iteration during the iteration review

Poor specifications are often a major reason for project failure. Specification problems can result from the users’ lack of insight into their true needs, absence of a global vision for the system, redundant or contradictory features, and other miscommunications.

In Agile development, user stories are written to capture requirements from the perspective of business representatives. A shared vision is built through frequent informal reviews while the requirements are being established.

These informal review sessions are often referred to as backlog refinement or backlog grooming sessions. During refinement meetings, the business representative and the development team (and stakeholder if available) use review techniques to find the needed level of detail for implementation and to clarify open issues. Almost continuous reviewing by the team on work products being developed is part of the whole team approach.

These reviews are performed with the intent of identifying defects early and also identifying opportunities for improvement. Agile methods and techniques such as pairing also include peer reviews as a core practice to create feedback loops for the teams.

Of course, in case of high complexity or risk the team can opt to apply a semi-formal or formal review technique, e.g. inspection. In these cases, there is a clear rationale for spending effort on a more formal review and applying a more disciplined way of working.

Agile methods rather strive to validate requirements through early and frequent feedback in order to quickly implement valuable product increments. The need for early formal validation is reduced by showing quick results in the form of integrated product increments. If the increment does not completely meet the requirements of the stakeholders, the delta is put back into the product backlog in the form of new requirements and prioritised with all other backlog items.

A demonstration of what actually got built during an iteration is simply a very efficient way to energise a validation-based conversation around something concrete. Nothing provides focus to the conversation like being able to actually see how something works.

The demonstration is an activity that is performed during the iteration review. Features are demonstrated to and discussed with stakeholders, and necessary adaptations are made to the product backlog or release plan to reflect new learnings from the discussion.

3.5.2 SG 2 Perform Peer Reviews

Of course, like any approach, it should not only exist as a defined and agreed-upon approach; it should also be adhered to. The team needs to spend a substantial amount of time on backlog refinement sessions during an iteration, apply informal (and possibly formal) reviews as part of its daily routine, and hold stakeholder demos on a regular basis.

Stakeholder/customer demonstrations are expected at least at the end of each iteration. Entry criteria are typically in the form of “Definition of Ready” criteria within Agile projects.

INVEST [Wake] is an example of a set of criteria that are commonly referred to in Agile projects when reviewing and updating user stories. INVEST is an acronym which encompasses the following concepts that make up a good user story:

  • Independent
  • Negotiable
  • Valuable
  • Estimable
  • Small
  • Testable

Of course, other quality related criteria exist and may be used beneficially as well. It’s important that the tester, being part of the Agile team, participates in review sessions.

This is especially true for sessions in which work products that are used as a basis for testing throughout the iteration are discussed and reviewed, e.g. backlog refinement sessions. It is recommended to have at least one developer and one tester present when refining the backlog to ensure alternate viewpoints of the system are represented.

Typically, the tester’s unique perspective will improve the user story by identifying missing details or non-functional requirements. A tester can especially support the identification and definition of acceptance criteria for a certain user story. A tester contributes by asking business representatives open-ended “what if?” questions about the user story, proposing ways to test the user story, and confirming the acceptance criteria.

Specific Practice 2.3 Analyse Peer Review Data is especially relevant for formal reviews, where much effort is spent. To ensure that effort is spent both efficiently and effectively, peer review data is gathered and communicated to the team in order to learn from and tune the review process.

As stated, formal reviews (e.g. inspection) are less common in Agile projects. Whereas gathering data is an essential part of formal reviews, it is much less common with informal reviews.

As a result, one may argue that detailed peer review data gathering, analysis and communication is less relevant, or even not relevant, in an Agile context. When deciding whether to collect data when doing peer reviews, ask the following questions:

  • Who is going to use this data if it is being collected?
  • How does this data relate to our business objectives?

If no one is going to use the data meaningfully, then don't waste valuable resources collecting it. In conclusion, for most Agile projects, Specific Practice 2.3 Analyse Peer Review Data will be considered not applicable.

Note that some basic valuable review data will still be collected and used in the context of the generic practices, e.g., GP 2.8 Monitor and Control the Process.

Conclusion

Most Agile methods/frameworks are lightweight and simple to understand, but often difficult to master. The most common ones are based on empirical process control theory, which puts learning and adapting at the forefront.

One of the most important ceremonies is of course the Retrospective, where we reflect and make improvements. This aligns with TMMi and other improvement practices, such as Planit's own Agile Process Optimisation review.

Whilst the existing TMMi methodology and assessment can equally be applied to Agile projects and organisations as is, "TMMi in the Agile World" explains how to do this using Agile terminology and shows how to apply TMMi within Agile practices.

If you want help in optimising your organisation and projects, assessing their maturity against industry standards, or establishing a baseline and a roadmap for improvement, then contact us. In the fast pace of today's world, can you afford not to?

References

[ISO25010] ISO/IEC 25010:2011, Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models.

[Wake] Bill Wake, "INVEST in Good Stories, and SMART Tasks", 2003.

Leanne Howard

Business Agility Practice Director
