Amongst the ten strategic technology trends for 2018 listed by Gartner, at least half are based on nondeterministic behaviour, three have the word “intelligent” or “intelligence” in their names, and at least two assume adaptive behaviour. The challenge for the majority of today’s testers will be to provide a reliable assessment despite non-predetermined test outcomes.
This paper lists the technology trends and briefly offers the corresponding testing approach for each. You will find that the new test approaches are often based on expectations different from classical testing assumptions. Testers will have to learn that the same input may not always produce the same output, even when the application works correctly.
Forthcoming technology trends and the test approach
Artificial intelligence foundation
Although the subject of Artificial Intelligence (AI) is much wider, the current focus on AI reduces the scope to specific problems within machine learning. As a consequence, modern AI is creating applications capable of modifying their behaviour in reaction to environmental change and in line with what they have learned. Over time, such systems will progressively respond differently to the same input under the same preconditions.
Testing such applications will require an approach where each new oracle has to be built on the previous one. As the system learns, the internal state continuously changes with each test run, even if the input set does not change. The test outcome will then be judged by observing the trend exhibited by the system, rather than by validating a single expected output.
In essence, each solution will grow from a “prototype” into the final system, and its behaviour will be constantly tested along the way. Another testing technique is to apply metamorphic testing by slightly modifying test cases before each subsequent run.
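As an illustration, a metamorphic relation can replace the fixed expected output. The sketch below is a minimal, hypothetical example: `score` stands in for a learned model, and monotonicity is the assumed relation between related test runs.

```python
# Metamorphic testing sketch: instead of asserting one fixed expected
# output, we assert a relation between the outputs of related runs.
# `score` is a hypothetical stand-in for a learned model.

def score(features):
    """Toy model: a weighted sum with positive weights."""
    weights = [0.4, 0.35, 0.25]
    return sum(f * w for f, w in zip(features, weights))

def check_monotonicity(model, base, delta=0.1):
    """Metamorphic relation: increasing every input feature must not
    decrease the score; no exact expected value is needed."""
    follow_up = [f + delta for f in base]
    return model(follow_up) >= model(base)

assert check_monotonicity(score, [0.2, 0.5, 0.1])
```

The same relation can be re-checked after each training cycle, so the oracle evolves with the system rather than staying fixed.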
Testers will be required to understand the algorithms applied and be proficient in machine learning techniques. The input test data set will be replaced with a training data set and then with simulated “real life” test data. Finally, testing will have to be extended to production.
Intelligent apps and analytics
This category of applications, apart from learning capabilities, will autonomously run business and other tasks. The applications will base decisions on analytics, applying statistical and fuzzy logic methods to resolve marketing, ERP, security and potentially simple legal issues. By their nature, such applications may include chatbots as a part of the solution.
Mathematically speaking, the applications will search for local maxima/minima and provide optimal solutions.
Ideally, people involved in the validation of intelligent applications have an extensive testing background and a broad understanding of data analytics, statistics, usability, and customer service. Testing will include building large environmental and situational models, i.e. building test data sets, monitoring application behaviour, and evaluating the environment post-conditions.
As the applications deliver “the optimal solution”, testing will have to confirm that the set of rules has not been broken, and that “conversational” interfaces deliver unambiguous and “correct” messages. In such instances, the test outcome may become tester dependent.
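One way to confirm that the set of rules has not been broken is to validate every “optimal” decision the application proposes against the hard business rules. The sketch below assumes a hypothetical pricing decision and an illustrative rule set; a real suite would load the rules from the business specification.

```python
# Hypothetical rule check: whatever "optimal" discount the intelligent
# application proposes, testing confirms the hard business rules hold.

RULES = [
    ("discount within 0-30%", lambda d: 0.0 <= d["discount"] <= 0.30),
    ("margin stays positive", lambda d: d["price"] * (1 - d["discount"]) > d["cost"]),
]

def broken_rules(decision):
    """Return the names of every rule the proposed decision violates."""
    return [name for name, rule in RULES if not rule(decision)]

decision = {"price": 100.0, "cost": 60.0, "discount": 0.25}
assert broken_rules(decision) == []
```

Note that the rule check stays deterministic even when the proposed decision itself is not, which keeps this part of the oracle stable.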
Intelligent things
A physical entity that can interpret and react to external stimuli is considered to possess some level of intelligence. In its simplest form, an intelligent thing is a sensor with sufficient computational power to understand its surroundings and generate its own actions.
Intelligent things utilise AI technologies to adapt their behaviour to changes in their environment. This class of objects includes intelligent sensors, autonomous vehicles, drones, and robotic devices across factories, offices, medical facilities and smart homes. Intelligent things require specialised hardware capable of signal processing, environment understanding, and self-control.
In addition to the testing approach applied to machine learning and intelligent applications, testing will expand to mechanical areas across pseudo production environments, and finish with alpha and beta testing in production. Testing must confirm cooperative behaviour, i.e. that the response is appropriate to the applied situational models within the test environment. A significant part of testing should be devoted to the security of the solution and the security of the environment surrounding the “Intelligent Thing”.
Digital twins
A computer model that represents a complex physical object and its behaviour is called a digital twin of the real physical thing. It might be a vehicle, an industrial system, a building, a drone or similar. Using advanced simulation, a number of Internet-connected sensors, augmented and virtual reality, and other relevant data, the digital twin exhibits the behaviour of the real object in real time.
Confirming that such a virtual object entirely resembles and behaves in the same way as the real physical object it represents is done by testers who are subject matter experts in the area of interest.
Testing will utilise a virtual environment that mimics the real world. Testers will, initially, build numerous data feed models to create a simulation of sensor data.
When satisfied with the behaviour of the twin, testing will replace the models with “real” sensor data. In this way, the test environment will become the production environment.
Testing will stop when it confirms that there is no behavioural difference between the “twin” in the virtual environment and the behaviour of the physical object in the “real” production environment.
For example, during an F1 race, car sensor data are sent back to the digital twin in the company cloud. The digital twin then uses the data to run a selected part of the race in an environment that simulates the race track and environmental conditions.
The twin is instructed to apply different actions. The actions that provide the best outcome are relayed back to the racing team. The driver and the team, then, follow the actions of the digital twin to achieve the desired outcome.
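The comparison step described above can be sketched as a tolerance check between the twin's predicted telemetry and recorded sensor data. The data, units and tolerance below are illustrative, not taken from a real telemetry system.

```python
# Digital twin sketch: the test oracle is the physical object itself.
# The twin passes if every predicted reading agrees with the recorded
# sensor reading within a relative tolerance.

def twin_matches_reality(predicted, recorded, tolerance=0.05):
    """Return True if every relative deviation stays within tolerance."""
    return all(
        abs(p - r) <= tolerance * max(abs(r), 1e-9)
        for p, r in zip(predicted, recorded)
    )

predicted_speed = [212.0, 215.5, 210.1]   # twin output, km/h (illustrative)
recorded_speed  = [211.4, 216.0, 209.8]   # real car sensors, km/h (illustrative)
assert twin_matches_reality(predicted_speed, recorded_speed)
```

Testing stops, as described above, once this check keeps passing against live production sensor data rather than modelled feeds.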
Cloud to the edge
Cloud to the edge is an approach where data processing is placed in relatively close proximity to the company infrastructure that generates the data, while still utilising the scalability provided by cloud services. Such an approach reduces bandwidth and latency issues. Processing performed close to the data source is more efficient, especially when the data come from Internet of Things (IoT) devices.
Service providers must allow users to interact “instantaneously” with Mixed Reality or Virtual Reality, whether stationary or moving. Processing thus becomes distributed, always running at the location closest to the user and shifting to the next closest location as the user moves.
Cars, drones or other devices will have to be wirelessly connected to the cloud to utilise “edge computing”. As a consequence, wireless technologies like 4G LTE or 5G will become a part of the solution. All this will require test environment designs to include the physical distribution of access points.
Apart from validating the functionality of the application or the solution, testing will have to concentrate on measuring integrated performance and latency. In addition, testing will need to identify the impact the solution has on hardware, scalability and capacity. Testing will include other details, such as the battery discharge rate for the targeted devices.
Testers working for service providers will have to validate wireless coverage and latency for all hot-spots that enable connectivity to the targeted datacentre. They will be required, while moving across the test environment, to confirm that there is no performance degradation or latency increase as the connectivity is switching from one “edge processing centre” to another “edge processing centre”.
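A latency probe of this kind can be sketched as a timed round trip per edge location along the tester's route. The function `probe_edge` below is a stand-in for a real network request, and the 20 ms budget is an assumed figure, not a standard.

```python
import time

# Mobile latency probe sketch: each round trip is timed, and the test
# fails if latency exceeds the budget anywhere along the route (i.e.
# during hand-over between edge processing centres).

LATENCY_BUDGET_MS = 20.0  # assumed budget for this sketch

def probe_edge(edge_name):
    """Stand-in for a request/response round trip to one edge node."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated network round trip
    return (time.perf_counter() - start) * 1000.0  # milliseconds

def validate_handover(route):
    """Probe every edge along the tester's route; collect violations."""
    return [edge for edge in route if probe_edge(edge) > LATENCY_BUDGET_MS]

assert validate_handover(["edge-a", "edge-b", "edge-c"]) == []
```

In a real environment the probe would run continuously while the tester physically moves, so that the hand-over moment itself is covered.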
Conversational platforms
The core technology behind such platforms is natural language processing. The platform must be able to perform semantic analysis and exhibit some level of comprehension.
Platforms offer a number of services with chatbots sitting at the front end and performing conversation using buttons, cards, text or spoken words. Apart from utilising chatbots for communication with users, companies build bots to study user behaviour. Some examples include the Microsoft Bot Framework or services like Alexa and Siri.
Testing must create and run various usage scenarios to validate the end-to-end test behaviour. It often starts with testing the user on-boarding, targeting those with different levels of conversational platform usage experience.
It proceeds with validation that the platform, i.e. the chatbot, is able to interpret user intent and generate the appropriate response. As the input is free-form, the platform must be able to handle any type of input, including curse words. If chatbots are to be multilingual, localisation testing will also be required.
A set of test scenarios will target platform error messages to validate if those messages are understandable and polite. Other test scenarios will require the chatbot to jump back and modify previous topics or completed requests.
The platform must be able to process repeated requests as per the user’s expectation. Throughout testing, the speed at which the platform returns replies must be monitored to ensure the conversation runs at a constant pace.
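An end-to-end scenario of this kind can be scripted as utterance/expectation pairs. `FakeBot` below is a hypothetical stand-in for the platform under test; a real test would call the platform's API instead.

```python
# Conversational scenario sketch: drive the bot with scripted
# utterances and check each reply against an expected substring,
# including a repeated request and a polite fallback.

class FakeBot:
    """Hypothetical stand-in for the conversational platform."""
    def reply(self, utterance):
        text = utterance.lower()
        if "umbrella" in text:
            return "We have umbrellas in stock. Would you like one?"
        return "Sorry, I did not understand that. Could you rephrase?"

def run_scenario(bot, script):
    """script: list of (utterance, expected substring); return failures."""
    failures = []
    for utterance, expected in script:
        answer = bot.reply(utterance)
        if expected.lower() not in answer.lower():
            failures.append((utterance, answer))
    return failures

script = [
    ("Do you sell umbrellas?", "umbrella"),
    ("Do you sell umbrellas?", "umbrella"),   # repeated request, same intent
    ("asdf qwerty", "rephrase"),              # graceful, polite fallback
]
assert run_scenario(FakeBot(), script) == []
```

Reply timing can be captured in the same loop to monitor the pace of the conversation, as noted above.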
Immersive experience
Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) technologies expand traditional computer interfaces to individual environments, either by creating new surroundings, or by overlaying the real and virtual worlds. The technology already creates experiences beyond visual space (3D) by addressing tactile (haptic) and other senses. Current applications of the technology include virtual teachers, training simulators, medical (and surgical) applications, advertising, etc.
From the testing perspective, functional testing, performance, compatibility and accessibility will be the major testing areas, with an approach similar to the one used when testing computer games. Testing will include hardware, software and their integration, and will therefore require a specialised environment, such as a dedicated server, VR headset, or a VR simulator room.
Usability and accessibility testing will become mandatory for these kinds of applications. It will require the same tests to be rerun by a number of “testers”, potentially of various age and abilities, to validate intuitiveness and simplicity of the application.
Validating potential ergonomic and health issues may become a standard, confirming the application, for example, does not cause “simulator (motion) sickness” or anything similar. Considering VR may create an altered mental state, such testing might become a regulatory requirement.
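One concrete ergonomic check is frame pacing: long or dropped frames are a known trigger of simulator sickness, so a test can assert that recorded frame times stay under a budget. The 11.1 ms budget below corresponds to a commonly used 90 fps VR target, and the trace is illustrative.

```python
# Ergonomic check sketch: flag frames that exceed the frame-time
# budget (11.1 ms ~ 90 fps, an assumed target for this sketch).

FRAME_BUDGET_MS = 11.1

def long_frames(frame_times_ms):
    """Return every frame time that breaks the budget."""
    return [t for t in frame_times_ms if t > FRAME_BUDGET_MS]

trace = [10.9, 11.0, 10.7, 11.05, 10.8]  # illustrative frame-time trace
assert long_frames(trace) == []
```

If such checks become a regulatory requirement, the recorded traces themselves would serve as the compliance evidence.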
Blockchain
A contract, or a contractual relationship, is an agreement between parties regarding the exchange of goods, assets and services. Often, the parties require an intermediary or a central authority to complete transactions between them.
Digital contracts make use of blockchain technology to get around trusted intermediaries, as the terms of the contract relationship are built into the code and enforced by the code. These types of contracts are known as smart contracts. They exist across a distributed and decentralised blockchain network which allows anonymous parties to execute transparent and traceable transactions, protected from deletion and revision.
Modern blockchain development tools are specifically built for creating smart contracts. Most of them can simulate the related blockchain network and, therefore, can be used for testing without affecting production.
Testing then must validate the transmission and processing of smart contracts, i.e. validate an individual agreement for each of the transactions and confirm that the distributed ledgers are updated consistently across a number of simulated nodes. Building a functional test suite requires a good understanding of the business process of each of the involved parties.
In contrast to classical applications, a blockchain network, once in production, must remain in place indefinitely, so rolling back an update is not possible. Installing a new version of the contract requires the data of the previous contract version to be completely retained, or migrated without any loss of information about historical code execution and without a change in transaction order. This is what testing must confirm, so building a highly automated test suite becomes a must.
In addition, testing must verify the desired performance and security elements, and may have to execute mandatory regulatory tests.
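The retention requirement above can be sketched as a hash comparison over the ordered transaction history before and after a contract upgrade. The data structures below are illustrative, not a real blockchain client.

```python
import hashlib

# Migration check sketch: after a contract upgrade, the historical
# transactions must be fully retained and their order unchanged. A
# single digest over the ordered history makes the comparison cheap.

def history_digest(transactions):
    """Hash the ordered transaction history into one digest."""
    h = hashlib.sha256()
    for tx in transactions:
        h.update(repr(tx).encode())
    return h.hexdigest()

ledger_v1 = [("alice", "bob", 10), ("bob", "carol", 4)]  # illustrative
before = history_digest(ledger_v1)

# Simulated migration to contract v2: history carried over as-is.
ledger_v2 = list(ledger_v1)
after = history_digest(ledger_v2)

assert before == after  # no loss, no reordering of historical executions
```

Because the digest is order-sensitive, any reordering of historical transactions during migration is detected as well as any loss.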
Event-Driven Business Solutions
Event-Driven Business Solutions are a special type of service-oriented architecture in which a digital business continuously monitors event streams to assess and respond to business opportunities quickly. The solution feeds itself from hundreds of sensors embedded in IoT devices, reading news and analysing e-mail messages.
In contrast to traditional batch-oriented data integration and subsequent business intelligence reporting, an event-driven solution reacts almost immediately, either by providing a report or by recommending a business decision.
Businesses, within a smart city, are permanently on standby with their own event-driven infrastructure to react in real-time to traffic accidents, weather related emergencies, and even to sports events. For example, if it suddenly starts raining, a retailer may adjust the price of umbrellas to increase sales and make a profit.
Testers must be able to, in line with the business type and practice, simulate the appropriate “business events” and validate the “business action” proposed or triggered by the “event-driven solution”. Such an approach requires experience with pattern recognition and context-aware computing, combined with a deep understanding of the business itself.
Testing should be performed end-to-end across the whole event-driven IT infrastructure and, therefore, must spread to production where it may also assess responsiveness and accuracy. Testing is performed much faster without waiting for batch jobs to complete, as the relevant events are propagated and processed in near real-time. Testing efficacy will be delivered through utilisation of Service Virtualisation and test automation.
Senior management should be equally involved in defining the relevant business conditions during the learning phase of the solution’s intelligent decision-making module and, specifically, in validating the solution’s responses to the test scenarios.
Continuous adaptive risk and trust assessment (CARTA)
The CARTA model calls for security to be adaptive everywhere. This means utilising behavioural analytics and machine learning to identify anomalies when making security decisions.
The risk and the corresponding response are judged on a succession of actions rather than on a single occurrence. The aim is to identify security risks as they arise and act promptly. Continuous authentication becomes more important, with responses generated automatically.
This is where DevSecOps plays a major part. It involves all staff, including the business, in defining and helping to build the appropriate security actions.
With every deployment, including one to the test environment, the DevSecOps process mandates deploying the security toolchain and security configurations to that environment. A set of security tests is then run to ensure security is built in.
The security testing objective becomes confirming that continuous monitoring and continuous assessment are able to identify and suppress actions that represent a security risk. The validation function (i.e. testing) has to create a number of attack models and execute them as scenarios.
The test cases, created as variations of each other, need to be executed in a particular order to provide material for the adaptive risk framework to learn, allowing, at the same time, for the framework to validate the test scenarios as potential security-threatening events.
After the test run, the system has to be reset, i.e. re-installed, and the tests run again, this time in a different order, to assess the efficacy of the adaptive solution. The test approach is similar to the one described for validating a machine learning application.
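The reorder-and-compare step can be sketched as follows. `detector` is an illustrative stand-in that flags any burst of failed logins; the point of the test is that, after a reset, the verdict should not depend on the order in which the scripted attack events are replayed.

```python
import random

# CARTA test sketch: replay the same attack scenario in a different
# order against a freshly reset system and compare the verdicts.

def detector(events):
    """Stand-in adaptive detector: flag sources with 3+ failed logins."""
    failures = {}
    flagged = set()
    for src, outcome in events:
        if outcome == "fail":
            failures[src] = failures.get(src, 0) + 1
            if failures[src] >= 3:
                flagged.add(src)
    return flagged

scenario = [("10.0.0.9", "fail")] * 3 + [("10.0.0.2", "ok")]
reordered = list(scenario)
random.shuffle(reordered)

# After a reset, the verdict here should not depend on event order.
assert detector(scenario) == detector(reordered)
```

In a real CARTA assessment each replay would target a re-installed system, so that what the framework learned in one run does not leak into the next.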
The traditional approach, based on the assumption that the test oracle always provides the same outcome for the same input, is not always appropriate or feasible in the case of upcoming technologies. The validation team will be required to constantly modify the expected result in accordance with what the machine has learned during testing.
A new kind of test strategist and test manager will be required, those who understand the “soft” engineering side and the new business models found within digitally-enabled ecosystems. This will bring testers much closer to the business process.
Gartner Top 10 Strategic Technology Trends for 2018, (retrieved 14 Nov 2017)