
INSIGHTS / Articles

Improving Quality Through Test Automation and Containerisation

 10 Aug 2020 

One of the key goals of the DevOps movement was to have developers take on the operations role of supporting their code in production. This meant that if the system crashed at 2:00 AM, the developer who wrote the code was the one getting woken up.

Unsurprisingly, this made them think twice before pushing any new code to production. They wanted to be confident it wasn’t going to fail during the night, which meant they started to care even more about quality.

In effect, DevOps was improving results by aligning these teams and drawing attention to the natural triangle of concerns between development, operations and quality, a combination I like to call “QualityDevOps”. The better the quality of the code or system released, the easier it is to support. Conversely, the worse the quality of the product developed, the more difficult it is to support.

By allowing us to ‘shift left’ or test earlier, DevOps enables us to reduce risks, decrease costs, increase agility and improve outcomes by providing faster detection of defects during development and faster feedback in production.

With more thorough testing throughout the development stage, less testing is needed after development. This way, we can reduce the time taken to release new code, which means more frequent releases and, therefore, more business opportunities.

In this article, we will look at four ways good DevOps practices can enable quality.

1. Establish your build pipeline

The build pipeline is your starting point. Whatever tool you use for your build pipeline, having your test automation run automatically at your defined trigger points is the bread and butter of using DevOps to help your testing.

This includes all layers of testing, from unit up to UI tests. What’s important is how and where you trigger your tests.

First off, in development environments, whenever new code is pushed, the unit tests are run, along with possibly a smoke test of the UI if the environment is sufficiently featured, functional and stable.

Then in your staging or pre-prod environment, the same applies, with unit and smoke tests on every code push. A full suite of automation tests is either run on the code push or perhaps overnight depending on size.

The full suite should also be triggered when you push a new version of the test automation code. A more advanced approach is to run only the subset of regression tests needed, based on the impact areas of the development code changes, using tags.
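As a sketch, the trigger points described above might look like this in a GitHub Actions-style configuration. The job names, commands, scripts and tag names here are illustrative assumptions, not a prescription for any particular tool:

```yaml
# Illustrative pipeline: unit tests on every push, then a UI smoke test,
# then a tag-filtered regression suite against staging.
name: build-and-test
on: [push]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew test                  # unit layer on every code push

  smoke-tests:
    needs: unit-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./run-ui-tests.sh --tag smoke   # hypothetical wrapper script

  regression-staging:
    if: github.ref == 'refs/heads/main'
    needs: smoke-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Run only the tests tagged with the impacted areas, e.g. payments,
      # rather than the whole regression suite on every push.
      - run: ./run-ui-tests.sh --tag payments
```

The key design point is the `needs` chaining: cheaper, faster layers gate the slower ones, so a failing unit test never burns time in the UI suite.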

When the code is in production, a smoke test suite should be run, as the final point of integration and configuration testing. It’s likely the test suite is adapted here, since you may not have the test systems available for items such as payment gateways, so it might be hard to complete the full test suite without impacting real customer data or company metrics. It can definitely be done - it just takes some thought.
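One simple way to adapt the suite per environment is to filter tests by tag, so the production smoke run skips anything that would touch a real payment gateway or customer data. A minimal sketch in Python; the tag names and environment labels are assumptions for illustration:

```python
# Select which tagged tests run in a given environment, so the production
# smoke suite skips anything that would hit real payment gateways or
# mutate customer data.
EXCLUDED_TAGS = {
    "dev": set(),                                # run everything in dev
    "staging": set(),                            # full suite in staging too
    "production": {"payments", "destructive"},   # keep production safe
}

def select_tests(environment, tests):
    """tests: list of (name, tags) pairs; returns the names safe to run."""
    excluded = EXCLUDED_TAGS.get(environment, set())
    return [name for name, tags in tests if not (set(tags) & excluded)]

suite = [
    ("test_login", ["smoke"]),
    ("test_checkout", ["smoke", "payments"]),
    ("test_delete_account", ["destructive"]),
]

print(select_tests("production", suite))  # → ['test_login']
```

Most test runners support this natively (for example, pytest markers or JUnit tags); the point is that the suite adapts by configuration rather than by maintaining separate test code per environment.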

There are some benefits to gain from implementing these practices, but there are still some gaps and opportunities to improve further. What if we want to run a full set of tests against our development branch? For that, we need a fully functional, stable environment.

What if we want to run our UI tests before the developer even commits their code? To do this, the developer would need to have every required service running on their machine and have any necessary test data setup.

If their machine is powerful enough, it might work, but then you also have the ‘it worked on my machine’ problem. Each developer has a different machine, with different amounts of memory, CPU power and background tasks.

If something breaks, or even passes, on their machine, how do we know the same will happen in our shared environments or in production? And what about the time it takes to run the tests, during which the developer is blocked from doing anything else?

It’s simply not practical to test that way. This is where DevOps can help.

2. Containerise your environment

We also need to be testing effectively to ensure that the testing produces useful and meaningful results.

Have you ever found a defect in a pre-prod environment, or perhaps while testing on your local machine, only to determine that it’s not a real issue, because a service wasn’t running, configuration was missing, or the amount of memory available wasn’t enough? The real issue is that your test environment wasn’t set up correctly.

That is not effective testing and the effort is wasted. What I suggest is to create a containerised environment to run your tests against so you have a consistent and reproducible place to run your tests.

This is a big topic, and there are many variables to consider based on your situation. These include which languages your development uses, whether development is done by an external vendor, what services are needed, what system resources are required, and so on.

The high-level approach is to start by gathering a list of everything we need to create a test environment on demand, ready to run tests whenever we need it, whether that is on our local machine before committing any code, or when creating our shared test environments.

With the development team, you gather a list of everything that needs to be installed on a blank server to create an accurate test environment. This includes the applications and programs to install, the services to set up and run, any database content to pre-load, configuration files to set up, development packages to install, the minimum hardware specs of the machine, and so on.

When we have that list, we can use tools like Docker, AWS, Chef, Puppet or Ansible to create our environment by writing a script of installation instructions for each requirement. For example, we specify that we want a Windows 10 machine with 64GB of RAM, and on that system, we want to install Java. We then want to upload and build our Java development packages that create our website. We also install MySQL and pre-load some test data, and then we’re done.
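On Linux, a recipe like that could be sketched as a Dockerfile. The base image, package names, paths and build commands below are illustrative assumptions, and note that hardware specs such as RAM are not set in the Dockerfile itself but on the host or orchestrator that runs the container:

```dockerfile
# Illustrative environment recipe: Java toolchain, our web application,
# and test data ready to load. Versions and paths are assumptions.
FROM eclipse-temurin:17-jdk

# Client tools used to pre-load test data into a linked MySQL container.
RUN apt-get update && apt-get install -y --no-install-recommends \
    mysql-client && rm -rf /var/lib/apt/lists/*

# Build the application from source so the image always matches the code.
COPY . /app
WORKDIR /app
RUN ./gradlew build

# Seed data, applied at startup against the database container.
COPY test-data.sql /docker-entrypoint/test-data.sql

EXPOSE 8080
CMD ["java", "-jar", "build/libs/app.jar"]
```

In practice the database would usually run as a second container alongside this one (for example via Docker Compose), keeping each image focused on a single service.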

3. Use your container

There are two ways you might use this environment created inside a container. You can run tests from your local machine against the environment by opening ports between you and the environment. Or, if you don’t want to have the tests running from your machine and preventing you from doing some other work, you can also upload the tests into the container and run them from there.

At this point you will have the test environment up and running inside a Docker instance on your machine, or perhaps on an EC2 instance on AWS. You can now point your tests to run against that environment, knowing it is a fully functional and stable environment, because no one else is using it and it’s freshly set up with exactly the required state for your tests.
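Pointing the tests at whichever environment is up can be as simple as reading a base URL from configuration, so the same suite runs unchanged against a local container, an EC2 instance, or a shared environment. A minimal sketch; the variable name and default port are assumptions:

```python
import os

# Tests resolve their target from an environment variable, so the same
# suite can run against a local Docker container, an EC2 instance, or a
# shared environment without any code changes.
def target_base_url():
    """Return the base URL the tests should run against."""
    return os.environ.get("TEST_BASE_URL", "http://localhost:8080")

# e.g.  TEST_BASE_URL=http://10.0.0.5:8080 python -m pytest
print(target_base_url())
```

Keeping the environment address out of the test code is what makes the torn-down-and-recreated container workflow painless: nothing in the suite hard-codes where the system under test lives.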

This is effective testing. It’s also efficient, because as soon as you are finished testing, you can tear down that environment, and you are no longer paying for it until next time you need it.

Because your environment exists entirely as the output of a set of instructions, you can use those instructions to create any of your environments, even production if you wanted to; you just need to cater for keeping the existing state of your data instead of creating a new set each time. You then have complete confidence that, when testing from your development machine before committing any code, you are not going to run into environmental issues.

This environment installation script needs to be kept up to date, so I recommend including it in your development source code and ensuring it is updated with any relevant changes as part of your code reviews. One of the acceptance criteria might then be to check whether the code changes require an update to your environment build script.

We want to be able to test development code before a developer has even committed it. They can do this by running their tests against their code in their own private environment, created in a container using Docker or AWS.

This environment can be created on-demand, with all required systems and services installed, all test data set up, with known and consistent specifications, that anyone can use. Then it can be torn down when we are finished with it.

That is a great place to be in, and it’s one of the most powerful benefits that can come from DevOps to assist testing. It’s also one that many companies are yet to jump on board with.

4. Virtualise it

An area that might not technically fall under the DevOps umbrella, but brings a related benefit to testing, is service virtualisation. It enables tests to avoid running through services that are slow, expensive, or outside our testing scope, when doing so doesn’t benefit the testing.

For example, if your system uses PayPal to handle all payments, then there is no need to write a test that verifies the payment was successful - that’s PayPal’s job. All you need to do is make sure you send the right details to PayPal, and that your system can handle whatever information comes back from PayPal.

Do you want to set up scenarios to generate all the possible response types from PayPal to test if your system can handle it? Create a virtual service instead of PayPal that can mimic sending you each response option and run your tests that way.
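As a sketch of the idea, a virtual service can be a tiny HTTP stub that returns whichever canned response a test asks for. The endpoint and response shapes below are invented for illustration; they are not PayPal’s real API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned responses mimicking the payment provider's possible outcomes.
CANNED = {
    "success": {"status": "COMPLETED"},
    "declined": {"status": "DECLINED", "reason": "INSUFFICIENT_FUNDS"},
    "timeout": {"status": "PENDING"},
}

class StubPaymentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The test picks a scenario via the path, e.g. GET /declined
        scenario = self.path.strip("/")
        body = json.dumps(CANNED.get(scenario, {"status": "UNKNOWN"}))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 lets the OS pick a free port; run the stub on a daemon thread.
server = HTTPServer(("127.0.0.1", 0), StubPaymentHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/declined"
response = json.loads(urlopen(url).read())
print(response["reason"])  # → INSUFFICIENT_FUNDS
```

Dedicated tools such as WireMock or MockServer provide the same capability with recording, matching and fault injection built in, but the principle is exactly this: your system talks to a stand-in you control, so every response type is one test away.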

Similarly, maybe your system uses a data lake, an ETL process, or some other processing system, where your test focus does not include what happens in that system, but it is a key part of the end-to-end flow. Just as we might use stubs and mocks when developing unit tests, service virtualisation can help with our integration and UI tests.

In summary

DevOps is a practice designed to embed quality, and these are some of the ways this goal can be achieved through quality engineering and continuous testing. Outcomes and benefits will depend on how you establish your build pipeline and environment, your choice of container, and your decision to virtualise.

In my second article, I will explore the reverse. Namely, how quality engineering can enable DevOps by thoroughly testing practices, tools and scripts.
