The bank selected Planit to deliver quality assurance for its new platform based on our expert knowledge and experience in building performance quality into the lifecycle of products and large programs of work. The reputation of our quality engineers, earned from successfully delivering projects for other customers, also enabled us to stand out from the competition.
Most competitors’ solutions test performance late in the release cycle, using tools that are not developer friendly. The bank preferred Planit’s approach because it suited the modular, component-driven nature of its technology stack, capturing performance feedback on components early and continuously. Our approach was also built on tooling and frameworks that are developer and change friendly, as well as platform agnostic.
To deliver the required level of quality for the new platform, we understood the importance of embedding performance into the software development lifecycle: the tools and frameworks would need to work seamlessly with the technology stack and the Cloud platform, where services are deployed as containers.
To deliver upon the goals set for the new platform, a team of highly skilled technical engineers, able to learn and adapt quickly, was carefully selected from Planit. They went on to:
- create a process to identify the performance risks associated with each new feature or change,
- define an approach to map performance risks to the testing required at component and integrated levels, with integration boundaries clearly defined,
- conduct performance testing at a component level,
- capture early performance feedback by embedding performance tests into the automated continuous integration (CI) process (a sketch follows this list),
- and build a service virtualization solution to support component and compartmentalised integrated testing.
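The case study does not name the tooling used for these CI-embedded tests, so the following is only a minimal sketch of the idea, assuming k6 as the load tool; the endpoint, load profile, and threshold values are hypothetical. The key property is that breaching a threshold makes the test exit non-zero, so the CI pipeline can fail a build on a performance regression just as it would on a functional one.

```typescript
// Sketch: component-level performance test with CI pass/fail thresholds.
// Endpoint URL, load profile, and threshold values are hypothetical.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // modest load: this targets one component, not the integrated system
  duration: '2m',
  thresholds: {
    // Breaching either threshold makes k6 exit non-zero, failing the CI stage.
    http_req_duration: ['p(95)<300'], // 95th percentile response time under 300 ms
    http_req_failed: ['rate<0.01'],   // error rate under 1%
  },
};

export default function () {
  const res = http.get('https://quotes-component.internal.example/v1/quotes');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

A CI stage would then run something like `k6 run quote-component-test.js`; k6 natively executes JavaScript, so a TypeScript script like this would be bundled first or run via k6’s experimental TypeScript support.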
This approach was tailored to the bank’s team structures, and specific processes were developed for the new platform. The key focus was on pushing responsibility for capturing performance feedback early and regularly onto the component teams.
We delivered component test coverage across three focus areas that directly impact customers. We also virtualized 26 services, including the asynchronous streaming/messaging layer. The tooling and frameworks were built to carry low maintenance overhead and be easily adopted by existing and new developers.
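The case study does not describe the virtualization framework internals, but the underlying pattern is to stand in for a real downstream dependency with a lightweight stub that serves canned responses at realistic latencies. A minimal sketch in TypeScript on Node.js, with hypothetical routes, payloads, and latency figures:

```typescript
// Sketch of a virtual service standing in for a downstream account-lookup API.
// Routes, payloads, and the latency range are hypothetical.
import http from 'node:http';

const cannedAccount = JSON.stringify({ accountId: 'A-1001', status: 'ACTIVE' });

// Uniform 40-100 ms delay so callers observe realistic response times.
const latencyMs = () => 40 + Math.random() * 60;

const server = http.createServer((req, res) => {
  setTimeout(() => {
    if (req.method === 'GET' && req.url?.startsWith('/accounts/')) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(cannedAccount);
    } else {
      res.writeHead(404);
      res.end();
    }
  }, latencyMs());
});

server.listen(8080, () => console.log('virtual account service listening on :8080'));
```

The same idea extends to the asynchronous streaming/messaging layer: rather than serving canned HTTP responses, the virtual component publishes canned events onto the relevant topics at a controlled rate.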
Integrated performance testing was built on top of this for added peace of mind, with the expectation of surfacing only performance issues arising from the integration between components under load. Integrated performance testing capability was also set up for six key customer journeys, using the service virtualization framework to decouple testing from legacy systems.
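To illustrate how virtualization decouples an integrated journey from legacy systems, here is a hedged sketch, again assuming k6; the journey steps and URLs are hypothetical. The script reads its downstream base URL from the environment, so the same test can be pointed at either the real services or their virtual stand-ins.

```typescript
// Sketch of an integrated customer-journey test. TARGET_BASE_URL selects
// real or virtualized downstream services; the journey itself is hypothetical.
import http from 'k6/http';
import { group, check, sleep } from 'k6';

declare const __ENV: Record<string, string>; // k6 exposes environment variables via __ENV

const BASE = __ENV.TARGET_BASE_URL || 'http://localhost:8080'; // default to virtual services

export const options = { vus: 50, duration: '10m' };

export default function () {
  group('look up account', () => {
    const res = http.get(`${BASE}/accounts/A-1001`);
    check(res, { 'account found': (r) => r.status === 200 });
  });

  group('create payment', () => {
    const res = http.post(
      `${BASE}/payments`,
      JSON.stringify({ from: 'A-1001', amount: 25.0 }),
      { headers: { 'Content-Type': 'application/json' } },
    );
    check(res, { 'payment accepted': (r) => r.status === 200 || r.status === 201 });
  });

  sleep(1);
}
```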
Since the technology stack used for the platform was new, existing toolsets did not support it out of the box. The large number of moving pieces, the data dependencies across components and legacy systems, and the event-driven nature of the architecture also created unprecedented challenges in measuring performance.
The use of containerised deployment in Kubernetes clusters on Google Cloud, including for the performance testing workloads, came with its own learning curve. On top of this, the team had to contend with environments in which different components had different code release frequencies.
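Running the test workloads in-cluster typically means packaging them as Kubernetes Jobs. The case study does not show its manifests, so the following is a sketch under stated assumptions (the namespace, image, and script path are all hypothetical): a small script that emits a Job manifest as JSON, which kubectl accepts directly.

```typescript
// Sketch: emit a Kubernetes Job manifest (JSON) that runs a k6 test in-cluster.
// Usage: npx ts-node perf-job.ts | kubectl apply -f -
// Namespace, image tag, and script path are hypothetical.
const perfJob = {
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: { name: 'perf-test-quotes', namespace: 'perf' },
  spec: {
    backoffLimit: 0, // a threshold-breaching run should fail, not be retried
    template: {
      spec: {
        restartPolicy: 'Never',
        containers: [
          {
            name: 'k6',
            image: 'grafana/k6:latest',
            args: ['run', '/scripts/quote-component-test.js'],
            volumeMounts: [{ name: 'scripts', mountPath: '/scripts' }],
          },
        ],
        // Test scripts would be mounted from a ConfigMap created separately.
        volumes: [{ name: 'scripts', configMap: { name: 'perf-test-scripts' } }],
      },
    },
  },
};

console.log(JSON.stringify(perfJob, null, 2));
```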
These technical hurdles risked derailing the delivery of the new platform if not properly managed. Our quality engineers continually assessed the risks they uncovered and identified the work required to mitigate them, while regularly customising our solutions to closely measure and accurately visualise performance.