INSIGHTS / Articles

How to Manage Application Performance: Q&A

 24 Oct 2016 

Following Planit's How to Manage Application Performance Ask the Expert webinar with Joel Deutscher and Will Erskine on 20 October 2016, attendees raised a number of performance-related questions for our experts. Here are Will and Joel's answers to all the audience questions from the session.

Q: How do you eliminate the environment factors (i.e. they are normally different between Dev, Test, UAT)?

A: There will always be differences between test environments, and more often than not there will be significant differences between Production and the pre-production test environment most commonly used for performance testing. The key to successful testing is to clearly define your scope and objectives. If the scope is fully end-to-end, then a full-scale, production-like environment will be required.

However, in most instances we can gain value from running in smaller environments. This will exercise the code and find any performance degradation from previous code versions. By combining these smaller tests with conventional full-scale performance tests and application performance monitoring, you can avoid reliance on a specific environment. Service virtualisation can also remove dependence on certain systems and eliminate the need to test against under-scaled components of an application.
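As a rough illustration of the service virtualisation idea, the sketch below (our own example, not from the webinar; the endpoint, payload and latency are assumptions) stands in for a slow downstream dependency with a canned response and configurable latency, so a performance test can run without the real system being available.

```python
# Minimal service-virtualisation sketch: a stub that mimics a slow downstream
# dependency with a canned JSON response and configurable latency.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SIMULATED_LATENCY_SECONDS = 0.25  # assumed response time of the real system
CANNED_RESPONSE = {"status": "OK", "balance": 1042.17}  # illustrative payload

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_SECONDS)  # emulate the dependency's latency
        body = json.dumps(CANNED_RESPONSE).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the real service.
    HTTPServer(("0.0.0.0", 8080), StubHandler).serve_forever()
```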

Q: My company produces a couple of mobile apps and I'm looking at putting together some performance testing capability. What are some of the challenges faced with testing for different personas, network capabilities, et cetera?

A: It is all about understanding real use cases and how your users will use your application. If I'm launching a sports app, for example, I need to make sure the app will function on a highly contended mobile network, because people may use my app immediately before, during or after a major sporting event, when the local mobile network experiences increased contention. If I create a navigation app, it is important that it reacts well to the changes in network conditions that are likely to happen as my customer travels. It is also important to note that network performance may have an impact on back-end performance, not just for that single user.

Q: Can performance testing be integrated with CI tools?

A: Yes, absolutely. We have experience integrating all major tools with CI solutions. We also believe that a dashboard component adds enormous value when running performance tests on every build. A recent client integrated JMeter with Bamboo to run performance tests every evening and display the results in an executive dashboard for easy monitoring of potential performance issues.
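As a sketch of what that kind of integration can look like, the script below (an illustrative example, not the client's actual setup; file names and the threshold are assumptions) runs a JMeter test plan in non-GUI mode as a CI build step and fails the build if the error rate in the results file exceeds a threshold.

```python
# Sketch of a CI build step: run a JMeter test plan in non-GUI mode and fail
# the build if the error rate exceeds a threshold. Paths and the threshold are
# illustrative; assumes JMeter writes CSV-format results with a header row
# (the default in recent versions).
import csv
import subprocess
import sys

TEST_PLAN = "perf/nightly.jmx"   # assumed test plan kept in source control
RESULTS_FILE = "results.jtl"     # JMeter results log
MAX_ERROR_RATE = 0.01            # fail the build above 1% errors (example threshold)

# Standard JMeter CLI flags: -n non-GUI mode, -t test plan, -l results log
subprocess.run(["jmeter", "-n", "-t", TEST_PLAN, "-l", RESULTS_FILE], check=True)

with open(RESULTS_FILE, newline="") as f:
    rows = list(csv.DictReader(f))

errors = sum(1 for row in rows if row["success"].lower() != "true")
error_rate = errors / len(rows) if rows else 0.0
print(f"{len(rows)} samples, error rate {error_rate:.2%}")

# A non-zero exit code marks the CI build (e.g. a Bamboo plan) as failed.
sys.exit(1 if error_rate > MAX_ERROR_RATE else 0)
```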

Q: I'm also looking at device farms (e.g. TestObject) to test our app on. What tools integrate with device farms to allow me to run tests across a range of device and operating system combinations?

A: Most of the major performance test providers have a solution for this, including HP LoadRunner, Silk Performer and NeoLoad. It would be worth checking each tool individually to confirm the exact requirements. In terms of open source, most of the device-specific tools focus on test automation rather than performance; good examples include Calabash and Appium.

Q: There is a great deal of change in the performance test tools being used in the market these days. People are moving away from traditional HP tools and have started using open source tools found on GitHub. What do you think about this change?

A: The change is good. Introducing more development rigour and source control into the performance test process leads to more reuse and therefore less ongoing cost. Increasing competition among performance test tools helps drive the market forward, allowing more innovative tooling and less prohibitive licence costs.

Q: Performance testing clearly depends on the load being tested, and this load is usually an input to the test requirements. As an organisation, what information or business analytics data should be captured in order to provide correct requirements for performance testing and to produce more realistic tests and results?

A: It depends on the organisation and the project under test. For a greenfield project we may not be able to leverage existing usage patterns, as there may be no current usage models; in that instance, we may have to rely on business predictions for usage. Where we are testing an upgrade, we should be using analytics tools such as Google Analytics or Splunk to determine usage patterns. We can also look at the database or server logs to understand what transactions are happening.
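Where server logs are available, even a simple analysis can yield a usage profile for the load model. The sketch below is our own illustration; it assumes a standard combined access log format and counts requests per endpoint and per hour to identify the busiest transactions and the peak period.

```python
# Sketch: derive a simple usage profile from a web server access log to feed
# into performance test requirements. Assumes common/combined log format.
import re
from collections import Counter

LOG_FILE = "access.log"  # illustrative path
# e.g. 10.0.0.1 - - [20/Oct/2016:14:31:02 +1100] "GET /api/quote HTTP/1.1" 200 512
LINE_RE = re.compile(r'\[(\d+/\w+/\d+):(\d+):\d+:\d+ [^\]]+\] "(\w+) ([^ ]+)')

requests_per_hour = Counter()
requests_per_endpoint = Counter()

with open(LOG_FILE) as f:
    for line in f:
        match = LINE_RE.search(line)
        if not match:
            continue
        date, hour, method, path = match.groups()
        requests_per_hour[f"{date} {hour}:00"] += 1
        requests_per_endpoint[f"{method} {path}"] += 1

print("Top transactions:", requests_per_endpoint.most_common(10))
print("Peak hour:", requests_per_hour.most_common(1))
```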

Q: How safe is it to use cloud infrastructure for applications that use customer data?

A: It depends on the sensitivity of the data; in most instances, I'd recommend against using genuine production data in a test environment without obfuscation. Desensitised data shouldn't contain any traceable information and therefore should present no issue when stored in the cloud. It is often no less safe to store data in the cloud than in an on-premises data centre.
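As a simple illustration of desensitising data before it leaves your data centre, the sketch below (an example of the general idea, not a complete masking solution; the field names and salt are assumptions) replaces directly identifying fields with deterministic hashes so records stay consistent across tables without being traceable to a real customer.

```python
# Sketch: desensitise customer records before loading them into a cloud test
# environment. Field names and the salt are illustrative; a real masking
# process would cover every identifying and sensitive field.
import hashlib

SALT = "replace-with-a-secret-salt"            # assumption: managed outside source control
SENSITIVE_FIELDS = ("name", "email", "phone")  # example identifying fields

def mask(value: str) -> str:
    # A deterministic hash keeps referential integrity between tables
    # while removing any traceable customer information.
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def desensitise(record: dict) -> dict:
    return {k: mask(v) if k in SENSITIVE_FIELDS else v for k, v in record.items()}

if __name__ == "__main__":
    customer = {"id": 42, "name": "Jane Citizen", "email": "jane@example.com", "phone": "0400 000 000"}
    print(desensitise(customer))
```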

Q: What are some performance metrics other than speed?

A: At a minimum I generally recommend monitoring memory, CPU, disk and network usage on each of the servers involved in the system under test, as this will help identify the root cause of any failures or poor performance identified. In terms of user experience, we need to consider metrics like concurrency, both in terms of active and passive users, transactions per second and stability under load (number of errors).
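As a minimal sketch of that kind of server-side monitoring (using the third-party psutil library as an assumed tooling choice), the snippet below samples CPU, memory, disk and network counters at a fixed interval while a test runs, so spikes can be correlated with response-time degradation.

```python
# Sketch: sample CPU, memory, disk and network usage on a server under test.
# Uses the psutil library (an assumed tooling choice); the interval is illustrative.
import time
import psutil

SAMPLE_INTERVAL_SECONDS = 5  # example sampling interval

def sample():
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=None),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    # Run alongside the performance test and correlate spikes with response times.
    while True:
        print(time.strftime("%H:%M:%S"), sample())
        time.sleep(SAMPLE_INTERVAL_SECONDS)
```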

Q: With IoT being the talk of the town, how will this change the way P&V (performance and volume testing) is done?

A: I think IoT expands the scope of performance testing, and testing in general, enormously. In the past, we were mostly concerned with user response times and concurrent users contending for the services we were providing. Now we also have to consider smart devices contending for server resources, and as the number of devices grows, so will the contention.

Q: Where does continuous performance monitoring fit into the overall performance testing?

A: Continuous performance testing should play a big role in overall performance testing, and in many instances it would reduce the need for large-scale performance tests at the end of a project. The idea is to find defects earlier so they can be fixed earlier in the project, leaving your large-scale performance tests as just a final validation.

Q: From a web perspective, is it beneficial to test different browsers or is it assumed that if it's OK in one, it will be OK in the rest?

A: This never used to be a major problem; however, with increasing technical divergence and different methodologies, we are now seeing applications perform very differently on different web browsers. In most instances this isn't a major concern, but it depends on the technology used to develop the website.

Q: Which performance testing tools in particular are compatible with CI tools like Bamboo?

A: JMeter, LoadRunner and most others are possible; it largely comes down to setting up a batch file or command-line step that the CI tool can run.

Q: We have tried implementing performance testing in the initial development phase, but the feedback we get from developers is that the environment is not production-like and will show issues that will not occur in production. How can we handle such situations?

A: This needs to be dealt with at a company-wide level. It is a culture change to move away from the idea that testers and developers are working against each other. We all need to move forward with the understanding that developers, testers and everyone else are working together with one goal in mind: a functional, performant system.

Thanks for this - very informative [from David].

A: Thanks David – glad you enjoyed the session ☺

Speed as a Key Asset

At Planit, we can help you make performance an asset, not a liability. Our expert consultants can provide testing, assessments and advice to mitigate performance risks and achieve peak results.
 
Find out how we can help you navigate these challenges, achieve your performance goals, and deliver a rapid, responsive, and reliable experience that delights your customers.

 
