So what's performance testing?
Performance testing is testing that focuses on revealing information related to performance risks. When performance testing, we look at the different variables that can affect a system, such as the number of users using the software at one time, the amount of data in the system and how it is accessed, the number of transactions hitting the servers at once, and how the system is architected and the resources it uses. When people talk about performance testing, they most commonly mean testing that focuses on user load risks.
Have you got any examples of performance testing?
We've got load testing, where you take the anticipated load expected for the software and run the software under that load for a set duration to see how the system handles it. Stress testing, where you continuously increase the load until the software reaches a point of failure, which tells you the maximum load the software can handle before breaking. And soak testing, where you run the load for a much longer duration to see if problems emerge over extended use, such as memory leaks.
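To make that concrete, here is a minimal sketch of what a load test might look like, written with the open-source Locust framework as an assumption; any load tool such as k6, JMeter, or Gatling works along the same lines. The host, endpoints, and task weights are hypothetical placeholders, not a real system.

```python
# locustfile.py -- a minimal load test sketch using Locust (pip install locust).
# The host and endpoints below are hypothetical; substitute your system under test.
from locust import HttpUser, task, between


class ShopUser(HttpUser):
    # Simulated users pause 1-5 seconds between requests,
    # roughly mimicking human think time.
    wait_time = between(1, 5)

    @task(3)
    def browse_products(self):
        # Weighted 3x: browsing is assumed to be the most common action.
        self.client.get("/products")

    @task(1)
    def view_basket(self):
        self.client.get("/basket")


# Run at the anticipated load for a fixed duration, for example:
#   locust -f locustfile.py --host https://staging.example.com \
#          --users 200 --spawn-rate 10 --run-time 30m --headless
```

The same script covers all three examples: for a stress test you ramp --users upward until errors or latency spike, and for a soak test you extend --run-time to hours while watching memory usage.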
And what's the value of performance testing?
It enables teams to understand the underlying performance of their system and troubleshoot any issues. The information identified can be used to mitigate performance-related issues and improve a product's quality.
And what are the pitfalls?
Often, performance work is pushed to the end of delivery. Issues found at that stage often require fundamental changes to the implementation, which is unfeasible for many teams and organizations. Your performance testing also has to be comparable in execution and setup to how you expect your product to be used in production; otherwise, you'll get misleading results.
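As a sketch of making the test setup comparable to production, Locust's LoadTestShape hook lets you model a realistic traffic curve, such as a daily peak, instead of an unrealistic flat load. The stage durations and user counts below are invented for illustration, not real traffic data.

```python
# A load shape that ramps up to a peak and back down, approximating a
# production traffic curve rather than a constant, artificial load.
# Stage numbers are illustrative; derive yours from real production metrics.
from locust import LoadTestShape


class DailyPeakShape(LoadTestShape):
    # (end time in seconds, target user count, spawn rate per second)
    stages = [
        (300, 50, 5),     # warm-up: 5 minutes at 50 users
        (900, 200, 10),   # ramp to the anticipated peak
        (2700, 200, 10),  # hold the peak for 30 minutes
        (3000, 20, 10),   # wind down
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users, spawn_rate in self.stages:
            if run_time < end_time:
                return (users, spawn_rate)
        return None  # stop the test after the final stage
```

Used alongside a user class like the earlier sketch, this shapes the load automatically, so the test exercises the system the way production traffic actually would.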