Monday, April 09, 2018

It's never too early to think about performance

Automated Performance Testing
Business users specify their needs primarily through functional requirements. The non-functional aspects of a system, like performance, responsiveness, uptime, and support needs, are left up to the development team.

Testing of these non-functional requirements is left until very late in the development cycle, and is sometimes delegated completely to the operations team. This is a big mistake that is made far too often. Having separate development and operations teams is already a mistake in itself, but I will leave that discussion for another article.

I was recently part of two large software development projects where performance was addressed too late. The cost and time needed to fix it were an order of magnitude larger than they would have been had performance been addressed early in the project. Not to mention the bad reputation the teams and systems earned after going live with performance so poor that users could hardly do their daily work with the system.

Besides knowing before you go live that users are not going to be happy (and that you therefore should NOT go live), there is another big advantage to early performance testing. If you aren't looking at performance until late in the project cycle, you have lost an incredible amount of information about when performance changed. If performance is going to be an important architectural and design criterion, then performance testing should begin as soon as possible. If you are using an Agile methodology based on two-week iterations, I'd say performance testing should be included in the process no later than the third iteration.

Why is this so important? The biggest reason is that at the very least you know the kinds of changes that made performance fall off a cliff. Instead of having to think about the entire architecture when you encounter performance problems, you can focus on the most recent changes. 

Doing performance testing early and often provides you with a narrow range of changes on which to focus. In early testing, you may not even try to diagnose performance, but you do have a baseline of performance figures to work from. This trend data provides vital information in diagnosing the source of performance issues and resolving them.

This approach also allows for the architectural and design choices to be validated against the actual performance requirements. Particularly for systems with hard performance requirements, early validation is crucial to delivering the system in a timely fashion.

“Fast” is not a requirement 

"Fast" is not a requirement. Neither is "responsive". Nor "extensible". The main reason why not is that you have no objective way to tell if they're met. 

Some simple questions to ask: How many? In what period? How often? How soon? Increasing or decreasing? At what rate? If these questions cannot be answered, then the need is not understood. The answers should be in the business case for the system, and if they are not, some hard thinking needs to be done. If you work as an architect and the business hasn't told (or won't tell) you these numbers, ask yourself why not. Then go get them. The next time someone tells you that a system needs to be "scalable", ask them where the new users are going to come from and why. Ask how many and by when. Reject "lots" and "soon" as answers.

Uncertain quantitative criteria must be given as a range: the least, the nominal, and the most. For example: search results must be returned within 2 seconds at the 95th percentile for at least 200, nominally 500, and at most 1,000 concurrent users. If this range cannot be given, then the required behavior is not understood. As an architecture unfolds, it can be checked against these criteria to see if it is (still) in tolerance. As the performance against some criteria drifts over time, valuable feedback is obtained. Finding these ranges and checking against them is a time-consuming and expensive business.

If no one cares enough about the system being "performant" (neither a requirement nor a word) to pay for performance tests, then more than likely performance doesn't matter.

You are then free to focus your efforts on aspects of the system that are worth paying for.

Automated Performance Testing

In order to keep the costs of and time spent on performance testing in check, I advise you to automate as much of it as possible. A tool like Taurus simplifies the automation of performance testing: it is built for developers and DevOps, and relies on JMeter, Selenium, Gatling and The Grinder as underlying engines. It also enables parallel testing, its configuration format is readable and fits nicely under version control, it is tool friendly, and tests can be expressed using YAML or JSON.
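To give a feel for what this looks like, here is a minimal sketch of a Taurus load test configuration. The URL is a placeholder, and the concurrency and durations are examples you would replace with your own expected load:

    # load-test.yml - run with: bzt load-test.yml
    execution:
    - concurrency: 50      # 50 concurrent virtual users
      ramp-up: 2m          # reach full load over two minutes
      hold-for: 10m        # hold the load steady for ten minutes
      scenario: homepage

    scenarios:
      homepage:
        requests:
        - http://example.com/   # placeholder; point this at your system under test

Taurus translates this into an executable test for one of its underlying engines (JMeter by default), so nobody has to hand-craft JMX files.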

Here are some types of tests you can run in an automated fashion (a configuration sketch for several of them follows the list):
- Load Tests are conducted to understand the behavior of the system under a specific expected load.
- Stress Tests are used to understand the upper limits of capacity within the system.
- Soak Tests determine if the system can sustain the continuous expected load.
- Spike Tests determine if the system can sustain a suddenly increasing load generated by a large number of users.
- Isolation Tests determine whether a previously detected system issue has been fixed, by repeating the test execution that exposed it.
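Most of these test types differ mainly in their load profile, so in Taurus they come down to variations of the execution settings. A sketch of three such profiles, reusing the placeholder scenario from above (listed together here for comparison; in practice you would run each profile as its own test):

    execution:
    # Stress test: step the load up to find the breaking point
    - concurrency: 500
      ramp-up: 20m
      steps: 10            # apply the ramp-up in 10 discrete steps
      scenario: homepage
    # Soak test: moderate load held for a long time
    - concurrency: 50
      ramp-up: 5m
      hold-for: 8h
      scenario: homepage
    # Spike test: jump to high load almost instantly
    - concurrency: 400
      ramp-up: 10s
      hold-for: 5m
      scenario: homepage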

Technical testing is notoriously difficult to get going. Setting up the appropriate environments, generating the proper data sets, and defining the necessary test cases all take a lot of time. By addressing performance testing early, you can establish your test environment incrementally, avoiding a much more expensive effort later, after you discover performance issues.
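This is also where the quantified criteria from the previous section pay off: once a requirement is expressed in numbers, it can be checked automatically on every run. A sketch of how that might look with Taurus reporting modules; the thresholds are illustrative examples, not recommendations:

    reporting:
    - module: passfail
      criteria:
      - avg-rt>250ms for 30s, stop as failed   # average response time above target
      - fail>5% for 30s, stop as failed        # too many failing requests
    - module: final-stats
      dump-csv: results.csv   # write per-run summary figures for baseline and trend tracking

When a pass/fail criterion is violated, bzt exits with a non-zero code, so hooking the run into your build pipeline makes a performance regression break the build, which is exactly the kind of early feedback argued for above.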

Test Automation & Code Quality Workshop

One very big pain point for software development projects is trying to adopt a more or less agile delivery method without implementing automated tests and test automation. Automated performance testing is part of this.

That is why Falko Schmidt and I decided to design a two-day workshop to address this pain point and teach you how to solve it. We have worked together on a number of software development projects for clients like PwC and Helsana, Falko in the role of lead developer or test automation specialist, I in the role of project coach or project recovery manager.

We both see test automation as an essential part of modern software development and project success, and we designed this workshop to spread the word and teach what we have learned over the years.

So ...

- Are you a company that wants to release faster and with better quality?
- Are you managing a software development effort where the team does not have the skills to automate testing?
- Are you a developer who would like to improve the quality of your own work?
- Are you tired of fixing bugs in your colleagues' code?
- Are your test automation efforts not bringing the expected benefits?
- Do you want to increase your market value and employability as an all-round software developer?

Then this workshop is for you!

Have a look at www.test-automation.ch to see in detail what we offer and sign up. The next dates for our public workshop in Zurich are May 24/25 and July 5/6.

If you are interested in an in-house workshop for your team, just contact us.

Posted on Monday, April 09, 2018 by Henrico Dolfing
