
Basics about Performance Testing Tools

I had the opportunity to give a talk about automating performance testing in a CI/CD pipeline at a conference held in Istanbul for the first time, "Test Automation and Digital QA Summit Istanbul, #TAS19". My talk, "Automated Performance Testing", had three parts: the first explained what performance testing is, the second covered performance testing tools, and the last explained what automated performance testing is and how to implement it in a CI/CD pipeline.

In this post, I want to share my thoughts on the basics of performance testing tools. Upcoming posts will focus on automated performance testing and on using K6 as a performance testing tool.

How to Evaluate Performance Testing Tools

Before diving into the performance testing tools and technologies available in the industry, we should define our evaluation criteria. Each tool has its own advantages and disadvantages, and its own similarities to predecessors and alternatives. When we first adopt a tool and start learning it, the learning curve is usually the critical factor: learning the tool and using it efficiently has a huge impact. After this initial period, however, the runtime performance of the tool itself becomes more important, and having support from an open-source community or from consultancy companies is another key consideration. Let's list the factors for evaluating performance testing tools:

Criteria for Evaluating Performance Testing Tools:

  • Learning curve
    • Using a tool can be difficult at the beginning, so a good tool should have well-written documentation, and a better one a ready-to-run sample project
    • Tools can have many features, so running simple scripts doesn't mean the tool works efficiently for complex test scenarios. A good tool also supports complex cases so it can handle real-world scenarios
  • Price
    • None of us wants to spend money if there is a free version of a tool. If money is the concern, go with open-source tools, but make sure the tool has a good community so that you can get support there.
    • Some tools are created by the community, so you can use them without paying; however, if you want enterprise-level support, some companies offer it for these tools
  • Support
    • Be sure the tool has a supportive community if it is open source
    • Look for good vendor support for commercial tools
  • Maturity
    • Tools may have defects, and integrating a buggy tool into your environment leads to maintenance costs. Before investing in a tool, check its known issues.
  • Languages supported for performance testing scripts
    • No one can be proficient in many languages, so the supported languages should include your favorite one
    • In general, performance scripts are easy at the beginning, but if you want to cover complex scenarios in a scalable test environment, the language may become a barrier
  • Performance of the tool itself
    • Testing tools require resources to create virtual users and requests. This resource usage can grow dramatically when you create many users for stress tests, so be sure to check the performance of the tool itself
    • Generally, performance is evaluated against AB, the Apache Benchmark tool. The response times and the resources consumed by the tool are compared with AB's results. To get an idea about the performance of some well-known performance testing tools, read this blog.
  • Integration to other necessary tools
    • How easily you can integrate it with databases, reporting tools, CI/CD, and so on
  • Built-in features
    • Separate load generation
    • Load distribution
    • Writing scenarios
    • Grouping the scenarios
    • Setting thresholds for automated performance tests
  • CLI support
    • Whether it supports only a GUI, or also a CLI
    • Running headlessly
    • Dockerized, or easily dockerizable
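To make the last two groups of criteria concrete, here is a minimal sketch, using only the Python standard library, of what "setting thresholds for automated performance tests" boils down to: measure latencies, compute a percentile, and fail the run when the budget is exceeded. All names and numbers here are illustrative; tools such as K6 or Gatling provide this pass/fail mechanism as a built-in feature.

```python
import http.server
import statistics
import threading
import time
import urllib.request

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    """Serves the current directory without per-request log noise."""
    def log_message(self, *args):
        pass

# Throw-away local server so the example is self-contained.
server = http.server.HTTPServer(("127.0.0.1", 0), QuietHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def measure_latencies(url, n=20):
    """Fire n sequential requests and return their latencies in milliseconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

try:
    latencies = measure_latencies(f"http://127.0.0.1:{port}/")
    p95 = statistics.quantiles(latencies, n=100)[94]  # 95th percentile
    threshold_ms = 500  # illustrative latency budget
    print(f"p95 = {p95:.1f} ms (budget {threshold_ms} ms)")
    assert p95 < threshold_ms, "performance threshold breached"
finally:
    server.shutdown()
```

In a CI/CD job, a non-zero exit from a check like this is what turns performance testing into a quality gate rather than just a report.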

Performance Testing Tools

We have a variety of tools in the industry, ranging from open source to commercial (or commercially supported), and from newly launched to 20+ years old. Some of the well-known performance testing tools that I have experience with and can recommend are as follows:

Open-source tools:

  • JMeter - Apache project, v1.0 released in 1998
  • Gatling - enterprise support by FrontLine
  • K6 - enterprise support by LoadImpact
  • Locust
  • and many others

Commercial tools:

  • LoadRunner by Micro Focus (previously Mercury, then HP)
  • BlazeMeter - supports multiple open-source tools
  • LoadUI Pro by SmartBear
  • and many others
