
New Modern Open Source Performance Testing Tool: Locust



If you are bored with using an old-fashioned Java GUI to test the performance of your applications, you should try Locust. JMeter, the old-fashioned tool I mean, is very mature and backed by a big community, but you have to build your test cases through a clunky GUI, limited to whatever features it exposes. For me the best part of Locust is that you can write your test cases in Python, my favourite language, and it is very simple to learn. Installation is just as simple; you only need to type:

> pip install locustio

Some features and shortcomings of Locust:

  • You can write your test cases in plain Python.
  • You can model how long a user spends on a page with min_wait and max_wait (in milliseconds) on the Locust class, which gives a more realistic user simulation.
  • You can give each task a weight, so you can simulate real user behaviour: during a given period of time, 1 person signs up, 10 people log in, 60 people browse pages, and so on.
  • You can make HTTP requests just as you would in application code: get, post, put, delete, head, patch and options. So you can send requests directly to your API, e.g. self.client.post("/login", {"username": "testuser", "password": "secret"}).
  • JMeter is thread-based, so it spawns a separate thread for every simulated user, while Locust is coroutine-based and works asynchronously. In practice this means you can simulate far more users with Locust than with JMeter on the same hardware.
  • With the PyQuery library you can query the pages you get back. For example, you can collect all the links in the HTML and then send requests to the corresponding pages.
  • There are some problems with single-page applications: if there is a "#" in your URL, Locust behaves as if there were nothing after it.
  • There is no built-in graphing option, but since it is all Python you can hack almost anything in.
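To make the weighting concrete, here is a small plain-Python sketch of how weighted task selection behaves. The task names and weights are made up for illustration, mirroring the 1 signup : 10 logins : 60 page visits ratio above; in a real locustfile you would declare the weights on the tasks themselves.

```python
import random
from collections import Counter

# Hypothetical weights mirroring the ratio in the list above:
# 1 signup : 10 logins : 60 page visits
TASK_WEIGHTS = {"signup": 1, "login": 10, "visit_page": 60}

def pick_task(rng=random):
    """Pick the next task with probability proportional to its weight,
    which is roughly how Locust schedules weighted tasks."""
    tasks = list(TASK_WEIGHTS)
    weights = [TASK_WEIGHTS[t] for t in tasks]
    # random.choices draws one item according to the given weights
    return rng.choices(tasks, weights=weights, k=1)[0]

# Over many picks the distribution approaches the configured ratio
counts = Counter(pick_task() for _ in range(7100))
```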
You can check out the script below, written for www.myhabit.com, which runs a performance test against the links found on the "/my-account" page. The first block lists Locust's own command-line options, taken from its argument parser; the test script itself follows. The script logs in first and then opens the my-account page. Your results will probably differ when you run it, because they depend on many factors.

parser.add_option(
    '-H', '--host',
    dest="host",
    default=None,
    help="Host to load test in the following format: http://10.21.32.33"
)
parser.add_option(
    '--web-host',
    dest="web_host",
    default="",
    help="Host to bind the web interface to. Defaults to '' (all interfaces)"
)
parser.add_option(
    '-P', '--port', '--web-port',
    type="int",
    dest="port",
    default=8089,
    help="Port on which to run web host"
)
parser.add_option(
    '-f', '--locustfile',
    dest='locustfile',
    default='locustfile',
    help="Python module file to import, e.g. '../other.py'. Default: locustfile"
)
# if locust should be run in distributed mode as master
parser.add_option(
    '--master',
    action='store_true',
    dest='master',
    default=False,
    help="Set locust to run in distributed mode with this process as master"
)
# if locust should be run in distributed mode as slave
parser.add_option(
    '--slave',
    action='store_true',
    dest='slave',
    default=False,
    help="Set locust to run in distributed mode with this process as slave"
)
# master host options
parser.add_option(
    '--master-host',
    action='store',
    type='str',
    dest='master_host',
    default="127.0.0.1",
    help="Host or IP address of locust master for distributed load testing. Only used when running with --slave. Defaults to 127.0.0.1."
)
parser.add_option(
    '--master-port',
    action='store',
    type='int',
    dest='master_port',
    default=5557,
    help="The port to connect to that is used by the locust master for distributed load testing. Only used when running with --slave. Defaults to 5557. Note that slaves will also connect to the master node on this port + 1."
)
parser.add_option(
    '--master-bind-host',
    action='store',
    type='str',
    dest='master_bind_host',
    default="*",
    help="Interfaces (hostname, ip) that locust master should bind to. Only used when running with --master. Defaults to * (all available interfaces)."
)
parser.add_option(
    '--master-bind-port',
    action='store',
    type='int',
    dest='master_bind_port',
    default=5557,
    help="Port that locust master should bind to. Only used when running with --master. Defaults to 5557. Note that Locust will also use this port + 1, so by default the master node will bind to 5557 and 5558."
)
# if the web interface should be disabled and the test started immediately
parser.add_option(
    '--no-web',
    action='store_true',
    dest='no_web',
    default=False,
    help="Disable the web interface, and instead start running the test immediately. Requires -c and -r to be specified."
)
# Number of clients
parser.add_option(
    '-c', '--clients',
    action='store',
    type='int',
    dest='num_clients',
    default=1,
    help="Number of concurrent clients. Only used together with --no-web"
)
# Client hatch rate
parser.add_option(
    '-r', '--hatch-rate',
    action='store',
    type='float',
    dest='hatch_rate',
    default=1,
    help="The rate per second in which clients are spawned. Only used together with --no-web"
)
# Number of requests
parser.add_option(
    '-n', '--num-request',
    action='store',
    type='int',
    dest='num_requests',
    default=None,
    help="Number of requests to perform. Only used together with --no-web"
)
# log level
parser.add_option(
    '--loglevel', '-L',
    action='store',
    type='str',
    dest='loglevel',
    default='INFO',
    help="Choose between DEBUG/INFO/WARNING/ERROR/CRITICAL. Default is INFO.",
)
# log file
parser.add_option(
    '--logfile',
    action='store',
    type='str',
    dest='logfile',
    default=None,
    help="Path to log file. If not set, log will go to stdout/stderr",
)
# if we should print stats in the console
parser.add_option(
    '--print-stats',
    action='store_true',
    dest='print_stats',
    default=False,
    help="Print stats in the console"
)
# only print summary stats
parser.add_option(
    '--only-summary',
    action='store_true',
    dest='only_summary',
    default=False,
    help='Only print the summary stats'
)
# List locust commands found in loaded locust files/source files
parser.add_option(
    '-l', '--list',
    action='store_true',
    dest='list_commands',
    default=False,
    help="Show list of possible locust classes and exit"
)
# Display ratio table of all tasks
parser.add_option(
    '--show-task-ratio',
    action='store_true',
    dest='show_task_ratio',
    default=False,
    help="print table of the locust classes' task execution ratio"
)
# Display ratio table of all tasks in JSON format
parser.add_option(
    '--show-task-ratio-json',
    action='store_true',
    dest='show_task_ratio_json',
    default=False,
    help="print json data of the locust classes' task execution ratio"
)
# Version number (optparse gives you --version but we have to do it
# ourselves to get -V too. sigh)
parser.add_option(
    '-V', '--version',
    action='store_true',
    dest='show_version',
    default=False,
    help="show program's version number and exit"
)
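Putting those options together, typical invocations might look like the following. The locustfile name, host and numbers are placeholders, and the flags match the option definitions listed above.

```shell
# Headless run: 100 clients, hatched 10 per second, 10000 requests in total
locust -f locustfile.py --host=http://10.21.32.33 --no-web -c 100 -r 10 -n 10000

# Distributed mode: one master plus any number of slaves
locust -f locustfile.py --master
locust -f locustfile.py --slave --master-host=127.0.0.1
```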
from locust import HttpLocust, TaskSet, task
from pyquery import PyQuery
import random


class UserBehaviour(TaskSet):
    urls_from_page = []

    def on_start(self):
        # on start we log in and get a session, so that we can
        # send requests to the login-protected "/my-account" page
        self.client.post("/", {"email": "test@testrisk.com", "password": "Passw0rd"})
        response = self.client.get("/my-account")
        # from the content of the page, collect all links and store them
        pq = PyQuery(response.content)
        # PyQuery lets us query the page with jQuery-style selectors
        link_elements = pq(".link > a")
        for url in link_elements:
            self.urls_from_page.append(url.attrib["href"])

    @task
    def load_page(self):
        # the actual performance-testing task: send an HTTP request
        # to a random URL collected from the "my-account" page
        try:
            url = random.choice(self.urls_from_page)
            self.client.get(url)
        except IndexError:
            print("... something went wrong, check pq!")


class User(HttpLocust):
    host = "http://www.myhabit.com"
    task_set = UserBehaviour
    # stop the test after 120 seconds
    stop_timeout = 120
    # user think time: assume a user spends 0.5 to 6 seconds on a page
    min_wait = 500
    max_wait = 6000
Percentage of the requests completed within given times (ms)

 Name              # reqs    50%    66%    75%    80%    90%    95%    98%    99%   100%
----------------------------------------------------------------------------------------
 /                     92    720    820   1000   1300   3800   7500  18000  20000    903
 /help/200644950       14    500    510    510    520    530    940    940    940    936
 /my-account           92    310    320    360    360    640   1000   4400   5600   5609
 /orc                  14    860    910    920    930   1300   4500   4500   4500   4453
 /yacontactus          20    850    930    980   1100   1300   1300   1300   1300   1349
----------------------------------------------------------------------------------------




For more information about Locust, see the official documentation at locust.io.
