not a feature, but the future of an app is under testing.

Tuesday, December 31, 2019

Automated Performance Testing with K6

Automated testing means running tests and reporting results with tools and scripts. Automated performance testing means designing performance test scenarios so that tools can run them automatically, evaluate the results, and decide whether the build can go further or not.

Automated Performance Testing is
  • A subset of the performance test scenarios
  • Designed to be run automatically by tools
  • Designed to have its results evaluated by tools
  • Designed not to break the system
  • Driven by decision metrics, e.g. max response time, max average response time, ...
  • Designed to create reports for the current and historical runs
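The decision metrics above are what make a run self-evaluating. As a minimal sketch, with hypothetical metric names and limits not taken from any specific tool, such a go/no-go gate could be scripted as:

```python
# Hypothetical go/no-go gate for an automated performance test run.
# Metric names and limits are illustrative, not from a specific tool.

def evaluate(response_times_ms, max_response_ms=800, max_avg_response_ms=300):
    """Go (True) only if the worst sample and the average stay under the limits."""
    if not response_times_ms:
        return False  # no data means no-go
    worst = max(response_times_ms)
    average = sum(response_times_ms) / len(response_times_ms)
    return worst <= max_response_ms and average <= max_avg_response_ms

# response times in milliseconds collected from a test run
samples = [120, 250, 310, 180, 490]
print(evaluate(samples))           # True: both limits respected
print(evaluate(samples + [1200]))  # False: one sample breaks the max
```

In a CI/CD pipeline, a False here would simply make the job exit non-zero and stop the deployment.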

What Types of Performance Testing Should be Automated

Automated performance testing can be applied to all types of performance testing. However, each type requires a different level of maturity and caution.

Automated Performance Testing is

  • Easily applied to load testing
  • Harder to apply to stress and spike testing, but still beneficial
  • Very hard to apply to soak (endurance) testing

Automated Load Tests

Since a load test aims to measure the response of the system under a specific load, and its success is defined by pre-defined metrics, load testing is the performance testing type best suited to automation.

Automated Load Testing is
  • Needing a specific load level
  • Needing pre-defined metrics
  • Needing evaluation of the results
  • Needing exactly the same environment - so only one variable is tested at a time
  • Needing tool and scripting knowledge
  • Easy to integrate into a CI/CD pipeline - everything is scripted
  • Reporting the performance of the system when something changes

Tools Needed for Automated Performance Testing

Performance testing depends on two main parts. The first part is creating load and the second part is using that load in test scenarios. When it comes to automated performance testing, we need additional tools.

Tools Needed for Automated Performance Testing are

  • Performance Testing tools
    • Open-source
      • JMeter - Apache project, v1.0 in 1998
      • Gatling - Enterprise support by FrontLine
      • K6 - Enterprise support by LoadImpact
      • Locust
      • and many others
    • Commercial
      • LoadRunner by Micro Focus (previously Mercury - HP)
      • Blazemeter - supports multiple open-source tools
      • LoadUI Pro by SmartBear
  • Databases
    • InfluxDB - Open-source time series database
    • MongoDB - Open-source document database
  • Reporting and monitoring tools
    • Grafana
    • Kibana
    • Graphite
    • Prometheus
  • Orchestration Tools
    • Docker Compose
    • Kubernetes
    • Mesosphere DC/OS
    • And others
  • Automation Tools
    • Jenkins
    • TravisCI
    • TeamCity
    • CircleCI
    • CodeShip
    • GitLab CI
    • And others

A Way of Implementing K6 for Automated Performance Testing

K6 is an open-source performance testing tool. It is designed for developers and gives us many good features to run tests in a safer, faster, and more easily integrated way. It is very easy to integrate into CI/CD.

Some of the Good Features of K6 are
  • Strong performance, among the top 5 in comparisons
  • Written in Go; test scripts are written in JavaScript
  • Already dockerized
  • Easy to connect to a database (such as InfluxDB)
  • Easy to create reports (Grafana)
  • Has built-in thresholds
  • And many others

K6 has rich libraries that you can use in your scripts. Requests are sent over HTTP via k6/http. Grouping lets you run the same kind of requests together in a group. Check the simple test script, which sends GET requests to three URLs and a POST request to a register endpoint with dynamic data.
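The original embedded script is not preserved in this copy; a minimal k6 script along those lines (the URLs and the register endpoint are placeholders, and it runs under the k6 binary, not Node) might look like:

```javascript
// k6 test script (run with: k6 run script.js) - URLs are placeholders
import http from "k6/http";
import { group, check, sleep } from "k6";

export let options = {
  // built-in thresholds: the run fails if these limits are exceeded
  thresholds: {
    http_req_duration: ["avg<300", "p(95)<800"],
  },
};

export default function () {
  group("front pages", function () {
    ["https://test.example.com/",
     "https://test.example.com/products",
     "https://test.example.com/about"].forEach(function (url) {
      let res = http.get(url);
      check(res, { "status is 200": (r) => r.status === 200 });
    });
  });

  group("register", function () {
    // dynamic data so each virtual user registers a unique account
    let payload = JSON.stringify({
      email: `user_${__VU}_${__ITER}@example.com`,
      password: "s3cret",
    });
    let res = http.post("https://test.example.com/register", payload, {
      headers: { "Content-Type": "application/json" },
    });
    check(res, { "registered": (r) => r.status === 201 });
  });

  sleep(1);
}
```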

The command line supports many parameters for running your scripts. For a load test we have a fixed number of virtual users and a fixed test duration, so the run command should look like this:
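The exact command from the original post is not preserved; a typical invocation with a fixed number of virtual users and a fixed duration (the numbers and the file name are placeholders) would be:

```shell
# 50 virtual users for 5 minutes against the scenario in script.js
k6 run --vus 50 --duration 5m script.js
```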

For a spike test, we should load the system with a nominal load and then increase the load suddenly. Let's focus on the --stage option for this.
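With the 2019-era --stage syntax (comma-separated DURATION:VUS pairs; the numbers here are placeholders), a spike profile could be expressed as:

```shell
# nominal load, a sudden spike to 200 virtual users, then back down
k6 run --stage 2m:10,30s:200,2m:10 script.js
```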

K6 integrates easily with an InfluxDB instance, so in the setup we also run docker-compose to create an InfluxDB instance, which we will use to feed the Grafana dashboard that visualizes all the data.
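A minimal docker-compose file for this setup (image tags, ports, and the database name are assumptions) could look like:

```yaml
version: "3"
services:
  influxdb:
    image: influxdb:1.7
    ports:
      - "8086:8086"        # k6 writes its results here
    environment:
      - INFLUXDB_DB=k6
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"        # dashboard reading from InfluxDB
```

k6 can then write its results there with, for example, `k6 run --out influxdb=http://localhost:8086/k6 script.js`.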

K6 sends the results to InfluxDB, and we can visualize this data with Grafana. You can use the dashboard created by LoadImpact.

Add these scripts to your CI/CD pipeline to run an automated performance test and see the results in Jenkins. To see all the scripts and data, check this repo on GitHub.
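As an illustration only (the stage name, the k6 command, and the InfluxDB URL are hypothetical, not from the repo), such a Jenkins pipeline step could be sketched as:

```groovy
// Hypothetical Jenkinsfile fragment: k6 exits non-zero when a
// threshold fails, which fails the stage and blocks the pipeline.
stage('Performance test') {
    steps {
        sh 'k6 run --vus 50 --duration 5m --out influxdb=http://influxdb:8086/k6 script.js'
    }
}
```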

Basics about Performance Testing Tools

I had an opportunity to give a speech about how we can automate performance testing in a CI/CD pipeline at a conference held in Istanbul for the first time, "Test Automation and Digital QA Summit Istanbul, #TAS19". My subject was "Automated Performance Testing": the first part of the speech explains what performance testing is, the second part explains performance testing tools, and the last part explains what automated performance testing is and how to implement it in CI/CD.

In this post, I want to explain my thoughts about the basics of performance testing tools. Next posts will focus on automated performance testing and using K6 as a performance testing tool.

How to Evaluate Performance Testing Tools

Before diving into the performance testing tools and technologies we have in the industry, we should define the criteria for evaluation. Each tool has its own advantages and disadvantages, and similarities with its ancestors and peers. When we start to use and learn a tool, the learning curve is often a critical factor, since learning the tool and being able to use it efficiently has a huge impact. After this period, though, the runtime performance of the tool itself becomes more critical, and support from an open-source community or from consultancy companies is another very important factor. Let's list the factors for evaluating performance testing tools:

Criteria for Performance Testing Tools Evaluation: 

  • Learning curve
    • Using a tool can be difficult in the beginning, so a good tool should have well-written documentation, and a better one a ready-to-run sample project
    • Tools can have many features, so running simple scripts doesn't mean the tool can be used efficiently for complex test scenarios. A good tool also supports complex cases to handle real-world scenarios
  • Price
    • None of us wants to spend money if there is a free version of a tool. If money is the issue, then you should go with the open-source tools, but don't forget to check that the tool has a good community so that you can get some support from there.
    • Some tools are created by the community, so you can use them without paying; however, if you want enterprise-level support, some companies provide it for these tools
  • Support
    • Be sure that the tool has a supportive community if it is open-source
    • Look for good company support for commercial tools
  • Maturity
    • Tools may have defects, so integrating a buggy tool into your environment may cause maintenance costs. Before investing in a tool, check its known issues.
  • Languages supported for performance testing scripts
    • No one can be proficient in many languages, so the supported languages should include your favorite one
    • In general, performance scripts are easy in the beginning, but if you want to cover complex scenarios with a scalable test environment, the language may become a barrier
  • Performance of the tool itself
    • Testing tools require resources to create virtual users and requests. This resource usage can grow sharply when you create lots of users for stress tests, so be sure you know how the tool itself performs
    • Generally, performance is evaluated against AB, the Apache Benchmark tool: the response time and the resources consumed by the tool are compared with AB's results. To get an idea of the performance of some well-known performance testing tools, read this blog.
  • Integration to other necessary tools
    • How you can integrate it with databases, reporting tools, CI/CD and so on
  • Built-in features
    • Separated load creation
    • Load distribution
    • Writing scenarios
    • Grouping the scenarios
    • Setting thresholds for automated performance tests
  • CLI support
    • Supporting only a GUI, or also a CLI
    • Working headlessly
    • Dockerized or can be dockerized easily

Performance Testing Tools

We have a variety of tools in the industry, ranging from open source to commercial (and/or commercially supported), and from newly launched to 20+-year-old performance testing tools. Some of the well-known performance tools that I have experience with and can suggest are as follows:

Open-source:

  • JMeter - Apache project, v1.0 in 1998
  • Gatling - Enterprise support by FrontLine
  • K6 - Enterprise support by LoadImpact
  • Locust
  • and many others

Commercial:

  • LoadRunner by Micro Focus (previously Mercury - HP)
  • Blazemeter - supports multiple open-source tools
  • LoadUI Pro by SmartBear
  • and many others

Monday, December 16, 2019

Performance Testing

I had an opportunity to give a speech about how we can automate performance testing in a CI/CD pipeline at a conference held in Istanbul for the first time, "Test Automation and Digital QA Summit Istanbul, #TAS19". My subject was "Automated Performance Testing"; the first part of the speech explains what performance testing is.

In this post, I want to explain my thoughts about performance testing. Next posts will focus on automated performance testing and using K6 as a performance testing tool.

What is Performance Testing By Definition

Defining the performance of the system by testing the functionalities of the system in terms of its non-functional requirements.

Performance Testing is
  • A non-functional type of testing
  • Testing the functionalities of the system
  • Both black-box and white-box testing
  • Defining how the system works under any load
  • Applied under different load levels
  • Applied with different load increments
  • Applied for different durations

What is Performance Testing in Terms of Process

Performance testing should also be executed as a process. Performance engineers should ensure that they have collected the requirements, created the correct number of virtual users, created the test scenarios, and can execute them. Finally, they should check the results and the system; re-execution may be needed after fixes.

Performance testing should start with planning; in this stage you should align with customer/user needs. Without understanding the requirements, no test can be evaluated. The two main parts of testing are verification and validation: by verification we check the requirements, and by validation we check that these requirements are valid against the global expectations of at least the targeted users. Therefore this stage is very important.

In the next steps, you should create the performance test scripts and the test environment, and in the execution stage you run the test scripts against the requirements in a reserved/isolated performance testing environment.

In the monitoring stage, you should check that the execution is happening as expected by observing the system. The performance scripts create requests that reach the related parts of the system and create the related data in the databases/caches, which means your scripts run as expected.

In the analysis stage, you check the results and, if necessary, fix configuration, network-related, or database-related problems. After that, you can get ready for the next iteration if needed. These iterations should continue until the desired level of confidence is reached.

Types of Performance Testing

The types of performance testing depend on the strategy applied during the test in terms of the load, the duration, and the increments of the load.

Load Test

A load test aims to understand the behavior of the system under a specific load. It is performed under a specific amount of load to check whether the system gives the expected responses and stays stable over the long term.

Load Test is
  • The basic form
  • The most widely known form
  • A specific amount of load
  • Checking the expected response time
  • Giving a result that reflects what users experience

Stress Test

A stress test examines the upper boundary of the system. The load is increased to the level of the upper boundary in order to observe how the system responds.

Stress Test is
  • Testing the upper boundary of the system
  • Needing more resources
  • Needing more investigation of the boundaries
  • Needing more attention
  • Prone to break the system
  • Preparation for promotions like Black Friday, 11.11

Soak - Endurance Test

A soak test examines the upper boundary of the system over a prolonged time. Soak testing is useful when you are not sure about the expected number of users for some occasions.

Soak Test is
  • Testing the upper boundary of the system
  • Testing the system for a prolonged time
  • Needing more resources
  • Needing more investigation of the boundaries
  • Needing more attention
  • Aiming to break the system
  • Preparation for campaigns like Black Friday, 11.11

Spike Test

A spike test examines the behavior of the system when the amount of load is suddenly increased and decreased. It aims to find system failures when the load changes by an unexpected amount.

Spike Test is
  • Testing sudden increases and decreases of load
  • Requiring more load generation
    • depends on the tool (load the system, then spike it)
  • Good for simulating occasions like
    • push notifications for e-commerce
    • breaking critical news

Capacity Test

Capacity testing aims to find the maximum number of users the system can handle. With this test we can find out how many users can use the system.

Capacity Test is
  • Testing the maximum number of users
  • Finding the behavior when more users join
  • Providing info about the loss of user data

Volume Test

A volume test targets the database directly instead of the whole system. With volume testing, we can find out whether there is any bottleneck in the database system.

Volume Test is
  • Testing the database
  • Testing with a large amount of data

Friday, December 6, 2019

Getting the text of elements in Espresso

Espresso is not designed to play with UI objects. Considering its capabilities for testing, it needs further improvement. It is not as mature as the Android SDK itself, so you need to customize some things to handle your requirements. You may even need to use other tools, like UIAutomator, integrated into your test suite.

In this post, I want to show how to get the text of a ViewInteraction view.

A ViewInteraction does not have a .text property for getting the text of the object, since it is designed for interacting with the object. One method that may solve the assertion problem is to use .check to match the text, as follows:
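The embedded snippet is not preserved in this copy; an assertion of that style (the view id and the expected text are placeholders) typically looks like:

```kotlin
// assert directly through the matcher, without extracting the text
onView(withId(R.id.myTextView))
    .check(matches(withText("expected text")))
```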

However, this method is not a good way to assert that a certain field's text equals a value. For that purpose, we need to get the text of the element and assert on it. To get the text of a ViewInteraction element, we need to cast it to TextView by getting the assignable form of it, then extract the text that the view holds. You can use the following function for this assertion:
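The embedded function is missing from this copy; a helper along these lines, a common community pattern rather than official Espresso API, does the cast and extraction:

```kotlin
// Extracts the text of a ViewInteraction by casting the matched
// view to TextView inside a ViewAssertion.
fun getText(matcher: ViewInteraction): String {
    var text = String()
    matcher.check { view, noViewFoundException ->
        if (noViewFoundException != null) {
            throw noViewFoundException
        }
        val textView = view as TextView
        text = textView.text.toString()
    }
    return text
}
```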

Using it is simple: just call the function with any ViewInteraction and assert on the returned string.

Friday, May 3, 2019

How to Set Shared Preferences in Espresso Test for Kotlin and Java

I have been working with Espresso and needed to deep-dive into Shared Preferences because it is one of the main parameters used in the application we developed. After a long search through online sources, I found only some pretty old documents about Espresso with Java and very few about Espresso with Kotlin. In this post, I want to share my experience of setting Shared Preferences with Kotlin and Java and how you can use it in your test design. You can follow these steps for your own test project.

Shared Preferences is a way to store user data on the local device, and it has been supported since very early versions of Android. Shared Preferences can be stored in the default file or in a custom file.

Using Default File for Shared Preferences

If your application uses the default file, it stores the shared data in the default file provided by Android, at the following path on the device: /data/data/<package_name>/shared_prefs/<package_name>_preferences.xml. The default file name comes from getDefaultSharedPreferencesName, which returns the package name suffixed with `_preferences`.

For this case, you can set/clear/get shared preferences by the following method:
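The original code embed is not preserved here; a Kotlin sketch of such a test helper for the default file (function and key names are placeholders, using the androidx test InstrumentationRegistry) could be:

```kotlin
// Test helper: write to the default SharedPreferences file before
// launching the activity. The key/value pair is a placeholder.
fun setDefaultSharedPreferences(key: String, value: String) {
    val context = InstrumentationRegistry.getInstrumentation().targetContext
    PreferenceManager.getDefaultSharedPreferences(context)
        .edit()
        .putString(key, value)
        .commit()   // commit() is synchronous, safer in tests than apply()
}
```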

Using Customized File for Shared Preferences

In the same way, your application can use a custom file to store shared preferences. This time you need to get the name of the .xml file that stores the shared preferences. You can find the file on the same path as the default preferences file. If the file name is `MyAppSharedPreferences`, the file path should look like the following: /data/data/<package_name>/shared_prefs/MyAppSharedPreferences.xml
For this case, you can set/clear/get shared preferences by the following method:
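Again the embedded code is missing from this copy; a hedged Kotlin sketch for the custom-file case (the file and function names are placeholders) could be:

```kotlin
// Test helper: write to a custom SharedPreferences file by name
fun setCustomSharedPreferences(key: String, value: String) {
    val context = InstrumentationRegistry.getInstrumentation().targetContext
    context.getSharedPreferences("MyAppSharedPreferences", Context.MODE_PRIVATE)
        .edit()
        .putString(key, value)
        .commit()
}
```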

When there is an update to shared preferences/application data, we need to launch the application after the update. To do this, launchActivity should be set to false during initialization.

See the full code in Kotlin:
See the full code in Java:

Thursday, January 3, 2019

(Micro) Service Testing with Postman - Newman - Docker

Postman seems to have become a de facto tool for service testing because it is very user-friendly, easy to learn, all-in-one, lightweight, and collaborative. Postman has been used for a long time, but recently its popularity has grown thanks to a stable native application, the collaboration features added after version 6.2, sharing collections with the team, interactive teamwork, mocks for isolated testing, environments for running the tests against different test environments such as local, development, stage, ... and many more features. For me, one of its biggest strengths is that everyone on a team can use and update a Postman collection easily. In this post, I want to explain how Postman can be used efficiently.

Testing a Service and Writing Tests

Testing a service with Postman is simple. Postman supports many methods, like POST, GET, PUT, and PATCH. Just select the correct method and hit the service URL you want to test. Postman also has everything you need for complex requests: headers, body, and a parameters list. If you need to prepare something before sending the test request, you can do it in the 'Pre-request Script'. If you need authorization, you can add it in the 'Authorization' part, which is an alternative to adding an 'Authorization' parameter in the 'Header' section. If you want to check not only the status code but also the returned data, you can add assertions in the 'Tests' section. Postman supports the Chai Assertion Library, so you can add self-descriptive BDD-style assertion functions to your tests.

For complex scenarios, you can write some JavaScript in the Tests tab to simulate the real behavior of your users. In the following scenario, a basic payment process is started with a POST request and the status of the request becomes `processing`; the client then needs to poll the backend every second until the status becomes `completed`. For this scenario, the `payment/payment-processing` request calls itself recursively while its status is `processing`. When the status is no longer `processing`, it calls the next request with another great Postman feature, `setNextRequest`, as in `postman.setNextRequest("payment/payment-complete")`. See the example:
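The example embed is missing from this copy; a reconstruction of that Tests-tab script (the request names and the `status` field follow the description above; the one-second `setTimeout` delay is a common community pattern in the Postman sandbox) could be:

```javascript
// Tests tab of the "payment/payment-processing" request (Postman sandbox)
var status = pm.response.json().status;

if (status === "processing") {
    // wait roughly a second, then poll the same request again
    setTimeout(function () {}, 1000);
    postman.setNextRequest("payment/payment-processing");
} else {
    pm.test("payment left the processing state", function () {
        pm.expect(status).to.not.eql("processing");
    });
    postman.setNextRequest("payment/payment-complete");
}
```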

Pre-Request Script For Creating Data Dynamically

The pre-request script is designed to run before the test, so we can use it for creating/updating/deleting test data for comprehensive test cases. In this section you can use the embedded Node packages, like `atob` and `btoa` for Base64 encoding and decoding, `cheerio`, a JQuery-like library for getting values from web objects, or `CryptoJS`, a cryptographic library to encrypt data with AES, DES, or SHA. You can check the full list of embedded libraries in the Postman documentation.
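Postman's sandbox exposes `btoa`/`atob` for Base64; plain Node expresses the same transformation with Buffer. A minimal sketch of what such a pre-request encoding step computes (the variable name is illustrative):

```javascript
// What a pre-request script's btoa(myData) produces, shown with
// Node's Buffer API; in Postman you would then store the value with
// pm.environment.set("encodedData", encoded).
const myData = "myData";
const encoded = Buffer.from(myData, "utf8").toString("base64");
console.log(encoded);  // bXlEYXRh

// decoding it back (the atob equivalent)
const decoded = Buffer.from(encoded, "base64").toString("utf8");
console.log(decoded);  // myData
```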

In the following example, for RSA encryption with Base64, I use an external web application. In the pre-request tab of the request, `myData` is encrypted by sending a request to this web application. This is another important feature of Postman.

Running Postman Tests With Postman

You can easily run a collection with the Postman collection runner, setting an environment if necessary. First, get my test collection, which is written against the blog project on GitHub. Check the `service-test` section, since it is a full project including every test you need in your CI/CD. To run the tests, just click the play button next to the collection name to open the run panel, then click the `Run` button. See the following images:

Running Postman Tests With Newman

Newman is the CLI companion of Postman, so you can run your tests with your environment from the CLI. It was developed to integrate your tests into a CI/CD environment easily. In this example, test data is created before the test run for a comprehensive test result. The full command is as follows:

You need Node first (check this), then install the required packages, newman and newman-reporter-html:

npm install -g newman
npm install -g newman-reporter-html

to run with collection json:
newman run blog-sample-service.postman_collection.json -e blog-local.postman_environment.json --reporters cli,html --reporter-html-template report-template.hbs

to run with collection url:
newman run -e blog-local.postman_environment.json --reporters cli,html --reporter-html-template report-template.hbs

Running Postman Tests With Newman in Docker

A better approach to running tests in a CI/CD pipeline is to use Docker, so that we don't need to install any test-related tools/languages/technologies on the client machine. This is becoming an industry standard.

Check the Dockerfile in the project. Basically, the base image is Debian with Node installed; we install the required Node packages, newman and newman-reporter-html. Then we run the tests inside the /newman folder, where the report will also be created. This Dockerfile gives us a newman entry point, so the image can be run with newman commands. I have created a Newman image on Docker Hub, so we pull it first and then use it:
docker run --network host -v $PWD:/newman gunesmes/newman-postman-html-report run -e blog-local.postman_environment.json --reporters cli,html --reporter-html-template report-template.hbs

Some data needs to be ready during the test, so we need to restore it. For each run, the database should be restored with the script provided in restore_database.sql. This file can be run by:
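The original command is not preserved, and the right client depends on the project's database engine; assuming a MySQL backend and a database named blog (both assumptions — check the repo), it would be along the lines of:

```shell
# assumption: MySQL backend and a database named "blog"
mysql -u root -p blog < restore_database.sql
```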

Get the  Postman Collection and the project.

Thursday, June 14, 2018

Scalable Tests for Responsive Web: Running Cucumber and Capybara with Tags in Dockers

If you are using Capybara with Cucumber, tagging is a very efficient Cucumber feature for manageable test cases. Tagging can also be used for managing the test environment: @test may mean these tests run only in the test environment, so the environment is set to test; likewise, @live may mean these tests can run in the live environment, assuming you are applying continuous testing. You can also give scenarios device-specific tags: @iphone6_v may say these tests are for iPhone 6 in vertical mode.

Moreover, with tagging you can also build an isolation strategy for running your tests in parallel. Each parallel suite should have its own data: users, active records, addresses, credit card info, etc. I have used tagging for running tests in Docker containers. In this post, you can find a practical way of running Capybara with Cucumber in Docker.

Creating Docker Image: Dockerfile

I am using Ruby version 2.3, so the Dockerfile starts FROM ruby:2.3, the first layer of the image, which comes as a Debian OS. Then we install the required libraries with RUN commands. We change the working directory with WORKDIR; we will use this path by mounting the project folder into the running container. We COPY the Gemfile to the WORKDIR so we can RUN bundle install to install the required gems. Finally, we install Chromedriver and Chrome with RUN commands. These steps are executed by the Dockerfile; check it below:
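The Dockerfile itself is missing from this copy; a reconstruction of the steps described above (package names, the Chromedriver version, and download URLs are assumptions — check the repo for the real file) could look like:

```dockerfile
FROM ruby:2.3

# libraries needed by the gems and by headless Chrome
RUN apt-get update && apt-get install -y curl unzip

# the project folder is mounted here when the container runs
WORKDIR /usr/src/app

# copy the Gemfile and install the required gems
COPY Gemfile ./
RUN bundle install

# install Chrome and Chromedriver (versions/URLs are illustrative)
RUN curl -sS -o chrome.deb https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \
    && (dpkg -i chrome.deb || apt-get -fy install)
RUN curl -sS -o chromedriver.zip https://chromedriver.storage.googleapis.com/2.46/chromedriver_linux64.zip \
    && unzip chromedriver.zip -d /usr/local/bin/
```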

Running Capybara in a Docker

Running Capybara with my image is pretty simple if you have run it without Docker before; just try the options best suited to your case:
docker run --shm-size 256M --rm --name capybara_m -v $PWD:/usr/src/app gunesmes/docker-capybara-chrome:latest bash -c "cucumber features/G001_basket_product_checks.feature DRIVER=driver_desktop"

the meaning of the parameters in the command as follows:

  • run: runs a container from the image
  • --shm-size: the size of /dev/shm, the shared memory. If you do not set it, Chromedriver may become unreachable when there is not enough allocated memory.
  • --rm: automatically removes the container when it exits, so we can rerun it if it fails
  • --name: assigns a name to the container
  • -v: bind-mounts a volume; we are mounting the local files inside the container. $PWD:/usr/src/app mounts the present working directory onto the path /usr/src/app inside the container
  • gunesmes/docker-capybara-chrome: the name of the image, which makes the local machine work as a Capybara machine. The first time you run it, the image is downloaded from Docker Hub; later runs only download updated layers, if there are any.
  • :latest: the version of the image; if you tagged your image when building it, you can use that tag.
  • ...: the rest of the command is Cucumber-specific. You can see the options in the Cucumber docs.

Visual Run for iPhone6

cucumber features/G001_basket_product_checks.feature DRIVER=driver_mobile_iphone6_v_visible

Docker run for a single tag with single platform

docker run --shm-size 128M --rm --name capybara_m -v $PWD:/usr/src/app gunesmes/docker-capybara-chrome:latest bash -c "cucumber features/G001_basket_product_checks.feature DRIVER=driver_desktop"

Docker run for parallel execution of smoke test


Check the reports inside the ./html_report folder.

Docker run for parallel execution of all tags with all platforms


When we run it, there are only two parallel runs executed: against the desktop version of Chrome and against the iPhone 6 version of Chrome in vertical mode. For a better mobile experience, we set both the dimensions of the mobile device and the user agent of the mobile operating system. However, there is still a technical limitation: we don't run the tests on real devices; check the browser object in env.rb. You can run this in a dedicated environment, it just requires more resources than an ordinary machine.

In the end, it creates JSON report files in the ./report folder, so you can create nice HTML reports with the cucumber-jvm report plugin for Jenkins. The reports include screenshots on error and the browser logs.

Check the log output to see the benefit of the parallel run: each test takes approximately 1 minute, and with 7 tests running in parallel, all of them together take approximately 1 minute 20 seconds.

Thursday, May 31, 2018

Isolated - Scalable Performance Testing: Locust in Dockers

I have shared some posts on how to run Locust locally or in the cloud, as slave or master. This time I want to share how you can run it in Docker. To fully get the benefits of Locust, I am using it with Python 3, so I created a Dockerfile and uploaded the image to Docker Hub; the project is on GitHub.

The Dockerfile has the minimum requirements, but we also have a new file called `requirements.txt`, to which you can add the Python libraries to be installed inside the container via pip install. The Dockerfile is:
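The Dockerfile embed is missing here; a reconstruction matching the description (the Python base image and the Locust package name are assumptions based on the 2018-era ecosystem) could be:

```dockerfile
FROM python:3.6

# extra Python libraries for the test scripts go into requirements.txt
COPY requirements.txt /requirements.txt
RUN pip install locustio && pip install -r /requirements.txt

# the scripts are mounted under /locust at run time
WORKDIR /locust

# make the image behave like the locust binary itself
ENTRYPOINT [ "/usr/local/bin/locust" ]
```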

Once you have the docker-locust image, you can run your script. At the end of the Dockerfile you can see ENTRYPOINT [ "/usr/local/bin/locust" ]; this enables us to use the image as a service, which means you can call it directly, the same as using a locally installed Locust. See the run command below:

Running Locust in a Docker

Running Locust with my image is pretty simple if you have used it without Docker before; just try the options best suited to your case:
docker run --rm --name locust -v $PWD:/locust gunesmes/docker-locust -f /locust/ --host= --num-request=100 --clients=10 --hatch-rate=1 --only-summary --no-web

the meaning of the parameters in the command as follows:

  • run: runs a container from the image
  • --rm: automatically removes the container when it exits, so we can rerun it if it fails
  • --name: assigns a name to the container
  • -v: bind-mounts a volume; we are mounting the local files inside the container. $PWD:/locust mounts the present working directory onto the path /locust inside the container
  • gunesmes/docker-locust: the name of the image, which works as a Locust service, so it is a synonym for the locust command. The first time you run it, the image is downloaded from Docker Hub; later runs only download updated layers, if there are any.
  • ...: the rest of the command is Locust-specific. You can see the options at the end of the post.

See what happens when you run the command for the first time:

Here is the list of Locust command-line options; note that Locust will remove --num-request in favour of stopping runs with a timer parameter.

The little but powerful locustfile just collects the links on the home page of the host and starts the attack with Locust when it is ready. See the locustfile:
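The locustfile embed is missing from this copy; a sketch using the 2018-era Locust API (the link list is a simplified placeholder for the links actually crawled from the home page) could be:

```python
# locustfile.py - sketch; requires the locustio package (2018-era API)
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    def on_start(self):
        # fetch the home page once; a real script would parse its links
        self.client.get("/")
        self.links = ["/about", "/products"]  # placeholder for parsed links

    @task
    def visit_links(self):
        # attack the collected links
        for link in self.links:
            self.client.get(link)

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000   # milliseconds between tasks
    max_wait = 3000
```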