
Posts

Automated Performance Testing with K6

Automated testing means running tests and reporting their results with tools and scripts. Automated performance testing means designing performance test scenarios so that tools run them automatically and evaluate the results, in order to decide whether or not to proceed further. Automated performance testing is:

- A subset of the performance test scenarios
- Designed to be run automatically by tools
- Designed to have its results evaluated by tools
- Designed not to break the system
- Driven by decision metrics, e.g. maximum response time, maximum average response time, ...
- Designed to create reports for the current and historical runs

What Types of Performance Testing Should Be Automated

Automated performance testing can be applied to all types of performance testing; however, each type needs a different level of maturity and sanity. Automated performance testing is:

- Easily applied to load testing
- Hard to apply to stress and spike testing, but still beneficial
- Very hard to apply to soak (endurance) testing …

Basics about Performance Testing Tools

I had the opportunity to give a talk about how we can automate performance testing in a CI/CD pipeline at a conference held in Istanbul for the first time, "Test Automation and Digital QA Summit Istanbul, #TAS19". My subject was "Automated Performance Testing": the first part of the talk explains what performance testing is, the second part explains performance testing tools, and the last part explains what automated performance testing is and how to implement it in CI/CD. In this post, I want to explain my thoughts about the basics of performance testing tools. The next posts will focus on automated performance testing and on using K6 as a performance testing tool.

How to Evaluate Performance Testing Tools

Before diving into the performance testing tools and technologies we have in the industry, we should define the criteria for evaluation. Each tool has its own advantages and disadvantages, and its similarities with its ancestors and other tools. When we start …

Performance Testing

I had the opportunity to give a talk about how we can automate performance testing in a CI/CD pipeline at a conference held in Istanbul for the first time, "Test Automation and Digital QA Summit Istanbul, #TAS19". My subject was "Automated Performance Testing", and the first part of the talk explains what performance testing is. In this post, I want to explain my thoughts about performance testing. The next posts will focus on automated performance testing and on using K6 as a performance testing tool.

What is Performance Testing

By definition: defining the performance of the system by testing the functionalities of the system in terms of its non-functional characteristics. Performance testing is:

- A non-functional kind of testing
- Testing the functionalities of the system
- Both black-box and white-box testing
- Defining how the system behaves under any load
- Applied under different load levels
- Applied under different load increments
- Applied under different durations …

Getting the text of elements in Espresso

Espresso is not designed to play with UI objects. When you consider it in terms of testing capability, it still needs to be improved; it is not as mature as the Android SDK itself, so you need to customize some things to handle your requirements. You may even need to integrate other tools, such as UIAutomator, into your test suite. In this post, I want to show how we can get the text of a ViewInteraction view. A ViewInteraction does not have a .text function for getting the text of the object, since it is designed for interacting with the object. Another method which may solve the assertion problem is to use .check to match the text. However, this method is not a good way to assert that a certain field's text equals a value. For this purpose, we need to get the text of the element and assert on it. To get the text of a ViewInteraction element, we need to cast it to TextView, using its assignable form, and then …
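A minimal Kotlin sketch of such a helper, assuming AndroidX Espresso packages (the helper name and the view id in the usage comment are illustrative, not the post's exact code):

```kotlin
import android.view.View
import android.widget.TextView
import androidx.test.espresso.UiController
import androidx.test.espresso.ViewAction
import androidx.test.espresso.ViewInteraction
import androidx.test.espresso.matcher.ViewMatchers.isAssignableFrom
import org.hamcrest.Matcher

// Hypothetical helper: extracts the text of a matched view by performing a
// ViewAction whose constraint restricts it to views assignable from TextView.
fun getText(interaction: ViewInteraction): String {
    var text = ""
    interaction.perform(object : ViewAction {
        override fun getConstraints(): Matcher<View> =
            isAssignableFrom(TextView::class.java)

        override fun getDescription(): String =
            "get the text of a TextView"

        override fun perform(uiController: UiController, view: View) {
            // The constraint above guarantees this cast is safe.
            text = (view as TextView).text.toString()
        }
    })
    return text
}

// Example usage inside a test (R.id.title is an illustrative view id):
// val actual = getText(onView(withId(R.id.title)))
// assertEquals("Expected title", actual)
```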

How to Set Shared Preferences in Espresso Test for Kotlin and Java

I have worked with Espresso and needed to dive deep into Shared Preferences, because it is one of the main parameters used in the application we developed. After a long search through online sources, I found only some fairly old documents about Espresso with Java and very few about Espresso with Kotlin. In this post, I want to share my experience with setting Shared Preferences in Kotlin and Java and how you can use it in your test design. You can follow the steps for your own test project. Shared Preferences is a way to store user data on the local device, so it has been supported since the very early versions of Android. Shared Preferences can be stored in the default file or in a custom file.

Using the Default File for Shared Preferences

If your application uses the default file, it stores the shared data in the default file provided by Android, at the following path on the device: /data/data/com.package.name/shared_prefs/com.package.name_preferences.xml. This …
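A minimal Kotlin sketch of this setup, assuming AndroidX test APIs (the preference keys, file name, and test class below are illustrative assumptions, not the post's exact code):

```kotlin
import android.content.Context
import android.preference.PreferenceManager
import androidx.test.platform.app.InstrumentationRegistry
import org.junit.Before
import org.junit.Test

class SharedPreferencesSetupTest {

    @Before
    fun seedSharedPreferences() {
        // Target context = the application under test, so the writes land in
        // /data/data/<package>/shared_prefs/.
        val context: Context = InstrumentationRegistry.getInstrumentation().targetContext

        // Default file: <package>_preferences.xml
        PreferenceManager.getDefaultSharedPreferences(context)
            .edit()
            .putBoolean("onboarding_completed", true) // hypothetical key
            .putString("user_token", "test-token")    // hypothetical key
            .commit()                                  // synchronous write before the test starts

        // For a custom file instead:
        // context.getSharedPreferences("my_custom_prefs", Context.MODE_PRIVATE)
        //     .edit().putString("some_key", "some_value").commit()
    }

    @Test
    fun appStartsWithSeededPreferences() {
        // Launch the activity under test here and assert on the seeded state.
    }
}
```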

(Micro) Service Testing with Postman - Newman - Docker

Postman seems to have become the de facto tool for service testing, because Postman is a very user-friendly, easy-to-learn, all-in-one, lightweight and collaborative tool. Postman has been around for a long time, but its popularity has recently grown thanks to a stable native application, the collaboration features added after version 6.2, sharing of collections within a team, interactive team work, mocks for isolated testing, environments for running the tests against different test environments such as local, development, stage ... and many more features. For me, one of its biggest strengths is that it is easy to use for everyone on a team, so everyone can use and update a Postman collection easily. In this post, I want to explain how Postman can be used efficiently.

Testing a Service and Writing Tests

With Postman, testing a service is simple. Postman supports many methods, such as POST, GET, PUT and PATCH. Just select the correct method and hit the service URL you want to test. Postman also has everything …

Scalable Tests for Responsive Web: Running Cucumber and Capybara with Tags in Dockers

If you are using Capybara with Cucumber, tagging is a very efficient Cucumber feature for keeping test cases manageable. Tagging can also be used for managing test environments: @test may mean "these tests run only in the test environment, so set the environment to test for me", and likewise @live may mean that these tests can run in the live environment, assuming you are applying continuous testing. You can also give scenarios device-specific tags; for example, @iphone6_v may say these tests are for the iPhone 6 in vertical (portrait) mode. Moreover, with tagging you can also build an isolation strategy for running your tests in parallel. Each parallel suite should have its own data, such as users, active records, addresses, credit card info, etc. I have used tagging for running tests in Docker containers. In this post, you can find a practical way of running Capybara with Cucumber in Docker.

Creating a Docker Image: Dockerfile

I am using Ruby version 2.3, so in the Dockerfile getting …

Isolated - Scalable Performance Testing: Locust in Dockers

I have shared some posts about how to run Locust locally or in the cloud, as slave or master. This time I want to share how you can run it in Docker. To fully get the benefits of Locust I am using it with Python 3, so I created a Dockerfile, uploaded the image to Docker Hub, and put the project on GitHub. The Dockerfile has the minimum required dependencies, but there is also a new file called `requirements.txt`, to which you can add the Python libraries to be installed inside the container with pip install. When you have the docker-locust image, you can run your script. At the end of the Dockerfile you can see ENTRYPOINT [ "/usr/local/bin/locust" ]; this enables us to use the image as a service, which means you can call it directly, just as if Locust were installed locally. See the run command below:

Running Locust in Docker

Running Locust with my image is pretty simple if you have used it without Docker; just try the other options …

Headless Miracles: Chromedriver Headless VS Chromedriver

You may have heard that we run test cases in headless mode so that we can accelerate their execution. So, is this true all the time? In this post, I ran a little test to compare Chromedriver (version 2.33) in headless mode against regular Chromedriver. The tests were run on Windows. I am using Capybara and have around 200 test cases written in Cucumber. Tests run in parallel across 15 execution lines. The execution is controlled by tags, so we can record the execution time when a tag finishes. This way, we can compare the tag-specific time differences and the total time difference. I am using the following Chromedriver instances, configured in the env.rb file of the project.

TAGS        Chrome          Headless        DIFF
signup      90.0009999275   70.003000021    22.22%
login       100.000999928   80.003000021    20.00%
basket_a    120.000999928   120.003000021   0.00%
order_d     160.001999855   150.003999949   6.25%