not a feature, but the future of an app is under testing.

Friday, May 3, 2019

How to Set Shared Preferences in Espresso Test for Kotlin and Java




I have been working with Espresso and needed to take a deep dive into Shared Preferences, because it is one of the main parameters used in the application we developed. After a long search through online sources, I found only some pretty old documents for Espresso with Java and very few documents about Espresso with Kotlin. In this post, I want to share my experience with setting Shared Preferences in Kotlin and Java and how you can use it in your test design. You can follow these steps for your own test project.

Shared Preferences is a way to store user data on the local device, and it has been supported since the very early versions of Android. Shared Preferences can be stored in the default file or in a custom file.

Using the Default File for Shared Preferences

If your application uses the default file, it stores the shared data in the default file provided by Android at the following path on the device:
/data/data/com.package.name/shared_prefs/com.package.name_preferences.xml
This is the source code for getDefaultSharedPreferencesName

For this case, you can set/clear/get shared preferences by the following method:
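A minimal Kotlin sketch of such a helper is shown below; the helper name and the key names are illustrative, not taken from the original project.

import android.preference.PreferenceManager
import androidx.test.platform.app.InstrumentationRegistry

// Sketch of a helper for the default SharedPreferences file (key names are illustrative).
object DefaultPrefsHelper {
    private fun prefs() =
        PreferenceManager.getDefaultSharedPreferences(
            InstrumentationRegistry.getInstrumentation().targetContext)

    fun setBoolean(key: String, value: Boolean) {
        prefs().edit().putBoolean(key, value).commit()
    }

    fun getBoolean(key: String, default: Boolean = false): Boolean =
        prefs().getBoolean(key, default)

    fun clear() {
        prefs().edit().clear().commit()
    }
}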


Using a Custom File for Shared Preferences

In the same way, your application can use a custom file to store shared preferences. This time you need to know the name of the .xml file used for storing them. You can find the file in the same path as the default preferences file. If the file name is `MyAppSharedPreferences`, the file path should look like the following:
/data/data/com.package.name/shared_prefs/MyAppSharedPreferences.xml
For this case, you can set/clear/get shared preferences by the following method:
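A minimal Kotlin sketch for a custom preferences file follows, assuming the file name `MyAppSharedPreferences` from the path above; the key names are illustrative.

import android.content.Context
import androidx.test.platform.app.InstrumentationRegistry

// Sketch of a helper for a custom SharedPreferences file (file name taken from the example above).
object CustomPrefsHelper {
    private const val PREFS_FILE = "MyAppSharedPreferences"

    private fun prefs() =
        InstrumentationRegistry.getInstrumentation().targetContext
            .getSharedPreferences(PREFS_FILE, Context.MODE_PRIVATE)

    fun setString(key: String, value: String) {
        prefs().edit().putString(key, value).commit()
    }

    fun getString(key: String): String? = prefs().getString(key, null)

    fun clear() {
        prefs().edit().clear().commit()
    }
}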


When there is an update to shared preferences or other application data, we need to launch the application only after this update has been made. To do this, `launchActivity` should be set to false when the activity rule is initialized.
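A minimal test sketch looks like the following; `MainActivity`, the helper above and the key name are assumptions for illustration only.

import android.content.Intent
import androidx.test.rule.ActivityTestRule
import org.junit.Before
import org.junit.Rule
import org.junit.Test

class SharedPreferencesTest {

    // third constructor parameter is launchActivity = false,
    // so the activity is NOT started automatically
    @get:Rule
    val activityRule = ActivityTestRule(MainActivity::class.java, false, false)

    @Before
    fun setUp() {
        // prepare the shared preferences before the application is launched
        DefaultPrefsHelper.setBoolean("isFirstRun", false)
    }

    @Test
    fun appStartsWithPreparedSharedPreferences() {
        // launch the activity manually, after the preferences have been updated
        activityRule.launchActivity(Intent())
        // ... Espresso checks go here ...
    }
}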

See the full code in Kotlin:
See the full code in Java:

Thursday, January 3, 2019

(Micro) Service Testing with Postman - Newman - Docker


Postman seems to have become a de facto tool for service testing because it is user-friendly, easy to learn, all-in-one, lightweight and collaborative. Postman has been around for a long time, but recently its popularity has grown thanks to a stable native application, the collaboration features that came after version 6.2, sharing of collections with the team, interactive teamwork, mocks for isolated testing, environments for running the tests against different setups such as local, development, stage, and many more features. For me, one of the biggest advantages is that it is easy to use for everyone on a team, so anyone can use and update a Postman collection easily. In this post, I want to explain how Postman can be used efficiently.

Testing a Service and Writing Tests

Testing a service with Postman is simple. Postman supports all the common HTTP methods such as POST, GET, PUT and PATCH. Just select the correct method and hit the service URL you want to test. Postman also has everything you need for complex requests: headers, body and parameter lists. If you need to prepare something before sending the test request, you can do it in the 'Pre-request Script' tab. If you need authorization, you can add it in the 'Authorization' part, which is an alternative to adding an 'Authorization' parameter in the 'Headers' section. If you want to check not only the status code but also the returned data, you can add assertions in the 'Tests' section. Postman supports the Chai Assertion Library in BDD style, so you can add self-descriptive assertion functions to your tests.
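For example, the 'Tests' tab of a request might contain something like the following sketch; the field names are illustrative, not taken from my collection.

// runs in the Postman sandbox after the response arrives
pm.test("status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("response body contains the payment status", function () {
    var body = pm.response.json();
    pm.expect(body).to.have.property("paymentId");   // illustrative field name
    pm.expect(body.status).to.eql("processing");
});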




For complex scenarios, you can write some JavaScript in the 'Tests' tab to simulate the real behavior of your users. In the following scenario, a basic payment process is started with a POST request and the status of the request becomes `processing`; after that, the client needs to check the backend every second until the status of the process becomes `completed`. For this scenario, in the `payment/payment-processing` request, as long as the status is `processing` the request calls itself recursively. If the status is no longer `processing`, it calls the next request with another great feature of Postman, `setNextRequest`, as in `postman.setNextRequest("payment/payment-complete")`. See the example:
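A sketch of the 'Tests' tab of the `payment/payment-processing` request could look like this; the `status` field name is an assumption.

var status = pm.response.json().status;

if (status === "processing") {
    // wait roughly a second, then run this same request again
    setTimeout(function () {}, 1000);
    postman.setNextRequest("payment/payment-processing");
} else {
    // the payment is no longer processing, move on to the completion check
    postman.setNextRequest("payment/payment-complete");
}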

Pre-Request Script For Creating Data Dynamically

The pre-request script is designed to run something before the request is sent, so we can use it for creating/updating/deleting test data for comprehensive test cases. In this section, you can use the embedded Node packages such as `atob` and `btoa` for Base64 encoding/decoding, `cheerio`, a jQuery-like library for getting values from a web object, or the `CryptoJS` cryptographic library to encrypt data with AES, DES or SHA. You can check the full list of embedded libraries in the Postman documentation.

In the following example, for RSA encryption with Base64, I use an external web application. In the 'Pre-request Script' tab of the request, `myData` is encrypted by sending a request to this web application and the result is stored for the main request. This is another important feature of Postman.
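A sketch of such a 'Pre-request Script' is below; the encryption service URL and the variable names are illustrative, not the ones used in my project.

var payload = { data: pm.environment.get("myData") };

pm.sendRequest({
    url: "https://example.com/rsa-encrypt",        // external encryption web application (assumption)
    method: "POST",
    header: { "Content-Type": "application/json" },
    body: { mode: "raw", raw: JSON.stringify(payload) }
}, function (err, res) {
    if (!err) {
        // store the encrypted value so the main request can use it as {{encryptedData}}
        pm.environment.set("encryptedData", res.json().encrypted);
    }
});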

Running Postman Tests With Postman

You can run a collection easily with the Postman collection runner, setting an environment if necessary. First, you can get my test collection, which is written against the blog project on GitHub. Check the `service-test` section, since it is a full project including every test that you need in your CI/CD. To run the tests, just click the play button next to the collection name; it opens the run panel, and then you click the `Run` button. See the following images:





Running Postman Tests With Newman

Newman is the CLI companion of Postman, so you can run your tests with your environment from the command line. It was developed for integrating your tests into a CI/CD environment easily. In this example, test data is created before the test run for a comprehensive test result. The full commands are as follows:

You need Node.js first (check https://nodejs.org/en/download), then install the required packages, newman and newman-reporter-html:

npm install -g newman
npm install -g newman-reporter-html

To run with the collection JSON:
newman run blog-sample-service.postman_collection.json -e blog-local.postman_environment.json --reporters cli,html --reporter-html-template report-template.hbs

To run with the collection URL:
newman run https://www.getpostman.com/collections/ac3d0d9bbd8ae1bcfe5d -e blog-local.postman_environment.json --reporters cli,html --reporter-html-template report-template.hbs


Running Postman Tests With Newman in Docker

A better approach for running tests on a CI/CD pipeline is to use Docker, so that we do not need to install any test-related tools, languages or technologies on the client machine. This is becoming one of the industry standards.

Check the Dockerfile in the project. Basically, the base image is a Debian image with Node installed, and we install the required Node packages, newman and newman-reporter-html. Then we run the tests inside the /newman folder, and the report is created inside the /newman folder as well. This Dockerfile gives us a newman entry point, so the container can be run directly with newman commands. I have pushed the Newman image to Docker Hub, so we just pull it first and then use it.
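A minimal sketch of such a Dockerfile is shown below; the exact base image tag may differ from the one in the project.

FROM node:10-slim

# install the CLI runner and the HTML reporter globally
RUN npm install -g newman newman-reporter-html

# the collection, environment and report files are mounted into /newman at run time
WORKDIR /newman

# make the container behave like the newman binary itself
ENTRYPOINT ["newman"]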
docker run --network host -v $PWD:/newman gunesmes/newman-postman-html-report run https://www.getpostman.com/collections/ac3d0d9bbd8ae1bcfe5d -e blog-local.postman_environment.json --reporters cli,html --reporter-html-template report-template.hbs

Some data needs to be ready during the test, so we need to restore it before each run. For each run, the database should be restored with the script provided in restore_database.sql. These steps can be run with:
bash restore_db.sh
bash run_service_test.sh



Get the Postman Collection and the project.

Thursday, June 14, 2018

Scalable Tests for Responsive Web: Running Cucumber and Capybara with Tags in Dockers


If you are using Capybara with Cucumber, tagging is a very efficient Cucumber feature for keeping test cases manageable. Tagging can also be used for managing the test environment: @test may mean that these tests run only in the test environment, so the environment should be set to test for them. Likewise, @live may mean that these tests can run in the live environment, assuming that you are applying continuous testing. You can also give scenarios device-specific tags; for example, @iphone6_v may say that these tests are for the iPhone 6 in vertical (portrait) mode.

Moreover, with tagging you can also build an isolation strategy for running your tests in parallel. Each parallel suite should have its own data, such as users, active records, addresses, credit card info and so on. I have used tagging for running tests in Docker containers. In this post, you can find a practical way of running Capybara with Cucumber in Docker.

Creating Docker Image: Dockerfile

I am using Ruby version 2.3, so the Dockerfile starts with FROM ruby:2.3, the first layer of the image, which comes as a Debian OS. Then the required libraries are installed with RUN commands. The working directory is changed with WORKDIR; we will use this path when mounting the project folder into the running container. The Gemfile is COPYed to the WORKDIR so that we can RUN bundle install to install the required gems. The last thing is to install Chromedriver and Chrome with RUN commands. These steps are executed by the Dockerfile; check it below:
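A minimal sketch following those steps is shown below; the package list, the Chromedriver version and the Chrome installation details are assumptions and may differ from the real Dockerfile in the project.

FROM ruby:2.3

# system packages needed by the test suite and by Chrome
RUN apt-get update && apt-get install -y wget unzip gnupg

# working directory; the project folder is mounted onto this path at run time
WORKDIR /usr/src/app

# install the gems (cucumber, capybara, ...) listed in the Gemfile
COPY Gemfile ./
RUN bundle install

# install Chrome from Google's apt repository and a matching Chromedriver
RUN wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
    && apt-get update && apt-get install -y google-chrome-stable
RUN wget -q https://chromedriver.storage.googleapis.com/2.46/chromedriver_linux64.zip \
    && unzip chromedriver_linux64.zip -d /usr/local/bin/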

Running Capybara in a Docker

Running Capybara with my image is pretty simple, just like running it without Docker; you can adjust the options to whatever suits your case best.
docker run --shm-size 256M --rm --name capybara_m -v $PWD:/usr/src/app gunesmes/docker-capybara-chrome:latest bash -c "cucumber features/G001_basket_product_checks.feature DRIVER=driver_desktop"

The meanings of the parameters in the command are as follows:


  • run: run a container from the given image
  • --shm-size: size of /dev/shm (shared memory). If you do not set it, Chromedriver may become unreachable when there is not enough allocated memory.
  • --rm: automatically remove the container when it exits, so we can rerun it if it fails
  • --name: assign a name to the container
  • -v: bind mount a volume; we are mounting the local files inside the container. $PWD:/usr/src/app mounts the present working directory onto the path /usr/src/app inside the container
  • gunesmes/docker-capybara-chrome: the name of the image which works as the Capybara machine. The first time you run it, the image is downloaded from hub.docker.com; later runs only download updated layers, if there are any.
  • :latest: the version (tag) of the image; if you tagged your image when building it, you can use that tag.
  •  . . .: the rest of the command is Cucumber-specific. You can see the options in the Cucumber docs.

Visual Run for iPhone6

cucumber features/G001_basket_product_checks.feature DRIVER=driver_mobile_iphone6_v_visible


Docker run for a single tag with a single platform

docker run --shm-size 128M --rm --name capybara_m -v $PWD:/usr/src/app gunesmes/docker-capybara-chrome:latest bash -c "cucumber features/G001_basket_product_checks.feature DRIVER=driver_desktop"

Docker run for parallel execution of smoke test

bash run.sh 

Check the reports inside the ./html_report folder.

Docker run for parallel execution of all tags with all platforms

bash run_full.sh

When we run run_smoke.sh, only two parallel runs are executed: one against the desktop version of Chrome and one against the iPhone 6 version of Chrome in vertical mode. For a better mobile experience we set both the dimensions of the mobile device and the user agent of the mobile operating system. However, there is still the technical limitation that we are not running the tests on real devices. Check the browser object in env.rb; a sketch is shown below. You can run run_full.sh in a dedicated environment, it just requires more resources than an ordinary machine.
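As a sketch of what that env.rb driver registration might look like (the driver name, user agent string and screen dimensions are illustrative):

Capybara.register_driver :driver_mobile_iphone6_v do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  # emulate the iPhone 6 in vertical (portrait) mode
  options.add_argument('--window-size=375,667')
  options.add_argument('--user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end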


In the end, it creates JSON report files in the ./report folder, so you can create nice HTML reports with the cucumber-jvm report plugin for Jenkins. The report includes screenshots on errors and the browser logs.






Check the log output and you can see the benefit of the parallel run. Each test suite takes approximately 1 minute on its own; when we run run_full.sh there are 7 parallel runs, yet all of the tests together take approximately 1 minute 20 seconds.

Thursday, May 31, 2018

Isolated - Scalable Performance Testing: Locust in Dockers


I have shared some posts about how to run Locust locally or in the cloud, as slave or master. This time I want to share how you can run it in Docker. To get the full benefits of Locust I am using it with Python 3, so I created a Dockerfile, uploaded the image to Docker Hub, and put the project on GitHub.

The Dockerfile has the minimum required setup, but we also have a file called `requirements.txt` to which you can add the Python libraries you need; they are installed inside the container with pip. The Dockerfile is:
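A minimal sketch of such a Dockerfile is shown below; the base image tag and the pip package name are assumptions based on the Locust version of that time.

FROM python:3.6

# install Locust itself plus any extra libraries listed in requirements.txt
COPY requirements.txt /tmp/requirements.txt
RUN pip install locustio && pip install -r /tmp/requirements.txt

# the locustfile (run.py) is mounted onto /locust at run time
WORKDIR /locust

# make the container behave like the locust binary itself
ENTRYPOINT ["/usr/local/bin/locust"]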

Once you have the docker-locust image, you can run your script. At the end of the Dockerfile you can see ENTRYPOINT [ "/usr/local/bin/locust" ]; this enables us to use the image as a service, which means you can call it directly, the same as using Locust installed locally. See the run command below:


Running Locust in a Docker

Running Locust with my image is pretty simple, just like using it without Docker; you can adjust the options to whatever suits your case best.
docker run --rm --name locust -v $PWD:/locust gunesmes/docker-locust -f /locust/run.py --host=http://www.github.com --num-request=100 --clients=10 --hatch-rate=1 --only-summary --no-web

The meanings of the parameters in the command are as follows:

  • run: run a container from the given image
  • --rm: automatically remove the container when it exits, so we can rerun it if it fails
  • --name: assign a name to the container
  • -v: bind mount a volume; we are mounting the local file, run.py, inside the container. $PWD:/locust mounts the present working directory onto the path /locust inside the container
  • gunesmes/docker-locust: the name of the image which works as the Locust service, so it is effectively a synonym for the locust command. The first time you run it, the image is downloaded from hub.docker.com; later runs only download updated layers, if there are any.
  •  . . .: the rest of the command is Locust-specific. You can see the options at the end of the post.


See what is happening when you run the command for the first time:

A list of Locust command-line options follows; note that Locust will remove --num-request in favour of stopping runs with a timer parameter.

The little but powerful run.py simply collects the links on the host's home page and starts the Locust attack when it is ready. See run.py:
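A sketch of such a run.py for the Locust 0.x API is shown below; the class names and the link-collection logic are illustrative, not the exact code from the project.

import random
import re

from locust import HttpLocust, TaskSet, task


class UserBehaviour(TaskSet):
    def on_start(self):
        # collect the internal links found on the host's home page before the attack starts
        response = self.client.get("/")
        self.urls = [u for u in re.findall(r'href="([^"]+)"', response.text)
                     if u.startswith("/")]

    @task
    def visit_random_link(self):
        if self.urls:
            self.client.get(random.choice(self.urls))


class WebsiteUser(HttpLocust):
    task_set = UserBehaviour
    min_wait = 1000   # wait between 1 and 3 seconds between tasks (values in milliseconds)
    max_wait = 3000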

Saturday, November 4, 2017

Headless Miracles: Chromedriver Headless vs Chromedriver



You may have heard that running test cases in headless mode accelerates their execution. But is this true all the time? In this post, I have a little test comparing Chromedriver version 2.33 in headless mode against regular Chromedriver. The tests were run on Windows.

I am using Capybara, and I have around 200 test cases written in Cucumber. Tests run in parallel on 15 execution lines. The execution is controlled by tags, so we can record the execution time when a tag finishes. This way, we can compare the tag-specific time differences as well as the total time difference. I am using the following Chromedriver instances, defined in the env.rb file of the project.
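As a sketch, the two driver registrations in env.rb might look like this (driver names and window size are illustrative):

Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.register_driver :chrome_headless do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  options.add_argument('--disable-gpu')             # recommended for headless Chrome on Windows
  options.add_argument('--window-size=1280,1024')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end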



TAGS           | Chrome (s) | Headless (s) | DIFF
signup         | 90.0       | 70.0         | 22.22%
login          | 100.0      | 80.0         | 20.00%
basket_a       | 120.0      | 120.0        | 0.00%
order_d        | 160.0      | 150.0        | 6.25%
search         | 180.0      | 180.0        | 0.00%
filter_ps      | 180.0      | 190.0        | -5.56%
order          | 290.0      | 280.0        | 3.45%
view           | 310.0      | 290.0        | 6.45%
address        | 330.0      | 310.0        | 6.06%
guest_checkout | 380.0      | 350.0        | 7.89%
order_3d       | 380.0      | 360.0        | 5.26%
basket         | 390.0      | 370.0        | 5.13%
filter         | 410.0      | 430.0        | -4.88%
order_credit   | 450.0      | 450.0        | 0.00%
member         | 520.0      | 500.0        | 3.85%
TOTAL          | 530.0      | 510.0        | 3.77%

You can see the results of the test in the table above. In total, we got a 3.77% gain in time by running Chromedriver in headless mode. What about the resources required for running the tests? The following images show the performance indicators of the machine the tests ran on. Check the memory usage, processor usage and network usage over time.

Chromedriver in Headless Mode Windows Perfmon 
Chromedriver Windows Perfmon 

To sum up, the headless miracle does not hold for Chromedriver: a 3.77% gain in time with almost the same resource requirements is not a good enough reason to call it a miracle. If you say you are using PhantomJS and it is better than Chromedriver, read this post. I am planning to test that for my case, and when I do I will add the results.



Friday, May 12, 2017

Mobile Test Automation: Calabash on Xamarin Test-Cloud




If you are testing your application only against an emulator/simulator, you still run the risk that the expected features may not work on several real devices. To handle real-device testing you need many devices to run your tests on. In the current market, there are a huge number of possible device and operating system version combinations. The best approach is to find the most used devices using some statistical data; in most cases a Pareto analysis can help in selecting a high-coverage set of devices. However, if you want to find more bugs before release, you can also focus on the newest device with the latest OS version and the oldest device with the oldest OS version. In any case, you should have at least 5-10 Android devices and 3-5 iOS devices for a good level of coverage in the beginning. Managing those devices is another problem if you want to have your own local test suites. As a good alternative, you can use cloud services. In this post I want to share some information about running Calabash tests on Xamarin Test Cloud.

Xamarin Test Cloud supports cross-platform frameworks, so you can use the same feature files for both the iOS and Android projects. However, you should submit the code providing the profile and the config files in your command. If you have your own Calabash project, then you are ready to submit your code to Test Cloud.


Go to Xamarin Test Cloud and open an account; you will get 3 hours x 30 days of free usage. Click "New Test Run" and create a platform-specific device group. At the end of the creation select "Calabash"; it then gives you the complete command to submit to Test Cloud. To use test-cloud you need to install the `xamarin-test-cloud` gem with the following command:
gem install xamarin-test-cloud
If you are not using the cross-platform setup, you submit the code with the following command:
test-cloud submit yourAppFile.apk f957b60sd2322wddwd1f6140c760c2204a --devices 481d761b --series "AndroidMostUsed" --locale "en_US" --app-name "ProjectName" --user gunesmes@gmail.com
For the cross-platform case, you submit the code with the following command:
test-cloud submit yourAppFile.apk f957b60sd2322wddwd1f6140c760c2204a --devices 481d761b --series "AndroidMostUsed" --locale "en_US" --app-name "ProjectName" --user gunesmes@gmail.com --profile android  --config=config/cucumber.yml
As I explained in my previous post, Calabash-Android and Calabash-iOS differ in architecture because of the platform dependency. These commands apply directly, but for the iOS project you need to have a new target named -cal, and it should be built for a device. Then you should produce an .ipa file from this target. To produce the .ipa file you can run the following command:
 /usr/bin/xcrun -sdk iphoneos PackageApplication -v ~/Library/Developer/Xcode/DerivedData/ModacruzV2-hdlgquuxyftvplepqknjiywdnclj/Build/Products/Debug-iphoneos/IOSProject-cal.app -o ~/Projects/mobile_app_automation/IOSProject.ipa

Then you can submit your iOS project with the newly created .ipa file:
test-cloud submit ~/Projects/mobile_app_automation/IOSProject.ipa f957b60sd2322wddwd1f6140c760c2204a --devices 21d1d61b --series "IosMostUsed" --locale "en_US" --app-name "ProjectName" --user gunesmes@gmail.com --profile ios --config=config/cucumber.yml
Then you can see the progress in the console or on the web; the console gives the result and the URL to reach the project on Xamarin Test Cloud. Xamarin has a very friendly user interface for examining the failing test cases.



One of my favourite features of Cucumber is handling test cases with tags, but Xamarin has not implemented the tag options; instead they suggest using categorisation, by adding `--include CATEGORY-NAME` as for NUnit tests. However, I was not satisfied with this feature and hoped to solve it differently.


Update:

The answer for running with the tag option came from Stack Overflow: you can add the tag you want to run to the config file, like `--tag @regression`:
android: RESET_BETWEEN_SCENARIOS=1 PLATFORM=android -r features/support -r features/android/support -r features/android/helpers -r features/step_definitions -r features/android/pages --tag @regression

A final word: this is where cloud testing really shines, since you can run the same scripts on many devices in parallel.





Sunday, April 9, 2017

Why "Test Risk"




Recently I have had lots of interviews for QA Analyst roles in the company. We called in engineers from beginner level to senior test engineers for these roles. One of the most important questions in our interviews is about understanding why we test. This question is really about the philosophy of testing; philosophy is generally not something people enjoy, but everyone who works on a subject every day from 9 to 6 should think about why he does this job. I wonder whether the testing job is only there to feed the person, or whether he has some other passion for his profession. That is why this question is very important to me. In one of the trainings given by Ståle Amland, "Exploratory Testing – Risk-Based Agile Testing", he explains the philosophy of testing as the epistemology of testing, saying that "all good testers should practice Epistemology"; see these slides from his training:

According to Amland, "Epistemology is the study of how we know what we know. The philosophy of science belongs to Epistemology. All good testers practice Epistemology." The basic skills of epistemology are:
  • Ability to pose useful questions.
  • Ability to observe what’s going on.
  • Ability to describe what you perceive.
  • Ability to think critically about what you know.
  • Ability to recognize and manage bias.
  • Ability to form and test conjectures.
  • Ability to keep thinking despite already knowing.
  • Ability to analyze someone else’s thinking.

In this post I want to explain my thoughts about testing and risk.

Why are we testing? Why do companies need testing? Why does someone pay us money for testing? Have you ever asked yourself these questions?

If you have not tried to answer this question, it is time to do so. The simplest answer is "because there are risks". Everything started from the desire to reduce risk. You never know how much risk you would take if you put a product live in front of real users without testing. Therefore we should always consider risk while we are testing anything. Risk-based testing helps us construct an approach for including risks in the testing effort; it is not a method or a technique, it is an approach that you use while applying any testing technique.

What is Risk

Risk is a general term for the probability of failure; see the definition here. However, in terms of testing in the software industry, risk is the expected damage that can be caused by a failure in production. Therefore it has two parameters: the first is the likelihood of the failure, i.e. which steps produce the defect and what percentage of users might possibly see it; the second is the damage, i.e. when a user hits it, how the system responds and what that costs. So let's define the formula for risk:

Risk = (Likelihood of Failure) X (Damage of Failure)

Let's look at an example of calculating risks. For an e-commerce web application, we have the following cases:


Practicing the Risk Calculation

Let's assume that we have the following defects in production; these would most probably be classified as major defects that should be fixed asap, but they are good examples for the calculation:

Defect 1: The user cannot add an address when he reaches the address-selection step
Defect 2: The user cannot use the bonus of an X-type credit card at the checkout step


Analysing the Risk Factors for Defect 1:

Likelihood: We need statistical data that can help us predict user behaviour; let's say the data comes from analytics tools. 1/10 of users add a new address during an order; 1/20 of users are newly registered users without an address, so they have to enter a new address; 1/100 of users add a new address from their membership page, and that functionality works fine. Let's say 1000 users use the system and 1/10 of them order products.


Affected Users    | Description                                                                                  | Effect on System
1000 * 1/10 = 100 | 100 users order products                                                                     | +
1000 * 1/100 = 10 | 10 users add new addresses from the account page; they are happy and do not affect the probability | no effect
100 * 1/20 = 5    | 5 new users have to enter a new address                                                     | -
100 * 1/10 = 10   | 10 users want to add a new address during the order                                         | -

  • 15/100 of users in the ordering process are possibly affected
  • 15/1000 of users in the whole system are possibly affected
The likelihood is 15/1000 * 100 = 1.5%, which means 1.5% of all users might be affected by defect 1.


Damage: The cost of failure depends on the sector in which the application is used. In general it could be lost money, damaged reputation, causing injuries or even killing people. For e-commerce we can think about money and reputation. This defect stops the purchasing process, so we can give it the highest damage score. If we scale the damage from 1 to 10, this should be 10. However, this classification should be agreed upon with all stakeholders; for more about this, look at the test strategy section in this post.

Risk of Defect 1:  1.5 X 10 =>  15 

Analysing the Risk Factors for Defect 2:

Likelihood: We have 1000 users; 1/10 of them go to order, 1/50 of those use an X-type credit card, and 1/10 of those users use bonuses.

Affected Users    | Description                      | Effect on System
1000 * 1/10 = 100 | 100 users order products         | +
100 * 1/50 = 2    | 2 users use X-type credit cards  | no effect
2 * 1/10 = 0.2    | 0.2 users use bonuses            | -

  • 0.2/100 of users in the ordering process are possibly affected
  • 0.2/1000 of users in the whole system are possibly affected
The likelihood is 0.2/1000 * 100 = 0.02%, which means 0.02% of all users might be affected by defect 2.

Damage: the cost of failure should be the same as for defect 1, by the same assumption. Damage should be 10.

Risk of Defect 2:  0.02 X 10 => 0.2

This is just an introductory post; the next post will be about "Risk-Based Testing".