Scalable Tests for Responsive Web: Running Cucumber and Capybara with Tags in Docker

If you are using Capybara with Cucumber, tagging is a very efficient Cucumber feature for keeping test cases manageable. Tags can also be used to manage test environments: @test may mean that these tests run only in the test environment, so the framework should set the environment to test; likewise, @live may mean that these tests can run against the live environment, assuming you are applying continuous testing. You can also give scenarios device-specific tags, for example @iphone6_v may mean that these tests are for an iPhone 6 in vertical (portrait) mode.
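As a rough sketch of how such tags can be wired up, a Before hook in cucumber-ruby can react to them; the tag names come from the examples above, while the driver name, environment variables, and URLs here are assumptions for illustration only:

# features/support/hooks.rb -- a minimal sketch, not the project's actual hooks
require 'capybara/cucumber'

Before('@test') do
  # assumption: point the suite at the test environment
  Capybara.app_host = ENV.fetch('TEST_URL', 'https://test.example.com')
end

Before('@live') do
  # assumption: point the suite at the live environment
  Capybara.app_host = ENV.fetch('LIVE_URL', 'https://www.example.com')
end

Before('@iphone6_v') do
  # one way to honour a device tag: switch to a driver registered in env.rb
  Capybara.current_driver = :driver_mobile_iphone6_v
end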

Moreover, tagging also lets you build an isolation strategy for running your tests in parallel. Each parallel suite should have its own data, such as users, active records, addresses, credit card info, etc. I have used tagging for running tests in Docker containers. In this post, you can find a practical way of running Capybara with Cucumber in Docker.

Creating the Docker Image: Dockerfile

I am using Ruby version 2.3, so the Dockerfile starts with FROM ruby:2.3; this first layer of the image comes as a Debian OS. Then the required libraries are installed with RUN commands. The working directory is changed with WORKDIR; we will use this path later by mounting the project folder into the running container. The Gemfile is COPYed into the WORKDIR so we can RUN bundle install to install the required gems. The last thing, we need to install Chrome and Chromedriver with RUN commands. These steps are executed by the Dockerfile; check it below:
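The original Dockerfile is not reproduced here, but a minimal sketch following the steps above could look like this; the library list and the Chrome/Chromedriver download URLs and versions are assumptions, not the exact contents of gunesmes/docker-capybara-chrome:

# Base layer: Ruby 2.3 on Debian
FROM ruby:2.3

# Libraries needed by Chrome and the test run (assumed list)
RUN apt-get update && apt-get install -y wget unzip libnss3 libgconf-2-4

# Working directory; the project folder is mounted here at run time
WORKDIR /usr/src/app

# Copy the Gemfile and install the gems so this layer is cached
COPY Gemfile ./
RUN bundle install

# Install Chrome and a matching Chromedriver (versions are illustrative)
RUN wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \
    && (dpkg -i google-chrome-stable_current_amd64.deb || apt-get install -fy)
RUN wget -q https://chromedriver.storage.googleapis.com/2.46/chromedriver_linux64.zip \
    && unzip chromedriver_linux64.zip -d /usr/local/bin/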

Running Capybara in a Docker Container

Running Capybara with my image is pretty simple if you have already run it without Docker; just try the options that are best suited for your case.

docker run --shm-size 256M --rm --name capybara_m \
 -v $PWD:/usr/src/app gunesmes/docker-capybara-chrome:latest bash \
 -c "cucumber features/G001_basket_product_checks.feature DRIVER=driver_desktop"

The meaning of the parameters in the command is as follows:
  • run: runs a command in a new container
  • --shm-size: the size of /dev/shm (shared memory); if you do not set it, Chromedriver may become unreachable when there is not enough allocated memory
  • --rm: automatically removes the container when it exits, so we can rerun it if it fails
  • --name: assigns a name to the container
  • -v: bind mounts a volume; we are mounting the local files inside the container. $PWD:/usr/src/app means mounting the present working directory to the path /usr/src/app inside the container
  • gunesmes/docker-capybara-chrome: the name of the image; the container does the same work your local machine would do as the Capybara machine. The first time you run it, Docker downloads the image from hub.docker.com, but for later runs it only downloads the updated image layers, if there are any
  • :latest: the version (tag) of the image; if you tagged your image when building it, you can use that tag
  • . . .: the rest of the command is Cucumber-specific. You can see the options in the Cucumber docs.

Visual Run for iPhone 6

For debugging purposes you may need to run the test visually to see whether everything happens as expected. To do this, make the driver visible by selecting one of the visible drivers listed in the env.rb file. The following command then simply runs the test on an iPhone 6 in vertical mode, visibly. You can also see the video for this run.
cucumber features/G001_basket_product_checks.feature DRIVER=driver_mobile_iphone6_v_visible
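The project's env.rb is not reproduced here, but as a minimal sketch such drivers might be registered with Capybara like this; the window size, the user-agent string, and the way the DRIVER variable is read are assumptions for illustration:

# features/support/env.rb -- a minimal sketch, not the original file
require 'capybara/cucumber'
require 'selenium-webdriver'

# Visible (non-headless) iPhone 6 vertical driver: emulate the device with
# Chrome flags for window size and a mobile user-agent (values illustrative).
Capybara.register_driver :driver_mobile_iphone6_v_visible do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--window-size=375,667')
  options.add_argument('--user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

# Headless desktop driver used inside the container
Capybara.register_driver :driver_desktop do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  options.add_argument('--no-sandbox')
  options.add_argument('--window-size=1366,768')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

# Cucumber turns a trailing DRIVER=... argument into an environment variable,
# so the default driver can be picked from it.
Capybara.default_driver = (ENV['DRIVER'] || 'driver_desktop').to_sym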

Docker run for a single tag with a single platform

You can run a feature for a single platform by setting the DRIVER option to the desired one. The following command runs one feature file with the driver for the desktop.
docker run --shm-size 256M --rm --name capybara_m \
 -v $PWD:/usr/src/app gunesmes/docker-capybara-chrome:latest bash \
 -c "cucumber features/G001_basket_product_checks.feature DRIVER=driver_desktop"
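To filter by a single tag instead of a feature file, you can pass Cucumber's --tags option in the same command; @smoke here is an assumed tag name:

docker run --shm-size 256M --rm --name capybara_m \
 -v $PWD:/usr/src/app gunesmes/docker-capybara-chrome:latest bash \
 -c "cucumber --tags @smoke DRIVER=driver_desktop"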

Docker run for parallel execution of smoke test

bash run.sh 
Check the reports inside the ./html_report folder.
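The smoke script itself is not reproduced here, but as a rough sketch it can start one container per platform in the background and write a JSON report for each run; the tag name, container names, and report paths below are assumptions:

#!/bin/bash
# Sketch of a smoke run: one container per driver, one JSON report per run
mkdir -p report
for driver in driver_desktop driver_mobile_iphone6_v; do
  docker run --shm-size 256M --rm --name capybara_${driver} \
    -v $PWD:/usr/src/app gunesmes/docker-capybara-chrome:latest bash \
    -c "cucumber --tags @smoke DRIVER=${driver} \
        --format json --out report/${driver}.json" &
done
wait   # block until all parallel containers have finished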

Docker run for parallel execution of all tags with all platforms

bash run_full.sh

When we run run_smoke.sh, only two parallel runs are executed: one against the desktop version of Chrome and one against the iPhone 6 version of Chrome in vertical mode. For a better mobile experience we set both the dimensions of the mobile device and the user-agent of the mobile operating system. However, there is still a technical limitation: we do not run the tests on real devices. Check the browser object in env.rb. You can run run_full.sh in a dedicated environment; it just requires more resources than an ordinary machine.


In the end, it creates JSON report files in the ./report folder, so you can create nice HTML reports with the cucumber-jvm reports plugin for Jenkins. The reports include screenshots on error and the browser logs.

Check the log output and you can see the benefit of the parallel run: each test takes approximately 1 minute, and although run_full.sh starts 7 parallel runs, all the tests together take approximately 1 minute 20 seconds.
