Friday, August 12, 2016

Using ChromeOptions for Capybara: Disable Images and More


Chromedriver lets us set preferences via capabilities when it is initiated by Webdriver. These preferences can be used for specific purposes: for example, disabling images can reduce network needs and speed up the test runs, and disabling popups can make your cases more stable by reducing non-deterministic behaviour. In this post I want to explain how to use these features of Chromedriver with Capybara in Ruby; you can apply the same method in other languages with their bindings.

Sometimes bandwidth is crucial, for instance if you have parallel pipelines for test runs. Say you have 10 parallel pipelines and every pipeline consumes bandwidth to complete its tasks; the heaviest part of a website is generally the images. If we disable the images, we can speed up the overall test runs. To disable the images you should set the images option to 2 in the preferences. For Capybara you can disable the images with the following Chrome instance:
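A minimal registration might look like the sketch below. It targets the selenium-webdriver 2.x API that was current in 2016, and the preference key is my assumption based on Chrome's content-settings naming, so verify it against your Chromedriver version:

Capybara.register_driver :chrome_no_images do |app|
  caps = Selenium::WebDriver::Remote::Capabilities.chrome(
    "chromeOptions" => {
      # assumed preference key; 2 = block, 1 = allow
      "prefs" => { "profile.managed_default_content_settings.images" => 2 }
    }
  )
  Capybara::Selenium::Driver.new(app, :browser => :chrome, :desired_capabilities => caps)
end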

Sometimes you need to disable popups which occur intermittently and make your cases flaky. These kinds of popups are generally controlled by another team of the company, such as instant campaign popups by CRM teams. To disable the popups, you should set the popups option to 2 in the preferences. For Capybara you can disable the popups with the following Chrome instance:
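A sketch under the same assumptions as above; the popups key is again my guess at the content-settings name, so double-check it for your Chrome version:

Capybara.register_driver :chrome_no_popups do |app|
  caps = Selenium::WebDriver::Remote::Capabilities.chrome(
    "chromeOptions" => {
      # assumed preference key; 2 = block popups
      "prefs" => { "profile.default_content_setting_values.popups" => 2 }
    }
  )
  Capybara::Selenium::Driver.new(app, :browser => :chrome, :desired_capabilities => caps)
end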

Sometimes you need to maximize Chrome for UI-related problems. You can set the Chrome instance as maximized by passing the argument "--start-maximized" in args, so it will open full-page by default, with the following Chrome instance:
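A sketch along the same lines, this time passing an argument instead of a preference:

Capybara.register_driver :chrome_maximized do |app|
  caps = Selenium::WebDriver::Remote::Capabilities.chrome(
    # command-line switches go under "args", not "prefs"
    "chromeOptions" => { "args" => ["--start-maximized"] }
  )
  Capybara::Selenium::Driver.new(app, :browser => :chrome, :desired_capabilities => caps)
end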

Of course we can use a combination of these features together. The important point is that you need to know which features go under preferences and which are arguments. For maximized windows with disabled images and popups, check the following Chrome instance:
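A combined sketch, with the same caveat that the preference keys are my assumptions:

Capybara.register_driver :chrome_tuned do |app|
  caps = Selenium::WebDriver::Remote::Capabilities.chrome(
    "chromeOptions" => {
      "args"  => ["--start-maximized"],
      "prefs" => {
        "profile.managed_default_content_settings.images" => 2,  # block images
        "profile.default_content_setting_values.popups"   => 2   # block popups
      }
    }
  )
  Capybara::Selenium::Driver.new(app, :browser => :chrome, :desired_capabilities => caps)
end
Capybara.default_driver = :chrome_tuned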

To check the whole list of args, click this link; for the whole list of preferences, check this link.

Friday, July 29, 2016

How to Add Screenshot to Jenkins Cucumber Reports for Capybara

If you are using Capybara and Cucumber for test automation and Jenkins for the CI/CD process, you can run your tests on Jenkins and see the results with the Cucumber Reports plugin; for more information, check the plugin page. With this plugin you can also see the screenshots taken when test cases fail.

To produce the report, you need to save the test result as a .json file; you can use the following command:

   cucumber features -f json -o cucumber_result.json

To take a screenshot when a case fails, you need to set up Cucumber's After hook. The following configuration does it automatically; take it and change the screenshot path to your own path. The key point is embedding the image after taking it, so the reports plugin can make it visible on the report.
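A sketch of such a hook, assuming Capybara's save_screenshot and Cucumber's embed helper; the screenshots directory is a placeholder:

# features/support/hooks.rb
After do |scenario|
  if scenario.failed?
    path = "screenshots/#{scenario.name.gsub(/\s+/, '_')}.png"  # placeholder path
    page.save_screenshot(path)                # take the screenshot with Capybara
    embed(path, 'image/png', 'Screenshot')    # embed it so the plugin shows it in the report
  end
end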


Cucumber Result Plugin

Screenshot for the failed test case

Friday, July 22, 2016

Appium vs Calabash


If you want to test your mobile application and are looking for open-source testing tools, then most probably you will come across Calabash and Appium. These are the top two most popular tools for mobile automation. At first glance they seem easy to learn, but in the details you will have to deal with many challenges and spend a bit more time than you expected. Since these tools are open source and have not yet reached a mature level of quality, there will be some problems when you want to use them for your special needs. In this post I want to write down my experience with Calabash and Appium. It may help you choose which one is suitable for your needs. I used the following versions:

  • Calabash-Android: 0.7.3
  • Calabash-iOS: 0.19.1
  • Appium: 1.5.2

Appium Supports Many Languages But Calabash is RUBY!

In the background Appium uses Selenium, which was created at ThoughtWorks in 2004; check ThoughtWorks' thoughts about Selenium for mobile testing. Since Selenium supports Java, C#, Ruby, Python and JavaScript, you can pick your favourite language and just focus on what Appium does.

As for Calabash, it only supports Ruby, but in one of the speeches I learnt that they want to support Java too.

Appium Doesn't Need a Build but Calabash Needs to Build the iOS App

Appium can interact with the applications (both Android and iOS) directly if you set them up correctly. Calabash, however, can interact directly with the .apk only for Android applications; an iOS application needs a new target called -cal for injecting the Calabash-iOS server into the application. This doesn't mean changing the application's source code, but it does mean building it with the test server. For more explanation, see the section below on the Calabash-iOS architecture.

Appium Uses Selenium Server But Calabash Uses Calabash Server

Calabash has developed its own server to handle the client/server interaction. The protocol used for communication is called JSON over HTTP: the commands are sent to the Calabash server as JSON using the standard HTTP protocol. During the interaction, the server must be up and running.

For this, Calabash-Android creates a /test_servers directory inside the project when you run the application for the first time. The test server is itself an application that is installed and executed on the device or emulator. If the signature of the application changes, the server is updated as well.
Calabash-Android Architecture

For a Calabash-iOS project, things are a little different from Calabash-Android: it is composed of two parts, the calabash.framework written in Ruby and the Calabash-iOS-Server written in Objective-C. We need a new target, called -cal, in the Xcode project to link calabash.framework to the application. This doesn't mean changing the source code of the application, but there has to be a new build. The server is part of the application, so when we run the application, the server on the device/simulator listens for commands from calabash.framework for the interaction.
Calabash-iOS Architecture

Appium, on the other hand, uses the Selenium server and the standard Webdriver REST API, so you must run the server before starting your automation code.
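For illustration only, a minimal Ruby session could look like the sketch below; it assumes the appium_lib gem and an Appium server already running on the default port, and the capability values are hypothetical (the constructor signature varies between appium_lib versions):

require 'appium_lib'

caps = {
  caps: {
    platformName: 'Android',
    deviceName:   'emulator-5554',      # hypothetical device
    app:          '/path/to/your.apk'   # hypothetical path
  }
}

driver = Appium::Driver.new(caps).start_driver
driver.quit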

Appium Has Inspector But Calabash Has Console

If you have installed the Calabash console, you can use it by typing `calabash-ios console` for iOS and `calabash-android console` for Android testing. These consoles give you the opportunity to work interactively with the application you want to test. In my experience, it gives you the features you need while coding your tests, such as querying all the elements on the screen, touching, flashing an element to locate the correct one, and entering text into text fields. The console offers a limited subset of Calabash's features, but it is good enough for interactive work.


The inspector of Appium is also made for working interactively with the application, but it is more suitable for getting the XPath of an element you want to interact with. If you use Appium, you will use lots of XPaths created by this inspector. Most of the time, using XPath with Selenium is not recommended because of performance problems and confusing path expressions; the paths always tend to change.

In general the inspector works well, but if your application has small objects, it is really hard to find them. When I want to find the "addToFavoriteButton.png" image to add a product to favourites, it just lets me click on the product container. If I go through the inspector panes and find the correct containers and child containers, and finally the favourite image element, it doesn't give me the id of the element. Finally, tapping the favourite button doesn't work. See the screenshots for the example.

Doesn't Show ID of Element

Couldn't find small elements 
Inspector couldn't tap the favourite button 


For me, the console is the best part of Calabash when compared with Appium. In the Calabash console we can query all elements with query("*"), then we can find the favourite button and touch it easily. See the console output.
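A session along those lines looks roughly like this; the element mark comes from my application, so treat it as a placeholder:

$ calabash-android console
irb> query("*")                               # list every element on the screen
irb> flash("* marked:'addToFavoriteButton'")  # blink the element to confirm the query
irb> touch("* marked:'addToFavoriteButton'")  # tap it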

Appium also has a recording option, which is very nice, especially for beginners with the tool.



Appium Can be BDD but Calabash Has Built-in BDD

If you have experience with Capybara in BDD style, you can apply exactly the same style to Calabash in Ruby. Within the Ruby environment you can easily add the cucumber gem to support BDD. What is more, Calabash also has predefined BDD steps, so you can create test steps without writing a single line of code, though the iOS and Android projects may have some differences in their steps. The example below doesn't need any code, and a sketch of a custom step follows it. This is great! You can see the full list for Calabash-iOS; just click here.

Then I touch "login"
Then I fill in "username" with "mesut.gunes"
Then I fill in "passord" with "password123"
Then I touch the "login" button 
Then I wait until I don't see "login"
Then I should see "Mesut Güneş"

Appium is a Single Project But Calabash Has Two

When you install Appium you can use it for both iOS and Android, but Calabash-iOS and Calabash-Android are separate projects, which means there are some common functions but also platform-specific ones. Therefore you may ask "why didn't they make them the same?"; most probably this is the worst part of Calabash.


What is on GitHub

Since these tools are open source, contributions from other developers and testers are very important, so this is also an important factor for comparison. The GitHub information shows that Appium has more contributors and commits, but Calabash also has a good number of contributors as of July 26, 2016.


                  Commits  Branches  Releases  Contributors  Open Issues  Closed Issues  Version
Appium            5,966    42        109       151           720          3,847          1.5.2
Calabash-iOS      2,618    10        105       55            49           575            0.19.1
Calabash-Android  1,605    50        102       64            27           414            0.7.3

Monday, June 6, 2016

QA in Production


For a while, I have been thinking about the responsibilities of QA Engineers, broadening them to the production environment. In this post I want to write about the reasoning behind the need for QA Engineers in production.

Integration Testing: Tools and Technology

Every day we are facing new integration points with third-party tools or technologies for a win-win strategy. Most of the time these technologies are in their trial period or are just a cheap alternative to the present one. These technologies or tools simply don't have adequate documentation, which makes the development and testing of the integration more risky. This means that the development, test, UAT or preprod environments are not enough for catching all possible bugs; as a result, the production environment is the final destination for checking logs and monitoring the integration results.


Validation of Non-Functional Requirements

There are also unclear points behind non-functional requirements like performance and usability. The real performance of an application can only be seen under real user conditions. This doesn't mean that performance testing should be done in production; it means checking whether, given the application's performance, real users behave as expected or just leave the application before the message integrated with the last build appears. In other words, we need to check that the performance requirement is statistically valid in terms of customer satisfaction. As for usability, even if we have done usability testing with real users in a usability lab before the new feature request, there is no guarantee that the result will be repeated by real users in their own environments. This time we need to verify the usability requirement with real users.

The Nature of Development Process

The development process can also require the QA Engineer to monitor the production environment. If you adopt CI or CD, the process is more likely to involve monitoring production, because production tends to be updated regularly, which puts you in a riskier situation. The transition from traditional QA to agile QA requires more collaboration with other teams; furthermore, the transition from manual deployment to CD requires a better monitoring system for the development process. Checking the monitoring system is a task for QAs because it reflects the overall quality of the process; the test result can only be as good as the development process. On this subject, ThoughtWorks has taken "QA in Production" onto its Technology Radar as a technique and recommends it as "Trial", which means that enterprises should use it on projects whose risk can be handled. It gives the following explanation:

Traditionally, QA roles have focused on assessing the quality of a software product in a pre-production environment. With the rise of Continuous Delivery, the QA role is shifting to include analyzing software product quality in production. This involves monitoring of the production systems, coming up with alert conditions to detect urgent errors, determining ongoing quality issues and figuring out what measurements you can use in the production environment to make this work. While there is a danger that some organizations will go too far and neglect pre-production QA, our experience shows that QA in production is a valuable tool for organizations that have already progressed to a reasonable degree of Continuous Delivery.

Understandable Monitoring Tools

In the market there are very successful monitoring tools with which everyone in the organization can understand what is going on in production. Most of them give realtime plotted graphs of streaming data, the number of non-responsive requests, network errors and more. These are some of the well-known tools: Nagios, HappyApps, Kibana, Performance Co-Pilot, Icinga, OpenNMS, Op5.




Thursday, May 19, 2016

Change Default Timeout and Wait Time of Capybara

One of the biggest challenges for automation is handling timeouts. Most of the time, the timeout is set large enough to tolerate network-related problems. For Selenium-based automation frameworks like Capybara, the default Webdriver timeout is set to 60 seconds, but sometimes it may not be enough if you have badly designed asynchronous calls or third-party Ajax calls. This makes handling timeouts more complex. When the limit is exceeded, you will see an error like this:
 Net::ReadTimeout (Net::ReadTimeout)

Changing ReadTimeout

If you have a timeout problem with Capybara, it gives an error like the one above. This means the page was not fully loaded within the given timeout period. You may even see that the page has loaded correctly, but webdriver waits until the Ajax calls finish. The default of 60 seconds comes from Net::BufferedIO in Ruby's standard library:
  class BufferedIO   #:nodoc: internal use only
    def initialize(io)
      @io = io
      @read_timeout = 60
      @continue_timeout = nil
      @debug_output = nil
      @rbuf = ''
    end
    # ...

    def rbuf_fill
      begin
        @rbuf << @io.read_nonblock(BUFSIZE)
      rescue IO::WaitReadable
        if IO.select([@io], nil, nil, @read_timeout)
          retry
        else
          raise Net::ReadTimeout
        end
      rescue IO::WaitWritable
        # OpenSSL::Buffering#read_nonblock may fail with IO::WaitWritable.
        # http://www.openssl.org/support/faq.html#PROG10
        if IO.select(nil, [@io], nil, @read_timeout)
          retry
        else
          raise Net::ReadTimeout
        end
      end
    end

end


If you need more or less time than the predefined 60 seconds, you can change the read_timeout variable in the source code of the following module, as shown in the source code above:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/protocol.rb

Editing the source code may be dangerous: you may make unnecessary changes, and your edit may be overwritten by an incoming Ruby update. Instead, you can simply set this value by registering a new driver instance that overrides the HTTP client. The following are new instances for Chrome and Firefox:
# Firefox instance with timeout set to 100 seconds
Capybara.register_driver :firefox_timeout do |app|
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.timeout = 100
  Capybara::Selenium::Driver.new(app, :browser => :firefox, :http_client => client)
end

# Chrome instance with timeout set to 100 seconds
Capybara.register_driver :chrome_timeout do |app|
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.timeout = 100
  Capybara::Selenium::Driver.new(app, :browser => :chrome, :http_client => client)
end

You also need to use one of these driver instances; to do this, set the following line in your features/support/env.rb:

#Capybara.default_driver = :firefox_timeout
Capybara.default_driver = :chrome_timeout

If you still have problems with loading the test pages, you will get the following error:

Net::ReadTimeout: Net::ReadTimeout
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/protocol.rb:158:in `rescue in rbuf_fill'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/protocol.rb:152:in `rbuf_fill'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/protocol.rb:134:in `readuntil'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/protocol.rb:144:in `readline'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http/response.rb:39:in `read_status_line'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http/response.rb:28:in `read_new'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:1412:in `block in transport_request'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:1409:in `catch'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:1409:in `transport_request'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:1382:in `request'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:1375:in `block in request'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:852:in `start'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:1373:in `request'
 from /Library/Ruby/Gems/2.0.0/gems/selenium-webdriver-2.52.0/lib/selenium/webdriver/remote/http/default.rb:107:in `response_for'
 from /Library/Ruby/Gems/2.0.0/gems/selenium-webdriver-2.52.0/lib/selenium/webdriver/remote/http/default.rb:58:in `request'
 from /Library/Ruby/Gems/2.0.0/gems/selenium-webdriver-2.52.0/lib/selenium/webdriver/remote/http/common.rb:59:in `call'
 from /Library/Ruby/Gems/2.0.0/gems/selenium-webdriver-2.52.0/lib/selenium/webdriver/remote/bridge.rb:645:in `raw_execute'
 from /Library/Ruby/Gems/2.0.0/gems/selenium-webdriver-2.52.0/lib/selenium/webdriver/remote/bridge.rb:623:in `execute'
 from /Library/Ruby/Gems/2.0.0/gems/selenium-webdriver-2.52.0/lib/selenium/webdriver/remote/bridge.rb:134:in `get'
 from /Library/Ruby/Gems/2.0.0/gems/selenium-webdriver-2.52.0/lib/selenium/webdriver/common/navigation.rb:33:in `to'
 from /Library/Ruby/Gems/2.0.0/gems/capybara-2.6.2/lib/capybara/selenium/driver.rb:45:in `visit'
 from /Library/Ruby/Gems/2.0.0/gems/capybara-2.6.2/lib/capybara/session.rb:232:in `visit'
 from /Library/Ruby/Gems/2.0.0/gems/capybara-2.6.2/lib/capybara/dsl.rb:51:in `block (2 levels) in <module:DSL>'

Changing Wait Time

If you are searching for an element on the page, Capybara is clever enough to search for the element repeatedly with the given parameters until Capybara.default_max_wait_time (before Capybara version 2.5.0, Capybara.default_wait_time) runs out. If in such a case you need more or less time for finding an element, you can use Capybara.using_wait_time, which was added in Capybara version 2.1. It takes a time parameter and a block containing what you want to do. See the examples below:
Capybara.using_wait_time 5 do 
  find("#password").set("12qw34")
end

Capybara.using_wait_time 5 do 
  page.has_css? "#comment"
end


Saturday, April 30, 2016

What Remains from Testistanbul 2016

First of all, I appreciated the chance to attend an international conference on software testing. It was the 7th international Testistanbul conference. There was a good number of attendants, from newcomers to professionals with over 30 years of experience. Most of the attendants were from Turkey, though there were some people from abroad and some international companies presenting their products. The Testistanbul conference is important because it is definitely the most valuable conference on software testing in Turkey. It gives us the opportunity to meet the largest professional group and to share knowledge in the domestic market. The conference topics were as follows:


09:00 - 09:30 OPENING CEREMONY SPEECH:
FORMULA 1, CONTINUOUS INTEGRATION, CONTINUOUS DELIVERY AND TEST DATA MANAGEMENT PROCESSES - TURKEY SOFTWARE QUALITY REPORT (TSQR) 2016 / 17 (In Turkish)
Koray Yitmen
09:30 - 10:15 IBM SPONSOR SPEECH: SHIFT LEFT FOR HIGHER QUALITY AT GREATER SPEED
Mehmet Çağrı Elibol
10:15 - 10:35 Coffee Break
10:35 - 11:25 KEYNOTE: WHY AUTOMATED VERIFICATION MATTERS
Kristian Karl
11:25 - 11:40 Coffee Break
11:40 - 12:30 KEYNOTE: THE STORY OF APPIUM: LESSONS LEARNED CREATING AN OPEN SOURCE PROJECT,
0 TO 100,000 USERS
Dan Cuellar
12:30 - 13:45 Lunch
13:45 - 15:05 KEYNOTE: ENTERPRISE CHALLENGES OF TEST DATA
Rex Black
15:05 - 15:20 Coffee Break
15:20 - 16:10 KEYNOTE: PERFORMANCE TESTING OF BIG DATA
Roland Leusden
16:10 - 16:25 Coffee Break
16:25 - 18:00 PANEL: TEST DATA MANAGEMENT CHALLENGES (Turkish)
Barış Sarıalioğlu - Keytorc (Moderator), Cankat Şimşek - Emerson Network Power, Ertekin Güzel - Intertech, Hazar Tuna - Kredi Kayıt Bürosu, Koray Yitmen - TTB, Mert Hekimci - Kariyer.net, Nasibe Sağır - Doğuş Yayın Grubu

In the opening ceremony speech by Koray Yitmen, as in the title, continuous integration was explained through an analogy with Formula One racing. To be honest, this small presentation was one of the most impressive parts of the conference. The example given was that F1 racing is continuous, and the whole racing team supports finishing the race with minimal out-of-service time and without any breakdown. This is similar to the role of development operations (DevOps) in the software development process: software is a live object, and as a team you are adding new features, fixing issues and updating other parts, all of it happening continuously with the help of the DevOps culture. The pit stop in F1 is like the deployment process in software development. The fastest pit stop is under 2 seconds, so why shouldn't deploying a feature to live be that fast?


The second speech was given by the main sponsor IBM's Mehmet Çağrı Elibol; the subject was "shift left", or the old motto "test early and often". It strikes me as a new term for an old and famous motto. IBM presented its tools for CI: Rational Test Workbench (RTW), Rational Performance Test Server (RPTS) and Rational Test Virtualization Server (RTVS). These can fully automate the development process by performing functional, integration, performance and regression testing with RTW; supplying load agents, SaaS load agents and virtualization agents with RPTS; and modelling the test environment to reduce dependencies with RTVS.



The third speech was given by Kristian Karl from Spotify. This speech was the best part of the conference for me. Kristian has almost as many years of experience as my age; he is the creator of GraphWalker. The topic was the concept and scope of test automation. Briefly, he said everything can be automated. In some teams there may be 2-3 QA engineers, while other teams may not have any QA engineer at all; it depends on the needs and the project details. However, QA engineers can take a consultant role to support developers in achieving automation goals. I want to write a separate post because he explained lots of things with many good examples, but you can find the most relevant pictures below:


The definition of testing reminded me of exploratory testing by James Bach; Kristian replied to my tweet saying "exploratory testing is the inspiration point".

In the rest of the conference, the creator of Appium, Dan Cuellar, gave a speech about the history of Appium; actually, no technical information was given by Dan. Enterprise test data management was covered by Rex Black, former president of ISTQB; it was a long speech about test data management. The last speech topic was performance testing of big data, but I didn't find much in it about "performance testing of big data"; it was more about the definition of big data and how to handle it.

Wednesday, March 23, 2016

Confusion of Using Selenium for Performance Testing


From time to time, I see questions on Stack Overflow about integrating test automation scripts into a performance testing tool. There is confusion between performance testing and test automation among those who newly start performance testing after doing some test automation. What is demanded is this: they have some automation code and want to use those scripts in performance testing to simulate user behaviour. Theoretically it seems a very good idea: tons of users performing some cases while you capture the response times and find the bottlenecks on the client side. However, this is not applicable in practice because there are lots of obstacles to handle. The following outlines the obstacles and misunderstandings you will face with this idea.

I can open some browsers and simulate real user behaviour

Automation scripts run in a browser for web projects, and most applications are becoming web apps. For a GUI-based application, what role could test automation possibly have in performance testing? It is the craziest thing I have ever heard. If you want to test client-side performance problems, that is a different case. For a web application you need to test the services first, and if everything performs well enough, then you can check the GUI-related problems, as when testing mobile applications too. With this approach you would have to create an instance of a browser for each user, so you have a limit on the number of browsers; let's say it is no more than 2K. Note that each Chromedriver consumes 0.2% of the total physical RAM when idle, which means I could open, but not send any commands to, only 500 Chromedrivers.

No Need for Browsers: Headless Testing

Headless testing doesn't open a real browser window, so it is cheaper. The idea behind a headless browser is to reduce the time spent loading user-interface-related things like photos, tables, JavaScript etc., making a lightweight version of a browser. This way you can make your automated tests faster, but you are stuck with what it requires. Even though it is somewhat faster than a real browser, and also cheaper in terms of RAM and processor consumption, there is still a limit on the number of headless browsers.

Problem of Handling Browsers 

Every browser needs a valid driver through which the automation code drives it over the connection protocol. This is needed to send commands to the browser to interact with the web objects. To run many cases in parallel, you need a configuration covering the following:
  1. The DOM should be ready to handle the commands
  2. Handling waiting time is another problem, since time is the major concern in performance testing
  3. The physical resources for sending, rendering and handling commands
  4. The performance indicators: how to see the problematic parts of the test
  5. Managing the test environments

Why This is Becoming a Request

The idea that Selenium can be used for performance testing is inspired by a JMeter feature: JMeter has a WebDriver plug-in to run Selenium scripts for testing client-side performance. With this plug-in, you run some Selenium scripts and get the time consumed to complete each script. In other words, it calculates the overall time that a client will most probably experience for the case under a predefined amount of load. It does not mean that you are running Selenium to create the load. The following is a part of the plug-in's guideline:

for the Web Driver use case, the reader should be prudent in the number of threads they will create as each thread will have a single browser instance associated with it. Each browser consumes a significant amount of resources, and a limit should be placed on how many browsers the reader should create.

What is more Practical for The Client Side Performance Testing

When you have a ready performance testing script and some scripts for automated test cases, then you are ready to test the client-side performance. Instead of injecting Selenium scripts into your performance testing tool, you can run the performance scripts to put the demanded load on the server, then run the automation script and measure the time taken to complete the test cases.
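As a sketch of this approach in Ruby: while your performance tool holds the demanded load on the server, time the automated case with the standard Benchmark module (the page elements here are hypothetical):

require 'benchmark'
include Capybara::DSL

elapsed = Benchmark.realtime do
  visit '/'
  fill_in 'q', with: 'laptop'     # hypothetical search field
  click_button 'Search'           # hypothetical button
  page.has_css?('.results')       # waits for results to render (or times out)
end
puts "Case completed under load in #{elapsed.round(2)} seconds"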

Tuesday, March 22, 2016

Mobile Application Testing: Change the Host to Redirect to the Test Environment


When you need to test server-side development via the mobile application in isolation, you can redirect the host of the API through the hosts file in the emulator. This way, with the same application that users have, you can test the new development before passing it to the live servers.

Android Application

I have been using Genymotion for emulating Android devices. You can edit the hosts file in Genymotion with the following commands; after pulling the file in the second step, you edit the local `hosts` file, then remount and push it back to the emulator.
  1. adb root
  2. adb pull /etc/hosts hosts
  3. adb remount
  4. adb push hosts /system/etc
See the example below: the host of the application is www.morhipo.com, which now hits the test server. The remount step is important; if you don't remount, the new hosts file cannot be pushed to the device.

~ adb root
adbd is already running as root
~ adb pull /etc/hosts hosts
~ cat hosts
~ echo "10.1.6.37 www.morhipo.com" > hosts
~ cat hosts
10.1.6.37 www.morhipo.com
~ adb remount
remount succeeded
8 KB/s (30 bytes in 0.003s)
~ adb push hosts /system/etc

iOS Application

For the iOS simulator it is simpler: all you need to do is update the hosts file (/etc/hosts), because the simulator uses the same hosts file as Mac OS. Note that `>` in the example below overwrites the whole file; use `>>` instead if you want to append and keep the existing entries.

~ sudo su
Welcome to fish, the friendly interactive shell
Type help for instructions on how to use fish
root@Mesuts-MacBook-Pro /U/mesutgunes# echo "10.1.6.37 www.morhipo.com" > /etc/hosts
root@Mesuts-MacBook-Pro /U/mesutgunes#

Friday, January 8, 2016

Performance Testing on CI: Integration of Locust and Jenkins

It is a rising question among performance testing people: how to integrate Locust, the superhero tool, into Jenkins as a step in the continuous integration (CI) pipeline. There have been some solutions for this, but they are not simple, and you need to write your own log writer to handle the problem. Actually, there is a commit to save the result of the test, but it has not been merged into Locust. With this commit we can integrate Locust with Jenkins.

Update Locust

This update is still open, so you cannot see it in the latest Locust version yet. However, you can use it by updating your local installation; just replace the two files main.py and stats.py with the updated ones. For Windows, the files are under C:\Python27\Lib\site-packages\locustio-0.7.3-py2.7.egg\locust\ and for Unix-based OSes they are under /Library/Python/2.7/site-packages/locust/. Alternatively, you can clone this commit from GitHub and install it with python setup.py install.

Fix the Bug of The Locust Update

It was committed on Jul 12, 2015 but it is still in review status and it has a bug. In case they haven't solved it, I added just a few lines of code to fix it: find the store_stats(filename, stats) function in stats.py at line 499 and replace it with the following function:


Add Build Parameters

Basically, for performance we should check the response times against predefined thresholds, whether there are any failures, and whether there are any redirections. Therefore we need to add some steps that check the log file to see whether these criteria pass. Instead of defining the performance criteria directly in code, I added three build parameters to check min_response_time, max_response_time and avg_response_time, because these may change over time.

Run Locust

If you have updated your code, you can run your tests by adding the new parameter --statsfile=result.log, which saves the results into the result.log file. This file will be present in the workspace. The whole command should look like the following; for Windows I am adding it as a Windows batch command:
C:\Python27\Scripts\locust.exe -f C:\Users\Administrator\Documents\performance\amazon.py --clients=20 --hatch-rate=1 --num-request=500 --no-web --only-summary --statsfile=result.log

Evaluate Test Result


To check the log file, create a build step that executes a Python script; to add such a step you need a plugin that can execute Python code. Then add the following code into this step:

Run the Job

Save the job and click "Build with Parameters", set the values for each parameter, then click the "Build" button.

Check the Job Result

Click the build number and then click the console output to see the result of the job.


Tuesday, December 1, 2015

Why Performance Test on Cloud


When software comes to production, the weakest points are mostly related to the performance of the application. The reason is that most organizations test the application functionally with testers, and in the acceptance testing period the application is again tested against functional requirements. The missing point is non-functional testing, like the performance of the application.

Another failure point is testing the performance at the last stage of the development process. This means that any misleading failure at the service level can cause a very big problem at the production stage, so it requires extra resources, if available, and extra budget. As the saying goes, "finding the big issue at an earlier stage of development can reduce the fixing effort exponentially". Therefore, test the performance of the software output whenever it is ready; this could be an API endpoint, a function serving data to the system, a web server, and so on.

Another failure is related to technology, tools and knowledge. Even if you test the performance of the application, if the test does not reflect real users with real behaviour, the result may lead to a wrong conclusion, and in the end the real users test your application's performance in production. To capture the behaviour of real users you must analyze your statistics, and I suggest you test the performance of the application on the cloud.

In this post I want to share my experience about `performance testing on cloud`.  


Scalability

Simulating a huge number of concurrent users depends on the limit of threads or processes, that is, on how your performance tool consumes your computer: it can be threads, as in JMeter; threads or processes, as in LoadRunner, depending on your choice; or, a better approach, coroutines, as in Locust. Either way there is a limit; check this for more information. The limit is no more than 5,000 concurrent users for a very good personal computer (as I have experienced; surely it could be higher next year). Therefore, reaching 100K or 500K users sending a good number of requests per second, while managing the resources, is not an easy job. But working as a master with slaves, and adding the demanded number of slaves to the system on the fly, is a standard feature of cloud computing.

Global Scaling

Together with scalability, think of your slaves being anywhere in the world, waiting to join your test army. Yes, it sounds good: why would you assume that your users come only from your own location? Without the cloud, even if you have a good number of users, you cannot make them global.


Testing in a Live Environment

Real statistics, real users, real users' environments, real servers, etc. The key point is to be as similar as possible to the product in live.


Cost per Request (CopR)

Arrange the budget of the test so that you pay only for what you have used. Don't buy hardware for performance testing; instead, rent it for a short period of time. Performance testing (mostly stress testing) is generally done when you have something new at the architectural or database level. This means you don't need to perform these tests after every deployment of a new component, and you don't need the resources frequently, so buying the hardware and paying for licences is not a cost-effective approach; renting them is.

Realistic Load

The key point of performance testing is to get realistic results and to estimate, and thereby minimize, the risk of failure in the live environment. To get this done, you must run the tests against the real world, not against a system behind a firewall in a development environment and within an internal network. There are also third-party integration tools and services that interact with your system. The better option is always to test your application with everything real, or very close to reality.

There is also the limit of the LAN for sending and receiving requests; if you exceed it, your test summary tends to diverge from reality.


Managing Test

Cloud services provide interfaces to manage load testing, so everyone who is responsible for the test, the system, the code, etc. can monitor the results, and if any failure occurs they can also handle it.


Managing Test Environment

When talking about testing on a single machine, it is easy to install the required tools and prepare the required scripts, but having a hundred machines and doing these things one by one is not that easy. Cloud computing has management tools with which, when you create a master, you can clone it to create lots of slaves in a test-ready state. You can see some of the cloud computing providers in the market and their features below, taken from a slide by Intechnica.