
Mobile Test Automation: Calabash on Xamarin Test-Cloud

If you test your application only against emulators or simulators, there is still a risk that the expected features will not work on some real devices. To cover real-device testing you need many devices, and the current market offers a huge number of device and operating system version combinations. The best approach is to select the most-used devices based on statistical data; in most cases a Pareto analysis helps you pick a small set of devices with high coverage. If you want to find more bugs before release, you can also focus on the newest device with the latest OS version and the oldest device with the oldest OS version. In any case, you should have at least 5-10 Android devices and 3-5 iOS devices for a good level of coverage in the beginning. Managing that many devices becomes another problem if you want to run your own local test suites, so cloud services are a good alternative. In this post I want to share some information about running Calabash tests on Xamarin Test Cloud.
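The Pareto-style device selection described above can be sketched as a tiny script. The device names and usage shares below are made-up illustration data, not real market statistics:

```ruby
# Hypothetical sketch: pick the smallest set of devices that covers
# roughly 80% of your users, given (illustrative) usage shares.
DEVICE_SHARE = {
  "Samsung Galaxy S5" => 0.28,
  "Samsung Galaxy S4" => 0.22,
  "LG G3"             => 0.15,
  "HTC One M8"        => 0.10,
  "Moto G"            => 0.08,
  "Sony Xperia Z2"    => 0.07,
  "Nexus 5"           => 0.06,
  "Other"             => 0.04,
}

def pareto_pick(shares, target = 0.80)
  covered = 0.0
  picked  = []
  # Walk devices from the most used to the least used until the
  # cumulative share reaches the coverage target.
  shares.sort_by { |_, share| -share }.each do |device, share|
    break if covered >= target
    picked << device
    covered += share
  end
  [picked, covered]
end

devices, coverage = pareto_pick(DEVICE_SHARE)
puts "Test on: #{devices.join(', ')} (#{(coverage * 100).round}% of users)"
```

With these sample numbers, the first five devices already cover over 80% of users, which matches the 5-10 Android devices suggested above.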

Xamarin Test Cloud supports cross-platform frameworks, so you can use the same feature files for both the iOS and Android projects. You just need to provide the profile and config files in your submit command. If you already have your own Calabash project, you are ready to submit your code to Test Cloud.
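A cross-platform setup usually means one profile per platform in `config/cucumber.yml`. The profile contents below are an illustrative sketch (the `-r` paths and the `.app` path are assumptions; adapt them to your own project layout):

```yaml
# config/cucumber.yml -- one profile per platform, same feature files
android: RESET_BETWEEN_SCENARIOS=1 PLATFORM=android -r features/support -r features/android/support -r features/step_definitions -r features/android/pages
ios: APP_BUNDLE_PATH=path/to/YourApp-cal.app PLATFORM=ios -r features/support -r features/ios/support -r features/step_definitions -r features/ios/pages
```

The `--profile` option in the submit commands below selects one of these profiles.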

Go to Xamarin Test Cloud and open an account; you get 3 hours × 30 days of free usage. Click "New Test Run" and create a platform-specific device group. At the end of the wizard select "Calabash", and it gives you the complete command to submit to Test Cloud. To run it you need to install the `xamarin-test-cloud` gem with the following command:
gem install xamarin-test-cloud
If you are not using cross-platform feature files, you submit the app with a command like the following:
test-cloud submit yourAppFile.apk f957b60sd2322wddwd1f6140c760c2204a --devices 481d761b --series "AndroidMostUsed" --locale "en_US" --app-name "ProjectName" --user
For the cross-platform case, you additionally pass the profile and config options:
test-cloud submit yourAppFile.apk f957b60sd2322wddwd1f6140c760c2204a --devices 481d761b --series "AndroidMostUsed" --locale "en_US" --app-name "ProjectName" --user --profile android  --config=config/cucumber.yml
As I explained in my previous post, Calabash-Android and Calabash-iOS differ in architecture because of platform dependencies. The commands above apply directly to Android, but for an iOS project you need a separate `-cal` target, and it must be built for a device. Then you produce an .ipa file from this target. To produce the .ipa file you can run the following command:
/usr/bin/xcrun -sdk iphoneos PackageApplication -v ~/Library/Developer/Xcode/DerivedData/ModacruzV2-hdlgquuxyftvplepqknjiywdnclj/Build/Products/Debug-iphoneos/ -o ~/Projects/mobile_app_automation/IOSProject.ipa

Then you can submit your iOS project with the newly created .ipa file:
test-cloud submit ~/Projects/mobile_app_automation/IOSProject.ipa f957b60sd2322wddwd1f6140c760c2204a --devices 21d1d61b --series "IosMostUsed" --locale "en_US" --app-name "ProjectName" --user --profile ios --config=config/cucumber.yml
Then you can follow the progress in the console or on the web; the console prints the result and the URL of the project on Xamarin Test Cloud. Xamarin also has a very friendly user interface for inspecting the failing test cases.

One of my favourite features of Cucumber is handling test cases with tags, but Xamarin has not implemented a tags option; instead they suggest using categorisation, by adding `--include CATEGORY-NAME` for NUnit tests. However, I was not satisfied with this feature and hoped to solve it another way.


The answer for running with the tag option came from Stack Overflow: you can add the tag you want to run to the relevant profile in the config file, e.g. `--tags @regression`:
android: RESET_BETWEEN_SCENARIOS=1 PLATFORM=android -r features/support -r features/android/support -r features/android/helpers -r features/step_definitions -r features/android/pages --tags @regression
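For the tag filter to select anything, the scenarios in your feature files must carry the matching tag. A minimal sketch of a tagged feature file (the feature and steps are made up for illustration):

```gherkin
@regression
Feature: Login

  @regression @smoke
  Scenario: User logs in with valid credentials
    Given the app is launched
    When I enter valid credentials
    Then I should see the home screen
```

A tag on the `Feature:` line applies to every scenario in the file; scenario-level tags let you slice more finely, e.g. running only `@smoke` locally while the cloud run uses `@regression`.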

As a final word, this is where cloud testing really pays off: you can run the same scripts on many devices in parallel.
