
Browser Compatibility Testing


1. What is Compatibility Testing?

According to the ISTQB, compatibility describes the ability of independent computer systems to work successfully in the same environment without affecting each other. Compatibility testing is therefore the activity of verifying that the application still works as expected after any change in its environment. From this point of view, compatibility testing aims to explore the users' environments, guided by statistical usage data, and to catch any unexpected behavior of the application before customers experience it. A compatibility matrix should be prepared so that every relevant environment is tested at the desired frequency; a sample compatibility matrix is shown below:

Browser Compatibility Matrix

Aims of compatibility testing:
  • Detect functional or visual omissions, errors or bugs of the application in the users' environments
  • Test your web site for compatibility against the most widely used web browsers and the most popular hardware platforms
  • Determine the effects of operating system patches and updates
  • Test the servers with different add-ins, operating systems and applications
  • Test the application on a range of operating systems, with the other applications the target users are likely to have installed
Compatibility testing should normally run after the system test and the user acceptance test have completed successfully.

2. Determination of Sample E-Commerce Site User Environments

To select test environments that reflect real risk and serve customers well, the statistics of current customers can be examined along two dimensions: operating systems and browsers. Google Analytics provides the necessary information about the users' browsers and operating systems. The analysis here uses data from a specific time window; because that data also reveals user trends, the compatibility test environments should be updated as the trends change.

Looking at the operating systems and browsers used by the site's customers, there are 27 operating systems with more than 40 different versions, and 57 different browsers with more than 100 browser versions. Covering every possible combination would require more than a thousand test environments, and even then we would not be testing all possibilities; as the ISTQB states, exhaustive testing is impossible. Risk analysis helps in this situation: covering 95% of the customers' environments is an achievable testing objective, and accepting the remaining 5% is a manageable risk.
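The 95% objective can be made concrete with a small script: sort the environments by visit count and keep adding them until the cumulative share reaches the target. The visit numbers below are illustrative placeholders, not the site's actual analytics data.

```python
def pick_environments(visits, target=0.95):
    """Pick environments greedily (most-visited first) until `target` coverage."""
    total = sum(visits.values())
    covered, chosen = 0, []
    for env, count in sorted(visits.items(), key=lambda kv: kv[1], reverse=True):
        if covered / total >= target:
            break
        chosen.append(env)
        covered += count
    return chosen, covered / total

# Hypothetical (OS, browser) visit counts standing in for Google Analytics data.
visits = {
    ("Windows", "Internet Explorer"): 5200,
    ("Windows", "Chrome"): 2900,
    ("Windows", "Firefox"): 1600,
    ("Mac", "Safari"): 250,
    ("Linux", "Firefox"): 30,
    ("iPhone", "Safari"): 20,
}
envs, coverage = pick_environments(visits)
# With these numbers, three Windows environments already reach 97% coverage.
```

The same function can be re-run whenever fresh analytics data arrives, so the environment list follows the user trend automatically.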

Some of the configurations, broken down by operating system and browser, are shown below.


Operating System vs Visit

Most customers use the Windows operating system, so testing all versions of Windows largely meets the testing objective (within the 95% coverage target). However, testing Linux, Mac and iPhone is still important for the image of the company, and the growing Android market should also be taken into account.

Browsers and Operating Systems


Looking at the browser table, the first three browsers cover 98% of the current users. Adding the Safari web browser, which is mostly used on the Macintosh operating system, is also reasonable. A large part of the remaining browsers in the list (more than 50) are mobile-device browsers; any problems encountered on them should be reported and fixed, and mobile users can be encouraged to use Opera Mini to reduce browser-specific problems.
Browser vs Visit


The table above shows that the Windows operating system combined with Internet Explorer, Chrome and Firefox covers the majority of user environments, about 97% of all users. For the reasons given above, testing Linux, iPhone and Mac is still beneficial for the image of the company.

Based on this information, the compatibility test environments are shown as a matrix below.
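One way to derive such a matrix is to take the cross product of the selected operating systems and browsers and then remove the combinations that cannot exist. The OS and browser lists below are illustrative placeholders for the ones chosen from the analytics data.

```python
import itertools

# Illustrative candidate platforms; substitute the ones from your analytics data.
oses = ["Windows XP", "Windows 7", "Windows 8", "Mac OS X", "Ubuntu"]
browsers = ["Internet Explorer", "Chrome", "Firefox", "Safari"]

# Combinations that do not exist on the platform and must be excluded.
invalid = {
    ("Mac OS X", "Internet Explorer"),
    ("Ubuntu", "Internet Explorer"),
    ("Ubuntu", "Safari"),
}

matrix = [(os_, br) for os_, br in itertools.product(oses, browsers)
          if (os_, br) not in invalid]
# 20 raw combinations minus 3 invalid ones leaves 17 test environments.
```

Pruning further by visit share (as in the coverage calculation above) brings the list down to the roughly 15 environments worth maintaining.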



Compatibility Test Environments

3. Determination of Compatibility Test Environments

Compatibility testing should be run on mature test scenarios, so the test scenarios must be developed first and the risk of each test case rated. As the table above shows, approximately 15 different test environments are needed; these can be created as VirtualBox virtual machines, and the compatibility tests executed in each of them. Running the tests and keeping the environments up to date manually is time consuming, so at least part of the test cases should be automated; several tools can help reduce the workload here. Compatibility testing raises the following requirements:
  1. A powerful computer: a test machine with 64-bit Ubuntu installed
  2. Operating system images for the environments to be installed
  3. Test automation tools
  4. iPhone, iPad and Mac simulators
  5. Physical iPhone, iPad and Mac hardware
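Resetting and booting the VirtualBox environments can itself be scripted. The sketch below builds VBoxManage command lines that restore each VM to a clean snapshot and start it headless; the VM and snapshot names are hypothetical and must match your own VirtualBox inventory.

```python
import subprocess

def vm_commands(vm_name, snapshot="clean"):
    """Commands to reset a VM to a known snapshot and boot it without a GUI."""
    return [
        ["VBoxManage", "snapshot", vm_name, "restore", snapshot],
        ["VBoxManage", "startvm", vm_name, "--type", "headless"],
    ]

def reset_and_start(vm_name):
    # Requires VirtualBox installed and VBoxManage on the PATH.
    for cmd in vm_commands(vm_name):
        subprocess.run(cmd, check=True)

# Example command list for a hypothetical Windows XP / IE8 test VM.
cmds = vm_commands("winxp-ie8")
```

Once a VM is up, the automated test suite can target it (for example, a Selenium node running inside the guest) and the snapshot restore guarantees every run starts from the same state.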

4. Compatibility Test Levels

Considering the project scope and the requirements of customers, compatibility testing should be performed at three levels: Partial Coverage, Full Coverage and Sanity Testing. Each level is characterized by the test environments it covers, the percentage of test cases it executes, and the priority of those cases (Critical, High, Medium or Low). The levels are defined as follows.

4.1. Partial Coverage 

Run at least the main test cases (cases with Priority = Critical) in the environments covering 95% of users.
Partial coverage

4.2. Full Coverage

Execute all master test cases in the environments covering 90% of users.
Full Coverage

4.3. Sanity Test

A sanity test exercises the basic functions of the application to help decide whether to continue further testing or stop. It runs the main test cases in a single adequate test environment.
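The three levels differ mainly in which test cases they select. A minimal sketch of that selection logic, assuming each test case carries a priority field as described above (the case names are made up for illustration):

```python
def select_cases(cases, level):
    """Pick test cases for a compatibility test level.

    full    -> every master test case
    partial -> Critical-priority cases only (run on the 95% environments)
    sanity  -> Critical-priority cases only (run on one reference environment)
    """
    if level == "full":
        return list(cases)
    if level in ("partial", "sanity"):
        return [c for c in cases if c["priority"] == "Critical"]
    raise ValueError("unknown level: %s" % level)

# Hypothetical test cases for an e-commerce site.
cases = [
    {"name": "login", "priority": "Critical"},
    {"name": "checkout", "priority": "Critical"},
    {"name": "change avatar", "priority": "Low"},
]
```

The distinction between partial coverage and a sanity run is then purely how many environments the selected cases are executed in.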

4.4. Unsupported Region

Test environments and test cases that fall outside the full-coverage set are never tested; they constitute the accepted risk described above.
