
What Remains From Testistanbul 2016

First of all, I appreciated the chance to attend an international conference on software testing. This was the 7th international Testistanbul conference. There was a good number of attendees, ranging from newcomers to professionals with over 30 years of experience. Most attendees were from Turkey, though there were some visitors from abroad and some international companies presenting their products. Testistanbul matters because it is clearly the most valuable conference on software testing in Turkey: it gives us the opportunity to meet the largest professional community and to share knowledge within the domestic market. The conference topics were as follows:


09:00 - 09:30 OPENING CEREMONY SPEECH:
FORMULA 1, CONTINUOUS INTEGRATION, CONTINUOUS DELIVERY AND TEST DATA MANAGEMENT PROCESSES - TURKEY SOFTWARE QUALITY REPORT (TSQR) 2016 / 17 (In Turkish)
Koray Yitmen
09:30 - 10:15 IBM SPONSOR SPEECH: SHIFT LEFT FOR HIGHER QUALITY AT GREATER SPEED
Mehmet Çağrı Elibol
10:15 - 10:35 Coffee Break
10:35 - 11:25 KEYNOTE: WHY AUTOMATED VERIFICATION MATTERS
Kristian Karl
11:25 - 11:40 Coffee Break
11:40 - 12:30 KEYNOTE: THE STORY OF APPIUM: LESSONS LEARNED CREATING AN OPEN SOURCE PROJECT, 0 TO 100,000 USERS
Dan Cuellar
12:30 - 13:45 Lunch
13:45 - 15:05 KEYNOTE: ENTERPRISE CHALLENGES OF TEST DATA
Rex Black
15:05 - 15:20 Coffee Break
15:20 - 16:10 KEYNOTE: PERFORMANCE TESTING OF BIG DATA
Roland Leusden
16:10 - 16:25 Coffee Break
16:25 - 18:00 PANEL: TEST DATA MANAGEMENT CHALLENGES (Turkish)
Barış Sarıalioğlu - Keytorc (Moderator), Cankat Şimşek - Emerson Network Power, Ertekin Güzel - Intertech, Hazar Tuna - Kredi Kayıt Bürosu, Koray Yitmen - TTB, Mert Hekimci - Kariyer.net, Nasibe Sağır - Doğuş Yayın Grubu

In the opening speech by Koray Yitmen, as the title suggests, continuous integration was explained through an analogy with Formula 1 racing. To be honest, this short presentation was one of the most impressive parts of the conference. The idea is that an F1 race runs continuously, and the whole team works to finish the race with minimal time out of service and without any breakdown. This is similar to the role of development operations (DevOps) in software development: the software is a live system, and as a team you keep adding new features, fixing issues and updating other parts, all of which happens continuously with the help of a DevOps culture. A pit stop in F1 is like the deployment process in software development. The fastest pit stops take under two seconds, so why shouldn't deploying a feature to production be that fast?
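
A rough way to picture the pit-stop idea in code: the sketch below is only a toy pipeline with placeholder echo commands (the stage names and commands are made up, not any real project's setup), but it shows the shape of an automated build-test-deploy run where every stage is scripted and timed.

import subprocess
import time

# Hypothetical pipeline stages; a real pipeline would call the project's
# actual build, test and deployment commands.
STAGES = [
    ("build",  ["echo", "building the application"]),
    ("test",   ["echo", "running the automated tests"]),
    ("deploy", ["echo", "deploying the new feature"]),
]

def run_pipeline():
    for name, command in STAGES:
        start = time.time()
        subprocess.run(command, check=True)  # stop the pipeline on failure
        print(f"{name} finished in {time.time() - start:.2f}s")

if __name__ == "__main__":
    run_pipeline()

The point of the analogy is that once every stage is automated like this, the time from a finished feature to a live deployment can shrink toward pit-stop territory.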


The second speech was given by the main sponsor IBM, represented by Mehmet Çağrı Elibol, on the subject of "shift left", or the old motto "test early and often". To me it is a new term for an old and famous motto. IBM presented its tools for CI: Rational Test Workbench (RTW), Rational Performance Test Server (RPTS) and Rational Test Virtualization Server (RTVS). Together they can automate much of the development process: functional, integration, performance and regression testing with RTW; load agents, SaaS load agents and virtualization agents with RPTS; and modelling the test environment to reduce dependencies with RTVS.
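
To give a feel for the idea behind test virtualization (this is not RTVS itself, just a minimal hand-rolled sketch): a slow or unavailable dependency can be replaced with a lightweight stub that returns canned responses, so functional tests can run without the real back end. The endpoint and payload below are hypothetical.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for a real downstream service.
CANNED = {
    "/accounts/42": {"id": 42, "status": "active", "balance": 1250.0},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

if __name__ == "__main__":
    # Point the application under test at localhost:8081 instead of the
    # real service, then run the functional tests as usual.
    HTTPServer(("localhost", 8081), StubHandler).serve_forever()

A dedicated virtualization product adds recording, stateful behaviour and broader protocol support on top of this basic idea.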



The third speech was given by Kristian Karl from Spotify, and it was the best part of the conference for me. Kristian's experience spans almost as many years as my age, and he is the creator of GraphWalker. The topic was the concept and scope of test automation. In short, he argued that everything can be automated. Some teams may have 2-3 QA engineers, while others may have none at all; it depends on the needs and the details of the project. In either case, QA engineers can take on a consultant role, supporting developers to achieve their automation goals. I want to write a separate post because he explained many things with very good examples, but you can find the most relevant pictures below:


His definition of testing reminded me of exploratory testing as described by James Bach; Kristian replied to my tweet, saying that exploratory testing is an inspiration point.
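
Since GraphWalker came up: it is a model-based testing tool in which the system under test is described as a graph of states and actions, and the tool walks the graph to generate test sequences. The snippet below is only a rough Python sketch of that idea with a made-up login model; it is not GraphWalker's actual (Java) API.

import random

# Each state maps an available action (edge) to the state it leads to.
MODEL = {
    "logged_out": {"enter_valid_credentials": "logged_in"},
    "logged_in": {"open_settings": "logged_in", "log_out": "logged_out"},
}

def generate_path(start="logged_out", steps=6, seed=None):
    """Walk the model randomly and return the sequence of actions taken."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        action = rng.choice(sorted(MODEL[state]))
        path.append(action)
        state = MODEL[state][action]
    return path

if __name__ == "__main__":
    # In a real model-based test each action would be bound to an executable
    # step (e.g. a Selenium call), and coverage criteria would decide when to stop.
    print(generate_path(seed=2016))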

In the rest of the conference, the creator of Appium, Dan Cuellar, gave a speech about the history of Appium; there was not much technical information in it. Rex Black, former president of ISTQB, gave a long speech on enterprise test data management. The last speech topic was performance testing of big data, but I did not find much in it about performance testing itself; it was mostly about the definition of big data and how to handle it.