
Importance of Testing: Case of Samsung Note 7


Test it, or let your customers do it in the wild! Testing is not a state; it is a process, and any failure in that process ruins quality. Moreover, a defect found in any part of a product makes the whole product fail in the eyes of customers. Therefore we need to treat quality as a process spanning the entire life cycle of the products and services we deliver. In this post, I want to explain the importance of testing using the case of the successful Korean technology company Samsung. The root cause of the Note 7 problem was a defective battery design: Samsung pushed the boundaries too far in trying to pack more power into a smaller battery cell. For more detail, you can read this post.

Let's look at product life cycle management (PLM) first. It describes a product from the initial idea through delivery to the end of support, and divides the product life cycle into four main stages:
  • Conceive : Imagine, specify, plan, innovate
  • Design   : Describe, define, develop, test, analyze, and validate
  • Realize  : Manufacture, make, build, procure, produce, sell, and deliver
  • Service  : Use, operate, maintain, support, sustain, phase out, retire, recycle, and dispose
 
In the classical approach, quality activities are associated with the design phase, as a step just before the product is released. However, quality is very prone to failure if something is dismissed in the earlier steps, because failure points accumulate across every stage of the process. It is true that most tests are performed just before release, but if you have carried out the right quality activities at every stage, the bugs found in the final testing step will be less important, less complex, and less critical.

What was not Good for Samsung

The obvious failing point is that a defective product was released. More important, however, is that the defect had harmful effects, so it should be classified with the highest priority and critical severity. The priority is highest because, even though the defect was not encountered by the majority of users, whenever it was encountered it could cause serious problems. The most serious outcome, a battery catching fire, could injure or kill a user or even a group of people, so the severity is critical. This defect caused financial loss, damaged the company's reputation, harmed the environment, and cost customers; fortunately, no lives were lost.
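As a rough illustration of combining priority and severity into a release decision (this is a generic sketch, not Samsung's actual triage process; the scales and the example defect are invented):

```python
# Hypothetical defect triage sketch combining severity and priority.
# Scales, thresholds, and the example defect are illustrative only.

SEVERITY = {"trivial": 1, "minor": 2, "major": 3, "critical": 4}
PRIORITY = {"low": 1, "medium": 2, "high": 3, "highest": 4}

def triage(severity: str, priority: str) -> str:
    """Return a release decision for a defect."""
    score = SEVERITY[severity] * PRIORITY[priority]
    if SEVERITY[severity] == 4 or score >= 12:
        return "block release"   # safety-critical defects never ship
    if score >= 6:
        return "fix before release"
    return "schedule for next release"

# A battery defect that can harm users: critical severity, highest priority.
print(triage("critical", "highest"))  # -> block release
```

The key point is that severity acts as a hard gate here: a critical defect blocks the release regardless of how rarely it occurs.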

What was the Worst Thing for Samsung

After such a failure, the best response is to announce the risk and replace the defective product with a non-defective one. This is what Samsung tried, but the horrible part is that the replacement product had the same kind of issue. Most people believed Samsung could do better and were ready to forgive, but the repeated failure damaged the brand's reputation further: people started to consider Samsung's rivals, and some even started to consider alternatives to Android.



What Should Be Done

Samsung's case does not mean they have a bad testing team, strategy, or environment; rather, this was probably the most serious problem they had ever faced. According to Reuters, Samsung recalled 7 million devices and expected $17 billion in lost revenue, in addition to lost customers and reputation. Let's look at what we can do to minimize such risks:

Test Environment Limitations

Test results are only as good as your test environments! If you expect a feature to work in the live environment even though you could not test it in your test environment, you are not making a safe assumption; you are simply taking the risk that the feature will not work. Therefore, test environments should be continuously kept in sync with the live environment in terms of software, hardware, and other components.
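Keeping environments in sync starts with knowing where they differ. A minimal sketch of drift detection between a test environment and production (the component names and version strings are made up for illustration):

```python
# Sketch: detecting drift between a test environment and the live one.
# Component names and versions below are invented for illustration.

test_env = {"os": "Android 6.0.1", "kernel": "3.18.14", "firmware": "N930F-1.2"}
live_env = {"os": "Android 6.0.1", "kernel": "3.18.20", "firmware": "N930F-1.3"}

def find_drift(test: dict, live: dict) -> dict:
    """Return components whose versions differ between the two environments."""
    return {k: (test.get(k), live.get(k))
            for k in set(test) | set(live)
            if test.get(k) != live.get(k)}

for component, (t, l) in sorted(find_drift(test_env, live_env).items()):
    print(f"{component}: test={t} live={l}")
```

Running such a check in CI flags every component that needs updating before test results can be trusted.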

Non-Functional Testing is Difficult

Functional testing runs the product code on a machine and gives a clear pass or fail result, but non-functional tests are much more challenging. Non-functional tests cover performance, usability, operability, maintainability, stability, reliability, and so on. Each of these depends on many factors, such as location, temperature, air pressure, age, gender, education, disability, and so on. Under these conditions, there is no single standard value against which to check a non-functional attribute. If you produce a device like a mobile phone, people carry it everywhere they go, so the operating temperature can range roughly from -60°C to 50°C; for more detail, check this. As a result, your design should take this fact into account, and your test environment should be able to reproduce these conditions for non-functional testing. Look at the coldest and hottest temperatures recorded in North America; reproducing these is not easy!
        Canada: −63.0 °C (−81.4 °F), Snag, Yukon, 3 February 1947
        United States: 56.7 °C (134.1 °F), Death Valley, California, 10 July 1913
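One practical way to handle this is to parameterize a non-functional test over a set of environmental conditions. In the sketch below, `battery_capacity_at` is a toy model standing in for readings from a thermal chamber, and the 25% capacity spec is a made-up threshold:

```python
# Sketch of parameterizing a non-functional test across temperatures.
# `battery_capacity_at` is a hypothetical model, not real device data;
# in practice the readings would come from a thermal test chamber.

EXTREMES_C = [-63.0, -20.0, 0.0, 25.0, 50.0, 56.7]  # includes record extremes

def battery_capacity_at(temp_c: float) -> float:
    """Toy model: capacity (0..1) degrades away from room temperature."""
    return max(0.0, 1.0 - abs(temp_c - 25.0) / 120.0)

def test_capacity_within_spec() -> None:
    # Hypothetical spec: retain at least 25% capacity at every extreme.
    for temp in EXTREMES_C:
        capacity = battery_capacity_at(temp)
        assert capacity >= 0.25, f"capacity {capacity:.2f} too low at {temp}°C"

test_capacity_within_spec()
print("all temperature conditions passed")
```

The structure matters more than the numbers: every non-functional attribute gets a measurable threshold and a list of conditions under which that threshold must hold.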

Beta Testing

Unfortunately, no matter how hard you try to make test environments match real user conditions, you will never achieve it completely, because the number of user environments is countless. Beta testing is therefore a way to get better results. Beta testing is a stage in which a release-ready product is given to a group of customers to use, so that the company can get feedback on the product's real-world performance. If only a selected group of customers uses it in their own environments, without the company's control, it is called closed beta testing; if any customer willing to use the product can join, it is called open beta testing.
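The closed/open distinction often comes down to a simple admission rule. A minimal sketch (the user IDs and invite list are invented for illustration):

```python
# Minimal sketch of gating a beta program.
# User IDs and the invite list are invented for illustration.

INVITED = {"alice@example.com", "bob@example.com"}  # closed-beta invitees

def can_join_beta(user: str, mode: str) -> bool:
    """Open beta admits everyone; closed beta admits only invited users."""
    if mode == "open":
        return True
    if mode == "closed":
        return user in INVITED
    raise ValueError(f"unknown beta mode: {mode}")

print(can_join_beta("alice@example.com", "closed"))  # -> True
print(can_join_beta("carol@example.com", "closed"))  # -> False
print(can_join_beta("carol@example.com", "open"))    # -> True
```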

Conclusion

It is not possible to test 100% of a product's features in controlled test environments. There will always be something left for customers to discover in the wild; the important point is to take controlled risks. In risk-based testing, risk is the product of the likelihood of an event and its impact:
        Risk = Likelihood × Impact

So list the risks from highest to lowest, start with the highest one, test until the release date, and never take a risk you would not want to face in the live environment.
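The ranking step above can be sketched in a few lines. The feature names and the 1-5 likelihood/impact scores below are invented for illustration:

```python
# Sketch of risk-based test prioritization (Risk = Likelihood x Impact).
# Feature names and 1-5 scores are invented for illustration.

features = [
    ("battery charging", 2, 5),   # (name, likelihood, impact)
    ("camera filters",   4, 2),
    ("fingerprint auth", 3, 4),
]

# Sort by risk score, highest first, and test in that order.
ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"risk={likelihood * impact:2d}  {name}")
```

Working down this list until the release date ensures that whatever remains untested is, by your own estimate, the least dangerous part of the product.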
