In this post, I want to give you some practical information about a monitoring
system for automated tests. I will touch on the following topics:
- Why we need a monitoring system for automated tests
- What are the potential benefits of the monitoring system
- A simple approach using the following technologies:
  - Elasticsearch
  - Kibana
  - Firebase Test Lab, gcloud, and gsutil
  - XCTest and Espresso
What Is a Centralized Log System
In a development pipeline, we have a variety of tools and technologies with
different languages and scripts. The CI/CD pipeline is a good place to see the
progress of development, but to get the details of an item in the pipeline,
we need to get into that item and wrestle with its logs and data. Sometimes this
investigation requires special knowledge of the tool at hand. One of the easiest
ways to get a common understanding of the logs in the pipeline is to use one
simple tool for the whole process. When we collect all the logs and data in
one place, we can start any debugging activity from there. Therefore, a
centralized log system means that all the logs and data related to the
development pipeline are stored in one place, with a timestamp and/or
tags for data separation.
What Are Automated Tests
Tests of any type, at any testing level, that are scripted and run by tools can
be called automated tests. The term is not aimed at one level, such as
unit testing or integration testing, nor only at functional or
non-functional tests; any test that we
can run on a CI/CD pipeline can be considered an automated test. The
following test matrix shows the test types against the test levels. As the
test pyramid suggests, the more tests we have at the low levels, the more
reliable and fast our automated tests will be, so starting from
the unit level, most of the tests can be automated and connected to
the monitoring system.
[Image: The test types considering the test levels]
What a Monitoring System for Automated Tests Means
As we can see from the testing types table, we have lots of testing types, and
each testing type has a great variety of tools with different language
support. Each test type produces different kinds of output, such as XML, JSON,
HTML, and so on, in a different environment, so reaching these outputs and
understanding them is a hard and time-consuming task, and it also
requires credential management for each environment.
A monitoring system for automated tests can collect the test results from
the different environments as soon as the tests complete, or even during a test
run. The data can then be used to draw graphs, create tables, and run
simulations. In this way, everyone can see the results of all the automated
tests written for the product in real time.
Why We Need a Monitoring System for Automated Tests
As defined in the previous section, collecting the logs and data in one
common place makes it easier to work with the data. The data can be used
for monitoring, debugging, reporting, alerting, and other purposes, so the
important thing is collecting the data and making it available to the whole
team.
The benefits of a monitoring system for automated tests can be summarized
under the following topics:
1. Team Awareness
If everything in a system is transparent, then the system itself shows what it is; no extra words are needed to explain it. Every day, the monitoring system shows the results of the tests run against the product and highlights how many of them passed and failed. If there are two kinds of tests on the monitor and one of them has a red part, no further explanation is needed: the tests in the red part have a problem and it needs to be solved. This creates team awareness of the product, so the team can take action on any failure.
2. Contribution
Anyone who is willing to write a test can write one, for any test type. The
monitoring system can itself be self-explanatory: you can easily
see the separation between test types and products. If the number of unit
tests written for the Android app is half the number of unit tests
written for the iOS app, then the team should take action to close this
gap.
3. Quality-Centric Development
Software development is a continuous effort, so every commit adds something
new to the product. If something is wrong with one of the quality
metrics, then the related action should be taken to improve that
metric.
4. Increased Number of Automated Tests
Stable results are only good as long as the number of tests keeps going up.
If the number of tests stays the same for every commit, code coverage will
drop. Each new feature requires new automated tests, and fixing defects also
requires correcting existing tests or adding extra tests for edge
cases. Monitoring these metrics encourages us to write more tests and make
them pass.
5. A Complete, Working CI Process
In practice, the CI process sometimes has no tests, or the test steps are
somehow skipped. The main responsibility of the team is to develop a
high-quality product. What a high-quality product means depends on the
requirements, but there is only one way to understand the quality of the
product: checking it against the requirements, in other words, testing it
against the requirements. The good news is that if we can monitor the tests,
we must have already integrated those tests into the CI.
6. Easier Debugging
The best part of a centralized logging system is having a starting point for
debugging errors, failures, and exceptions. Logging not only the results but
also the deepest details of the failures and exceptions makes debugging
easier.
What is Elasticsearch
Elasticsearch is the heart of the Elastic (ELK) Stack. It is
an open-source search engine built on Apache Lucene, and it provides a
RESTful API. The first version of Elasticsearch was released in 2010; since
then it has become the de facto search engine, commonly used for full-text
search, business analytics, log analytics, security intelligence, and
operational intelligence.
[Image: The Elastic Stack]
Elasticsearch is a full-text, distributed NoSQL database, so it stores
documents and indexes them instead of using tables and schemas. We can send
data in JSON format to the Elasticsearch database via its RESTful API or via
Logstash, another tool for log operations.
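As a tiny sketch of the RESTful path, assuming a local Elasticsearch on port 9200 and an illustrative index name test-results (the document fields are also just examples), a single JSON document can be indexed like this:

```python
import json
from datetime import datetime

import requests

# A single test-result document; the fields are illustrative.
doc = {
    "name": "LoginViewModelTest.testEmptyPassword",
    "testProject": "ios-unit-tests",
    "status": "passed",
    "timestamp": datetime.utcnow().isoformat(),
}

# Index the document into the (hypothetical) "test-results" index.
# Elasticsearch 6.x expects a mapping type in the path, here "_doc".
response = requests.post(
    "http://localhost:9200/test-results/_doc",
    headers={"Content-Type": "application/json"},
    data=json.dumps(doc),
)
response.raise_for_status()
print(response.json())  # contains the generated document _id
```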
What is Kibana
Kibana is another product from the ELK Stack. It is an open-source data
visualization tool designed to use Elasticsearch as its data source.
Once you have integrated Elasticsearch and Kibana and fed the Elasticsearch
database with data, you can start using Kibana. Kibana has a user-friendly
web interface for creating index patterns, searches, and dashboards. On a
dashboard, you can add flexible charts, graphs, heat maps, and histograms.
Setting Up Elasticsearch and Kibana in Docker
Containers make everything easier: we can run Elasticsearch and Kibana with a
single docker-compose.yaml. Basically, I am using the official Docker images
provided by Elastic, version 6.6.0 for both Elasticsearch
and Kibana. Elasticsearch runs on port 9200 and Kibana on port
5601, so after running docker-compose up, you can navigate
to http://localhost:9200 for
Elasticsearch and http://localhost:5601 for Kibana.
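A minimal docker-compose.yaml along these lines could look like the following; this is a single-node development setup, not a production configuration:

```yaml
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      # run as a single node, no cluster discovery needed
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    environment:
      # point Kibana at the Elasticsearch container above
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```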
Export Data from Firebase to Elasticsearch
As I mentioned above, Logstash is the usual way to feed the Elasticsearch
database in the most generic architecture. However, in this post, we want to
integrate mobile application tests that run in the cloud. We use Firebase as
our cloud mobile testing provider; Firebase provides Test Lab to run
mobile application tests. The tests themselves are
implemented with the native tools, so for Android testing we use
Espresso, and for iOS testing we use XCTest.
To be able to export data from Firebase to Elasticsearch, we need to follow
these steps (a sketch of the script follows the list):
- Run the mobile tests with an auto-generated UUID, using the gcloud CLI
- Work out the paths of the related test result files from this UUID
- Fetch the .xml result files from Google Cloud Storage, using gsutil
- Export the data in those .xml files to Elasticsearch via the RESTful API. To make each run distinctive, create a new field called timeString; this field is used to separate the test runs and keep them ordered.
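A rough sketch of run_tests.sh along these lines is shown below; the bucket name, APK paths, and test zip path are placeholders, and the gcloud invocation is the standard Firebase Test Lab one:

```bash
#!/usr/bin/env bash
set -euo pipefail

PLATFORM="$1"                  # android or ios
RUN_ID="$(uuidgen)"            # auto-generated UUID for this run
BUCKET="my-test-results"       # placeholder results bucket

if [ "$PLATFORM" = "android" ]; then
  # run the Espresso tests on Firebase Test Lab
  gcloud firebase test android run \
    --type instrumentation \
    --app app/build/outputs/apk/debug/app-debug.apk \
    --test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
    --results-bucket "$BUCKET" \
    --results-dir "$RUN_ID"
else
  # run the XCTest bundle on Firebase Test Lab
  gcloud firebase test ios run \
    --test build/ios_tests.zip \
    --results-bucket "$BUCKET" \
    --results-dir "$RUN_ID"
fi

# fetch the JUnit-style .xml result files for this UUID
gsutil -m cp -r "gs://$BUCKET/$RUN_ID" results/

# export the data to Elasticsearch
python3.6 export_xml_data_elastic.py "$PLATFORM"
```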
You run the script with the android or ios option:

```bash
# run tests and export data
bash run_tests.sh android
bash run_tests.sh ios

# only export data
python3.6 export_xml_data_elastic.py android
python3.6 export_xml_data_elastic.py ios
```
And this is the Python script that exports the data.
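A rough sketch of that script, assuming JUnit-style .xml files under a local results/ directory, a local Elasticsearch, and the illustrative index name test-results, could look like this:

```python
import glob
import json
import sys
import xml.etree.ElementTree as ET
from datetime import datetime

import requests

ELASTIC_URL = "http://localhost:9200"  # assumed local Elasticsearch
platform = sys.argv[1]                 # "android" or "ios"
time_string = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")

# walk every JUnit-style result file fetched from Google Cloud Storage
for xml_file in glob.glob("results/**/*.xml", recursive=True):
    root = ET.parse(xml_file).getroot()
    for testcase in root.iter("testcase"):
        doc = {
            "name": testcase.get("name"),
            "classname": testcase.get("classname"),
            "testProject": platform,
            # a <failure> or <error> child marks a non-passing test
            "status": "failed" if testcase.find("failure") is not None
                      else "error" if testcase.find("error") is not None
                      else "passed",
            # label every document of this run so runs stay separated
            # and ordered on the Kibana side
            "timeString": time_string,
        }
        response = requests.post(
            f"{ELASTIC_URL}/test-results/_doc",
            headers={"Content-Type": "application/json"},
            data=json.dumps(doc),
        )
        response.raise_for_status()
```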
Creating an Interactive Dashboard on Kibana
Search on Kibana with Filters
Go to the Discover menu, select one item in the list, and then click the + button of the related filter field. Once you click the + button next to the name field, the search is filtered by the value that name has. Then I also add the testProject field to the filter, so that I see only the iOS project's unit tests.
[Image: Kibana searching with filters]
Create Visualizations
Click Visualize, then click the + button in the middle of the page; a menu with the types of visualization pops up, and you can select one of your favorite charts. It then opens the saved search list, where you can select the search you saved in the previous step.
[Image: Kibana setting the metrics]
I chose the bar chart; later I will add one column to show the number
of successes, failures, and errors. As
you can see from the picture above, you need to define the
Metrics, and the picture below shows how to split the chart for
each run by setting the Buckets.
[Image: Kibana setting the buckets]
Creating an Interactive Dashboard
Finally, we can create a Kibana dashboard that displays every chart
related to the automated test results. Click on the Dashboard menu, click the Add button
in the top-right, and then select the visualizations that you created in the
previous steps. That's all! Now we can check the dashboard.
[Image: Kibana dashboard with the results of the automated tests]
If you want to see the repository and contribute to it, you can
fork it on GitHub.
:)