Executed tests

This dashboard shows data collected from the continuous integration pipeline artefacts produced by test executions of one or more projects (in the following, “project” and “source code repository” are treated as synonymous).

This dashboard is focussed on the number of executed tests, its trend over time, and its correlation with the size of the tested files.

Executed tests dashboard

Number of executed tests

The first chart displays the time series of the number of executed tests (skipped tests are excluded). For each project, this number is the sum over the selected test levels run by that project.

Collected data

Data is collected from test artefacts. These include:

  • pytest reports in JSON and XML;

  • Jest reports in XML.
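As a rough illustration of how such artefacts can be read, here is a minimal Python sketch. It assumes a pytest-json-report style summary for the JSON file and a JUnit-style layout for the XML file (as produced by pytest --junitxml); the function names are ours, not part of Argos:

    import json
    import xml.etree.ElementTree as ET

    def executed_tests_from_json(path):
        # Assumption: pytest-json-report keeps per-outcome counts in a
        # "summary" dict; executed tests are all tests minus skipped ones.
        with open(path) as fh:
            summary = json.load(fh)["summary"]
        return summary.get("total", 0) - summary.get("skipped", 0)

    def executed_tests_from_xml(path):
        # Assumption: JUnit-style XML, where each <testsuite> carries
        # "tests" and "skipped" counts (newer pytest versions wrap the
        # suites in a <testsuites> root).
        root = ET.parse(path).getroot()
        suites = [root] if root.tag == "testsuite" else root.iter("testsuite")
        return sum(int(s.get("tests", 0)) - int(s.get("skipped", 0))
                   for s in suites)

The per-project number shown in the chart is then the sum of such counts over the selected test levels.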

Number of executed tests panel with four projects

Usage

You can use this panel:

  • to easily visualise how frequently tests are run within the continuous integration pipeline;

  • to monitor the trend of this number;

  • to compare numbers and trends between projects.

Benefits

  • You can identify projects where tests are run very seldom, or at irregular intervals.

  • You can identify projects that have a very small number of tests.

  • You can identify projects where the number of executed tests does not grow as expected.

Note

Warning: this is a very simple metric, which can easily be gamed. To increase this number, a developer might add a large number of tests that contribute very little to identifying quality issues, or might use an automatic test case generator. In these cases the metric will increase without a corresponding increase in testing effectiveness.

We recommend using other metrics to analyse the current situation before drawing conclusions. Furthermore, it is wise to inspect the testware and assess its quality, in addition to monitoring its quantity.

Test density

This chart shows the ratio of the number of executed test cases to the number of lines of code (in thousands) of the system being tested.

Collected data

Data is collected from test artefacts. These include:

  • pytest reports in JSON and XML;

  • pytest coverage reports in JSON.
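As a sketch of how the ratio can be computed, assuming the coverage.py JSON report layout (a "totals" entry whose "num_statements" field approximates the executable lines of code; names are illustrative, not Argos internals):

    import json

    def kloc_from_coverage_json(path):
        # Assumption: coverage.py JSON report; "num_statements" under
        # "totals" approximates the executable lines of code.
        with open(path) as fh:
            totals = json.load(fh)["totals"]
        return totals["num_statements"] / 1000.0

    def test_density(executed_tests, kloc):
        # Executed test cases per thousand lines of code.
        return executed_tests / kloc

    # Example: 250 executed tests over a 40,000-statement codebase give
    # a density of 250 / 40.0 = 6.25 tests per KLOC.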

Test density panel with four projects

Usage

You can use this panel:

  • to easily visualise divergence between the number of tests and the number of lines of code;

  • to monitor the trend of this ratio;

  • to compare ratios and trends between projects.

Benefits

  • You can identify projects where tests do not grow in sync with the size of the codebase.

  • You can identify projects that have a very low test density, which might require a more intense test development effort.

Relative change of executed tests

This chart shows the trend of the relative change of the number of executed tests within the selected time interval. For each project, the number of executed tests at the initial date of the time interval is used as a reference value. Subsequent data points show how much the number of executed tests has changed.

For example, for the interval 13-31 January, a value of 1.3 for 17 January implies that, relative to the number of executed tests shown for 13 January, there were 30% more tests on 17 January.
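A minimal sketch of this computation (the function and data are illustrative, not Argos internals):

    def relative_change(series):
        # `series` maps ISO dates to executed-test counts; the count at
        # the earliest date of the interval is the reference value.
        dates = sorted(series)
        reference = series[dates[0]]
        return {d: series[d] / reference for d in dates}

    # Reproducing the example above (hypothetical counts): 100 tests on
    # 13 January and 130 on 17 January give 130 / 100 = 1.3 for the 17th.
    counts = {"2024-01-13": 100, "2024-01-17": 130}
    print(relative_change(counts))  # {'2024-01-13': 1.0, '2024-01-17': 1.3}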

Collected data

To compute this chart Argos uses the same artefacts needed by “Executed Tests”, namely:

  • pytest reports in JSON and XML.

Relative change of executed tests for four projects

Usage

You can use this panel:

  • to monitor the rate of change of the number of executed tests in different time intervals;

  • to compare the rate of change between different projects.

Benefits

With this panel you can:

  • identify projects where the rate of change does not evolve as expected (for example, it decreases over time);

  • identify time periods, for a single project, where the rate of change is not as expected.

Executed tests by category

This panel displays the number of executed tests split by test level (when more than one is run). More specifically, for the selected time interval and for each test level, Argos computes the mean of the number of executed tests for each project, and then the mean of those means.
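A sketch of the mean-of-means computation (illustrative names and hypothetical data):

    from statistics import mean

    def mean_of_means(per_project_counts):
        # For one test level: average each project's executed-test counts
        # over the interval, then average those per-project means.
        return mean(mean(counts) for counts in per_project_counts.values())

    # Hypothetical example for a "unit" test level with two projects:
    unit = {"proj-a": [100, 110, 120], "proj-b": [40, 60]}
    print(mean_of_means(unit))  # (110 + 50) / 2 = 80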

Collected data

To compute this chart Argos uses the same artefacts needed by “Executed Tests”, namely:

  • pytest reports in JSON and XML.

Number of executed tests for two different categories

Usage

You can use this panel:

  • to see at a glance the number of executed tests for the selected projects in the selected time interval, split by test level;

  • to compare different test levels.

Benefits

With this panel you can:

  • have an overall view of the distribution of tests across test levels, and possibly spot situations where too many tests are being developed at the “wrong” level.

Last change

This chart shows the relative change of the number of executed tests over the selected time interval. For each project, the number of executed tests at the initial date of the interval is used as the reference value for the number at the last time point of the interval.

For example, for the interval 13-31 January, a value of 1.5 implies that, relative to the number of executed tests shown for 13 January, there were 50% more tests on 31 January.

Green bars are shown for numbers greater than 1.00; red bars otherwise.

NOTE: for each project, this number is the same as the last data point shown by “Relative change of executed tests”.
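A sketch of this computation and of the colouring rule (illustrative, with hypothetical data):

    def last_change(series):
        # Ratio of the executed-test count at the last date of the
        # interval to the count at its initial date.
        dates = sorted(series)
        return series[dates[-1]] / series[dates[0]]

    # Reproducing the example above (hypothetical counts): 100 tests on
    # 13 January and 150 on 31 January give 150 / 100 = 1.5, shown as a
    # green bar (value > 1.00).
    counts = {"2024-01-13": 100, "2024-01-31": 150}
    value = last_change(counts)
    colour = "green" if value > 1.00 else "red"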

Collected data

To compute this chart Argos uses the same artefacts needed by “Executed Tests”, namely:

  • pytest reports in JSON and XML.

Last change of executed tests for four projects

Usage

You can use this panel:

  • to see at a glance which project is doing better or worse in the selected time interval.

Benefits

With this panel you can:

  • identify projects where the rate of change does not evolve as expected (for example, it is too small).