“Quality is never an accident; it is always the result of intelligent effort.”
– John Ruskin
Software testing is a key component of software development: it ensures that your software remains accurate and reliable. Testing is not a straightforward process, but when performed consistently and correctly, it takes your product to new heights of usability, functionality and capacity. Our Process Mining software is no different, and in this article we would like to share the various methods of testing we undertake to make your process mining journey as enjoyable as possible!
When working on and expanding a complex system, it’s highly unlikely you’ll be able to avoid creating a bug. Quality assurance is not so much a way to avoid issues as it is a way to find and fix the issues that will inevitably occur.
There are multiple reasons to check the results of your software. The obvious reason, of course, is to guarantee the quality of the output. If newly implemented features change existing functionality, then these changes need to be reviewed. If the change is deemed undesirable, the bug should be fixed; if the change is correct, then the test should be updated.
To do this you need to have good coverage of the features in your system. When you do not have tests, there is no way of knowing if a subsystem behaves as expected.
If a bug is detected, we need to know which change in the software triggered the test results. Minimizing the number of software changes between test runs makes this easier, so testing as often as possible is advised. However, doing this manually takes a lot of time and can be error-prone. Automating these tests increases both efficiency and reliability.
1. Usability tests
Unfortunately, some tests cannot be automated. For example, usability tests, which check if the newly created features align with the needs of our users. The most efficient way to perform these is to sit down with different end users in separate sessions and walk through a prototype. Doing this early in the design process can prevent wasting precious development time on any unnecessary features.
2. Design tests
Similarly, design tests should be in place to check if new interfaces conform to the overall design of the software. While automation can be used very well to check if something is identical, in design tests we want to know if related interface elements are similar, both in their layout and in their use.
Back-end functionality can more easily be automated, because the tester has control of the input and output of the function. These functions have a defined effect or, at the very least, an assumed effect. This can be tested by providing the function with input data and checking the output for that effect. Any deviation in the effect can lead to problems for any function that uses the generated data.
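As a sketch of this idea (the function and data below are illustrative, not part of the ProcessGold API): a back-end function with a defined effect can be tested by feeding it known input and asserting on the output.

```python
def case_durations(events):
    """Compute per-case duration (in hours) from (case_id, timestamp_hours) events."""
    by_case = {}
    for case_id, ts in events:
        lo, hi = by_case.get(case_id, (ts, ts))
        by_case[case_id] = (min(lo, ts), max(hi, ts))
    return {cid: hi - lo for cid, (lo, hi) in by_case.items()}

# Provide the function with input data and check the output for the defined effect.
events = [("A", 0), ("A", 5), ("B", 2), ("B", 3), ("A", 8)]
result = case_durations(events)
assert result == {"A": 8, "B": 1}   # expected durations per case
assert case_durations([]) == {}     # edge case: no events at all
```

If a later change makes this function deviate from the expected effect, the assertions fail immediately, flagging the commit that introduced the regression.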
3. Integration tests
On a higher level, we want to test whether all components work together correctly. These integration tests use the actual user interface to add data into the system, then inspect that interface to check if the expected outcome is achieved. We can look at known user scenarios and other use cases of the system to see what we need to check.
In the ProcessGold platform, all features that add or change the user interface require usability tests to be performed. Both internal and external end users are contacted and asked for input. Design checks are performed before new features are accepted. These tests are performed by our software team as part of new feature creation.
4. Unit tests
Additionally, each change to our software automatically triggers our unit tests. Any anomalies are directly communicated to the developer who committed the change, and the inconsistency is fixed. More complex use cases are run nightly as integration tests. Regressions during these tests are considered high priority and are reviewed as soon as possible.
If you are creating your own app in the ProcessGold Platform, you are dependent on the data you have access to. Most visualizations make certain assumptions to calculate the values you want to present to your customer. For example, a Case Identifier should always be unique, an Activity value should never be NULL, or a sum of percentage values should be at most 100%.
These values might be correct in your current development dataset, but if new data is loaded in, as a developer you want to be warned when any of these assumptions turn out to be incorrect.
This can be done using invariants. An invariant calculates values in the same manner as any other expression: it can check values for every row of your data separately, or it can check aggregated values. An invariant is considered correct if all values of the calculated expression evaluate to true; if this is not the case, the invariant is triggered.
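A minimal sketch of that behavior in plain Python (not the platform's expression language; the column names and sample rows are invented for illustration):

```python
rows = [
    {"case_id": "C1", "activity": "Start",  "pct": 60.0},
    {"case_id": "C2", "activity": "Review", "pct": 30.0},
    {"case_id": "C3", "activity": None,     "pct": 25.0},  # violates the NULL rule
]

# Row-level invariant: an Activity value should never be NULL.
activity_not_null = [r["activity"] is not None for r in rows]

# Row-level invariant: a Case Identifier should always be unique.
ids = [r["case_id"] for r in rows]
case_id_unique = [ids.count(i) == 1 for i in ids]

# Aggregated invariant: the sum of percentage values should be at most 100%.
pct_sum_ok = [sum(r["pct"] for r in rows) <= 100.0]

def triggered(values):
    """An invariant holds only if every calculated value is true; otherwise it triggers."""
    return not all(values)

assert triggered(activity_not_null)   # the NULL activity trips this invariant
assert not triggered(case_id_unique)  # all case ids are unique, so this one holds
assert triggered(pct_sum_ok)          # 60 + 30 + 25 = 115 > 100
```

The sample data would pass these checks in a small development dataset with different rows; the point is that the same invariant expressions keep guarding the assumptions after every data refresh.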
Once the invariants are set up, there are two ways to gather this feedback. When you are developing your own application, you can perform a full scan of your project. This will find relevant issues that you might want to fix, including failed invariants.
In a production environment, when a release of your application is created and the application data is refreshed, these invariants are re-calculated. An invariant can be set to abort data generation if needed. The ProcessGold Platform will then fall back to the most recently generated extraction. In a default installation this happens silently, but it is possible to be notified when it does, by configuring an email server and a mail report recipient in the server settings of your ProcessGold Platform installation.
If you would like more control, it is possible to create your own integration tests on top of the Platform, using a browser automation framework such as Selenium. The tests themselves can be written directly in Java or C++, or you can use external libraries such as TheIntern.
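As a hedged illustration of what such a test might look like: Selenium also offers Python bindings, and the URL, credentials and element ids below are invented placeholders, not actual ProcessGold routes.

```python
def run_login_smoke_test(base_url, user, password):
    """Drive a real browser through a hypothetical login scenario and check the outcome.

    Requires Selenium and a matching WebDriver to be installed; they are
    imported lazily so this sketch can be loaded and read without them.
    """
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(base_url + "/login")  # hypothetical login route
        driver.find_element(By.ID, "username").send_keys(user)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "submit").click()
        # The test passes only if the interface reaches the expected state.
        return "Dashboard" in driver.title
    finally:
        driver.quit()

# Usage (requires a running browser driver):
#   run_login_smoke_test("https://example.test", "tester", "secret")
```

Like the platform's own integration tests, this adds data through the actual user interface and then inspects that interface for the expected outcome.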
A solid testing structure is crucial to successful software development: it can make or break the software. Contact us to learn more about our platform and how we keep it in tip-top condition for the best process mining user experience.