One tricky part of writing application tests is dealing with flaky tests that sometimes fail without any code changes. Such failures are usually the result of a race condition in the code base or in the test code. Regardless of the cause, flaky tests are real motivation killers and need to be addressed. A situation we have run into a few times is a test that runs perfectly on localhost, but then fails when run in the cloud using SauceLabs or BrowserStack. Cloud testing providers typically give you virtual servers that run much slower than a regular laptop. As a result, the timing of the test execution is greatly affected, since everything simply takes more time (network requests and simulated cursor movement, for example).

A simple application test scenario

Let’s say you have an application that loads some data into a table. After loading, a toast message is shown, and you want to verify that you can dismiss the toast by clicking it. The toast hides itself after being visible for 5 seconds. Here’s the imaginary test scenario, which runs fine on localhost:
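The original snippet isn’t reproduced in this excerpt, but a Siesta test for this scenario might look roughly like the sketch below. The button id, the `.toast` selector, and the exact chain step names are illustrative assumptions, not taken from the original:

```javascript
StartTest(function (t) {
    t.chain(
        // Load the data into the table (the button id is hypothetical)
        { click : '#load-data' },

        // Wait for the toast to appear once loading finishes
        { waitForSelector : '.toast' },

        // Dismiss the toast by clicking it
        { click : '.toast' },

        // Verify the toast is gone from the DOM
        { waitForSelectorNotFound : '.toast' }
    );
});
```

Note that nothing in the chain accounts for how long each step takes on a slow machine, which is exactly where the trouble starts.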

Video recording to the rescue

The test scenario we cooked up includes a race condition, since we only have about 3 seconds left to move the cursor to the toast target before it hides. When the test above is executed in the cloud on a really slow machine, it might fail with a strange message:
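Setting the exact error message aside, the timing budget behind the race can be sketched with plain numbers. The 5-second visibility window comes from the scenario above; the cursor-move durations are illustrative assumptions:

```javascript
// The toast auto-hides 5 s after it appears (from the scenario above).
const TOAST_VISIBLE_MS = 5000;

// Returns true if a click issued `cursorMoveMs` after the toast
// appears still finds it on screen.
function clickBeatsAutoHide(cursorMoveMs) {
    return cursorMoveMs < TOAST_VISIBLE_MS;
}

// Localhost: moving the cursor takes about a second, plenty of margin.
console.log(clickBeatsAutoHide(1000));  // true
// A cloud VM running several times slower: the same move can take
// longer than the toast stays visible, so the click targets an
// element that is already gone.
console.log(clickBeatsAutoHide(10000)); // false
```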

It sounds like the test cannot find the toast element, which is strange. Figuring out why this test fails may not be easy, especially if you’re not familiar with the application code or the test code. If we could see exactly what the test execution looked like, it would be a huge help in understanding why it failed. At Bryntum, we record any strange failures to help us debug tests quickly. We do this by simply starting RootCause video recording inside the test, and if the test fails we save the session so we can review it later. Here’s what we put in our Siesta test class:
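The actual snippet isn’t included in this excerpt, but based on the description in the surrounding text, a Siesta test class wired up this way could look something like the sketch below. `RootCause.startRecording` and `RootCause.saveSession` are hypothetical names standing in for whatever the real recorder API exposes:

```javascript
// A Siesta test class (Joose class declaration, as used by Siesta)
// that records the session with RootCause. The RootCause method names
// below are assumptions; check the RootCause docs for the real API.
Class('My.RecordedTest', {
    isa : Siesta.Test.Browser,

    override : {
        // Start video recording and wait until the recorder is ready
        // before letting the actual test begin.
        setup : function (callback, errback) {
            const me = this;

            RootCause.startRecording(function () {
                me.SUPER(callback, errback);
            });
        },

        // On failure, delay test finalization until the recorded
        // session has been saved for later review.
        tearDown : function (callback, errback) {
            const me = this;

            if (me.isFailed()) {
                RootCause.saveSession(function () {
                    me.SUPER(callback, errback);
                });
            }
            else {
                me.SUPER(callback, errback);
            }
        }
    }
});
```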

As you can see, we initiate RootCause loading in the test’s setup method and tell the test to wait for it to be ready before starting. Similarly, in the tearDown method we delay test finalization until the session has been saved. After seeing this test failure, we can open the test session in RootCause.

Watching the video in RootCause

Simply open your RootCause dashboard, navigate to the test session, and activate the video tab:

Now you can play the video and see exactly what happened during the test.

It’s now crystal clear that we’re dealing with a race condition, and we need to make sure the toast’s hide delay is set high enough that we always have time to move the cursor to it before it hides. What do you think? Have you encountered hard-to-solve test failures where a video recording would have been useful? We would love to hear about your own application testing experiences.
