The dos and don'ts of testing automation

Lena Katz, QA Manager in the ALM organization, HP

I think we can all agree that automation is a critical part of any organization's software delivery pipeline, especially if you call yourself "agile." It's pretty intuitive that if you automate testing, your release cycles are going to get shorter. "So, if that's the case," you might say, "why don't we just automate everything?" There's a good reason: automation comes with a price.

First, you need the right tools. Second, you need skilled testers, and they need to be trained. Third, you need to invest time and effort in automation infrastructure and in developing tests on top of it. Developing automated tests is a software development effort in itself. Tests need to be designed, coded, and validated before you can really put them to use. But the biggest effort comes just when you think you're done.

Keeping up with upkeep

Many automation projects have failed because the stakeholders didn't consider how much effort goes into maintaining their automated tests. When they're not properly maintained, tests fail and dashboards "go red." Eventually, the whole project becomes irrelevant.

In this article, I'll discuss some of the best practices I discovered on my own journey toward automation. These are practices you should consider when automating your testing cycles to make sure you build a suite of tests that work well and can be maintained throughout the life of your application. (This article is based on a presentation that can be viewed in full here.)

Choose which tests to automate

The ROI on automated tests varies depending on several factors. Some tests are difficult to develop because of technology constraints. For example, testing frameworks may not support test cases that run across several browser sessions or across different devices. Other tests may not need to be run frequently. For example, it might be more cost-effective to test a rarely used feature manually from time to time than to invest in developing and maintaining an automated test that runs after each nightly build. Each organization will weigh these factors according to its own priorities, but it's always important to consider the ROI you'll get by automating your tests.
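
To make that concrete, here is a minimal back-of-the-envelope sketch of this kind of ROI comparison. It's plain Python, and every number in it is a hypothetical assumption for illustration, not a figure from my team:

    # Rough ROI check for automating a single test case, in hours saved.
    # All numbers below are illustrative assumptions.
    def automation_roi(dev_hours, maintenance_hours_per_year,
                       manual_hours_per_run, runs_per_year, years=2):
        """Hours saved (positive) or lost (negative) over the period."""
        manual_cost = manual_hours_per_run * runs_per_year * years
        automation_cost = dev_hours + maintenance_hours_per_year * years
        return manual_cost - automation_cost

    # A test that runs after each nightly build pays for itself quickly...
    print(automation_roi(16, 8, 0.5, 250))   # 218.0

    # ...while a rarely used feature may be cheaper to keep testing manually.
    print(automation_roi(16, 8, 0.5, 4))     # -28.0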

Don't automate from day one

Every software project takes time before its requirements and design stabilize. A classic comparison is between the UI, which can change at any time in an application's lifecycle, and back-end services, which may live untouched for generations. Agile projects behave differently from waterfall projects in this respect. If you're developing a SaaS product, you must use automation to support frequent deliveries, but you'll have to carefully consider the effort you invest in developing tests because your requirements may also change frequently. This is a fine balance you'll have to learn to work with. For an on-premise solution, it may be easier to identify the stage at which automated tests can be safely developed and maintained. In all these cases, you have to carefully consider when it's cost-effective to develop automated tests. If you start from day one, you'll expend a lot of resources shooting at a moving target.

Get all the right people involved

We've emphasized the importance of getting everyone involved in automation. Here's how it works in my department.

The DevTester is an integral part of each development team and writes and executes manual test cases for the team's user stories. The tests are written using a methodology (see "Connect manual tests with automation using a clear methodology" below) that clarifies how to automate them later on. Once a feature is stable, the DevTester writes the actual automation tests.

Then there's the developer. In addition to developing the application, the developer works with the DevTester to review both the test's design and the testing code itself. This involvement increases the developer's engagement in the automation effort, which also means the developer can help with test maintenance should the need arise.

The QA architect is an experienced QA professional who is instrumental in deciding which feature tests should be automated. This is the person with the higher-level view of the overall testing effort who can understand which test cases will yield the best ROI if automated. With a broader view of the application, the architect is also responsible for cross-feature and cross-team QA activities, making sure that end-to-end testing can also be automated.

Avoid monolithic tests if you can

The principles of software development are just as valid when writing tests. Just like you don't want monolithic code with many interconnected parts, you don't want monolithic tests in which each step depends on many others. Break your flows down into small, manageable, and independent test cases. That way, if one test fails, it won't make the whole test suite grind to a halt, and you can effectively increase your test coverage at each execution of your automation suite.
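
Here's a minimal, self-contained sketch of the difference, in plain Python with a stand-in Dashboard class (the class and widget names are illustrative, not our real code):

    # Stand-in for a real UI driver, just to make the contrast runnable.
    class Dashboard:
        def __init__(self):
            self.positions = {"Links": (1, 1), "Pie chart": (1, 2)}

        def move(self, widget, row, col):
            if widget not in self.positions:
                raise KeyError(widget)
            self.positions[widget] = (row, col)

    # Don't: a monolithic test. If the first step fails, the later
    # assertions never run, and you learn nothing about them.
    def test_move_all_widgets():
        board = Dashboard()
        board.move("Links", 2, 1)
        assert board.positions["Links"] == (2, 1)
        board.move("Pie chart", 2, 2)
        assert board.positions["Pie chart"] == (2, 2)

    # Do: small, independent tests. Each builds its own state, so one
    # failure doesn't block the rest of the suite.
    def test_move_links_widget():
        board = Dashboard()
        board.move("Links", 2, 1)
        assert board.positions["Links"] == (2, 1)

    def test_move_pie_chart_widget():
        board = Dashboard()
        board.move("Pie chart", 2, 2)
        assert board.positions["Pie chart"] == (2, 2)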

Connect manual tests with automation using a clear methodology

Before you automate tests, you still have to invest a lot of work in test authoring and manual execution. But once the tested feature has stabilized, your methodology should clearly indicate how the test can be automated. In my group, we use "Given, When, Then" (GWT) with variants. Here is a typical test case:

Description

Given: Go to the Dashboard module. Make sure there are some widgets.
When: Hover the mouse over the title of the widget. Drag and drop the widget.

Expected Result

Then: The widget should be moved to the desired location.

Variants:
Links widget
Pie chart widget
Bar chart widget
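
A test written this way maps naturally onto a parameterized automated test, with one parameter per variant. Here's a hypothetical pytest sketch; the fake page objects exist only to keep the example self-contained and are not our real driver code:

    import pytest

    class FakeWidget:
        """In-memory stand-in for a real UI widget driver."""
        def __init__(self, name):
            self.name, self._slot = name, 1
        def hover_title(self):
            pass  # a real driver would move the mouse here
        def drag_to(self, target_slot):
            self._slot = target_slot
        def slot(self):
            return self._slot

    class FakeDashboard:
        """In-memory stand-in for the Dashboard module."""
        def __init__(self):
            names = ("Links", "Pie chart", "Bar chart")
            self._widgets = {n: FakeWidget(n) for n in names}
        def widget(self, name):
            return self._widgets[name]

    @pytest.fixture
    def dashboard():
        # Given: go to the Dashboard module; make sure there are widgets.
        return FakeDashboard()

    @pytest.mark.parametrize("widget_name", ["Links", "Pie chart", "Bar chart"])
    def test_drag_and_drop_widget(dashboard, widget_name):
        # When: hover over the widget's title, then drag and drop it.
        widget = dashboard.widget(widget_name)
        widget.hover_title()
        widget.drag_to(target_slot=2)
        # Then: the widget should be moved to the desired location.
        assert widget.slot() == 2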

To keep track of our ever-growing suite of tests, we also classify the automation status of each test ("already automated," "blocked," "cannot be automated," "in progress," "to be automated") and define the scope of each test (API, integration, user interface, end-to-end, etc.). Note that we recognize that not all tests should (or can) be automated.
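
If it helps to picture that bookkeeping, here is one possible shape for it in Python. The enums mirror the statuses and scopes listed above, but the record type and field names are my own illustration, not a real tool:

    from dataclasses import dataclass
    from enum import Enum

    class AutomationStatus(Enum):
        ALREADY_AUTOMATED = "already automated"
        BLOCKED = "blocked"
        CANNOT_BE_AUTOMATED = "cannot be automated"
        IN_PROGRESS = "in progress"
        TO_BE_AUTOMATED = "to be automated"

    class Scope(Enum):
        API = "API"
        INTEGRATION = "integration"
        UI = "user interface"
        END_TO_END = "end-to-end"

    @dataclass
    class TestCaseRecord:
        name: str
        status: AutomationStatus
        scope: Scope

    backlog = [
        TestCaseRecord("drag and drop widget",
                       AutomationStatus.ALREADY_AUTOMATED, Scope.UI),
        TestCaseRecord("export report",
                       AutomationStatus.TO_BE_AUTOMATED, Scope.END_TO_END),
    ]

    # One payoff of explicit statuses: querying what still needs work.
    todo = [t.name for t in backlog
            if t.status is AutomationStatus.TO_BE_AUTOMATED]
    print(todo)  # ['export report']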

Carefully choose which tests should be run at each execution

While automation saves you a lot of time, running tests still takes time. You can't run all of your tests all the time; it would take too long and generate an unmanageable analysis and maintenance effort. In my group, we've organized both manual and automated testing into three levels: sanity, end-to-end, and full. In addition to our feature tests, on every code commit we run a set of high-level, cross-feature tests to make sure that a code change in one feature hasn't broken another. Only then do we run a set of more extended tests specific to the feature for which the code was committed. Then, every three hours, we run our suite of feature-level sanity tests on our continuous delivery environment to make sure all features are in good shape. We only do this on one browser, though, because we've found that when a test fails, the failure doesn't usually depend on the browser. Finally, we run feature end-to-end testing in our nightly environment.

These are our most extensive test suites that cover all features end-to-end on different browsers and environments. We include additional tests such as integration, performance, UI, and more. For now, we can still complete this suite of tests overnight.
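
One lightweight way to slice a single suite across those triggers is to tag tests by level and let each job select its slice. The sketch below uses pytest markers; the marker names follow the levels described above, but they are an assumption about tooling, not a description of our actual setup:

    import pytest

    @pytest.mark.sanity
    def test_dashboard_loads():
        ...  # placeholder body

    @pytest.mark.end_to_end
    def test_create_edit_and_share_dashboard():
        ...  # placeholder body

    @pytest.mark.full
    def test_dashboard_under_heavy_widget_load():
        ...  # placeholder body

The commit job could then run only the quick checks with "pytest -m sanity", and the nightly job could run everything with "pytest -m 'end_to_end or full'" (with the markers registered in pytest.ini so they aren't flagged as unknown).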

In my organization, we've taken automation to the extreme, and we automate every test we believe will yield a good ROI. Usually, this means we run automation tests on all delivered features at both sanity and end-to-end levels. This way, we achieve 90 percent coverage while also maintaining and growing our test automation suite at all stages of the application lifecycle.
