An Introduction To The Automation Test Wheel

By Kristin Jackvony

As a software tester, I like thinking about testing a product from a number of different angles.  Over the last few years I’ve learned about security testing and performance testing, and I’ve come to see how important those activities are in validating software quality. However, many testing approaches do not include these types of tests, expecting instead that those activities will be handled by separate departments in their company. As testers, we should be validating the quality of the entire application. In thinking about how to incorporate all test types into a traditional testing workflow, I conceived of the idea of an Automation Test Wheel.

The Traditional Approach: The Test Automation Pyramid

Anyone who has spent time working on test automation has likely heard of the Test Automation Pyramid. The pyramid is typically made of three horizontal sections: UI Tests, Service Tests, and Unit Tests. The widest section of the pyramid is for the unit tests, which should comprise the largest number of tests, since they are the closest to the code and run very quickly. The middle section is for the service (API) tests, which run quickly but depend on the back-end data store; there should be more of these than UI tests, but fewer than unit tests. Finally, the smallest section of the pyramid is for the UI tests, which should comprise the smallest number of tests due to their slow speed and multiple dependencies.

There are many variations of the Test Automation Pyramid, including one that considers the base of the pyramid to be testability, and another that reimagines the pyramid as an upside-down cone forming a section of a round, Earth-like sphere. But the original version of the pyramid is the one most people think of when they are deciding what to automate.

While the pyramid is a great reminder that we want to automate as close to the code as possible, this model does not help us think about what to test. This is where the Automation Test Wheel can help.

I arrived at a wheel design simply because I came up with eight different application areas that should be tested. Each of these test types can be considered a spoke in the wheel; none is more important than another, and all of them are necessary. The size of each section of the wheel does not indicate the quantity of the tests to be automated. Each test type should have the number of tests that are needed in order to verify quality in that area.

The Automation Test Wheel is helpful because it reminds us of areas that we may have forgotten to test.  It is not a dictum; it is merely a guide.  

The Sections Of The Automation Wheel

Each section of the wheel should be considered when you are creating a test automation strategy.  For each of the sections below, the title includes a link to a GitHub repository showing a working example of that test type.  Additionally, the “Read Me” section of each repository has a link to a blog post that describes how to run the tests.

Unit Tests:  A unit test is the smallest automated test possible.  It tests the behavior of just one function or method, and is written directly in the application code.  These tests will usually mock out dependencies such as databases or services, so that only the function or method is being tested.
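
As an illustration, here is a minimal pytest-style sketch of a unit test. The `get_display_name` function and its repository dependency are hypothetical; the point is that the database is mocked out so that only the function's own logic is exercised.

```python
from unittest.mock import Mock

# Hypothetical function under test: formats a contact's display name.
def get_display_name(contact_repository, contact_id):
    contact = contact_repository.find(contact_id)
    return f"{contact['first_name']} {contact['last_name']}"

def test_get_display_name_formats_first_and_last_name():
    # Mock out the database dependency so only this function is tested.
    repository = Mock()
    repository.find.return_value = {"first_name": "Ada", "last_name": "Lovelace"}

    assert get_display_name(repository, contact_id=1) == "Ada Lovelace"
```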

Component Tests: These tests check the various services that the code depends on, such as a database or an external API.
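
A hedged sketch of what such a check might look like in Python with the requests library; the health endpoint URL is hypothetical. Note that the test exercises the dependency itself rather than our own business logic.

```python
import requests

# Hypothetical health endpoint of an external API the application depends on.
DEPENDENCY_HEALTH_URL = "https://api.example.com/health"

def test_external_api_dependency_is_reachable():
    # A component test verifies the dependency itself, not our business logic.
    response = requests.get(DEPENDENCY_HEALTH_URL, timeout=5)
    assert response.status_code == 200
```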

Services Tests: These tests check the web services (often APIs) that are used in our code.
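
For example, a services test might POST to an API endpoint and verify the response. This sketch assumes a hypothetical /contacts endpoint that returns 201 with the created record.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_post_contact_returns_created_contact():
    payload = {"first_name": "Ada", "last_name": "Lovelace"}
    response = requests.post(f"{BASE_URL}/contacts", json=payload, timeout=10)

    assert response.status_code == 201
    assert response.json()["first_name"] == "Ada"
```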

User Interface (UI) Tests: UI tests verify that end-user activities work correctly.  These are the tests that will fill out text fields and click buttons.  User workflows are often tested in this area.
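
A minimal Selenium-based sketch of such a workflow test; the page URL and element IDs are hypothetical placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_user_can_save_a_new_contact():
    driver = webdriver.Chrome()
    try:
        # Hypothetical form page and element IDs.
        driver.get("https://www.example.com/contacts/new")
        driver.find_element(By.ID, "first-name").send_keys("Ada")
        driver.find_element(By.ID, "save-button").click()
        message = driver.find_element(By.ID, "confirmation-message")
        assert "saved" in message.text.lower()
    finally:
        driver.quit()
```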

Visual Tests: Visual tests verify that elements are actually appearing on the screen.  Examples of visual tests would be verifying that a button's label is rendered correctly and verifying that the correct product image is appearing on the screen.
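
Dedicated visual testing tools do pixel-level image comparison; as a simpler sketch, a Selenium check can at least confirm that an element is actually rendered. The page and element IDs here are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_product_image_is_rendered():
    driver = webdriver.Chrome()
    try:
        driver.get("https://www.example.com/products/123")  # hypothetical page
        image = driver.find_element(By.ID, "product-image")
        # is_displayed() confirms the element is visible, not just in the DOM.
        assert image.is_displayed()
        assert "product-123" in image.get_attribute("src")
    finally:
        driver.quit()
```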

Security Tests:  These are tests that verify that security rules such as authentication and authorization are being respected.  Tests to validate that inputs are sanitized to prevent cross-site scripting and SQL injection can also be automated.
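
Here is a minimal sketch of two authentication and authorization checks, assuming a hypothetical API and tokens: one verifies that an unauthenticated request is refused, the other that a user cannot read another user's data.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_request_without_token_is_rejected():
    # No Authorization header, so the API should refuse the request.
    response = requests.get(f"{BASE_URL}/contacts", timeout=10)
    assert response.status_code == 401

def test_user_cannot_read_another_users_contact():
    # Hypothetical token for a user who does not own contact 999.
    headers = {"Authorization": "Bearer USER_A_TOKEN"}
    response = requests.get(f"{BASE_URL}/contacts/999", headers=headers, timeout=10)
    assert response.status_code == 403
```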

Performance Tests: Automated performance tests can verify that request response times and web page load times happen within an appropriate time period.  
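
Full load testing is usually done with dedicated tools, but a simple response-time assertion can be automated directly, as in this sketch against a hypothetical endpoint with an example one-second budget.

```python
import requests

def test_contacts_endpoint_responds_within_one_second():
    # Hypothetical endpoint; the one-second budget is an example threshold.
    response = requests.get("https://api.example.com/contacts", timeout=10)
    assert response.status_code == 200
    # elapsed measures the time from sending the request to receiving headers.
    assert response.elapsed.total_seconds() < 1.0
```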

Accessibility Tests: These tests validate that an application is accessible to as many people as possible, verifying such things as the presence of alternate text attributes on images for visually impaired users. Localization and internationalization can also be tested here.
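
As one example, a Selenium sketch can scan a page (hypothetical URL here) for images that are missing alternate text.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_every_image_has_alternate_text():
    driver = webdriver.Chrome()
    try:
        driver.get("https://www.example.com")  # hypothetical page
        images = driver.find_elements(By.TAG_NAME, "img")
        missing = [img.get_attribute("src")
                   for img in images if not img.get_attribute("alt")]
        assert not missing, f"Images missing alt text: {missing}"
    finally:
        driver.quit()
```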

Frequently Asked Questions About The Wheel

What about the Test Pyramid?

The Automation Test Wheel is not designed to replace the Test Pyramid. The Test Pyramid focuses on how to test, making sure that test automation is done as close to the code as possible. The Automation Test Wheel focuses on what to test so that no key areas are neglected.

What about manual testing?

Manual testing always has a place in the tester’s tool belt. There is nothing like a pair of human eyes and hands for verifying that an application is working correctly. The Automation Test Wheel can also serve as a reminder for areas in which to try manual exploratory testing.

Why are the sections of the wheel all the same size?

The sections of the wheel are the same size because each application will vary in how much testing is needed for each section. For example, an e-commerce application might need more visual testing because of the number of product pictures it displays, while a messaging application might need more services tests because of the number of external APIs it uses.

Are all of these test types really necessary?

It will depend on the application. An API that has no UI, for example, will not need UI testing.

What about monkey testing, data-driven testing, etc.?

All testing methods are welcome in the Automation Test Wheel! Monkey testing could be considered part of UI testing, and data-driven testing could be integrated into Unit, Services, Security, or UI tests.

How will we have time to do all this automation?

Finding enough time to test and automate is a perennial problem. I suggest beginning by automating the areas that are the most important to your application. Once that automation is up and running, you will be able to use the time saved on those tests to write more automation. When you have a complete automated test suite, you will have freed up valuable time for manual exploratory testing.

What about maintaining various different kinds of test automation suites?

Test automation should be treated as any other development project.  When work needs to be done on an automated test suite, a story should be created and added to the backlog.  Testers can add the story to a sprint during times when they are waiting for new features to test. If the testers are too busy to take on the story, one of the developers can work on it.  Maintaining a robust automated test suite should be the goal of the entire team.

Designing A Test Strategy With The Automation Test Wheel

When using the Automation Test Wheel to create an automation strategy, it’s helpful to move through these four questions in order:

What should we test?  

Looking at each section of the wheel, think about what might go wrong in your application in that area, and identify areas of functionality that you would like to verify.

How should we test it?

We know that it is helpful to test as close to the code as possible. When looking at your list of areas to be tested, think about the most efficient way to verify each one. For example, validation rules for a First Name field can be tested at the unit level; a POST request that adds a new contact can be tested at the API level; and a Save button on a web form might need a UI test. The sketch below contrasts the first two levels.
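
This sketch checks the same hypothetical field rule at two levels: a validator function tested directly at the unit level, and the same rule verified through a hypothetical POST endpoint at the API level.

```python
import requests

# Unit level: test a hypothetical validation rule directly, with no server.
def is_valid_first_name(value):
    return 0 < len(value) <= 50

def test_first_name_over_fifty_characters_is_invalid():
    assert not is_valid_first_name("x" * 51)

# API level: verify the same rule through a hypothetical POST endpoint.
def test_post_contact_with_long_first_name_returns_400():
    payload = {"first_name": "x" * 51, "last_name": "Lovelace"}
    response = requests.post("https://api.example.com/contacts",
                             json=payload, timeout=10)
    assert response.status_code == 400
```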

Note that the sections of the Automation Test Wheel do not all have to use the same tools. You may decide to do some of your services tests directly in the code, and other services tests using an API test tool like Postman.

Also, your test sections do not have to be discrete; that is, each section doesn’t need to have a separate suite of tests. If you have a UI test suite, you can use it for your visual and accessibility testing as well as your UI testing.

When should we test it?  

You may want to set up some of your tests to run with every build; this is a great place for Unit and Component tests. If the developer breaks some key functionality of the code, tests here will provide fast feedback.  

You will probably want to set up some tests to run with deployments of your software.  How often software is deployed varies by company; you and your team can choose the optimal frequency to run your automated tests.  This is a great place for smoke testing, which verifies that the most important requests and user workflows are operating as expected before the deployment is completed. If the tests fail, the deployment can be rolled back or halted.
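
One way to carve out a deployment-time smoke suite is with pytest markers; the marker name and login endpoint in this sketch are hypothetical.

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

# Register the "smoke" marker in pytest.ini to avoid warnings:
#   [pytest]
#   markers = smoke: critical-path checks run at deployment time
@pytest.mark.smoke
def test_login_endpoint_is_up():
    response = requests.post(f"{BASE_URL}/login",
                             json={"username": "smoke", "password": "test"},
                             timeout=10)
    assert response.status_code == 200
```

The deployment pipeline could then run only the tests marked smoke (for example, with "pytest -m smoke") and halt or roll back the deployment if any of them fail.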

Nightly or off-peak tests are a great place to do more extensive testing, such as UI testing. Tests that are scheduled to run during off-peak hours won’t disrupt daily work, even if they take a long time to run. UI tests can check older areas of the application that you may not be testing frequently.

Finally, there may be health checks that you would like to run every hour, or even several times an hour.  Examples of a health check could include a call to a server or a check that response times are within acceptable limits.  You can set up alerts so that you are instantly notified of a problem, such as a server failing to respond.
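
A minimal sketch of such a health check, assuming a hypothetical health endpoint and an example two-second response budget; a scheduler or monitoring tool would run it on the desired interval.

```python
import requests

HEALTH_URL = "https://api.example.com/health"  # hypothetical endpoint

def check_server_health():
    """Return True if the server responds successfully and quickly."""
    try:
        response = requests.get(HEALTH_URL, timeout=5)
        return (response.status_code == 200
                and response.elapsed.total_seconds() < 2.0)
    except requests.RequestException:
        return False

if __name__ == "__main__":
    # A scheduler (cron or a monitoring tool) could run this every hour
    # and trigger an alert whenever it reports a failure.
    print("healthy" if check_server_health() else "ALERT: health check failed")
```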

Where should we test it?  

Most companies have an environment specifically for testing. This is a great place to do extensive testing, such as nightly or off-peak regression runs. If someone has recently deployed buggy code to the environment, these regression runs could catch the bug. Additionally, a test environment is often where you have the most control over your test data. However, this also means that other testers might change your data, so consider a good test data management strategy. This article describes five different strategies for managing your automated test data.

Some companies also have an environment that mirrors production. This can be a great place to do performance testing because the response times should be similar to those of the production environment. But this environment can be difficult to maintain, so it’s a good idea to only run those tests that really need the feedback that comes from running in a production-like environment.  Also, there are other factors that might make the environment less of a mirror than you would expect, such as the number of servers used or the presence or absence of auto-scaling.

Finally, there should always be some tests that run in the production environment. These should be tests that don't significantly impact the back-end data of the environment. A perfect example of how production tests can have an unforeseen impact is the story of a college that had a site where students could purchase academic papers for use in their research. An obscure paper wound up being reprinted because it was purchased so often; it turned out that it was being purchased daily by an automated test, not by students who wanted to read it! Another limitation of testing in production is that you may not have the same level of control over the environment that you would have in a test environment; for example, you may lack the ability to change or delete your test data.

Try it For Yourself!

If the Automation Test Wheel sounds like it might be helpful in crafting test strategies for your organization, why not give it a try? Consider your application in terms of all eight areas of the wheel. Do you have good test coverage in each section? If not, focus on those areas where you are missing good coverage and consider what kinds of tests you could add.  

If you are missing visual tests in your automation strategy, you can take a look at your site and consider which elements on the page are important in terms of their appearance. For example, an e-commerce site might want to verify that a product image is appearing when a shopper searches for that product.

After you have identified your ideal tests, think about how you should be testing. Are there any tests that can be moved closer to the code? If you have a UI test that fills out a form to add a new user, perhaps that test could be replaced with an API POST request. If you have an API test that validates the character limit for a new record, perhaps that test could be replaced by a unit test. As Ham Vocke mentions in his excellent article on The Practical Test Pyramid, if a high-level test finds an issue that no lower-level test catches, that probably means a test should be written at a lower level.

Next, think about when you should be running your tests. 

Q: Are there tests that you are running with every deployment that are slowing down the feedback process? 

A: Perhaps they should be moved to a nightly or off-peak suite.  

Q: Are there tests missing from your deployment process?  

A: You may be able to add more tests in this area, reducing the amount of manual testing needed during deployment.

Finally, think about where you are running your tests.  

Q: Could you get more efficient and accurate feedback if you were running your tests in a different environment?  

A: Running a nightly suite in your test environment might catch bugs long before they make it to production. Conversely, if you find you are missing bugs in production because the production environment differs from the test environment, you could move some simple checks to the production environment.

In the end, the Automation Test Wheel and the Test Automation Pyramid are simply mental models to help you think about what to test and how to test it. They do not represent hard and fast rules, and strategies from both models can be incorporated into your thinking. I’d love to hear your experiences with the Automation Test Wheel on The Club. Please let me know if you found the concept helpful in your test planning, and if you have any other ideas for how it can be implemented.

Author Bio

Kristin Jackvony discovered her passion for software testing after working as a music educator for nearly two decades. She has been a QA engineer, manager, and lead for the last ten years and is currently working as a QA Lead at Paylocity. Her weekly blog, Think Like a Tester, helps software testers focus on the fundamentals of testing.  You can also find Kristin on Twitter.
