More than just ‘manual testing’: Recognising the skills of software testers

Discover why the term ‘manual testing’ has limitations and negative impacts on the testing craft and learn to embrace more modern terminology

Why I'm writing this

If you have been working as a software tester or in software development for any length of time, you will probably have come across the term ‘manual testing’. It’s a phrase that can ignite passionate responses. 

And this isn't just because of differing opinions, either. The term often misrepresents the essence of testing. At its worst, the term 'manual testing' can create a damaging divide, undervaluing human contributions while over-glorifying automation. At best, it diminishes thoughtful test activities. 

In this article, I want to discuss why this term might be harmful to the craft of software testing. And I want to suggest how we can shift our focus to a more inclusive and accurate understanding of what all testers truly bring to the table. Who knows if we will ever see the term ‘manual testing’ go away, but we can and must try. 

What does 'manual testing' mean?

Before I continue, let me run some quick questions by you. You don’t have to answer now, but bear them in mind when reading the rest of this article. 

  • Does a musician play music, or do they ‘manually play music’? 
  • Could you ever imagine yourself saying ‘manual’ to describe the work of another profession like painter, actor, sculptor, doctor, scientist, chef, philosopher, designer, innovator? 
  • When humans move, do they manually walk, talk, breathe or think? 
  • Have you ever heard the phrases: manual accessibility testing, manual security testing or manual usability testing? 

The phrase 'manual testing' is often used to describe testing activities performed by humans without the aid of automated scripts or tools. On the surface, it seems harmless enough: a simple way to distinguish between human-driven testing and tool-based testing. 

Adding ‘manual’ to describe testers' work became common after the rise of a credible alternative: automation. As people talked about automation testing, the term 'manual testing' crept in. It should not have had a long shelf life. However, trends along the lines of ‘testing is dead because…’, which started with automation and continue today with AI supposedly replacing testers, have kept this limiting and inaccurate label alive. 

Testing is not simply about how tests are executed (manually or automated). It’s a multifaceted process that requires exploration, critical thinking, and creativity. These activities cannot be reduced to a binary measure of 'manual' versus 'automation.' Worse still, the term 'manual tester' can inadvertently imply 'less valuable' or 'tech-lite,' leading to the perception that these testers simply execute the predefined steps of a script without needing deeper analytical or investigative skills. This misconception does a disservice not only to the profession, but also to the brilliant testers who happen not to enjoy creating automation and, more importantly, to the quality of software itself.

The problems with the 'manual tester' stereotype

Undermining the craft

Labelling testers as 'manual' reduces the role to button-clicking and step-following, ignoring the depth of expertise required to uncover edge cases, understand complex systems, and empathise with end users. It trivialises the intellectual rigour and depth of thought that testers bring to identifying risks and improving software quality. This in turn makes outstanding exploratory testers, including those who explore deep into the heart of software, seem less valuable than a bunch of unit tests. 

Creating a false hierarchy

The rise of automation has brought undeniable benefits to testing, but it has also created an unintended hierarchy whose benefits to software quality are questionable at best. 

'Automators' are often seen as more technical and, by extension, more valuable. This overlooks the fact that automation is a tool for testing, not a replacement for it. Designing effective automated tests requires a deep understanding of the testing process. So it is not about testers versus automators, since automators are, or should be, testers first and foremost. 

Different types of testing add value and contribute to testing and quality in many varied ways. When organisations value one type of testing over another, they often do so in a 'penny-wise, pound-foolish' way. They might undervalue the exploratory tester who, through deep thought, uncovers several high-priority bugs that, if in production, could bring the system down. (Think high-profile banking apps.) They could undervalue the accessibility tester who, by helping improve ease of use for people using only keyboards or screen readers, increases sales by over 10 percent. Yes, automation gives us stability and confidence, but it is only one type of contribution to overall quality and value. 

Misaligned expectations

By focusing on the method rather than the outcome, the term 'manual' shifts the conversation away from the purpose of testing: to uncover information about the product. This can lead to organisations undervaluing exploratory testing, where human intuition and adaptability are irreplaceable.

Barrier to growth

For testers themselves, being pigeonholed as a 'manual tester' can limit career opportunities and professional growth. It perpetuates the myth that testers need to 'move to automation' to advance, rather than embracing and honing the broad spectrum of skills that testing requires. It has created a culture where it's easy (but mistaken) to believe that getting a job as an SDET (software development engineer in test) or automation tester elevates you above a quality assurance specialist or test engineer. Depending on the roles themselves, the skills required can vary greatly, even between positions at the same level. 

Little or no interest in automation can prevent testers from applying for jobs where ‘some automation’ is tacked on as an extra skills ‘grab’ by employers. Some job descriptions these days appear to require three or four specialities rolled into one person! Being labelled ‘manual’ might mean people are overlooked when promotions become available. Being perceived as 'less than' can follow you through your career, and that is not fair to anyone. 

Why the term matters

Language shapes perception. By continuing to use terms like 'manual testing,' we inadvertently reinforce the idea that testing is a divided discipline, where human skills are somehow lesser than automated processes. This is not only inaccurate but harmful to the evolution of the craft.

Instead, we need to recognise testing as a cohesive whole. Automation is a powerful tool within the tester’s toolbox, but it is not the craft itself. Testing—in all its forms—is about learning, questioning, and providing insights. It’s about uncovering risks and ensuring that the software we deliver meets the needs of its users.

A new perspective: Collaborative testing

Rather than framing testing as 'manual versus automation,' we should embrace a collaborative approach. Automation can enhance testing by handling repetitive tasks, freeing testers to focus on more complex and exploratory activities. Human and machine working together is not a competition; it’s a partnership.

When we shift our language to reflect this mindset, we also shift our culture. Terms like 'exploratory testing,' 'investigative testing,' or simply 'human-driven testing' better capture the value that testers bring, emphasising their analytical and creative contributions. Or simplest of all, just say 'testing' if it isn’t automated. 

Beyond ‘manual testing’: What else can we say? 

I’ve yet to see a sentence using the term 'manual testing' whose meaning fundamentally changes, or becomes less understandable, if you simply remove the word 'manual' or use a more precise description. Let’s try a couple. 

Here's a typical job description for a position that is not centered on test automation. What happens if you remove 'manual' from the title? Is any meaning lost?

"We are hiring a Manual QA Engineer to join our team!  

Location: Anywhere on Mars! Enjoy the flexibility of remote work! 

What You'll Do: 

  • Design and execute functional, regression, and exploratory test cases. 
  • Validate web, mobile, and Smart TV applications to ensure an outstanding user experience. 
  • Report and track defects, collaborating closely with developers and product teams. 
  • Perform API testing and analyze logs to identify backend issues. 
  • Ensure cross-browser and cross-device compatibility. 
  • Contribute to continuous improvements in QA processes and best practices."

Next, try reading the article 'Manual testing for beginners: A comprehensive guide' and mentally replacing the phrase 'manual testing' with 'end-user testing.' 

Do we lose any clarity simply by dropping the word 'manual' or using a more precise term that suits the context? Not really, but feel free to let me know if you disagree. So if you catch yourself writing the word 'manual', just stop; you can usually carry on without it. 

I’ve seen some folks recommend using ‘exploratory testing or tester’ instead of manual. I can see the attraction and I have used it in the right context. But not all human-centric testing is exploratory in my opinion. And I know some might say even if we are following a script, our brains should still be switched on. If we see something not quite related to the script we should absolutely take note. But are we completely convinced that happens every time? 

Alan Julien stimulated a great conversation on LinkedIn and through that came some interesting suggestions such as: 

  • Investigative testing 
  • Analytical exploration 
  • Strategic validation 
  • Cognitive testing 

What other replacements can we suggest? 

To sum up

By moving away from divisive language like 'manual testing,' we can foster a more inclusive and accurate view of the craft. Let’s celebrate the skills, knowledge, and curiosity that testers bring to user-focused and automated testing alike, and focus on what truly matters: delivering great software and enhancing value for the business and its users. 

I firmly believe it would strengthen the craft if we could all: 

  • Drop the use of ‘manual’ to describe testers. If you want someone who doesn’t do automation, put ‘exploratory testing’ in your desired skills. 
  • Challenge the term in articles and job descriptions if you see it. 
  • Educate colleagues and stakeholders about the value of all human-driven testing in all its forms. 
  • Focus on skills rather than titles. Testers can be great at many things. 

For more information

Ady Stokes
He / Him
Freelance IT and Accessibility Consultant
Freelance IT consultant and accessibility advocate. I curate the Essentials Certificate STEC, and co-run the Ministry of Testing Leeds. MoT Ambassador. I teach, coach, and give training.