So, What Is Software Testing?

by Claire Reckless

If you had to answer the question ‘What is software testing?’ what would you say? It’s something that is pretty difficult to compress into a couple of short sentences.

There are also a lot of misconceptions about what software testing is, and what testers do, even amongst testers themselves. Testing as a skill, and an industry, is constantly evolving. In this article, we’ll look at some of the things that software testing is, and isn’t.

What Software Testing Is

Investigation

To investigate is defined as ‘to observe or study by close examination and systematic inquiry’[1].  

The process of testing should be an investigation. We may not always know what the outcome will be, but it’s our job to uncover information which helps people make decisions. It is much more than comparing against a specification with an expected result. We need to think critically, ask difficult questions, pick up on risks, and notice those things which at first glance seem inconsequential, yet on closer examination are much more important and need investigating further.

Exploration

A list of requirements is never really complete: there will always be requirements which are not stated, which are assumed, or which have been omitted. Regardless of how comprehensive your requirements are, they will never be an exhaustive list. You won’t know everything the software will do up front. That’s where exploratory testing comes in.

Exploratory testing is defined as simultaneous learning, test design and execution [2].  The tester explores the application, discovering new information, learning, and finding new things to test as they go.  They could do this alone, or pair with another tester, or a developer perhaps.  

Software testing shouldn’t be perceived only as a task where the tester works through a list of pre-prepared tests or test cases giving a firm pass or fail result. If you have a user story, or a set of requirements, it is of course important to make sure what you are testing adheres to them; however, it can be helpful to reframe acceptance criteria as ‘rejection criteria’. When the acceptance criteria are not met, the product is not acceptable, but if they are met, that doesn’t mean the product has no issues.

Checking and verifying should be combined with exploration and investigation, asking questions of the product like ‘What happens if…’ that you may not know the answers to before you start, and that test cases written in advance may not cover.    

Mitigation

One of the reasons we test is to discover issues, risks, and other information about a software product, enabling action to be taken so that the end user is not adversely impacted by them. This action might be:

  • Fixing bugs

  • Re-assessing and changing the original requirements

  • Providing user assistance within the product

  • Creating user documentation

  • Communicating known issues to stakeholders

For software of any complexity, it will be impossible to remove every issue a user might come across; however, by testing we can seek to reduce the risk of them experiencing issues, or the severity of any issues they do experience.

Valuable

Software testing is a valuable activity in software development, but it is often misunderstood due to its unpredictable and creative nature.

Developers output code as a result of their day-to-day work, and analysts may output requirements or documentation, yet a tester’s output may sometimes be difficult to measure. Often, testers struggle to communicate their plans, progress and outcomes. This can make it tricky for those who do not know testing to understand what has been done, how it has been done, and why. As a result, some struggle to see the value of testers and testing. There are many companies out there who develop software with no tester involvement whatsoever.

The lack of countable things created by testers is one reason some people like to use test cases as a way of measuring: they are a tangible, countable output. The value of testing extends beyond test cases. The testing carried out during exploratory testing sessions may not necessarily produce a defined set of test cases; however, the tester often finds more interesting bugs by not following a scripted path.

This is part of the reason many people like to introduce metrics which involve counting the numbers of bugs logged, numbers of test cases written and executed, and various other ‘countable’ things. Some projects will try to use metrics to measure the quality of a product, as well as the developers and testers themselves. These measurements often focus on the wrong things, and can be misleading.  

Testing is valuable at all stages of the development lifecycle, not just when code has been written. Other things can be tested too:

  • Ideas

  • Requirements

  • Designs

  • Assumptions

  • Documentation

  • Infrastructure

  • Processes

It’s the tester’s job to ask questions, explore, and think critically about all of these things. Doing so could mean that something which would have turned into a bug later in the development process is caught much earlier.

Communication

A massive part of a tester’s job is communication. Testers provide information about the quality of a software product, so it’s important we communicate this information accurately to enable the right decisions to be made.  

Someone can start as a tester with few technical skills, but a real ability to communicate with others, and to be clear about what you are saying, is vital.

As testers, we need to make sure we use the correct words and phrasing so as not to be ambiguous, and to remove the risk of misunderstanding. What you mean to say isn’t always what you end up saying, and often assumptions are made, and incorrect actions taken, as a result of poor or insufficient communication.

We need to communicate regularly with people in different roles, at different levels of seniority, and with different levels of knowledge:

  • Developers - To ask questions and gain knowledge of the software product they have written. To enable us to understand the technical aspects, explain bugs we have found and how to reproduce them.  

  • Product Owners - To understand requirements, to question use cases and provide information about scenarios. To provide information to enable decisions to be made about product releases.  

  • Testers - If you work within a team of testers, it’s vital to be able to communicate with your peers, to discuss issues, and to make decisions. You might need to train up a new or junior team member, and it’s important you clearly explain any tasks they need to perform, and that you provide clear assistance if they are struggling.

  • Users / Customers - To ensure you understand their expectations correctly and to have clear knowledge of any issues they are having. If you are assisting with a problem, you need to be able to explain any troubleshooting or problem-solving procedures in a way they will understand.

  • Managers - To report what has been done and what is yet to be done. To inform them about risk and consequence, as well as timescales. If you are suggesting improvements, you need to be clear about your ideas and their impact.  

Written communication is equally as important as the spoken word. It’s easy to produce brilliantly written, extensive documentation which turns out to be unnecessary and which no one reads. We need to ensure we choose the right way to communicate which is most valuable to the recipient, the process, and the project.

Potentially Infinite

All testing is sampling. For every non-trivial product, there are an unimaginable number of parameters, each with a great number of possible values. How do you know you are testing the important ones? We can’t test everything.
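
To get a feel for the scale, here is a small back-of-the-envelope sketch. The feature and the parameter counts below are invented purely for illustration, but even a modest search form with a handful of independent inputs produces far more combinations than anyone could exercise one by one:

```python
# Illustrative only: invented parameter counts showing how quickly the
# input space of even a small feature grows.
parameters = {
    "search_term_length": 50,   # distinct lengths we might care about
    "category_filter": 12,
    "sort_order": 4,
    "results_per_page": 5,
    "logged_in_state": 2,
    "browser": 6,
}

total = 1
for values in parameters.values():
    total *= values

print(f"Distinct input combinations: {total:,}")  # 144,000
# ...and this ignores data state, timing, concurrency and environment,
# each of which multiplies the space further.
```

Whatever tests we run are a sample drawn from that space.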

It’s part of our job to make the decisions about what to test, understand the consequences of only testing those things, and be able to explain our decisions.

What Testing Is Not

Simple

Testing is often thought of as something anyone can do. This is true to some extent: anyone can explore a product, ask questions about it, run a step-by-step test case, or check something against a list of requirements. But it takes real skill to do these things well and in a systematic way.

A lot of us have been told to write test cases ‘so anyone can come in and run them’, and this could be what adds to the perception that testing is simple. We just write tests from the acceptance criteria, don’t we? Testers who perform exploratory, investigatory testing know this to be untrue.

Checking is not simple. Deciding where possible checks should be done and automated is far from a simple task. It could require an understanding of automation frameworks, knowing how to code, knowing how APIs work, and understanding tools like Selenium. That’s quite a lot of technology to understand. Additionally, we need to know when to automate and when automation isn’t the best idea.
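
To make the distinction concrete, here is roughly what a single automated UI check might look like in Python with Selenium. This is a minimal sketch: the URL and element IDs are hypothetical, and a real check would normally live inside a test framework. The point is that the script verifies only what it has been told to verify.

```python
# A minimal sketch of an automated UI check using Selenium (Python).
# The URL and element IDs are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")

    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("correct-password")
    driver.find_element(By.ID, "submit").click()

    # The check asserts the one thing it was programmed to assert.
    # A human running the same steps might also notice slow rendering,
    # odd layout, or a misleading message; this script will not.
    assert driver.find_element(By.ID, "welcome-banner").is_displayed()
finally:
    driver.quit()
```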

Neither is exploration simple: it isn’t just ad hoc ‘playing around’ with the software to see what happens. It is a structured, technical activity. To explore an application at a deeper level could require knowing something about the architecture and the technologies used, as well as the psychological aspect of thinking like different types of user.

Automatable

“We don’t need manual testers anymore… we can automate all the testing!” We’ve all seen variations on this in Twitter discussions, on forums, and in articles. Testing as an exploratory, investigative activity cannot be replaced by automated checks. A computer cannot currently explore in the same way as a human being.

What we can automate are individual checks, but a computer and a human running those same checks will not really be performing exactly the same checks.  A person will pick up on other things while they are carrying out the procedure, will take notice of any feeling telling them something doesn’t seem right, and provide feedback beyond a pass or fail result.  A computer will only perform the exact checks which it has been programmed to perform.  Automated checks are extremely valuable as part of an overall test strategy, but at this point in time, cannot replace human testers. They ultimately do different things.  

Testers should use tools, including automated checks, to support the testing work they do. Custom tools might be created to assist with data creation, to automate repetitive actions, or to analyse test output. It’s about making the most effective use of the tools available to help you, not trying to get them to replace you.
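
As a small, hypothetical example of the kind of helper a tester might write, here is a sketch of a test data generator that produces varied user records, including a few deliberately awkward values, for use in an exploratory session. The field names and the awkward values are invented, not a prescribed approach.

```python
# Hypothetical helper: generate varied user records as CSV test data.
import csv
import random

# A few deliberately awkward names: empty, apostrophe, non-Latin, very long.
AWKWARD_NAMES = ["", "O'Brien", "名前", "A" * 255]

def make_user(index: int) -> dict:
    """Return one user record, occasionally using an awkward name."""
    name = random.choice(AWKWARD_NAMES) if index % 5 == 0 else f"user{index}"
    return {"id": index, "name": name, "email": f"user{index}@example.test"}

def write_users(path: str, count: int) -> None:
    """Write `count` generated users to a CSV file for import into the app."""
    with open(path, "w", newline="", encoding="utf-8") as handle:
        writer = csv.DictWriter(handle, fieldnames=["id", "name", "email"])
        writer.writeheader()
        for i in range(count):
            writer.writerow(make_user(i))

if __name__ == "__main__":
    write_users("test_users.csv", 100)
```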

Increasing Quality

Testers do not, on the whole, perform actions which change quality directly. By performing a test, we are not affecting the underlying code, so the quality of the software remains the same. It is only by subsequent action by developers that the quality of the product may change in any way. We cannot test quality into a product.

Testing is not the only part of software development where quality should be taken into account.  This should be done at all stages of the lifecycle, and is the responsibility of all members of the team. Testers can use their specific skillset to collaborate with others, at all stages, but it is not our job alone. It is a whole team exercise.

Nor can we assume, after testing or after subsequent changes to the code by development, that quality has increased as a result. As we cannot test everything, there may be scenarios we have not tried where issues occur. The quality could be worse because of changes or unknowns; we just don’t know until something happens to expose an issue. Also, even where testers provide information indicating that the product is of sufficient quality for release, the end user’s perception could be that the product is of poor quality, perhaps due to incorrect requirements. It depends on your point of view.

Quality is defined as ‘value to some person to whom it matters’. It’s generally not easily measurable, and therefore to say definitively that testing, at any stage, has contributed to increased quality is extremely difficult, if not impossible.

Fixed, Unimaginative, and Best Confined to Strict Rules

Very often, the most interesting bugs are uncovered during exploratory testing sessions. Running the same set of tests over and over again is unlikely to uncover much information which is new or interesting, and, let’s face it, if you have to do it manually, it can be pretty dull.

There are no best practices for testing that can be applied absolutely everywhere. You will need to find out what works in your context and meets your industry’s requirements.

Thinking of new and creative ways to test is a great part of a tester’s role. Being able to experiment and find the best tools for the job, learn new skills and new technologies, and do what suits the needs of the project helps us to keep learning and keep our skills fresh.

Vital for Project Success

A project can be successful without testers, and many are. However, even where there are no testers, testing is still being performed by someone at some stage of the development process. Developers will test their own code, and stakeholders will question requirements. The end user might test the product before they roll it out. People can test without always realising they are doing it.

Never Finished

‘Never finished’ means you cannot possibly test every single thing there is to test for a given application. To test every combination, every action a user might take, every environmental variation, every possible data value, every path through the code, or every variable is unrealistic. In this sense, you can never ‘finish’ testing. There will always be a need to accept things which will be left untested. The majority of projects will be subject to time, budget, and other staffing or resource constraints, and testers need to work within these boundaries while performing the most effective testing they can.

Part of the skill of being a tester is making the decisions on what to test, understanding the implications of not testing other things, and understanding the risks associated with excluding some or all of a low-risk item from testing.

Ultimately, testing is ‘finished’ when management has enough information to enable them to make the decision whether or not to release the product.

It’s So Much More

These are just some of the things that software testing is. This article could be significantly longer! There is no ‘one’ definition and it’s pretty difficult to squeeze it into a short sentence which adequately conveys what it is testers do. An internet search for ‘What is Software Testing’ returns a number of definitions which indicate testing is executing software with the aim of finding bugs, but, as we’ve seen, it’s so much more.

References

  1. Explaining Testing To Anybody - James Bach

  2. Software Testing Club - So What Is Testing?

  3. The Impossibility of Complete Testing - Cem Kaner

  4. Exploratory Testing - James Bach

  5. Acceptance Tests: Let’s Change the Title Too - Michael Bolton

  6. The Rapid Software Testing Guide to What You Meant To Say - Michael Bolton

  7. Exploratory Testing Explained - James Bach

  8. Explore It - Elisabeth Hendrickson

About Claire Reckless

Claire Reckless is a tester at Avecto, working on endpoint security software. Her passion is in helping people learn how to become better testers. Her domain expertise also includes financial and ERP software. Claire lives in Manchester with her husband Rob, their cat, Max, and Ted the dog. She also enjoys running as time allows. You can find Claire on Twitter.
