Ady Stokes
Freelance Consultant
He / Him
I am open to: Write, Teach, Speak, Meet at MoTaCon 2026, Podcasting
STEC Certified. MoT Ambassador, speaker, and accessibility advocate. Consulting, training, Leeds meetup host. MoT Certs curator and contributor. Testing wisdom, friendly, jokes, parody songs and poems

Achievements

Career Champion
Club Explorer
Bio Builder
Avid Reader
TestBash Trailblazer
Article Maven
Testing Scholar
MoT Community Certificate
MoT Software Testing Essentials Certificate
Scholarship Hero
Trend Spotter Bronze
TestBash Speaker
99 Second Speaker
The Testing Planet Contributor
Meetup Organiser
MoT Streak
Unlimited Member
In the Loop
MoT Ambassador
MoT Inked
404 Talk (Not) Found
Bug Finder
Collection Curator
Glossary Contributor
Meme Maker
Photo Historian
TestBash Brighton 2025 Attendee
TestBash Brighton 2024 Attendee
TestBash Teacher
Cert Shaper
Course creator
Author Debut
A tester's role in continuous quality
Prompting for testers
Cognitive biases in software testing
Introduction to software development and testing
Introduction to modern testing
Introduction to accessibility testing
Bug reporting 101
Introduction to JavaScript
Advanced prompting for testers
99 and Counting
Meetup Contributor
Pride Supporter
Meme Machine
Inclusive Companion
Social Connector
Open to Opportunities
Found at 404
Picture Perfect
Story Sharer

Certificates

MoT Software Testing Essentials Certificate
Awarded for: Passing the exam with a score of 100%
MoT Community Certificate
Awarded for: Achieving 5 or more Community Star badges

Activity

Contributions

Existential crisis: What is quality engineering? - Ep 112
With Scott Kenyon, Ben Dowen, and Judy Mosley
Unpack the blurred meaning of quality engineering through real stories and honest reflection
Post-Incident Analysis
Post-incident analysis, or post-mortem, looks at what happened after something has gone wrong, mainly in production, but sometimes in testing. It could be due to an outage, a bug in production, configuration issues, or any number of bugs or incidents. It should not be about blame or finger-pointing. It’s about understanding, learning, and adapting.

When a team or, to start, an individual conducts a post-incident session, they gather the facts of what occurred, when, and how the issue was discovered, and then explore why it happened. They focus not just on the surface error but on the deeper root causes behind it. Good post-incident work digs past symptoms to find the root cause. It is better done as a team effort, and the best sessions are open, honest, and psychologically safe so everyone can share insights freely.

The outcome isn’t just a report. It should be an action plan with practical steps to reduce the chance of recurrence and to strengthen the software, the processes, or both. That might include improving monitoring, changing code review practices, refining tests, or adjusting how incidents are triaged and communicated. Done well, post-incident analysis turns failure into fuel for improvement. It’s a core part of Continuous Quality to use what went wrong and make what comes next better.
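To make that concrete, here is a minimal Python sketch of how a post-incident record and its action plan might be captured as structured data. The field names and the incident details are made up for illustration, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionItem:
    description: str   # practical step to reduce the chance of recurrence
    owner: str         # who will follow it up
    done: bool = False

@dataclass
class PostIncidentRecord:
    summary: str             # what happened, in one line
    detected: str            # when and how the issue was discovered
    impact: str              # who or what was affected
    root_causes: List[str]   # deeper causes, not just the surface error
    actions: List[ActionItem] = field(default_factory=list)

# Hypothetical example record
record = PostIncidentRecord(
    summary="Checkout outage for 40 minutes after a config change",
    detected="Monitoring alert at 14:05, confirmed by support tickets",
    impact="Customers could not complete purchases",
    root_causes=["Config change not covered by review", "No canary rollout"],
    actions=[
        ActionItem("Add config changes to the code review checklist", "team lead"),
        ActionItem("Introduce canary deploys for config", "platform team"),
    ],
)
print(record.summary)
```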
Test Bench
A test bench is a controlled setup used to check how software or hardware behaves without needing the full system it will eventually run on. It provides an environment where components can be tested, monitored, and adjusted safely before being integrated into the complete product.

In the automotive industry, for example, a test bench might replicate the hardware parts of a car and the communication between them. Engineers can connect the engine control unit, sensors, and actuators to simulate real driving conditions. This allows the software to be tested for performance, timing, and safety before it ever reaches a real vehicle.

A test bench helps isolate issues early and ensures that each part works correctly in a realistic but repeatable environment. It is an essential step between isolated component testing and full system testing, giving confidence that everything will connect and perform as intended when it moves into the real world. Credit to Andres Gomez Ruiz for introducing this to me.
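As a rough illustration, a software-only test bench can be as simple as a harness that feeds simulated sensor values into the control logic and checks its outputs against safety limits. The sketch below assumes a toy throttle_controller function and made-up limits; it is not taken from any real automotive system.

```python
def throttle_controller(rpm: int, pedal_position: float) -> float:
    """Toy control logic under test: returns a throttle value between 0 and 1."""
    if rpm > 6000:            # protect the engine at high revs
        return 0.0
    return min(1.0, pedal_position * 1.2)

def run_bench(scenarios):
    """Drive the controller with simulated inputs and check safety limits."""
    for rpm, pedal, allowed_max in scenarios:
        output = throttle_controller(rpm, pedal)
        assert 0.0 <= output <= allowed_max, f"Unsafe output {output} at rpm={rpm}"

# Repeatable, simulated driving conditions: (rpm, pedal position, allowed max output)
run_bench([
    (1000, 0.5, 1.0),   # normal driving
    (6500, 1.0, 0.0),   # over-rev: throttle must be cut
])
print("Bench scenarios passed")
```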
Quality
Quality, or in our case 'software quality', is one of those words that everyone uses and rarely questions in the moment, but it can mean so many different things.

Suman Bala gives a great example of a throne and a school chair. Their costs are exponentially different. Their materials, complexity, and design are miles apart. Yet, for their purpose and in their context, their quality can be argued as comparable. The soft and plush throne, with its many adornments, used only a few times, can be perfect for the context in which it is used. The hard and basic metal and plastic chair, used tens of thousands of times, can also be perfect in a school environment where it needs to be cheap, hard-wearing, and even stackable. Vastly different, yet equally valuable.

In software, quality can depend on such a variety of factors that having a single definition can be more unhelpful than not. A developer might see quality as clean code with no 'smells'. A tester might see it as a multitude of things, from alignment with requirements or acceptance criteria to accessibility and usability. A user might simply want something that works.

Gerald Weinberg described quality as “value to someone”, which has been augmented over time with additions like 'who matters' and 'at some time'. Even with context, quality is not always a fixed measure but something shaped by people, context, and purpose. What matters to one group may not matter to another, and that is why software quality always needs a conversation.

Stuart Crocker, in a LinkedIn post in 2024, said: "This is now my goto definition of what software testing is, 'The exploration and discovery of intended and unintended behaviours in the software we build and their impact on product value—for both customers and the business.' With my definition of quality being 'The absence of unnecessary friction', these two definitions, working together, help me make quick, useful decisions on what to test and, from that testing, what is necessary to improve." That view captures how feelings can be a primary indicator of quality (to someone).

Dan Ashby talks about quality in his 2019 blog post on Philip Crosby's four absolutes of quality in a software context. He takes those and suggests his own adaptations, the first of which is: "Quality is defined as 'correctness' and 'goodness' in relation to stakeholder value." This is adapted from "Quality is defined as conformance to requirements", and as Crosby was a quality leader in manufacturing, conformance to requirements is fundamental. For software, we need more.

In the end, quality in software is not something I can tell you. It is context-dependent, shaped by many things like purpose, constraints, time, and the people involved. There is no universal checklist. The key is understanding what quality means in your situation, how it will be measured, and who it truly matters to. Some are helped by customer feedback, some by monitoring and observing software performance. Others look to quantify it in various ways. They ask, 'How long does it take to learn and understand a new feature?' Or they determine, 'What scales or meters can we employ to measure a value?' However you describe or measure software quality will depend on the factors you have. Have fun doing it.
If you build it accessible, they will come
Just a fact: if it isn't accessible, you are excluding 15% or more of potential customers or users
Accessibility is fundamental
For many companies, digital accessibility feels optional. But it is a human right and fundamental to usable software. And you can't argue with Batman
Data Lake
In the simplest terms, a data lake is a big collection of raw data, useful for future analysis and a powerful tool for testers to investigate, reproduce bugs, check data quality, and assess system performance.

A data lake is a vast storage area that holds a huge amount of raw, unprocessed data from many different sources. Think of it like a natural lake where various rivers flow into it. These rivers bring all sorts of different things, like logs, images and raw data. A data lake in software collects all kinds of data without first cleaning or structuring it. This means it can hold structured data, such as numbers and text, as well as semi-structured data like logs, and even unstructured data, including images and videos.

The main idea behind a data lake is to store all the data first. You do not decide how you will use or analyse it until later, when you actually need it. This gives organisations a lot of flexibility for future analysis, research, and machine learning projects. It is different from a data warehouse, which typically stores data that has already been cleaned, transformed, and organised for a specific purpose.

For software testers, the data lake can be a goldmine of information. Using analytics platforms or business intelligence tools, you can dive into this pool of data to understand how information flows through your systems or assess data quality by identifying inconsistencies or unexpected values, amongst other things.
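For instance, a tester might scan a raw zone of a data lake for inconsistent or unexpected values with a short script. This Python sketch assumes a hypothetical lake path and hypothetical field names (order_id, total) purely for illustration.

```python
import json
from pathlib import Path

# Hypothetical path into the raw zone of a data lake
RAW_EVENTS = Path("lake/raw/checkout-events")

def check_event_quality(directory: Path):
    """Scan raw, unprocessed event files and flag inconsistent or unexpected values."""
    issues = []
    for file in directory.glob("*.jsonl"):
        for line_no, line in enumerate(file.read_text().splitlines(), start=1):
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                issues.append(f"{file.name}:{line_no} is not valid JSON")
                continue
            if "order_id" not in event:
                issues.append(f"{file.name}:{line_no} missing order_id")
            if event.get("total", 0) < 0:
                issues.append(f"{file.name}:{line_no} has a negative total")
    return issues

for problem in check_event_quality(RAW_EVENTS):
    print(problem)
```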
Chocolate Driven Development (CDD)
CDD, or chocolate-driven development, started in 1992 in Switzerland. The boost in dopamine and serotonin promoted focus, happiness, and other mood-boosting qualities. While all those working in this methodology were happy, unfortunately, no successful software development projects were actually started, as it wasn't a real computer. There are those, however, who persist with this way of working today and are very happy with the results.
Snapshot Testing
Snapshot testing is all about capturing the state of your software at a specific point in time so you can compare it later. Think of it like taking a photograph of something you want to remember. We use this technique to establish a baseline, a known good state, and then compare subsequent states against it. This helps us quickly spot unexpected changes, regressions or even visual bugs that might have crept into our system. It's a fantastic way to quickly analyse if something has gone awry without having to manually check every single detail each time.

For example, imagine you have a web page with a complex layout. You can take a snapshot of its visual appearance after all the elements have loaded correctly. This becomes your reference. Later, after a new feature has been added or some code has been refactored, you can take another snapshot of the same page. Automated tools can then compare these two images pixel by pixel. If there's any difference, even a tiny shift in a button's colour or position, the test will flag it up as a potential bug. This provides a clear indication that something has changed and requires further investigation, saving you a significant amount of effort in visually checking the page.
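Dedicated snapshot tools exist, but the core idea can be sketched in a few lines of Python: store a baseline the first time, then compare later output against it. The render_page function, snapshot folder, and values below are hypothetical stand-ins for whatever output you want to baseline.

```python
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")   # hypothetical location for stored baselines

def render_page() -> str:
    """Stand-in for the output under test, e.g. rendered HTML or serialised state."""
    return "<h1>Welcome</h1><button class='buy'>Buy now</button>"

def assert_matches_snapshot(name: str, current: str):
    """Compare current output to the stored baseline; create the baseline on first run."""
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    baseline_file = SNAPSHOT_DIR / f"{name}.snap"
    if not baseline_file.exists():
        baseline_file.write_text(current)   # first run establishes the known good state
        return
    baseline = baseline_file.read_text()
    assert current == baseline, f"Output differs from snapshot '{name}': investigate or update"

assert_matches_snapshot("home_page", render_page())
print("Matches the stored snapshot")
```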
System Testing
System testing is the process of checking a complete product to see if it works as intended. It can also be referred to as end-to-end testing because it examines the system from the user's perspective, encompassing the entire application.

For example, rather than just testing the login function in isolation, a system test would include creating an account, logging in, browsing, adding items to a basket, and completing a purchase.

It sits alongside integration testing, which checks that parts of the system work together correctly. Integration tests may confirm that the payment service communicates properly with the order service. System testing goes further by ensuring all connected parts work smoothly in a realistic scenario or user journey.

The aim is to give confidence that the system behaves as expected when everything is put together, much like test-driving a fully assembled car after you have already checked the engine, brakes, and lights separately.
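A system test for that journey might be scripted roughly like the Python sketch below. It assumes a hypothetical shop API with made-up endpoints and payloads, uses the third-party requests library, and would need a running system (and a pytest-style runner) to execute against.

```python
import requests

BASE_URL = "https://shop.example.test"   # hypothetical system under test

def test_purchase_journey():
    """Exercise one realistic user journey across the whole system, not a single component."""
    session = requests.Session()

    # Create an account and log in
    credentials = {"email": "ada@example.test", "password": "s3cret!"}
    session.post(f"{BASE_URL}/accounts", json=credentials)
    login = session.post(f"{BASE_URL}/login", json=credentials)
    assert login.status_code == 200

    # Browse, add an item to the basket, and complete a purchase
    products = session.get(f"{BASE_URL}/products").json()
    session.post(f"{BASE_URL}/basket", json={"product_id": products[0]["id"], "quantity": 1})
    order = session.post(f"{BASE_URL}/checkout", json={"payment": "test-card"})

    assert order.status_code == 201
    assert order.json()["status"] == "confirmed"
```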
Test Case
A test case is a set of clear steps and conditions designed to check if part of a system behaves in the way we expect. It typically includes the starting point, the actions to take, and the outcome we expect to occur.

For example, imagine you are testing an online shop. A test case could be “Successfully log in with valid credentials.” It would include the URL of the login screen, the valid credentials, the steps to take, and what to expect when successfully logged in, e.g. an indicator or a specific screen.

Test cases can be written down in detail or kept as lightweight notes, depending on the context and risk. They give you a repeatable way to explore the system, confirm expected results, find bugs, and share your understanding with others. They are also a way of making thinking visible, helping teams see what has been tested and where gaps might exist. Test cases are normally considered more formal than other formats, such as user stories, as they tend to detail all steps required for execution.
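The same login test case could also be captured as lightweight structured data. The field names, URL, and values in this Python sketch are illustrative only, not a required template.

```python
# A "successful login" test case expressed as structured data (hypothetical values).
login_test_case = {
    "id": "TC-001",
    "title": "Successfully log in with valid credentials",
    "preconditions": ["An account exists for ada@example.test"],
    "steps": [
        "Open https://shop.example.test/login",
        "Enter the valid email address and password",
        "Select 'Log in'",
    ],
    "expected_result": "The account dashboard is shown with the user's name in the header",
}

# Print the steps as a simple, repeatable checklist
for number, step in enumerate(login_test_case["steps"], start=1):
    print(f"Step {number}: {step}")
print(f"Expected: {login_test_case['expected_result']}")
```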
Lessons for software testers from three great Western philosophers
Apply philosophical thinking techniques from Socrates, Descartes, and Aristotle to improve questioning, scepticism, and observation in software testing.