Ady Stokes
Freelance Consultant
He / Him
I am Open to Write, Teach, Speak, Podcasting, Meet at MoTaCon 2026, Review Conference Proposals

STEC and SQEC Certified. MoT Ambassador, writer, speaker, accessibility advocate. Consulting, Leeds Chapter Lead. MoT Certs curator. Testing wisdom, friendly, songs and poems. Great minds think differently

Chapter Lead
Ambassador

Achievements

Career Champion
Club Explorer
Bio Builder
Avid Reader
TestBash Trailblazer
Article Maven
Testing Scholar
MoT Community Certificate
MoT Software Testing Essentials Certificate
Scholarship Hero
Insights Spotter Bronze
TestBash Speaker
99 Second Speaker
The Testing Planet Contributor
Chapter Lead
MoT Streak
Unlimited Member
In the Loop
MoT Ambassador 2025
MoT Inked
404 Talk (Not) Found
Bug Finder
Collection Curator
Glossary Contributor
Meme Maker
Photo Historian
TestBash Brighton 2025 Attendee
TestBash Brighton 2024 Attendee
TestBash Teacher
Cert Shaper
Course creator
Author Debut
A tester's role in continuous quality
Prompting for testers
Improving your testing through operability
Cognitive biases in software testing
A software tester's guide to Chrome DevTools
Introduction to software development and testing
Introduction to modern testing
Introduction to accessibility testing
Bug reporting 101
Coding for non-coders
The building blocks of the internet
Introduction to JavaScript
Advanced prompting for testers
99 and Counting
TWiQ Host
Chapter Event Speaker
Pride Supporter
Meme Machine
Inclusive Companion
Social Connector
Open to Opportunities
Found at 404
Picture Perfect
Story Sharer
Neurodiversity Matters
Everyday security testing: A practical guide to getting started
Quality coaching essentials
Kind Click
Supportive Clicker
Encouragement Giver
Encouragement Champion
Goal Setter
Insights Taster
MoT Ambassador 2026
Chapter Discovery
Call for Insights
Moment Maker
Moment Sharer
Moment Documenter
Chapter Event Host

Certificates

MoT Software Quality Engineering Certificate
Awarded for: Passing the exam with a score of 100%
MoT Software Testing Essentials Certificate
Awarded for: Passing the exam with a score of 100%

Activity

Ady Stokes earned: What is software testing?

Ady Stokes earned: STEC and SQEC certified baby!!!! woooooow

Ady Stokes commented: "Huge bug congrats Adam"

Ady Stokes thanked contributors on: It's official, I'm competent!
It took me a while to get there but I finally finished STEC. Nice. Even as someone who has worked in software quality for almost 10 years, there were plenty of great ideas and practical suggestions ...

Ady Stokes earned: It's official, I'm competent!

Contributions

STEC and SQEC certified baby!!!! woooooow
Very happy to be both STEC (Software Testing Essentials Certificate) and SQEC (Software Quality Engineering Certificate) certified ;-) 
100 terms added to the Glossary
Grey box testing sounds rather innocuous, but it has a special place. It's the 100th term I've added to the glossary. Help create the largest testing terminology repository in the world for the MoT...
Grey box testing
Grey box testing is a method where the tester has partial knowledge of the application's internal structure. It is the middle ground between black box and white box testing. You might have access to the database schema or the API documentation while you test the user interface. This allows you to write better test cases because you understand the underlying logic.

It is particularly useful for integration testing where you want to see how data flows between different components. During refinement, you might use your knowledge of the system architecture to identify specific risks. By looking at the acceptance criteria and the technical design, you can ensure that the tests cover both the user journey and data integrity.

It helps to find bugs that a pure black box test would miss, such as a record not being updated correctly in the background or an API returning more data than it should. It is a smart way to test because it combines the user perspective with technical insight. You aren't just clicking buttons. You are verifying that the entire system is behaving as it should.
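A minimal sketch of the idea in Python, using a hypothetical `register_user` function and an in-memory SQLite database (all names here are illustrative, not from any real system): the test exercises the public behaviour like a black box test, then uses knowledge of the schema to verify the background record.

```python
import sqlite3

# Hypothetical "application" code: registers a user and returns a status.
def register_user(conn, username, email):
    conn.execute(
        "INSERT INTO users (username, email, active) VALUES (?, ?, 1)",
        (username, email),
    )
    conn.commit()
    return "registered"

# Grey-box check: exercise the public behaviour (black box) AND use
# knowledge of the schema to verify the stored record (white box).
def test_register_user_updates_database():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, email TEXT, active INTEGER)")

    status = register_user(conn, "ady", "ady@example.com")
    assert status == "registered"  # the user-facing result

    row = conn.execute(
        "SELECT email, active FROM users WHERE username = ?", ("ady",)
    ).fetchone()
    assert row == ("ady@example.com", 1)  # the background record is correct

test_register_user_updates_database()
```

A pure black box test would stop at the return value; the second assertion is the grey-box part, catching a record that was never written or written wrongly.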
Sad-path testing
Sad-path testing is a very general term to cover testing the unexpected. It involves verifying how an application behaves when it receives invalid data or encounters an error. It is the direct opposite of happy path testing, which only follows the intended user journey. When you perform sad-path testing, you are checking that the system handles exceptions as required. This often means looking at acceptance criteria to see how the system should respond to incorrect logins, timed-out sessions, or empty fields. It is a critical part of making a product robust and reliable for real users. You are essentially trying to find where the logic breaks down when a user does something unexpected. By identifying these scenarios during refinement, you can ensure the developers build in proper error messages and recovery steps. It helps to move beyond basic functionality and ensures the software can handle the messiness of the real world. As a general term, it can cover many areas, but it is a simple way to explain that testing is for more than confirming software does what it is supposed to do.
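As a quick sketch, here is a hypothetical login validator with one happy path and several sad paths; the function name and error messages are made up for illustration. The point is that each unexpected input gets a clear, graceful response rather than a crash.

```python
# A hypothetical login validator, just to illustrate sad paths;
# a real system would check credentials against a user store.
def validate_login(username, password):
    if not username or not password:
        return "error: missing credentials"
    if len(password) < 8:
        return "error: password too short"
    return "ok"

# Happy path: the intended user journey.
assert validate_login("ady", "correct-horse") == "ok"

# Sad paths: invalid or unexpected input must fail gracefully,
# with a clear message rather than an unhandled exception.
assert validate_login("", "correct-horse") == "error: missing credentials"
assert validate_login("ady", "") == "error: missing credentials"
assert validate_login("ady", "short") == "error: password too short"
```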
Backlog refinement
Backlog refinement is when the team gets together to review the work waiting in the queue. It is the time when a user story is reviewed to make sure the requirements are actually understood by everyone. You can spend this time adding acceptance criteria, so there is no confusion about what 'done' looks like. 3 Amigos sessions are similar but a more focused deep dive, with a smaller group which can include product owners or people not directly involved with the team. It can also be when you break down larger requirements into smaller, more manageable user stories or tasks. You are effectively checking that the user story or requirement is solid and good to go. This prevents the team from picking up a ticket in a sprint and then realising they do not know how to start, or how they will know when it is done.
LLM (Large Language Model)
An LLM is an AI system which is basically a massive, high-speed pattern-recognition engine trained on a mountain of text. It’s not "thinking" in the way you or I do. It tries to produce what it thinks the answer would look like based on everything it’s ever read and the instructions and context you give it. For us, it’s like having a pair programmer who has read every technical manual and Stack Overflow thread in existence, but sometimes forgets to check whether the advice is actually useful, practical, or just 'out there'.

Developers use LLMs primarily as a productivity multiplier. They are brilliant at the "boilerplate" stuff that usually bores us to tears. For example, you can ask an LLM to "Write a Python function to sort a list of dictionaries by a specific key," and it’ll spit out a working version in seconds. But it can also "hallucinate" (make things up). It might suggest a library that doesn't exist or use a deprecated method with security vulnerabilities. You still need to be the "adult in the room" to review the code. Rahul wrote a great piece on this subject, "Human in the loop vs AI in the loop."

For a Quality Engineer, or tester of any kind, an LLM can be a powerful tool for generating test ideas and data, as long as you don't let it drive the bus. For example, you could feed a set of requirements into an LLM and ask, "What are ten edge cases for this login feature?" It might suggest things you hadn't considered, like handling emojis in usernames or SQL injection attempts. But if you use it to generate your automated tests, it might create "brittle" code that looks right but fails the moment your UI changes. The biggest risk with LLMs, as with many things, is a loss of context. The model doesn't know your specific business logic, your security constraints, or your "unwritten" team rules, so be careful how you use it.

Use it to bounce ideas off, draft documentation, or create code snippets. It’s an assistant, not a replacement for the critical thinking and scepticism that a human brings to the party. Just because the LLM gave you an answer doesn't mean it is right, or that you can stop being a thought worker.
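For reference, the "sort a list of dictionaries by a specific key" boilerplate mentioned above is only a few lines; the data here is invented for illustration. The point stands that even something this small deserves a review, e.g. checking what happens when a record is missing the key.

```python
# The kind of boilerplate an LLM is good at producing on request:
# sort a list of dictionaries by a specific key.
def sort_dicts_by_key(records, key, default=0):
    # .get with a default avoids a KeyError if a record lacks the key;
    # a generated version might not think of that.
    return sorted(records, key=lambda r: r.get(key, default))

bugs = [
    {"id": 3, "severity": 2},
    {"id": 1, "severity": 5},
    {"id": 2, "severity": 1},
]
print(sort_dicts_by_key(bugs, "severity"))  # lowest severity first: ids 2, 3, 1
```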
JIT (Just in time) Testing
JIT Testing is the practice of testing only what has changed, when it changed, to keep the feedback loop as tight as possible. It means creating temporary disposable tests or selecting from an existing automation suite. More tools are using LLMs (large language models) to support this type of testing, and even security tools are using similar practices. It is important to note that this isn't really about replacing or doing less testing. It's a strategy that aims to reduce effort in some areas. Where continuous quality or testing practices are used, JIT testing aligns well. 
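One simple way to sketch "test only what changed" is a mapping from source files to the tests that cover them; everything here (file names, the mapping itself) is hypothetical, and real tools derive this from coverage data or an LLM rather than a hand-written dict.

```python
# A minimal sketch of JIT test selection: map source files to the
# tests that cover them, then run only the tests for what changed.
COVERAGE_MAP = {
    "app/login.py": ["tests/test_login.py"],
    "app/cart.py": ["tests/test_cart.py", "tests/test_checkout.py"],
    "app/search.py": ["tests/test_search.py"],
}

def select_tests(changed_files):
    """Return the de-duplicated, sorted set of tests covering the changes."""
    selected = set()
    for path in changed_files:
        selected.update(COVERAGE_MAP.get(path, []))
    return sorted(selected)

# Only the cart change triggers its two tests; the rest are skipped.
print(select_tests(["app/cart.py"]))
# ['tests/test_cart.py', 'tests/test_checkout.py']
```

The feedback loop stays tight because the suite shrinks to match the change, not because anything is tested less thoroughly overall.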
Y2K
Y2K simply means the 'year 2000' or 'year 2000 problem'. In the late 1990s, it was a massive wake-up call about the long-term cost of technical debt. To save on expensive memory, early developers stored years as just two digits (e.g., "99" instead of "1999"). It was a clever shortcut that worked, until time nearly ran out, and it didn't anymore.

The Problem: When the year 2000 was reached, computers would see "00" and assume it was 1900. This threatened to send global banking, utilities, and transport systems into a proper meltdown. There were claims that 'planes would drop from the sky' and 'nuclear weapons would launch'.

The Outcome: It wasn't really a "hoax" as some people thought. The reason the world didn't end is that thousands of teams spent years finding, fixing, and testing every scrap of code they could find. When we finally reached the year 2000, there was little real disruption. A couple of examples were a video store that tried to charge tens of thousands for a '100-year-old' return, some children registered as being born in 1900, and the US Naval Observatory showing a date of 19100 on its website for a while.

The Lesson: Quality isn't just about making things work today. It's about ensuring your "temporary" hacks don't become tomorrow's disasters.
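The bug itself fits in a couple of lines. This is a toy reconstruction of the two-digit shortcut, not code from any real affected system: subtracting two-digit years works fine right up until the century rolls over.

```python
# The classic two-digit-year shortcut: works for decades,
# then arithmetic goes backwards at the century rollover.
def years_between(start_yy, end_yy):
    return int(end_yy) - int(start_yy)

# Renting a video in "99" and returning it in "00":
assert years_between("98", "99") == 1     # fine for decades...
assert years_between("99", "00") == -99   # ...then the system thinks -99 years passed

# With full four-digit years, the bug disappears:
assert years_between("1999", "2000") == 1
```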
Wow! 2,500 Community Stars
Stunned to see I have reached 2,500 community stars from the MoTaverse. Thank you, everyone, for thanking me, completing my lessons, reading my articles and encouraging me every day to do more. 
Meme moment at the Leeds Chapter (with Alex Weightman and Hippo)
Alex talked about cloud outages and how they affected his company’s coffee machine, and one slide was meme-worthy.
Alex smashed it! (with Alex Weightman and Hippo)
Alex did a fantastic job of delivering his first public talk at the Leeds Chapter. Very little feedback was required. Watch for him at conferences soon.