Ady Stokes
Freelance Consultant
He / Him
I am Open to Write, Teach, Speak, Podcasting, Meet at MoTaCon 2026, Review Conference Proposals
STEC Certified. MoT Ambassador, writer, speaker, accessibility advocate. Consulting, Leeds Chapter Lead. MoT Certs curator. Testing wisdom, friendly, songs and poems. Great minds think differently
Achievements
Certificates
Awarded for:
Passing the exam with a score of 100%
Awarded for:
Achieving 5 or more Community Star badges
Activity
earned:
99 Second Talks – Day 1 – TestBash Brighton / MoTaCon 2025
earned:
The New Shift Left: Coding Earlier
earned:
You Climbed the Test Pyramid. Now Eat a Custard Slice.
thanked contributors on:
There's a cool lesson in the Software Quality Engineering Certificate (SQEC) which I look forward to folks checking out. It's in Module 9, Lesson 2 which has the brilliant Barry Ehigiator sharing h...
Contributions
An LLM is an AI system which is basically a massive, high-speed pattern-recognition engine trained on a mountain of text. It’s not "thinking" in the way you or I do. It tries to produce what it thinks the answer would look like based on everything it’s ever read and the instructions and context you give it. For us, it’s like having a pair programmer who has read every technical manual and Stack Overflow thread in existence, but sometimes forgets to check whether the advice is actually useful, practical, or just 'out there'.
Developers use LLMs primarily as a productivity multiplier. It’s brilliant at the "boilerplate" stuff that usually bores us to tears. For example, you can ask an LLM to "Write a Python function to sort a list of dictionaries by a specific key," and it’ll spit out a working version in seconds. But it can also "hallucinate" (make things up). It might suggest a library that doesn't exist or use a deprecated method with security vulnerabilities. You still need to be the "adult in the room" to review the code. Rahul wrote a great piece on this subject, "Human in the loop vs AI in the loop."
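To make that concrete, here is a minimal sketch of the kind of answer an LLM typically produces for that exact prompt (the function name and sample data are my own illustration, not from any particular model):

```python
from operator import itemgetter

def sort_dicts_by_key(records, key, reverse=False):
    """Return a new list of the dicts in `records`, sorted by `key`."""
    return sorted(records, key=itemgetter(key), reverse=reverse)

people = [
    {"name": "Ada", "age": 36},
    {"name": "Grace", "age": 45},
    {"name": "Alan", "age": 41},
]

# Sorted youngest first: Ada (36), Alan (41), Grace (45)
print(sort_dicts_by_key(people, "age"))
```

It looks fine, and it is, but that's exactly the point: the review step is on you. Does it handle dicts that are missing the key? Should it? The LLM won't ask; a thinking human will.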
For a Quality Engineer, or tester of any kind, an LLM can be a powerful tool for generating test ideas and data, as long as you don't let it drive the bus. For example, you could feed a set of requirements into an LLM and ask, "What are ten edge cases for this login feature?" It might suggest things you hadn't considered, like handling emojis in usernames or SQL injection attempts. But if you use it to generate your automated tests, it might create "brittle" code that looks right but fails the moment your UI changes.
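As a sketch of how those LLM-suggested edge cases might be turned into quick checks, here is a hypothetical username validator (`validate_username` is my own stand-in, not a real feature) fed with the kinds of inputs mentioned above:

```python
import re

def validate_username(username):
    """Hypothetical rule: 3-20 ASCII letters, digits, or underscores."""
    return bool(re.fullmatch(r"\w{3,20}", username, flags=re.ASCII))

# Edge cases an LLM might suggest, as (input, expected result) pairs.
edge_cases = [
    ("alice", True),         # happy path
    ("ab", False),           # too short
    ("a" * 21, False),       # too long
    ("héllo😀", False),      # emoji / non-ASCII characters
    ("' OR 1=1 --", False),  # SQL injection attempt
    ("", False),             # empty string
]

for value, expected in edge_cases:
    assert validate_username(value) is expected, repr(value)
print("all edge cases pass")
```

The value here is the list of inputs, not the code; whether each expectation is actually correct for your product is a business question the LLM can't answer.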
The biggest risk with LLMs, as with many things, is a loss of context. The model doesn't know your specific business logic, your security constraints, or your "unwritten" team rules, so be careful how you use it.
Use it to bounce ideas off, draft documentation, or create code snippets. It’s an assistant, not a replacement for the critical thinking and scepticism that a human brings to the party. Just because the LLM gave you an answer doesn't mean it is right, or that you can stop being a thought worker.
JIT Testing is the practice of testing only what has changed, when it changed, to keep the feedback loop as tight as possible. It means creating temporary disposable tests or selecting from an existing automation suite. More tools are using LLMs (large language models) to support this type of testing, and even security tools are using similar practices. It is important to note that this isn't really about replacing or doing less testing. It's a strategy that aims to reduce effort in some areas. Where continuous quality or testing practices are used, JIT testing aligns well.
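The "select from an existing suite" part can be sketched as a tiny mapping exercise. This is a toy illustration with a hypothetical coverage map; real tools derive the mapping from coverage data, import graphs, or static analysis:

```python
# Hypothetical map from source files to the test modules that exercise them.
COVERAGE_MAP = {
    "app/login.py": {"tests/test_login.py", "tests/test_session.py"},
    "app/billing.py": {"tests/test_billing.py"},
    "app/utils.py": {"tests/test_login.py", "tests/test_billing.py"},
}

def select_tests(changed_files):
    """Return only the test modules mapped to the files changed in a commit."""
    selected = set()
    for path in changed_files:
        selected |= COVERAGE_MAP.get(path, set())
    return sorted(selected)

# A change to login code triggers just the login and session tests.
print(select_tests(["app/login.py"]))
```

The tight feedback loop comes from running two test modules instead of the whole suite; the risk, of course, is a stale or incomplete map, which is why this reduces effort in some areas rather than replacing testing.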
Y2K simply means the 'year 2000' or 'year 2000 problem'. In the late 1990s, it was a massive wake-up call about the long-term cost of technical debt. To save on expensive memory, early developers stored years as just two digits (e.g., "99" instead of "1999"). It was a clever shortcut that worked, until time nearly ran out and it didn't anymore.

The Problem: When the year 2000 arrived, computers would see "00" and assume it was 1900. This threatened to send global banking, utilities, and transport systems into a proper meltdown. There were claims that 'planes would drop from the sky' and 'nuclear weapons would launch'.

The Outcome: It wasn't really a "hoax", as some people thought. The reason the world didn't end is that thousands of teams spent years finding, fixing, and testing every scrap of code they could find. When we finally reached the year 2000, there was little real disruption. A couple of examples: a video store tried to charge tens of thousands for a '100-year-old' return, some children were registered as being born in 1900, and the US Naval Observatory showed a date of 19100 on its website for a while.

The Lesson: Quality isn't just about making things work today. It's about ensuring your "temporary" hacks don't become tomorrow’s disasters.
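The bug, and one of the common fixes, fits in a few lines. This is my own miniature illustration, not code from any real Y2K system; "windowing" (picking a pivot year to decide between 19xx and 20xx) was one widely used remediation:

```python
def naive_expand(two_digit_year):
    """The Y2K bug in miniature: assume every two-digit year is 19xx."""
    return 1900 + two_digit_year

def windowed_expand(two_digit_year, pivot=50):
    """A common fix ("windowing"): years below the pivot are 20xx."""
    return (2000 if two_digit_year < pivot else 1900) + two_digit_year

# The video-store example: a tape rented in "99" and returned in "00"
# looks like the return happened 99 years BEFORE the rental.
rented, returned = naive_expand(99), naive_expand(0)
print(returned - rented)  # -99

# With windowing, "00" correctly becomes 2000 and "99" stays 1999.
print(windowed_expand(0), windowed_expand(99))  # 2000 1999
```

Note that windowing is itself a "temporary" hack with its own expiry date (the pivot), which rather proves the lesson above.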
Stunned to see I have reached 2,500 community stars from the MoTaverse. Thank you, everyone, for thanking me, completing my lessons, reading my articles and encouraging me every day to do more.
Alex talked about cloud outages and how they affected his company’s coffee machine, and one slide was meme-worthy.
Alex did a fantastic job of delivering his first public talk at the Leeds Chapter. Very little feedback required. Watch for him at conferences soon.
Alex’s first time speaking at the Leeds Chapter of Ministry of Testing at Hippo
A lovely spread of pizza for those attending the Leeds chapter.
Scott is on holiday and Colin is ill so I’m on my own tonight. Wish me luck.
Highlights from the Ambassadors’ gathering, with an eye on the future as MoTaCon is six months away. All talks are about how AI will create, test, and maintain all security systems....