Ady Stokes
Freelance Consultant
He / Him
Open to: Write, Teach, Speak, Meet at MoTaCon 2026, Podcasting
STEC Certified. MoT Ambassador, speaker, and accessibility advocate. Consulting, training, Leeds meetup host. MoT Certs curator and contributor. Testing wisdom, friendly, jokes, parody songs and poems

Achievements

Career Champion
Club Explorer
Bio Builder
Avid Reader
TestBash Trailblazer
Article Maven
Testing Scholar
MoT Community Certificate
MoT Software Testing Essentials Certificate
Scholarship Hero
Trend Spotter Bronze
TestBash Speaker
99 Second Speaker
The Testing Planet Contributor
Meetup Organiser
MoT Streak
Unlimited Member
In the Loop
MoT Ambassador
MoT Inked
404 Talk (Not) Found
Bug Finder
Collection Curator
Glossary Contributor
Meme Maker
Photo Historian
TestBash Brighton 2025 Attendee
TestBash Brighton 2024 Attendee
TestBash Teacher
Cert Shaper
Course creator
Author Debut
A tester's role in continuous quality
Prompting for testers
Cognitive biases in software testing
Introduction to software development and testing
Introduction to modern testing
Introduction to accessibility testing
Bug reporting 101
Introduction to JavaScript
Advanced prompting for testers
99 and Counting
Meetup Contributor
Pride Supporter
Meme Machine
Inclusive Companion
Social Connector
Open to Opportunities
Found at 404
Picture Perfect
Story Sharer

Certificates

MoT Software Testing Essentials Certificate
Awarded for: Passing the exam with a score of 100%
MoT Community Certificate
Awarded for: Achieving 5 or more Community Star badges

Activity

Ady Stokes contributed: Definitions of DevOps
Ady Stokes earned: SDET (Software Development Engineering in Test)
Ady Stokes contributed: Definitions of SDET (Software Development Engineering in Test)
Ady Stokes earned: Quality Assurance

Contributions

DevOps
DevOps is a way of working that brings together development (Dev) and operations (Ops) teams so they can build, deliver, and improve software through close collaboration and shared goals. It breaks down the barriers between development and operations so that everyone involved takes responsibility for quality and delivery.

At its heart, DevOps is about creating faster and more meaningful feedback loops. When teams work together from ideation to release, they can spot issues sooner, learn from real users, and respond more quickly to change.

For example, instead of developers waiting weeks to hear whether a new feature performs well in production, they can work alongside operations to find out much more quickly.

This shared approach builds trust, speeds up learning, and reduces waste. DevOps helps teams focus on outcomes rather than handovers, creating an environment where improvement is constant and communication keeps quality at the centre of everything.
SDET (Software Development Engineering in Test)
An SDET is a person with coding and developer skills who uses them to focus on creating automation artefacts, such as tests, frameworks, mocks, stubs, and, more recently, CI/CD pipelines. They combine testing knowledge with developer skills to support the creation of automation. Coming to prominence in the late 1990s and early 2000s at companies such as Microsoft, the role was created to allow someone to focus on the key advances in automation happening at the time.

The emergence of Agile, then DevOps and CI/CD pipelines, made this bridging role between testers and developers more widespread. While there are still plenty of SDET roles being advertised, these days many organisations expect testers to have some automation skills and developers to have some testing knowledge. Indeed, some job descriptions include required skills for four or more 'traditional' roles. In Quality Engineering environments, developers may write the automation, but they are supported by those with much deeper testing knowledge, working together to create valuable automation that supports faster feedback cycles.
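As an illustration of the kind of automation artefact an SDET might write, here is a minimal sketch in Python using the standard library's unittest.mock to stub out an external dependency. The checkout function and PaymentGateway-style names are hypothetical examples, not from any real codebase.

# Minimal sketch of an automated check that stubs out an external service.
# The checkout function and gateway are invented for illustration only.
from unittest import TestCase, main
from unittest.mock import Mock


def checkout(cart_total, gateway):
    """Charge the payment gateway and report whether the payment succeeded."""
    response = gateway.charge(amount=cart_total)
    return response["status"] == "approved"


class CheckoutTests(TestCase):
    def test_checkout_succeeds_when_payment_is_approved(self):
        # Stub the external payment service so the check runs without it.
        gateway = Mock()
        gateway.charge.return_value = {"status": "approved"}

        self.assertTrue(checkout(42.50, gateway))
        gateway.charge.assert_called_once_with(amount=42.50)


if __name__ == "__main__":
    main()
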
Quality Assurance
Quality Assurance is a traditional term you will frequently hear in software development when someone is referring to software testing. It originally came from the world of manufacturing, where it referred to a systematic set of actions designed to ensure that a product, such as a car or another physical item, meets certain quality standards or requirements, like 'a 6mm gap'. Software, by contrast, often has requirements that are not defined to such a specific degree. When software development needed a label for its quality activities, it borrowed 'QA', and it stuck for a long time.

In many companies, you will find that the software testing team is still called the 'QA team', and that is where the confusion lies. The term implies that the act of testing can somehow 'assure' quality, but that is simply not true. As testers, our job is to find and expose risks and bugs, not to guarantee perfection. You cannot 'test in' quality at the end of a project. Quality must be built in from the very first planning meeting, involving everyone from analysts to developers.

Because of this inherent conflict between the word 'assurance' and the reality of finding risk, many modern teams are moving towards more accurate job titles such as Software Tester or Quality Engineer. These better reflect the actual work we do, which is about engineering quality into the process and thinking about product quality from the start, rather than making impossible promises of assuring quality.
OKRs (Objectives and Key Results)
OKRs stand for Objectives and Key Results. It is both a management framework and an individual tool used for setting goals and tracking outcomes across an organisation, a team, or an individual. In the simplest terms, the Objective is what you want to achieve, and the Key Results are how you measure the progress and the level of success of that Objective.

Without making the Key Results quantifiable, they become almost useless. As an example, an Objective that says, "Make the product more profitable," is just too broad. It is noise that cannot be proven to have been achieved without a specific focus, such as 'by increasing users'. Adding a Key Result that says, "Increase daily active users from 500 to more than 800 by [date]," gives you something you can measure and track. Just like a good requirement, an OKR must not be ambiguous. It needs to combine the Objective with clear Key Results, similar to a requirement's acceptance criteria, to confirm that it has actually been met. If the criteria are vague, you will never know whether you've succeeded. Basically, an Objective without good Key Results is practically pointless.

It is also important to understand the different levels of OKRs. Team OKRs are more focused on delivering shared value for the product or the business. An individual's OKRs would generally show how they directly contribute to the broader team objectives. They are slightly different from personal goals, which are generally focused on individual growth and development and not necessarily tied to business outcomes.
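As a rough sketch of how an Objective and its measurable Key Results fit together, the Python snippet below models the example above as a small data structure and reports progress. The field names and figures are illustrative assumptions, not part of any standard OKR tooling.

# Rough sketch of an Objective with measurable Key Results.
# Field names and figures are illustrative only.
from dataclasses import dataclass, field


@dataclass
class KeyResult:
    description: str
    start: float
    target: float
    current: float

    def progress(self) -> float:
        """Fraction of the way from the starting value to the target."""
        span = self.target - self.start
        return 0.0 if span == 0 else (self.current - self.start) / span


@dataclass
class Objective:
    statement: str
    key_results: list = field(default_factory=list)


objective = Objective(
    statement="Make the product more profitable by increasing users",
    key_results=[
        KeyResult("Increase daily active users", start=500, target=800, current=650),
    ],
)

for kr in objective.key_results:
    print(f"{kr.description}: {kr.progress():.0%} of the way to target")
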
Existential crisis: What is quality engineering? - Ep 112
Featuring Ady Stokes, Scott Kenyon, Ben Dowen, and Judy Mosley
Unpack the blurred meaning of quality engineering through real stories and honest reflection
Post-Incident Analysis
Post-incident analysis, or post-mortem, looks at what happened after something has gone wrong, mainly in production but sometimes in testing. It could be due to an outage, a bug in production, configuration issues, or any number of other incidents. It should not be about blame or finger-pointing. It’s about understanding, learning, and adapting.

When a team (or, to start with, an individual) conducts a post-incident session, they gather the facts of what occurred, when, and how the issue was discovered, and then explore why it happened. They focus not just on the surface error but on the deeper causes behind it. Good post-incident work digs past symptoms to find the root cause. It is better done as a team effort, and the best sessions are open, honest, and psychologically safe so everyone can share insights freely.

The outcome isn’t just a report. It should be an action plan with practical steps to reduce the chance of recurrence and to strengthen the software, the processes, or both. That might include improving monitoring, changing code review practices, refining tests, or adjusting how incidents are triaged and communicated. Done well, post-incident analysis turns failure into fuel for improvement. It’s a core part of Continuous Quality to use what went wrong and make what comes next better.
Test Bench
A test bench is a controlled setup used to check how software or hardware behaves without needing the full system it will eventually run on. It provides an environment where components can be tested, monitored, and adjusted safely before being integrated into the complete product.

In the automotive industry, for example, a test bench might replicate the hardware parts of a car and the communication between them. Engineers can connect the engine control unit, sensors, and actuators to simulate real driving conditions. This allows the software to be tested for performance, timing, and safety before it ever reaches a real vehicle.

A test bench helps isolate issues early and ensures that each part works correctly in a realistic but repeatable environment. It is an essential step between isolated component testing and full system testing, giving confidence that everything will connect and perform as intended when it moves into the real world. Credit to Andres Gomez Ruiz for introducing this to me.
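To make the idea concrete, here is a minimal software-only sketch of a test bench in Python: a simulated sensor and a simulated actuator stand in for the real hardware so the control logic can be exercised in a repeatable scenario. Every class and function name here is invented for the example, not taken from any real bench.

# Minimal sketch of a software test bench: simulated hardware stands in for the real thing.
# All names are invented for the example.


class SimulatedTemperatureSensor:
    """Plays the role of a real sensor on the bench, replaying scripted readings."""

    def __init__(self, readings):
        self._readings = iter(readings)

    def read(self):
        return next(self._readings)


class SimulatedFan:
    """Plays the role of a real actuator; records what it was told to do."""

    def __init__(self):
        self.commands = []

    def set_speed(self, speed):
        self.commands.append(speed)


def control_loop(sensor, fan, threshold=70):
    """The logic under test: spin the fan up when the temperature is too high."""
    temperature = sensor.read()
    fan.set_speed(100 if temperature > threshold else 0)


# Drive the logic on the bench with a repeatable scenario.
sensor = SimulatedTemperatureSensor([65, 72, 90])
fan = SimulatedFan()
for _ in range(3):
    control_loop(sensor, fan)

assert fan.commands == [0, 100, 100]
print("Bench run complete, fan commands:", fan.commands)
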
Quality
Quality, or in our case 'software quality', is one of those words that everyone uses, is rarely questioned in the moment, but can mean so many different things. Suman Bala gives a great example of a throne and a school chair. Their costs are exponentially different. Their materials, complexity, and design are miles apart. Yet, for their purpose and in their context, their quality can be argued as comparable. The soft and plush throne, with its many adornments, used only a few times, can be perfect for the context in which it is used. The hard and basic metal and plastic chair, used tens of thousands of times, can also be perfect in a school environment where it needs to be cheap, hard-wearing, and even stackable. Vastly different, yet equally valuable.

In software, quality can depend on such a variety of factors that having a single definition can be more unhelpful than not. A developer might see quality as clean code with no 'smells'. A tester might see it as a multitude of things, from alignment with requirements or acceptance criteria to its accessibility and usability. A user might simply want something that works.

Gerald Weinberg described quality as “value to someone”, which has been augmented over time with additions like 'who matters' and 'at some time'. Even with context, quality is not always a fixed measure but something shaped by people, context, and purpose. What matters to one group may not matter to another, and that is why software quality always needs a conversation.

Stuart Crocker, in a LinkedIn post in 2024, said: "This is now my goto definition of what software testing is, "The exploration and discovery of intended and unintended behaviours in the software we build and their impact on product value—for both customers and the business." With my definition of quality being "The absence of unnecessary friction" These two definitions, working together, help me make quick, useful decisions on what to test and from that testing, what is necessary to improve." That view captures how feelings can be a primary indicator of quality (to someone).

Dan Ashby talks about quality in his 2019 blog post on Philip Crosby's four absolutes of quality in a software context. He takes those and suggests his own adaptations, the first of which is, "Quality is defined as 'correctness' and 'goodness' in relation to stakeholder value." This is adapted from "Quality is defined as conformance to requirements", and as Crosby was a quality leader in manufacturing, conformance to requirements is fundamental. For software, we need more.

In the end, quality in software is not something I can tell you. It is context-dependent, shaped by many things like purpose, constraints, time, and the people involved. There is no universal checklist. The key is understanding what quality means in your situation, how it will be measured, and who it truly matters to. Some are helped by customer feedback, some by monitoring and observation of software performance. Others look to quantify it in various ways. They ask, 'How long does it take to learn and understand a new feature?' or determine, 'What scales or meters can we employ to measure a value?'

However you describe or measure software quality will depend on the factors you have. Have fun doing it.
If you build it accessible, they will come
Just a fact: if it isn't accessible, you are excluding 15% or more of potential customers or users
Accessibility is fundamental
For many companies, digital accessibility feels optional, but it is a human right and fundamental to usable software. And you can't argue with Batman
Data Lake
In the simplest terms, a data lake is a big collection of raw data, useful for future analysis and a powerful tool for testers to investigate, reproduce bugs, check data quality, and assess system performance.

A data lake is a vast storage area that holds a huge amount of raw, unprocessed data from many different sources. Think of it like a natural lake where various rivers flow into it. These rivers bring all sorts of different things, like logs, images, and raw data. A data lake in software collects all kinds of data without first cleaning or structuring it. This means it can hold structured data, such as numbers and text, as well as semi-structured data like logs, and even unstructured data, including images and videos.

The main idea behind a data lake is to store all the data first. You do not decide how you will use or analyse it until later, when you actually need it. This gives organisations a lot of flexibility for future analysis, research, and machine learning projects. It is different from a data warehouse, which typically stores data that has already been cleaned, transformed, and organised for a specific purpose.

For software testers, the data lake can be a goldmine of information. Using analytics platforms or business intelligence tools, you can dive into this pool of data to understand how information flows through your systems or assess data quality by identifying inconsistencies or unexpected values, amongst other things.
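As a small, hedged example of the kind of data-quality check a tester might run over raw records pulled from a data lake, the Python sketch below scans a batch of log-like records for missing or unexpected values. The record layout and field names are assumptions made up for illustration.

# Small sketch of a data-quality check over raw records pulled from a data lake.
# The record layout and field names are assumptions for illustration.
raw_records = [
    {"user_id": "u1", "event": "login", "duration_ms": 120},
    {"user_id": "u2", "event": "login", "duration_ms": None},   # missing value
    {"user_id": "u3", "event": "lgoin", "duration_ms": 95},     # unexpected event name
    {"user_id": "u4", "event": "logout", "duration_ms": -40},   # implausible value
]

EXPECTED_EVENTS = {"login", "logout"}

issues = []
for record in raw_records:
    if record["event"] not in EXPECTED_EVENTS:
        issues.append((record["user_id"], "unexpected event", record["event"]))
    if record["duration_ms"] is None or record["duration_ms"] < 0:
        issues.append((record["user_id"], "bad duration", record["duration_ms"]))

for user_id, problem, value in issues:
    print(f"{user_id}: {problem} ({value})")
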
Chocolate Driven Development (CDD)
CDD, or chocolate-driven development, started in 1992 in Switzerland. The boost in dopamine and serotonin promoted focus, happiness and other mood boosting qualities. While all those working in this methodology were happy, unfortunately, no successful software development projects were actually started, as it wasn't a real computer. There are those, however, who persist with this way of working today and are very happy with the results. 