Ady Stokes
Freelance IT and Accessibility Consultant
He / Him
Open to: write, teach, speak
Freelance IT consultant and accessibility advocate. I curate the Software Testing Essentials Certificate (STEC) and co-run the Ministry of Testing Leeds meetup. MoT Ambassador. I teach, coach, and deliver training.

Badges

Career Champion
TestBash Trailblazer
Avid Reader
Club Explorer
Bio Builder
Article Maven
MoT Community Certificate
Scholarship Hero
Trend Spotter Bronze
TestBash Speaker
99 Second Speaker
The Testing Planet Contributor
Meetup Organiser
MoT Streak
Unlimited Member
In the Loop
MoT Ambassador
MoT Inked
Bug Finder
Collection Curator
Glossary Contributor
Meme Maker
Photo Historian
TestBash Brighton 2025 Attendee
TestBash Brighton 2024 Attendee

Contributions

You Aren't Gonna Need It (YAGNI)
Originally a development principle from Extreme Programming (XP), YAGNI stands for "You Aren't Gonna Need It". It advises against implementing features, writing code, or, in the case of testing, preparing test cases unless there is a clear and immediate business requirement for them. The aim is to prevent overengineering, reduce waste, and encourage teams to focus on delivering only what is necessary at the present time.

Applied to software testing, the mnemonic YAGNI encourages testers to avoid designing test cases or creating test data for hypothetical scenarios or unapproved features. Instead, it promotes an evidence-led, lean approach to planning and execution, concentrating effort on the risks and requirements that are confirmed and current.

YAGNI examples:
- A tester resists the urge to write automated tests for a new module that is still under discussion and not yet scheduled for development. Instead, they focus their time on creating and refining tests for features that are already in active development.
- During a sprint, a team member suggests writing test scripts for a future integration that has not been prioritised. The tester reminds the team of the YAGNI principle, pointing out that the work might never be needed or may change significantly by the time it is relevant.
- A test lead advises against preparing an extensive test plan for a third-party tool that the organisation has not yet decided to adopt, applying YAGNI to prevent wasted documentation effort.

YAGNI is closely related to Agile values and helps testers avoid speculative testing and premature optimisation, encouraging just-in-time preparation aligned with business value. It is similar in spirit to DRY ("Don't Repeat Yourself") and KISS ("Keep It Simple, Stupid"), which are also coding mnemonics.
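A minimal sketch of what this mindset looks like in a test suite, using Python and pytest (the calculate_vat function and the VAT scenario are hypothetical, chosen purely for illustration): the suite covers only the confirmed requirement, and the speculative case is recorded as a deliberate omission rather than built out "just in case".

import pytest

# Confirmed requirement: standard-rate VAT at 20% (hypothetical example domain).
def calculate_vat(net: float) -> float:
    return round(net * 0.20, 2)

@pytest.mark.parametrize("net, expected", [
    (100.00, 20.00),
    (19.99, 4.00),
    (0.00, 0.00),
])
def test_standard_rate_vat(net, expected):
    assert calculate_vat(net) == expected

# YAGNI: no tests or test data for reduced-rate VAT. That feature is still
# under discussion and not scheduled, so writing them now would be waste.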
Living Documentation
Now, this isn't your dusty old manual that sits on a shelf gathering cobwebs, or languishes in a drive or on an unvisited project website getting more out of date by the minute. Living documentation, also known as dynamic documentation, is an artifact that evolves alongside the software itself. Think of it as documentation that is always current and reflects the latest information available.

Most test artifacts can be living documentation as long as they are kept current. You can create living documents by updating them as required, or build them using automation tools. Depending on a project's context, teams might find a balance between documentation and dashboards to provide up-to-date information to colleagues and stakeholders.

Why is living documentation a good thing in software development? It helps everyone in and around the team have a clear understanding of how the system works and any other relevant information identified. It makes onboarding new team members much easier by getting them up to speed quickly. It can also reduce misunderstandings and improve communication. When your documentation is tied to your automated tests, it can act as a form of executable specification, showing how the system is supposed to behave. It's all about keeping everyone current with documentation that's actually useful and reliable.
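One lightweight way to make documentation executable, sketched here with Python's built-in doctest module (the apply_discount function is a hypothetical example, not from the original): the usage examples in the docstring run as tests, so the documentation fails loudly the moment it drifts from the actual behaviour.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount.

    These examples double as tests: if the behaviour changes,
    the documented examples fail and must be updated.

    >>> apply_discount(100.0, 10)
    90.0
    >>> apply_discount(50.0, 0)
    50.0
    """
    return round(price * (1 - percent / 100), 2)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # running this file checks the documented examples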
Bug
Computers have been around much longer than you might think. Charles Babbage is credited with creating the first mechanical computer in 1822. The first electronic computer was built in 1942 by John Atanasoff and Clifford Berry. Even in those early days people did software testing, but not in the way you would see today, due to the primitive level of the technology. Those programming the computer would review their code, an early form of debugging or reviewing for errors. It was in the 1950s, as computing power grew, that IBM formed the first dedicated testing team. The term "bug", though, had already entered computing lore: in 1947, computer scientist Grace Hopper recorded a 'bug being found' in the Harvard Mark II computer. In actual fact, a moth got stuck in a relay inside the machine, but a defect was recorded, and the term "bug" is used to this day. When you hear news stories about issues caused by software failures, glitches, anomalies, crashes, incidents or faults… these are all "bugs".
Glue Work
Glue work is "a collection of small, often invisible actions and tasks that generate a lot of team value". It is all the small things testers do to make information visible, for example:
- Helping connect colleagues who might otherwise not speak, or who both hold important information.
- Bouncing ideas off colleagues or other testers.
- Asking and answering questions in stand-ups and meetings.
- Facilitating meetings or presenting demonstrations to stakeholders.
- Suggesting improvements to the product and the processes the team uses.
- Helping resolve team bottlenecks.
- Communicating with users.
- Noticing dropped tasks.
More than just ‘manual testing’: Recognising the skills of software testers
Discover why the term ‘manual testing’ has limitations and negative impacts on the testing craft, and learn to embrace more modern terminology.
First MoT Gloucestershire meetup
A great first event for a brand-new Ministry of Testing meetup, with three speakers on accessibility too!
Cross browser testing
Cross browser testing (CBT) is essentially exactly what it says: making sure, through testing, that your web application or website works properly and looks as it should across different web browsers. Not everyone drives the same car, and the same goes for web browsers. Chromium-based browsers are now the most popular, but there's also Firefox, Safari, Microsoft Edge, and even the odd Internet Explorer (IE) still kicking about! In December 2024, IE's usage was showing at 0.16% globally, so that's probably an edge case! That's before we even think about privacy-focused browsers like DuckDuckGo and Opera, or mobile versions. Extensions like NordVPN and others can also influence CBT.

The reason this is so important is that different browsers can interpret web code in slightly different ways. By web code we mean all the different technologies like HTML (HyperText Markup Language), CSS (Cascading Style Sheets), JavaScript and many others. What looks good in one browser, using one or a combination of these technologies, might be not so good, or even broken, in another. As software testers, we need to make sure that however someone is browsing, and on whichever device, they get a consistent experience.

This means we need to test our software on a range of browsers, and ideally on different versions where possible, because things can change with updates. We're looking for things like layout issues with responsive design, where elements might be in the wrong place; functionality problems, where something works in one browser but not another; and even performance differences. It's about ensuring that everyone gets a similar quality experience, regardless of their browser preference. It's a bit like making sure your instructions are clear no matter who is reading them!
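A minimal sketch of one way to automate such checks, using Playwright's Python API (the example URL and the title check are hypothetical; Playwright is just one of several tools that can do this): the same check runs against the Chromium, Firefox and WebKit engines.

from playwright.sync_api import sync_playwright

def check_homepage(browser_type):
    # Launch the given engine, load the page, and run the same check.
    browser = browser_type.launch()
    page = browser.new_page()
    page.goto("https://example.com")  # hypothetical site under test
    assert page.title() != "", f"{browser_type.name}: page title missing"
    browser.close()

with sync_playwright() as p:
    # One test, three rendering engines: Chromium, Firefox, and WebKit (Safari's engine).
    for engine in (p.chromium, p.firefox, p.webkit):
        check_homepage(engine)
        print(f"{engine.name}: OK")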
Internationalisation
Software that will be used in multiple markets and languages needs to consider all the possible variances across them. When we design and develop software for international markets, we have to think about those considerations right from the start, just as we would for accessibility or security. Think of it as building a solid foundation that allows your application to switch and behave appropriately for a global audience.

The whole point of internationalisation is to make it easier to then localise your software for specific markets. So, things like making sure your software can handle different character set inputs (like those used in Japanese or Arabic), and that your user interface (UI) is responsive enough to accommodate varying text lengths in different languages. A big obstacle to this is hardcoded assumptions like date formats or currency symbols, which can become a big problem down the line as new languages are added.

For testers, it means we need to be thinking about whether the software has been built with this global perspective in mind. What are the risks? Can it handle different languages, date formats and postal address formats? Can it handle very short or very long names? Are there any cultural considerations we need to be aware of and look into?

Internationalisation is about ensuring this isn't just software that works in its primary language, but software that works culturally and linguistically for every single person who might use it, no matter who they are or where they are in the world. What do we need to test to prove that's the case?
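A small sketch of the kind of check this implies, using pytest and the Babel library (the choice of locales and the "medium" format are illustrative assumptions): the same date renders very differently per locale, which is exactly what a hardcoded format breaks.

from datetime import date

import pytest
from babel.dates import format_date

SAMPLE = date(2025, 3, 4)

@pytest.mark.parametrize("locale, expected", [
    ("en_GB", "4 Mar 2025"),
    ("en_US", "Mar 4, 2025"),
    ("de_DE", "04.03.2025"),
])
def test_dates_follow_the_locale(locale, expected):
    # A hardcoded "MM/DD/YYYY" string would fail all but one of these.
    assert format_date(SAMPLE, format="medium", locale=locale) == expected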
Technical Debt
What is technical debt? Technical debt in software development mostly refers to non-optimised code, but can also include things like out-of-date documentation. It is quite often a product of time constraints: software is developed quickly to meet deadlines, and opportunities to improve or refactor cannot be, or are not, taken.

Technical debt can show up in the form of overly complex code, where lines are added rather than integrated; duplicated functionality rather than a common method; a lack of unit or integration tests; or using older versions of components, like libraries, when newer ones are available.

Technical debt left unaddressed for long periods can lead to product impacts like slow loading or processing times, security vulnerabilities, and a lack of support through outdated documentation or just a general lack of reliable information.
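A tiny illustration of one of these smells, duplicated functionality rather than a common method, and the refactor that pays the debt down (the invoice and quote functions are hypothetical):

# Debt: the same rounding-and-tax logic lives in two places, so a tax
# change must be fixed twice (and one copy is usually missed).
def invoice_total(net: float) -> float:
    return round(net + net * 0.20, 2)

def quote_total(net: float) -> float:
    return round(net + net * 0.20, 2)

# Paying it down: one common method that both callers share.
def with_tax(net: float, rate: float = 0.20) -> float:
    return round(net * (1 + rate), 2)

def invoice_total_v2(net: float) -> float:
    return with_tax(net)

def quote_total_v2(net: float) -> float:
    return with_tax(net)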
Testing Debt
What is testing debt? Testing debt can be described as a subsection of technical debt, and can include things like outdated scripts, large regression suites, or slow and complicated automation. The build-up of testing debt can be a project decision in the same way as technical debt; both generally arise from compromises or shortcuts designed to speed things up now.

Examples of testing debt include not running tests, whether automation, regression or any other type; ignoring failing tests because they are known to be 'flaky'; and a lack of test artefacts or out-of-date documentation.

Over time, unaddressed testing debt can lead to lower quality software, more test effort due to a lack of reliable automation, more bugs in production, and increased maintenance costs. As with technical debt, some debt can be valuable, but debt left unaddressed over time can cause a number of issues.
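One concrete form this takes, sketched with pytest (the test names and the QA-123 ticket reference are hypothetical): silently skipping a flaky test hides the debt, while an explicit xfail marker with a tracked reason at least keeps it visible and scheduled for repayment.

import pytest

# Hidden debt: the failure is silenced and may never be looked at again.
@pytest.mark.skip(reason="flaky")
def test_checkout_totals():
    ...

# Visible debt: xfail keeps the test running and reporting, and the reason
# points at a tracked ticket, so the debt stays on the team's radar.
@pytest.mark.xfail(reason="flaky under load, tracked in QA-123 (hypothetical ticket)", strict=False)
def test_checkout_totals_under_load():
    ...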
Quality Characteristics
Quality characteristics refer to the attributes of a software system that describe how well it performs beyond its features. Some people refer to them as quality attributes, and you may also have heard them called "non-functional requirements" in the past. The term "quality characteristics" is more suitable for modern software development and factually much more accurate. Because let's face it, no part of software is truly 'non-functional', is it? Each part does something!

These characteristics essentially define the qualities of the system's behaviours, performance and design. Good examples of quality characteristics include:
- Accessibility, which ensures the system can be used by as many people as possible, regardless of how they interact with it
- Security, which is all about protecting the system and its data from malicious attacks
- Performance, which looks at how responsive and efficient the system is
- Usability, which focuses on how easy and intuitive the system is to use
- Maintainability, which considers how easy it will be to update and fix the system

There are many more quality characteristics that could be listed, and which ones a project considers will depend on the context.

ISO 25010, part of the ISO 25000 standards for software and data quality, defines software product quality under nine categories. It is included in this glossary definition to show that there are different opinions and descriptions of quality characteristics. Do you agree that checking only these would help define product quality? Or that all of these should be checked for every project?
- Functional suitability: the degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions
- Performance efficiency: the degree to which a product performs its functions within specified time and throughput parameters and is efficient in its use of resources (such as CPU, memory, etc.)
- Compatibility: the degree to which a product, system or component can exchange information with other products, systems or components, and/or perform its required functions while sharing the same common environment and resources
- Interaction capability: the degree to which a product or system can be interacted with by specified users to exchange information via the user interface to complete specific tasks in a variety of contexts of use
- Reliability: the degree to which a system, product or component performs specified functions under specified conditions for a specified period of time
- Security: the degree to which a product or system defends against attack patterns by malicious actors and protects information and data
- Maintainability: the degree of effectiveness and efficiency with which a product or system can be modified to improve it, correct it, or adapt it to changes in environment and in requirements
- Flexibility: the degree to which a product can be adapted to changes in its requirements, contexts of use or system environment
- Safety: the degree to which a product, under defined conditions, avoids states in which human life, health, property, or the environment is endangered
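As a small taste of how one of these characteristics can be checked in code, here is a sketch of a performance efficiency test in pytest (the search_catalogue function and the 200 ms budget are hypothetical assumptions, not from any standard):

import time

def search_catalogue(term: str) -> list[str]:
    # Stand-in for the real operation under test (hypothetical).
    return [item for item in ("anvil", "apple", "axe") if term in item]

def test_search_meets_response_budget():
    start = time.perf_counter()
    search_catalogue("a")
    elapsed = time.perf_counter() - start
    # Performance efficiency: the operation must stay within its time budget.
    assert elapsed < 0.200, f"search took {elapsed:.3f}s, budget is 0.200s"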
The future is agile, or is it?
Contributors: Ady Stokes, Rosie Sherry, Sebastian Stautz, Conrad Braam, Stuart Thomas, Tom Game, Konstantin Sakhchinskiy, Arik Aharoni
A look at 9 key agile perspectives and what might come next.