Ady Stokes
Freelance IT and Accessibility Consultant
He / Him
Open to writing, teaching, and speaking
Freelance IT consultant and accessibility advocate. I curate the Software Testing Essentials Certificate (STEC) and co-run Ministry of Testing Leeds. MoT Ambassador. I teach, coach, and give training.

Badges

Bio Builder
TestBash Trailblazer
Career Champion
Avid Reader
Club Explorer
Article Maven
MoT Community Certificate
Scholarship Hero
Trend Spotter Bronze
TestBash Speaker
99 Second Speaker
The Testing Planet Contributor
Meetup Organiser
MoT Streak
Unlimited Member
In the Loop
MoT Ambassador
MoT Inked
Bug Finder
Collection Curator
Glossary Contributor
Meme Maker
Photo Historian

Contributions

Cross browser testing
Cross browser testing (CBT) is essentially exactly what it says: testing to make sure your web application or website works properly, and looks as it should, across different web browsers. Not everyone drives the same car, and the same goes for web browsers. Chromium-based browsers are now the most popular, but there's also Firefox, Safari, Microsoft Edge, and even the odd Internet Explorer (IE) still kicking about! In December 2024, IE's global usage was showing at 0.16%, so that's probably an edge case! That's before we even think about privacy-focused browsers like DuckDuckGo, alternatives like Opera, or mobile versions. Extensions like NordVPN and others can also influence CBT.

The reason this is so important is that different browsers can sometimes interpret web code in slightly different ways. By web code we mean all the different technologies like HTML (HyperText Markup Language), CSS (Cascading Style Sheets), JavaScript and many others. What looks good in one browser, using one or a combination of these technologies, might look wrong or even be broken in another. As software testers, we need to make sure that however someone is browsing, and on whichever device, they get a consistent experience.

This means we need to test our software on a range of browsers, and ideally on different versions where possible, because things can change with updates. We're looking for layout issues with responsive design, where elements might be in the wrong place; functionality problems, where something works in one browser but not another; and even performance differences. It's about ensuring that everyone gets a similar quality experience, regardless of their browser preference. It's a bit like making sure your instructions are clear no matter who is reading them!
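The idea of checking a feature against the browsers you need to support can be sketched in a few lines. This is a minimal, hypothetical example: the support matrix below is illustrative data, not real compatibility figures, and in practice you would pull this from real compatibility sources or run the checks in actual browsers.

```python
# Hypothetical sketch: flagging browsers that need a fallback for a feature.
# The matrix below is made-up illustrative data, not real support data.
SUPPORT_MATRIX = {
    "chrome": {"css-grid", "flexbox", "webp"},
    "firefox": {"css-grid", "flexbox", "webp"},
    "safari": {"css-grid", "flexbox"},
    "ie11": {"flexbox"},
}

def unsupported_browsers(feature, browsers=SUPPORT_MATRIX):
    """Return the names of browsers that do not support the given feature."""
    return sorted(name for name, features in browsers.items()
                  if feature not in features)

print(unsupported_browsers("css-grid"))  # → ['ie11']
print(unsupported_browsers("flexbox"))   # → []
```

A matrix like this also makes a useful test planning artefact: it tells you which browser and feature combinations deserve a manual check.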
Internationalisation
Software that will be used in multiple markets and languages needs to consider all the possible variances across them. When we design and develop software for international markets, we have to think about those considerations right from the start, just as we would for accessibility or security. Think of it as building a solid foundation that allows your application to switch and behave appropriately for a global audience.

The whole point of internationalisation is to make it easier to localise your software for specific markets later. That means, for example, making sure your software can handle different character set inputs (like those used in Japanese or Arabic), and that your user interface (UI) is responsive enough to accommodate varying text lengths in different languages. A big obstacle here is hardcoded assumptions, like date formats or currency symbols; these can become a serious problem down the line as new languages are added.

For testers, it means we need to think about whether the software has been built with this global perspective in mind. What are the risks? Can it handle different languages, date formats and postal address conventions? Can it handle very short or very long names? Are there any cultural considerations we need to be aware of and look into?

Internationalisation is about ensuring the software doesn't just work in its primary language, but works culturally and linguistically for every single person who might use it, no matter who they are or where they are in the world. What do we need to test to prove that's the case?
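The hardcoding problem mentioned above can be illustrated with date formats. This is a minimal sketch under the assumption that locale conventions live in data rather than in logic; the locale codes and format strings here are simplified examples, and real projects would use a proper i18n library rather than a hand-rolled mapping.

```python
from datetime import date

# Hypothetical sketch: locale-specific formats kept in data, not hardcoded
# in the display logic. The mapping below is a simplified assumption.
DATE_FORMATS = {
    "en-GB": "%d/%m/%Y",  # day first
    "en-US": "%m/%d/%Y",  # month first
    "ja-JP": "%Y/%m/%d",  # year first
}

def format_date(d, locale_code):
    """Format a date using the convention configured for the locale."""
    return d.strftime(DATE_FORMATS[locale_code])

d = date(2024, 12, 1)
print(format_date(d, "en-GB"))  # → 01/12/2024
print(format_date(d, "en-US"))  # → 12/01/2024
```

A tester seeing "01/12/2024" needs to know which locale produced it: is that 1 December or 12 January? Ambiguities like this are exactly what internationalisation testing should surface.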
Technical Debt
What is technical debt? Technical debt in software development mostly refers to non-optimised code, but can also include things like out-of-date documentation. It is often a product of time constraints: software is developed quickly to meet deadlines, and opportunities to improve or refactor cannot be, or are not, taken.

Technical debt can show up as overly complex code, where new lines are added rather than integrated; duplicated functionality rather than a common method; a lack of unit or integration tests; or the use of older versions of components, such as libraries, when newer ones are available.

Technical debt that goes unaddressed for long periods can lead to product impacts like slow loading or processing times, security vulnerabilities, and poor support through outdated documentation or a general lack of reliable information.
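One of the smells listed above, duplicated functionality rather than a common method, is easy to show in miniature. This is an illustrative sketch with made-up function names, not taken from any real codebase.

```python
# Illustrative technical-debt smell: two near-identical functions that
# drifted apart from copy-paste (the "debt") ...
def total_order_price(items):
    return sum(qty * price for qty, price in items)

def total_invoice_price(lines):
    return sum(qty * price for qty, price in lines)

# ... refactored into one common method, paying the debt down:
def total_price(line_items):
    """Sum quantity * unit price over (quantity, price) pairs."""
    return sum(qty * price for qty, price in line_items)

basket = [(2, 3.50), (1, 10.00)]
print(total_price(basket))  # → 17.0
```

The duplicated versions behave identically today, but a bug fix or rule change applied to only one of them is how this kind of debt turns into production defects.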
Testing Debt
What is testing debt? Testing debt can be described as a subsection of technical debt, and can include things like outdated scripts, bloated regression suites, or slow and complicated automation. The build-up of testing debt can be a project decision in the same way as technical debt; both generally arise from compromises or shortcuts designed to speed things up now.

Examples of testing debt include not running tests, whether automated, regression or any other type; ignoring failing tests because they are known to be 'flaky'; and a lack of test artefacts or out-of-date documentation.

Over time, unaddressed testing debt can lead to lower quality software, more test effort due to a lack of reliable automation, more bugs reaching production, and increased maintenance costs. As with technical debt, some debt can be valuable, but debt left unaddressed over time can cause a number of issues.
Quality Characteristics
Quality characteristics refer to the attributes of a software system that describe how well it performs, beyond its features. Some people refer to them as quality attributes, and you may also have heard them called "non-functional requirements" in the past. The term "quality characteristics" is more suitable for modern software development and factually much more accurate. Because let's face it, no part of software is truly 'non-functional', is it? Each part does something!

These characteristics essentially define the qualities of the system's behaviours, performance and design. Good examples of quality characteristics include:
Accessibility - ensures the system can be used by as many people as possible, regardless of how they interact with it
Security - all about protecting the system and its data from malicious attacks
Performance - looks at how responsive and efficient the system is
Usability - focuses on how easy and intuitive the system is to use
Maintainability - considers how easy it will be to update and fix the system
Many more quality characteristics could be listed, and which ones a project considers will depend on its context.

ISO 25010, part of the ISO 25000 standards for software and data quality, defines software product quality under nine categories. It is included in this glossary definition to show there are different opinions and descriptions of quality characteristics. Do you agree that checking only these would help define product quality? Or that all of these should be checked for every project?
Functional suitability - the degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions
Performance efficiency - the degree to which a product performs its functions within specified time and throughput parameters and is efficient in its use of resources (such as CPU, memory, etc.)
Compatibility - the degree to which a product, system or component can exchange information with other products, systems or components, and/or perform its required functions while sharing the same common environment and resources
Interaction capability - the degree to which a product or system can be interacted with by specified users to exchange information via the user interface to complete specific tasks in a variety of contexts of use
Reliability - the degree to which a system, product or component performs specified functions under specified conditions for a specified period of time
Security - the degree to which a product or system defends against attack patterns by malicious actors and protects information and data
Maintainability - the degree of effectiveness and efficiency with which a product or system can be modified to improve it, correct it, or adapt it to changes in environment and requirements
Flexibility - the degree to which a product can be adapted to changes in its requirements, contexts of use or system environment
Safety - the degree to which a product, under defined conditions, avoids states in which human life, health, property or the environment is endangered
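Because which characteristics matter depends on context, one practical step is simply recording which ones a project has planned testing for, so gaps are visible. This is a minimal sketch: the characteristic names echo the discussion above, but the coverage flags are invented example data.

```python
# Illustrative sketch: a quality-characteristics checklist for a project.
# The "covered" flags below are made-up example data.
CHARACTERISTICS = {
    "performance efficiency": True,
    "compatibility": True,
    "security": False,
    "maintainability": False,
    "reliability": True,
}

def uncovered(characteristics):
    """Characteristics the project has not yet planned any testing for."""
    return sorted(name for name, covered in characteristics.items()
                  if not covered)

print(uncovered(CHARACTERISTICS))  # → ['maintainability', 'security']
```

The output is a prompt for a risk conversation: leaving security unplanned may be acceptable for a prototype, but it should be a decision, not an accident.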
The future is agile, or is it?
Contributors: Ady Stokes, Rosie Sherry, Sebastian Stautz, Conrad Braam, Stuart Thomas, Tom Game, Konstantin Sakhchinskiy, Arik Aharoni
A look at 9 key agile perspectives and what might come next.
Specialist
A specialist in software testing is someone with a depth of knowledge, technical skill, or even expert-level understanding of a testing type. Some well known examples of testing types that have specialists are security, performance, automation, accessibility and, more recently, AI (both testing AI applications and the machine learning (ML) models and large language models (LLMs) that power AI tools).
Generalist
A generalist is a software tester who has acquired at least good or intermediate knowledge of several areas of software testing. Sometimes described as comb or tree shaped testers, they can do initial testing in multiple areas. While that testing might not be as deep as a specialist's, it can quickly identify potential issues or risks and allow specialists to focus on the nuances particular to their discipline.
Bug Bash
What is a bug bash? A bug bash is a focused and most often time-boxed event where a diverse group of people, including roles like developers, testers, product owners, designers, and even folks from support or marketing, come together to find as many bugs as they can in a piece of software. Think of it as a concentrated testing effort: the more eyes you have on the product or application, the more likely you are to uncover issues or edge cases that might otherwise go undetected. The aim is not just to find bugs, but also to build a greater shared understanding of the software's quality throughout the wider team. It also allows teams to get different viewpoints on potential problems and usability.

Sometimes there can be a bit of friendly competition involved, maybe even some prizes for finding the most fun, silly, or most critical bugs. It is all about getting everyone involved in making the software better.

Generally a facilitator will be the central point for reporting findings and collating information. A facilitator can ask individuals to do certain tasks, follow user journeys, or even hand out exploratory test charters or goals. While there is no single definitive way of running a bug bash, the primary activity of a diverse group swarming the software is present no matter how it is run.
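The facilitator's collating job above can be sketched as a simple tally of who reported what. This is a hypothetical example: the participants and findings are invented, and real bug bashes would capture far more detail per finding (steps, severity, screenshots).

```python
from collections import Counter

# Hypothetical sketch: a facilitator collating bug-bash findings and
# working out who wins the friendly competition. All data is made up.
findings = [
    ("dev_a",     "layout breaks at 320px width"),
    ("tester_b",  "crash when basket is empty"),
    ("support_c", "typo on the help page"),
    ("tester_b",  "wrong currency symbol shown"),
]

def leaderboard(findings):
    """Count findings per participant, most prolific reporter first."""
    return Counter(name for name, _ in findings).most_common()

print(leaderboard(findings))  # tester_b leads with 2 findings
```

Notice the spread of roles in the data: the support person's typo report is exactly the kind of finding a tester-only session might never surface.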
What is software testing?
A brief introduction to software testing, from its origins to definitions, and its value to working in teams and software development.
Not accessible, not MVP
Accessibility should be a primary consideration when building MVPs, pilot programmes, or any quick builds. You will save a ton of time and money later if you do.
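Even an MVP can run a basic automated accessibility check from day one. As a minimal sketch using only the standard library, this flags images with no alt attribute; it is one small check among the many a real audit (manual and automated) would include.

```python
from html.parser import HTMLParser

# Minimal sketch of one accessibility check an MVP could run early:
# counting <img> tags that have no alt attribute at all.
class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == "img" and "alt" not in dict(attrs):
            self.missing += 1

checker = MissingAltChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="hero.png">')
print(checker.missing)  # → 1 image missing alt text
```

A check this small could sit in a CI pipeline from the first sprint, which is the whole point: accessibility built in, not bolted on.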
Adaptable test design
Adaptable test design in software testing mostly sits in the exploratory testing world, but can be applied throughout the testing process. It has various benefits, such as increasing the speed of response to change and reducing cost. The term refers to the practice of creating test plans, cases, and strategies that can be easily adjusted, for example to accommodate changes in the project such as new requirements, updated features, or shifts in priorities. It ensures that testing activities remain relevant and effective as the software evolves, especially in dynamic environments like Agile or DevOps. While adaptable test design is core to exploratory testing, it can also be applied to planning and risk-based testing. It is about remaining flexible where possible and adapting to new priorities or knowledge.
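One way to keep a plan adjustable is to derive the test order from risk data instead of fixing it in a document. This is a hypothetical sketch: the test areas and risk scores are invented, but it shows the shape of a plan that adapts when priorities shift.

```python
# Hypothetical sketch: a risk-based test plan that re-orders itself when
# priorities change. Area names and risk scores below are made up.
risk_scores = {
    "checkout flow": 8,
    "profile page": 3,
    "search": 5,
}

def plan(risk_scores):
    """Run order: highest-risk areas first."""
    return sorted(risk_scores, key=risk_scores.get, reverse=True)

print(plan(risk_scores))        # → ['checkout flow', 'search', 'profile page']

risk_scores["profile page"] = 9  # a new requirement raises its risk
print(plan(risk_scores))         # the plan adapts without being rewritten
```

Because the order is computed, a shift in priorities is a one-line data change rather than a re-planning exercise, which is the flexibility the definition describes.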