Ady Stokes
Freelance IT and Accessibility Consultant
He / Him
I am Open to Write, Teach, Speak
Freelance IT consultant and accessibility advocate. I curate the Essentials Certificate STEC, and co-run the Ministry of Testing Leeds. MoT Ambassador. I teach, coach, and deliver training.
Badges
Contributions
What is technical debt?
Technical debt in software development mostly refers to non-optimised code, but it can also include things like out-of-date documentation. It is often a product of time constraints: software is developed quickly to meet deadlines, and opportunities to improve or refactor cannot be, or are not, taken.

Technical debt can show up as overly complex code where new lines are bolted on rather than integrated, duplicated functionality rather than a common method, a lack of unit or integration tests, or the use of older versions of components such as libraries when newer ones are available.

Technical debt that goes unaddressed for long periods can lead to product impacts like slow loading or processing times, security vulnerabilities, and a lack of support through outdated documentation or a general lack of reliable information.
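Duplicated functionality is one of the easiest forms of technical debt to picture. Here is a minimal sketch in Python; the function names and formatting logic are invented for illustration. The same logic is copied into two places, then the debt is paid down by extracting a single common method:

```python
# Hypothetical example of duplicated functionality, a common form of
# technical debt: the same formatting logic lives in two places.
def format_invoice_total(amount):
    return f"£{amount:,.2f}"

def format_refund_total(amount):
    return f"£{amount:,.2f}"  # duplicate of the logic above

# Paying down the debt: one shared helper that both call sites can reuse,
# so a future change (say, a new currency symbol) happens in one place.
def format_currency(amount, symbol="£"):
    """Single source of truth for currency formatting."""
    return f"{symbol}{amount:,.2f}"

print(format_currency(1234.5))  # → £1,234.50
```

The refactor does not change behaviour; it removes the second copy of the logic so the two call sites cannot drift apart.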
What is Testing debt?
Testing debt can be described as a subsection of technical debt and can include things like outdated scripts, large regression suites, or slow and complicated automation. The build-up of testing debt can be a project decision in the same way as technical debt; both generally arise from compromises or shortcuts designed to speed things up now.

Examples of testing debt include not running tests, whether automation, regression, or any other type; ignoring failing tests because they are known to be 'flaky'; and a lack of test artefacts or out-of-date documentation.

Over time, unaddressed testing debt can lead to lower-quality software, more test effort due to a lack of reliable automation, more bugs in production, and increased maintenance costs. As with technical debt, some debt can be valuable, but debt left unaddressed over time can cause a number of issues.
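The 'ignoring flaky tests' example can be sketched in code. This is a hypothetical illustration using Python's unittest module (the test names and scenario are invented): a known-flaky test is skipped rather than fixed, so the suite stays green while the coverage it provided quietly disappears.

```python
# Hypothetical illustration of testing debt: a failing test is silenced
# as "flaky" instead of being investigated, hiding whatever risk it covered.
import unittest

class CheckoutTests(unittest.TestCase):
    @unittest.skip("flaky on CI - debt: never investigated")
    def test_discount_applied(self):
        self.fail("this failure is hidden, along with the bug behind it")

    def test_total_calculated(self):
        self.assertEqual(round(2 * 9.99, 2), 19.98)

# The suite reports success even though one check no longer runs.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"ran={result.testsRun} skipped={len(result.skipped)}")  # → ran=2 skipped=1
```

A green build with skipped tests is exactly the kind of debt that is cheap today and expensive later.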
Quality Characteristics refer to the attributes of a software system that describe how well it performs beyond its features. Some people refer to them as quality attributes, and you may also have heard them called "non-functional requirements" in the past. The term "Quality Characteristics" is more suitable for modern software development and much more accurate. Because let's face it, no part of software is truly 'non-functional', is it? Each part does something!

These characteristics essentially define the qualities of the system's behaviours, performance, and design. Good examples of quality characteristics include things like:
Accessibility, which ensures the system can be used by as many people as possible regardless of how they interact with it
Security, which is all about protecting the system and its data from malicious attacks
Performance, which looks at how responsive and efficient the system is
Usability, which focuses on how easy and intuitive the system is to use
Maintainability, which considers how easy it will be to update and fix the system
There are many more quality characteristics that could be listed, and which ones a project considers will depend on the context.

ISO 25010 is part of the ISO 25000 series of standards for software and data quality and defines software product quality under nine categories. It is included in this glossary definition to show that there are different opinions and descriptions of quality characteristics. Do you agree that checking only these would help define product quality? Or that all of these should be checked for every project?
Functional suitability - the degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions
Performance efficiency - the degree to which a product performs its functions within specified time and throughput parameters and is efficient in the use of resources (such as CPU, memory etc.)
Compatibility - degree to which a product, system or component can exchange information with other products, systems or components, and/or perform its required functions while sharing the same common environment and resources
Interaction capability - degree to which a product or system can be interacted with by specified users to exchange information via the user interface to complete specific tasks in a variety of contexts of use
Reliability - degree to which a system, product or component performs specified functions under specified conditions for a specified period of time
Security - degree to which a product or system defends against attack patterns by malicious actors and protects information and data
Maintainability - degree of effectiveness and efficiency with which a product or system can be modified to improve it, correct it or adapt it to changes in environment, and in requirements
Flexibility - degree to which a product can be adapted to changes in its requirements, contexts of use or system environment
Safety - degree to which a product under defined conditions avoids a state in which human life, health, property, or the environment is endangered
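Several of these characteristics can be checked in automated tests alongside functional behaviour. A minimal sketch, assuming an invented search function and an arbitrary one-second time budget, pairing a functional-suitability check with a performance-efficiency check:

```python
# Hypothetical sketch: asserting on a quality characteristic (performance
# efficiency) as well as functional behaviour. The budget is illustrative.
import time

def search(items, target):
    # Invented function under test: membership lookup via a set.
    return target in set(items)

data = list(range(100_000))

start = time.perf_counter()
found = search(data, 99_999)
elapsed = time.perf_counter() - start

assert found            # functional suitability: the right answer
assert elapsed < 1.0    # performance efficiency: within the time budget
print("functional and performance checks passed")
```

In practice a real performance check would use many runs and realistic load, but the idea of treating a quality characteristic as a testable requirement is the same.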
A look at 9 key agile perspectives and what might come next.
A specialist in software testing is someone with a depth of knowledge, technical skill, or even expert-level understanding of a testing type. Some well-known examples of testing types that have specialists are security, performance, automation, accessibility and, more recently, AI (both testing AI applications and testing the machine learning (ML) models and large language models (LLMs) that power AI tools).
A generalist is a software tester who has acquired at least good or intermediate knowledge of several software testing areas. Sometimes described as comb-shaped or T-shaped testers, they can do initial testing in multiple areas. While that testing might not be as deep as a specialist's, it can identify potential issues or risks quickly and allow specialists to focus on the nuances particular to their discipline.
What is a bug bash?

A bug bash is a focused and most often time-boxed event where a diverse group of people, including roles like developers, testers, product owners, designers, and even folks from support or marketing, come together to find as many bugs as they can in a piece of software.
Think of it as a concentrated testing effort: the more eyes you have on the product or application, the more likely you are to uncover issues or edge cases that might otherwise go undetected. The aim is not just to find bugs but also to build a greater shared understanding of the software's quality throughout the wider team. It also allows teams to get different viewpoints on potential problems or usability.

Sometimes there can be a bit of friendly competition involved, maybe even some prizes for finding the most fun, silly, or most critical bugs. It is all about getting everyone involved in making the software better.

Generally, a facilitator will be the central point for reporting findings and collating information. A facilitator can ask individuals to do certain tasks, follow user journeys, or even hand out exploratory test charters or goals. While there is no single definitive way of running a bug bash, the primary element of a diverse group swarming on software is present no matter how it is run.
A brief introduction to software testing from its origins, to definitions and its value to working in teams and software development
Accessibility should be a primary consideration when building MVPs, pilot programs, or any quick builds. You will save a ton of time and money later if you do.
Adaptable test design in software testing mostly sits in the exploratory testing world but can be applied throughout the testing process. It has various benefits, such as increasing the speed of response to change and reducing cost. The term refers to the practice of creating test plans, cases, and strategies that can be easily adjusted. Reasons for adjustment could be to accommodate changes in the project, such as new requirements, updated features, or shifts in priorities. Adaptable test design ensures that testing activities remain relevant and effective as the software evolves, especially in dynamic environments like Agile or DevOps. While it is primarily part of the core of exploratory testing, it can also be applied to planning and risk-based testing. It is about remaining flexible where possible and adapting to new priorities or knowledge.
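One practical way to keep test design adaptable is to separate the test data from the test logic, so a new requirement means editing a table rather than rewriting tests. A minimal sketch, assuming a hypothetical apply_vat function and invented cases:

```python
# Hypothetical sketch of adaptable test design: cases are data, so shifting
# priorities or new requirements mean adding rows, not rewriting test code.
CASES = [
    # (description, input amount, expected total)
    ("standard rate applied", 100.0, 120.0),
    ("zero amount stays zero", 0.0, 0.0),
    ("pennies rounded to 2dp", 9.99, 11.99),
]

def apply_vat(amount, rate=0.20):
    # Invented function under test: add 20% VAT, rounded to 2 decimal places.
    return round(amount * (1 + rate), 2)

for description, amount, expected in CASES:
    actual = apply_vat(amount)
    assert actual == expected, f"{description}: got {actual}"
print(f"{len(CASES)} cases passed")
```

When a requirement shifts, say a new reduced rate, the change is a new row (or a new rate column) rather than a new test function, which is what keeps the design easy to adjust.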
Ady Stokes wearing an MoT planet logo on the dock at the British Virgin Islands. There's the front of a cruise ship in the background, with the bay stretching across to a hillside.
Context in software testing is hard to define because it is influenced by so many different factors. The three bedrocks of time, cost, and quality can all make huge differences to a project's context and therefore to the testing too. While it can be difficult to define in general terms, understanding the context your testing will take place in can be very beneficial in supporting quality testing, whether there are time constraints, regulation or compliance challenges, high levels of complexity, or indeed a million and one other things that can influence it. Context will help you discover risks, empathise with users, and prioritise testing efforts. Context is in some ways like a sea: it ebbs and flows, and its tides shift as things from small pebbles to the moon influence and affect it. It can be shifted or changed by new information or new discoveries. Every project, no matter how similar to another, has its own unique set of circumstances, constraints, and objectives.