Daria Tsion
Head of QA
she/her
Open to: work, writing, teaching, speaking, mentoring, internships, CV reviews, podcasting, meeting at MoTaCon 2026, and reviewing conference proposals

Head of QA with 11+ years of experience in building QA processes, leading teams, and driving test automation. Passionate about AI in QA, process optimization, and leadership in quality.

Achievements

Career Champion
Club Explorer
Bio Builder
MoT Streak
In the Loop
Collection Curator
Glossary Contributor
Photo Historian
Author Debut
99 and Counting
Inclusive Companion
Social Connector
Open to Opportunities
Picture Perfect
Kind Click
Supportive Clicker
Encouragement Giver
Goal Setter
Moment Maker


Activity

Testing with feature flags: what we expected and what actually happened

Contributions

Testing with feature flags: what we expected and what actually happened
Feature flags entered our workflow as a quality safeguard.
Test surface
The test surface in feature testing represents the total, combined area of all public methods, parameters, and application programming interfaces (APIs) of a component that must be validated to ensure it works correctly. It defines the scope of testing required: a larger, more complex surface area necessitates more in-depth testing. It includes all possible variations introduced by factors such as feature flags, environment differences (e.g. dev, staging, production), user segments, and rollout strategies. Understanding and managing the test surface is important for effective test planning, as it helps teams identify what needs to be tested, avoid gaps in coverage, and reduce the risk of issues caused by untested combinations of conditions.
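Because the surface grows multiplicatively with each new dimension, it can help to enumerate it explicitly. Below is a minimal sketch; the flag, environment, and segment values are made-up stand-ins for whatever your own system defines:

```python
from itertools import product

# Hypothetical dimensions of one component's test surface; the real values
# would come from your flag system, deployment targets, and user segments.
flag_states = [("new_checkout", True), ("new_checkout", False)]
environments = ["dev", "staging", "production"]
user_segments = ["internal", "beta", "general"]

# Each combination is one point on the test surface. Listing them shows how
# quickly the surface expands: 2 x 3 x 3 = 18 conditions for a single flag.
surface = list(product(flag_states, environments, user_segments))
print(f"{len(surface)} combinations to consider")
for (flag, enabled), env, segment in surface:
    print(f"flag={flag}({enabled}) env={env} segment={segment}")
```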
Quality narratives and the circles of consequence - Ep 121
Contributors: Cassandra H. Leung, Simon Tomes, Judy Mosley, Ujjwal Kumar Singh, Demi Van Malcot, Heleen Van Grootven, Daria Tsion
Set meaningful goals, communicate quality through risks and real-world consequences, and turn small wins like building a quality narrative into career growth.
My 2026 Goals ✨
The end of 2025 was emotionally one of the hardest periods for me since 2022. Living in a world full of political changes while war is happening in your own country is not easy. Blackouts, massive ...
Five practical ways to use AI as a partner in Quality Engineering
Use these structured prompting techniques to improve the quality and usefulness of AI output in testing workflows.
Prompt chaining
Prompt chaining is a technique where the output of one prompt is used as the input for the next prompt in a sequence. This allows complex tasks to be broken into smaller, more manageable steps, enabling deeper analysis, comparison, and refinement across large or complex problem spaces.
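A minimal sketch of the idea, with a stand-in ask_model() helper in place of a real LLM client (the function name, prompts, and workflow are illustrative assumptions, not any specific tool's API):

```python
def ask_model(prompt: str) -> str:
    # Stand-in for a real LLM call; swap in whatever client you use.
    return f"[model response to: {prompt[:60]}]"

def chained_test_ideas(requirement: str) -> str:
    # Step 1: extract testable behaviours from the raw requirement.
    behaviours = ask_model(
        f"List the testable behaviours in this requirement:\n{requirement}"
    )
    # Step 2: the previous output becomes the next prompt's input.
    risks = ask_model(
        f"For each behaviour below, name the biggest risk:\n{behaviours}"
    )
    # Step 3: refine the accumulated context into concrete test ideas.
    return ask_model(f"Write one test idea per risk:\n{risks}")
```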
Self-critique prompting
Self-critique prompting describes the practice of asking AI to review its own output against specific criteria, such as readability, coverage, or standards, and then improve the result based on those findings. This mirrors human review processes and helps identify gaps, inconsistencies, or improvement opportunities before the output is used in production.
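A sketch of a draft-critique-revise loop under the same stand-in ask_model() assumption as above; the criteria string is an illustrative example:

```python
def ask_model(prompt: str) -> str:
    # Stand-in for a real LLM call; swap in whatever client you use.
    return f"[model response to: {prompt[:60]}]"

def with_self_critique(task_prompt: str, criteria: str) -> str:
    draft = ask_model(task_prompt)
    # Ask the model to review its own draft against explicit criteria.
    critique = ask_model(
        f"Review this output against these criteria: {criteria}.\n"
        f"List concrete problems.\n\nOutput:\n{draft}"
    )
    # Ask for a revision that addresses the critique before anyone uses it.
    return ask_model(
        f"Rewrite the output to fix these problems:\n{critique}\n\n"
        f"Original output:\n{draft}"
    )

revised = with_self_critique(
    "Write test cases for a password reset form.",
    "coverage of edge cases, readability, one assertion per case",
)
```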
Iterative prompting
Iterative prompting is an approach where prompts are refined over multiple steps based on previous outputs. Instead of expecting a perfect result from a single prompt, the user reviews, adjusts constraints, and asks follow-up questions to gradually improve accuracy, quality, and relevance.
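In practice the refinement loop is driven by a human reviewer; the sketch below collapses that review into a fixed follow-up purely for illustration, again using the stand-in ask_model():

```python
def ask_model(prompt: str) -> str:
    # Stand-in for a real LLM call; swap in whatever client you use.
    return f"[model response to: {prompt[:60]}]"

def iterate(prompt: str, rounds: int = 3) -> str:
    answer = ask_model(prompt)
    for _ in range(rounds):
        # Each round feeds the previous output back with tightened constraints.
        answer = ask_model(
            f"Here is your previous answer:\n{answer}\n\n"
            "Remove anything not asked for, and address any constraint from "
            f"the original request that was missed:\n{prompt}"
        )
    return answer
```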
Role-based prompting
Role-based prompting is a prompting technique where the user explicitly defines the role or perspective the AI should take before generating a response. By assigning a role such as a tester, automation engineer, or reviewer, the AI output becomes more focused, context-aware, and aligned with real-world responsibilities and expectations.
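A sketch of prepending an explicit role before the task, with the same stand-in ask_model() and illustrative role strings:

```python
def ask_model(prompt: str) -> str:
    # Stand-in for a real LLM call; swap in whatever client you use.
    return f"[model response to: {prompt[:60]}]"

def as_role(role: str, task: str) -> str:
    # The role preamble steers vocabulary, focus, and assumed responsibilities.
    return ask_model(
        f"You are {role}. Answer from that perspective, drawing on its "
        f"responsibilities and expectations.\n\nTask: {task}"
    )

# The same task framed through two different roles yields different emphases.
tester_view = as_role("an exploratory tester", "Assess the risks of this login form.")
automation_view = as_role("an automation engineer", "Assess the risks of this login form.")
```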
Feature flag
Contributors: Aj Wilson
At a basic level, a feature flag acts as a conditional switch. The code for a feature can be deployed to production or any other environment, but its visibility or behaviour is controlled by configuration in a specific system. By changing the flag’s state, teams can turn the feature on or off instantly, often for specific users, environments, or percentage-based rollouts. This can make the testing process much clearer and easier.
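A minimal sketch of that conditional switch; the dict-backed flag store and the checkout functions are made-up stand-ins for a real flag service and feature code:

```python
FLAGS = {"new_checkout": False}  # flipping this value toggles behaviour instantly

def legacy_checkout_flow(cart: list) -> str:
    return f"legacy checkout for {len(cart)} items"

def new_checkout_flow(cart: list) -> str:
    return f"new checkout for {len(cart)} items"

def checkout(cart: list) -> str:
    # The new code path is deployed either way; the flag decides whether
    # anyone actually reaches it, which is what keeps rollouts reversible.
    if FLAGS.get("new_checkout", False):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)
```

A real system would resolve the flag per user, environment, or rollout percentage rather than from a module-level dict.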
Test log
Contributors: Rosie Sherry
A test log is a record of information generated during test execution. It typically includes details such as test steps, timestamps, system responses, errors, and execution status, and is used to support debugging, failure analysis, and understanding test behaviour.
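A sketch of producing such a log with Python's standard logging module; the step names and file name are illustrative, and real test runners emit far richer records:

```python
import logging

logging.basicConfig(
    filename="test_run.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",  # timestamped entries
)
log = logging.getLogger("checkout-test")

def run_step(name, action):
    # Record each step's start, outcome, and any error with a timestamp.
    log.info("STEP start: %s", name)
    try:
        result = action()
        log.info("STEP pass: %s -> %r", name, result)
        return result
    except Exception as exc:
        log.error("STEP fail: %s -> %s", name, exc)
        raise

run_step("open cart", lambda: "cart page loaded")
run_step("apply voucher", lambda: "10% discount applied")
```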