TestBash San Francisco 2019

November 6th 2019 08:00 - November 7th 2019 18:00

The best in test is coming back to the west! The US edition of our software testing conference, TestBash, is returning to the home of the Golden Gate Bridge, San Francisco.

We’re heading back to the Cowell Theatre for a two-day single-track conference on November 6-7, 2019.

We've got 17 talks, a few surprises and, of course, our famously fun 99-second talks.

On both days you can expect a wonderful community coming together in a friendly, professional and safe environment. We think you'll feel right at home when you arrive!

Tickets start from $999. Grab yours now!

Conference
November 6th 2019 08:00 - November 7th 2019 18:00
Risk or Fear: What drives your testing? Many say they do risk-based testing, but walking the walk is proving more challenging than organizations realized. Teams are finding they end up testing “everything”, which is counterproductive to a risk-based approach. The testing community must ask itself a truth-telling question: what motivates your test coverage decisions, fear or risk?
 
Many teams are realizing that, after implementing a risk-based strategy, they continue to test from a place of fear as opposed to calculated risk. Others never reassess or renegotiate risk as their application matures.
 
As the application under test matures, so must your strategy.

Takeaways

  • Discernment: what is the real motivator behind your test decisions?
  • Embracing the concept of “good enough” quality
  • Reassessing risk by integrating new data
  • How to overcome bias created by fear and previous failures
 
Jenna Charlton
Jenna is a senior tester with 8 years of experience. When she's not testing, she's going to pro wrestling shows and concerts with her husband Bob, serving as a deacon in her church, and cuddling the 3 feline overlords that share her home.
Behavior-Driven Development has become a popular methodology for putting quality first in software development. However, it is a polarizing process: people love it or hate it. Process changes can be tough for a team, too. Are there beneficial aspects of BDD that can easily be incorporated into existing processes? Will they help my team instead of wasting our time? Absolutely yes!
 
Users ultimately care about software behaviors, and so should we. In this talk, we will cover three quality-centric, behavior-driven practices that can help any team develop better software:
  1. Problems with miscommunication? Three Amigos Collaboration
  2. Problems with poor planning? Example Mapping
  3. Problems with missed deadlines? Snowball Test Automation
 
These practices are meant to be a pragmatic approach to BDD, with an emphasis on philosophy before process. They can help any team, even those not doing pure BDD.
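
To make one of these concrete: below is a minimal hypothetical sketch (mine, not material from the talk) of how the output of an Example Mapping session might be captured as structured data in Python. The story, rules, examples, and questions are invented.

    from dataclasses import dataclass, field


    @dataclass
    class Rule:
        """A business rule (blue card) and the examples (green cards) illustrating it."""
        summary: str
        examples: list[str] = field(default_factory=list)


    @dataclass
    class StoryMap:
        """The output of an Example Mapping session for one story (yellow card)."""
        story: str
        rules: list[Rule] = field(default_factory=list)
        questions: list[str] = field(default_factory=list)  # red cards: open unknowns


    # Hypothetical output for a login story.
    login_map = StoryMap(
        story="User logs in to the dashboard",
        rules=[
            Rule("Valid credentials grant access",
                 examples=["standard user with correct password sees the dashboard"]),
            Rule("Accounts lock after repeated failures",
                 examples=["third wrong password in a row locks the account"]),
        ],
        questions=["Does a locked account unlock automatically after a timeout?"],
    )

    # A pile of red cards signals the story is not yet ready for development.
    if len(login_map.questions) > 2:
        print("Story needs refinement before sprint planning")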

Takeaways

  • To understand the purpose and goals of BDD
  • To learn three practices that can help any team:
    • Three Amigos collaboration to solve communication problems
    • Example Mapping to solve poor planning problems
    • Snowball Test Automation to solve missed deadline problems
  • To adopt a pragmatic view of software development and testing processes
 
Andrew Knight
Andy Knight is the “Automation Panda” - an engineer, consultant, and international speaker who loves all things software. He specializes in building robust test automation solutions from the ground up. He currently works at PrecisionLender in Cary, NC. Read his tech blog at AutomationPanda.com, and follow him on Twitter at @AutomationPanda.
The term "TestOps" (along with related terms like "DevTestOps", "DevSecOps", and others) gets used in two very different contexts: as an important piece of the Agile/DevOps/Continuous Delivery movements, and as a condemnation of attaching -Ops as a suffix to everything under the sun.

Let's explore this. As a person with "TestOps" in my job title, I'd like to take the audience on a journey through the history of "TestOps" as a useful term to help describe the close working relationship between infrastructure teams and testers. We'll also look at how test environments themselves can be used as part of a modern tester's toolkit, and how TestOps practices can help prevent issues from arising in your production infrastructure. Throughout, I think we can examine the usage of the term "TestOps" itself - and see if it's a useful abstraction for you and your test organization.

Takeaways

  • Is "TestOps" just a cool name, or is there something useful in defining a close alignment between Testers and Ops?
  • What are the benefits of test teams taking ownership of their own environments?
  • What does the day-to-day work of someone with "TestOps" in their job title look like?
Alexander Langshall
Alex Langshall is a TestOps Engineer and Release Manager for Lucid Software. His day-to-day work involves testing architecture-heavy features and minimizing the risk of regular weekly deploys. Alex works remotely from the Portland, Oregon area where he lives with his spouse, kiddo, and cat.

"The user" comes up frequently in testing -- understanding your users, their workflows, and ensuring users have a positive experience with your product is a critical aspect of testing. We ensure our products are easy to use and can handle invalid user inputs, however, many testers don't understand the most important aspect of the user -- their brain and how it works. 

My talk introduces testers to cognitive psychology, and establishes how gaining a better understanding of how users retain information, complete tasks, and process visual input can improve their testing. 

Takeaways

  • Gain a basic understanding of cognitive psychology, and why understanding cognitive psychology is critical to thinking like a user.
  • Gain an understanding of key cognitive psychology principles that impact product usability, and how cognitive psychology and usability research have been conducted.
  • Learn methods of testing that focus on product usability and accessibility, how to spot common usability concerns, and understand why those usability issues are largely universal across user personas.
  • Leave with resources to further dive into usability concepts.
Jessica Versaw
My career has always centered around users -- in my seven years at Hudl, I've worked in customer support, sales, and as a Quality Engineer. I'm currently working on a Masters Degree in Human Computer Interaction at Iowa State University, and have a BA in English as well as a BJ in Advertising & Public Relations from The University of Nebraska. I'm a sucker for solid microcopy and bold, happy design. I have a strong interest in cognitive psychology and how it can be used to create intuitive design. Research makes me downright giddy (which explains the English degree). When I'm not designing or writing, I enjoy being outdoors, hunting down the perfect mid-century antique (ask me about Broyhill Brasilia, because I'll definitely talk about it), spinning, listening to practically any true crime podcast, and taking way too many photos of my baby and pets. I also enjoy volunteering with organizations that support women in technology, and co-founded Lincoln's Girls Who Code chapter.
Imagine being given the opportunity to start from scratch and create your ideal testing job in your company. From day-to-day tasks such as test strategy creation and testing techniques, to the tools you will use, to creating a culture of quality in the company, you have complete control to mold your position into exactly what you want it to be. With complete buy-in from upper management, you start creating your perfect tester ecosystem, and over time you continuously improve that ecosystem based on mistakes you've made and input from new teammates.
 
I was lucky enough to find myself in this unique situation when I became the first software tester for Johnson Health Tech, one of the top 3 fitness and wellness companies in the world. Throughout my time at Johnson I have learned how to drive change in a workplace where people needed convincing that testing is a vital part of the software development life cycle. Because I started with a blank canvas, my job has seen many iterations as it continues to change and grow organically over time. 
 
During my session, I will share the lessons I’ve learned from my own journey so that people can learn how to approach enacting change in their own professional lives. I will take my audience through the challenges I’ve faced throughout this growing process, touching on subjects such as shifting test strategy creation methods and tactics to promote a culture of quality, so people can build upon my own experiences and take those lessons back with them. By sharing my story, I expect that attendees will be better equipped to lobby for change in their workplace and will feel invigorated to do so.

Takeaways

  • Strategies to enact the change you want to see in your company
  • A push to start critically thinking about testing tasks/methodologies in place at your current company to ensure they are the best solutions for your team
  • Change won't always work perfectly right away! If at first you don't succeed, don't dwell on your mistakes but learn and grow from them!
  • Tools to recognize when something isn't working, and how to move forward productively.
  • Even by starting with something small, you can set out on the path to a better culture of quality in your company.
Matthew Record
Matthew Record is a Software Test Engineer at Johnson Health Tech, a worldwide leader in fitness solutions. Record was the first software tester hired at Johnson and has had the unique experience of being a pivotal piece in the formation and growth of the testing group in his company today. With over five years of experience advocating for testers and implementing change, he is excited for the opportunity to share the lessons he has learned throughout his testing career with the TestBash community.
Automation is judged by its return on investment (ROI): the time spent writing and maintaining tests compared to the time saved. To ensure solid returns, a great deal of effort is put into creating automation frameworks and choosing the right tests to automate in order to optimize for successful results.
 
If automation is an investment, then this traditional approach to test automation is in effect creating a niche market. The problem is our potential ROI is limited by the size of this market, the fraction of completely automatable tests which are deemed worth the effort to automate and maintain.
 
Is your framework designed for writing automated tests, or for automated testing?
 
To truly maximize ROI, you need to stretch the total addressable market (TAM) your automation can serve. We call them automation frameworks, but typically they are single-purpose software, full of assumptions around that purpose and tightly coupled to executing complete end-to-end test cases.
 
The focus needs to be on building a set of tools that can be composed into a platform to empower exploration as well as test execution - a Testing SDK (Software Development Kit).
 
Inspired by the Richardson Maturity Model for REST APIs, I will share a model of automated testing maturity consisting of four levels:
 
Level 0: Tests exist, but a framework does not. UI element locators live inside tests, or record-and-playback tools are used.
 
Level 1: A framework exists; UI concerns are encapsulated.
 
Level 2: Multi-layer framework. Beyond the UI, APIs or other components are included for arranging state or manipulating the system, but the only consumer is tests.
 
Level 3: Testing SDK. Separate libraries/packages, consumable outside of tests and by multiple consumers; mixes automation and manual testing; open-source potential.
 
At each level in the model, I will describe the benefits to the business and to testers, then provide steps for leveling up your automation.
 
Attendees will see that embracing a Testing SDK provides a holistic strategy for test automation.
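
As an illustration of the jump from Level 0 to Level 1, here is a minimal hypothetical sketch (not from the talk) using Selenium in Python; the page URL and element IDs are invented.

    from selenium import webdriver
    from selenium.webdriver.common.by import By


    # Level 0: locators live inside the test itself.
    def test_login_level_0():
        driver = webdriver.Chrome()
        driver.get("https://example.com/login")  # invented URL
        driver.find_element(By.ID, "username").send_keys("tester")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title
        driver.quit()


    # Level 1: UI concerns are encapsulated in a page object the test consumes.
    class LoginPage:
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "submit")

        def __init__(self, driver):
            self.driver = driver

        def log_in(self, username, password):
            self.driver.find_element(*self.USERNAME).send_keys(username)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()


    def test_login_level_1():
        driver = webdriver.Chrome()
        driver.get("https://example.com/login")
        LoginPage(driver).log_in("tester", "secret")
        assert "Dashboard" in driver.title
        driver.quit()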
 

Takeaways

  • Common pitfalls in automation frameworks
  • Automation is more than writing tests
  • Enable and Instrument Exploratory Testing
  • SDKs provide more options for contribution
Brendan Connolly
Brendan Connolly is an experienced Software Tester, Developer and blogger. Currently he is a Senior Quality Engineer at Procore Technologies in Santa Barbara, California. He's written tests at all levels from unit and integration tests to API and UI tests and is responsible for creating and executing testing strategies while using his coding powers for developing tooling to help make testers lives easier.
Automation! It may be a buzzword these days, but it’s also a useful approach to include in your testing and quality toolkit. If you’re new to automated testing, you’re probably starting off with a lot of questions: Which tool or framework should you use? How do you know which tests to automate? Why is automated testing useful for you and your team? The options for automated testing are wide open, and you may feel overwhelmed when you’re first getting started. 
 
I want to help turn some of your unknown unknowns into known unknowns! The goal of automation is to create anti-fragile tests that are easy to understand, maintain, and hand off. I’ll help you get closer to that goal by answering some of the questions that I had to tackle when I first started with automated testing, and also give you some other questions to think about for your own needs:
 
  • What’s important to consider - and what isn’t - when you’re choosing a tool or framework
  • How to decide which tests to automate, and why
  • Best practices for actually writing the tests, like separation of concerns and useful failure messages
 
Your own needs may differ, but being able to set a solid foundation for automated testing is useful for everyone. I hope you’ll come away from this talk feeling confident that you know how to get started with automated testing, and better prepared when your challenges do come along! 
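
As a small taste of that last bullet, here is a hypothetical pytest-style sketch (mine, not the talk's material) contrasting a test written with inline magic numbers against one that separates arrange/act/assert and fails with a useful message. The apply_discount function is an invented system under test.

    def apply_discount(price: float, percent: float) -> float:
        """Invented system under test: returns the price after a percentage discount."""
        return round(price * (1 - percent / 100), 2)


    # Harder to maintain: the inputs and expectation are anonymous magic numbers.
    def test_discount_terse():
        assert apply_discount(100.0, 20.0) == 80.0


    # Easier to hand off: arrange/act/assert are separated, and the failure
    # message carries the context a reader needs to diagnose the problem.
    def test_discount_descriptive():
        price, percent = 100.0, 20.0             # arrange
        expected = 80.0
        result = apply_discount(price, percent)  # act
        assert result == expected, (             # assert, with a useful message
            f"apply_discount({price}, {percent}) returned {result}, expected {expected}"
        )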

Takeaways

  • Understanding that the goal isn’t “automate everything”, but rather to automate the repetitive checks so you can work on testing higher-risk items
  • Learning how to create a good test structure, such as granularity, independent tests, useful failure messages
  • How to collaborate with people on your team to get their support for the time and effort of implementing automated testing
 
Angela Riggs
As a QA engineer, Angela’s work has ranged from feature testing to leading department-wide process changes. She believes that empathy and curiosity are driving forces of quality, and uses both to advocate for users and engineering teams. Outside of work, she enjoys exploring the aisles of Powell’s and the forests of the PNW. She has an enthusiasm for karaoke, and serious debates about what can truly be categorized as a sandwich.
Arranging the playing cards in a deck to be in one’s favor is called stacking the deck. Outside of card playing, we use the term more generally to mean arranging a situation to increase our chances of a favorable outcome. When it comes to automation endeavors, the meaning is no different. Specifically, we want to arrange our architecture, implementation, and usage patterns to be appropriate for our endeavor’s desired life-span.
 
One approach to future-proofing is to focus less on the automation framework and more on the automation stack. An automation stack is a layered automation architecture where each layer builds upon the previous one and provides an audience-appropriate interface to the lower levels’ capabilities. This layered approach helps extend an implementation’s longevity by increasing the portability of the implementation across frameworks and across tools.
 
Join me as we walk through how layers can be a valuable part of an automation implementation, some caveats that should be considered when layering an architecture, and several examples of layered architectures of which I've been a part. Come and learn ways to stack the deck in YOUR favor. 
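
As one hypothetical illustration (my sketch, not an architecture from the talk), here is a three-layer stack in Python: a protocol layer that only knows HTTP, a domain layer that exposes business actions, and tests as just one of several possible consumers. The endpoint and class names are invented.

    import requests


    # Layer 1: a thin protocol layer that knows only how to talk HTTP.
    class ApiClient:
        def __init__(self, base_url: str):
            self.base_url = base_url
            self.session = requests.Session()

        def post(self, path: str, payload: dict) -> requests.Response:
            return self.session.post(f"{self.base_url}{path}", json=payload)


    # Layer 2: a domain layer exposing business actions, not HTTP details.
    class Accounts:
        def __init__(self, client: ApiClient):
            self.client = client

        def create_user(self, name: str) -> str:
            response = self.client.post("/users", {"name": name})  # invented endpoint
            response.raise_for_status()
            return response.json()["id"]


    # Layer 3: consumers. A test is only one of them; the same layers can serve
    # exploratory testing at a REPL, data-setup scripts, or monitoring, which
    # is exactly what widens the automation's total addressable market.
    def test_create_user():
        accounts = Accounts(ApiClient("https://api.example.com"))
        assert accounts.create_user("pat")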

Takeaways

  • Layered architectures give us options on how to use them
  • They can be used in different frameworks
  • They can be used for non-traditional automation
  • Appropriate stewardship is required
  • Appropriate logging and error messages are critical
Paul Grizzaffi
As a Principal Automation Architect at Magenic, Paul Grizzaffi is following his passion of providing technology solutions to testing and QA organizations, including automation assessments and implementations, as well as activities benefiting the broader testing community. An accomplished keynote speaker and writer, Paul has spoken at both local and national conferences and meetings. He is an advisor to Software Test Professionals and STPCon, as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas, where he is a frequent guest lecturer. Paul enjoys sharing his experiences and learning from other testing professionals; his mostly cogent thoughts can be read on his blog at https://responsibleautomation.wordpress.com/.
Hear the five lessons learned in testing, address them, and take back valuable solutions for your journey as a modern tester! In the “Do Nots” of Testing, we explore five traditional approaches that we've introduced to QA over the years. Although these approaches added value in the past, it's time to revisit them and discuss new ways to show even more value and to support modern development approaches. These lessons learned in software testing - otherwise known as the “Do Nots” - show how progress and innovation should always be at the forefront of any introduced process. Melissa will suggest different approaches and recommendations to help you either remove these pervasive activities completely from your existing testing team or replace them with more meaningful and modern approaches.
 

Takeaways

  • I'll present the top five “do nots” that testers have introduced into the industry.
  • We will discuss these items in detail:
    • why they were introduced
    • some of the misperceptions they have propagated
  • We will then discuss what to replace those “do nots” with, and how those suggestions allow for a more innovative approach to the industry.
Melissa Tondi
Melissa Tondi has spent most of her career working within software testing teams. She is the founder of Denver Mobile and Quality (DMAQ), past president and board member of the Software Quality Association of Denver (SQuAD), and Senior QA Strategist at Rainforest QA, where she helps companies continuously improve the pursuit of quality software - from design to delivery and everything in between. In her software test and quality engineering career, Melissa has focused on building and organizing teams around three major tenets - efficiency, innovation, and culture - and uses the Greatest Common Denominator (GCD) approach to determine ways in which team members can assess, implement, and report on day-to-day activities so the gap between need and value is as small as possible.
The skills necessary to be an effective test manager are not the same as those needed to be an effective tester. Supervising and coordinating testers and testing requires learning new skills, along with the significant growth necessary to be worthy of the responsibility of managing others.
 
How do you get ready to be a Test Manager? How do you identify and nurture a future Test Manager? What are the skills and traits you could look for? What red flags might you be watching for? How will you prepare them for management, and help them succeed when they get there? What should a line manager work on to prepare for the next level?

Takeaways

In this talk, I will pose some of these questions and offer some ideas, propose a body of skills needed for test managers in many contexts, and share experiences in mentoring current and future leaders. Hopefully, this will be useful for aspiring Test Managers, current Test Managers, and the leaders trying to grow them.
Eric Proegler
Eric Proegler has worked in testing for 20 years. He is a Director of Test Engineering for Medidata Solutions in San Francisco, California. Eric is the President of the Association for Software Testing. He is also the lead organizer for WOPR, the Workshop on Performance and Reliability. He’s presented and facilitated at CAST, Agile2015, Jenkins World, STARWEST, Oredev, STPCon, PNSQC, WOPR, and STiFS. In his free time, Eric spends time with family, runs a science fiction book club, and sees a lot of live literary events, music, and stand-up comedy. He also seeks out street food from all over, plays video games, and follows professional basketball.
Testers often come with the stigma of being “just testers”. We are often pressured to stay in our lane and our comfort zone. In this session, we will focus on different biases that present themselves when it comes to being “just a tester”. From personal biases to the halo effect, we will bring these biases to light, show how to recognize them, and share ways to overcome them. Coming from non-technical backgrounds ourselves, we will share our own personal experiences as well as those of fellow testers we have encountered throughout our careers in software development.

Takeaways

  • How to overcome Bias in Testing
  • How to convince people to think like a tester
  • Help others realize that we are an integral part of the development process
  • Acknowledge and be aware of the Halo Effect
  • Psychology in Testing is a real thing
Charlene Granadosin
With around 8 years of QA experience, Charlene used to be obsessed with having the highest bug count in the company, until she realized that quality, and the product, should not be defined by the number of bugs testers catch. Since then, she's been working to bridge the information gap in terms of quality by fostering a collaborative environment between testers and developers. She believes that testers are more than bug catchers and encourages her team to explore new technologies and solutions in automation and continuous integration.
Charlotte Bersamin
Charlotte is a passionate automation engineer focused on mobile automation. She is also an avid reader, certified chef, and Polynesian dancer! With over five years of software quality experience, she has developed her talents through insights gained in the industries she's worked in, from health and banking to sports media at Bleacher Report. She shares her love for software testing through mentorships and hopes to inspire others to think like a tester and question everything.

In this session, Jennifer Bonine will explore new shifts in testing paradigms. She will demonstrate an AI-first testing method that integrates with your current manual and automated testing, and show how AI can aid your app teams. Rethink where you want to spend time and money in your testing team, given the challenge that plagues most companies: too much to test and too little time. This will reposition testing and quality organizations from being the last step in the process to providing valuable insights and actionable data for your C-suite to drive business decisions.

Takeaways

  1. Ideas to reshape your test strategies
  2. An understanding of AI solutioning and where to begin implementing it
  3. An analysis of available tooling options in the AI space (vendor-agnostic)
Jennifer Bonine

Jennifer is an experienced speaker at both international and US engagements. She has keynoted testing and Agile development conferences. You can see her at Google, Agile, and testing conferences. Jennifer is passionate about bringing AI to the world's app teams and delivering AI integration with a human engagement model, while educating teams on solving challenges with an AI-first approach.

Jennifer began her career in consulting, implementing large ERP solutions. She brings with her the unique industry perspective of having been on the inside of many of the brand-name companies all of us interact with in the entertainment, media, and retail industries, among others. Jennifer believes strongly that we should do what we are passionate about in life, and believes in living your passion. She has held executive-level positions leading development and quality engineering teams for Fortune 100 companies in several industries. In a recent engagement, Jennifer served as a strategy executive and in corporate marketing for the C-suite. She enjoys the challenges of always having new problems to solve and collaborating with new clients worldwide.


Software vendors and practitioners are using artificial intelligence (AI) and machine learning (ML) to create a new wave of test automation tools. Such tools leverage autonomous and intelligent agents to explore, model, reason and learn about a software product. But how do these testing robots really work? Is this technology any good? And can we really trust it to validate software? Tariq King will introduce you to the world of AI-driven test automation and discuss its benefits, challenges and other limitations. Learn how test bots use AI/ML technologies to mimic human testing activities such as discovering the application, generating test inputs, and verifying expectations. Come and experience the test bots in action through a demonstration of open-source AI-driven test automation prototypes.
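
The demonstration itself is the place to see real prototypes; purely to give a flavor of the idea, here is a hypothetical, deliberately naive exploring bot in Python with Selenium (my sketch, not one of the prototypes). Real AI-driven bots replace the random choice with learned models and use far richer oracles.

    import random

    from selenium import webdriver
    from selenium.webdriver.common.by import By


    def run_test_bot(start_url: str, steps: int = 20) -> None:
        """Discover links, generate click actions, and verify a simple expectation."""
        driver = webdriver.Chrome()
        driver.get(start_url)
        try:
            for _ in range(steps):
                links = driver.find_elements(By.TAG_NAME, "a")  # discover the app
                if not links:
                    break
                random.choice(links).click()                    # generate an input
                # Verify a (very) crude oracle about every page we land on.
                assert "error" not in driver.title.lower(), (
                    f"Bot hit an error page at {driver.current_url}"
                )
        finally:
            driver.quit()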

Tariq King
Tariq King is the Head of Quality at Ultimate Software. With over fifteen years' experience in software testing research and practice, Tariq leads a team of directors, architects, and engineers responsible for guidance, strategy, innovation and outreach in software quality and performance engineering. His areas of research interest include software testing, artificial intelligence, autonomic and cloud computing, model-driven engineering, and computer science education. Tariq has published over 40 research articles in peer-reviewed IEEE and ACM journals, conferences, and workshops, and has been a keynote and invited speaker at international software conferences in industry and academia. He is the co-founder of the Artificial Intelligence for Software Testing Association. Contact Tariq via LinkedIn or Twitter.

For the last 5 years, I’ve been lucky enough to work on bleeding edge software initiatives using things like microservices, containerisation, cloud-based platforms and CICD to deliver more value more quickly to our customers.

So, how can testers keep pace in these incredibly fast-paced environments while trying to test these highly volatile, complex distributed systems?

This scary new world presents an entirely new set of challenges and risks for teams and consequently requires a whole team approach to testing and development.

We as testers need to unlearn our old ideas about testing and learn to accept that we can’t predict system behaviour and that failure is inevitable. The truth is, our job is no longer finished when we deploy to production; it’s just beginning.

In the face of this reality, we need to focus not just on prevention but also on detection and recovery so that our teams can move quickly and safely with justifiable confidence.

Teams need to build observability into their systems from the start so that they can quickly detect important problems, isolate the cause, and remediate the issue with minimal impact to the customer.

In this session I’ll talk about my observability journey and the lessons we’ve learned. I’ll discuss how I’ve used mapping exercises, models and workshops to help development teams embrace reality and build observability into both their software systems and the way they work.
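
Mapping exercises and workshops don't reduce to code, but as one small illustration of what “building observability in” can mean at the code level, here is a hypothetical sketch (mine, not Rob's material) of structured, correlation-ID-tagged event logging in Python using only the standard library.

    import json
    import logging
    import time
    import uuid

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("checkout")


    def emit(event: str, **fields) -> None:
        """Emit one structured event; a log pipeline can index every field."""
        log.info(json.dumps({"event": event, "ts": time.time(), **fields}))


    def process_order(order_id: str) -> None:
        # A correlation id ties every event from one request together, so an
        # on-call engineer can isolate a single failing flow in production.
        correlation_id = str(uuid.uuid4())
        emit("order.received", order_id=order_id, correlation_id=correlation_id)
        try:
            # ... business logic would live here ...
            emit("order.completed", order_id=order_id,
                 correlation_id=correlation_id, duration_ms=42)
        except Exception as exc:
            emit("order.failed", order_id=order_id,
                 correlation_id=correlation_id, error=str(exc))
            raise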

Rob Meaney

Rob Meaney is a tester who loves tough testing and software delivery problems. He works with teams to help create products that customers love and know they can rely upon. Although he enjoys learning about software delivery in general, he’s particularly interested in Quality Engineering, Test Coaching, Testability, and Testing in Production.

Currently, he’s working as Head of Testing & Test Coach for Poppulo in Cork, Ireland. He’s a regular conference speaker, an active member of the online testing community and co-founder of Ministry of Testing Cork.

Previously he has held positions as Test Manager, Automation Architect and Test Engineer with companies of varying sizes, from large multinationals like Intel, Ericsson & EMC to early-stage startups like Trustev. He has worked in diverse areas from highly regulated industries like safety automation & fraud detection to dynamic, exciting industries like gaming.


My journey at Jane started with them bringing me in as their first Automation Engineer, tasked with writing all of their automated tests. However, after evaluating the organization's maturity level, current status, and actual expectations, I quickly learned that writing test automation wasn't exactly what they needed at that time.

Their processes and culture were in flux with many devs and product owners wanting to keep things with a "startup" feel that really meant "let me do what I want". There were no QA standards or practices, testability was a foreign word, and it seemed like no one understood how to write automated tests. However, everyone seemed to agree that they _wanted_ automated tests and had started and failed multiple times.

This story probably sounds familiar to many of you because it's such a common pattern in many companies. Every time I'm brought in to do some training, consulting, or presenting, the same questions and comments come up:

  • The QA team at the company has no idea how to implement the things they want.
  • Devs own the automated testing and QA owns the UI tests, but p0 bugs get out anyway.
  • We want to own test automation but don't know how to or where to start.

In the end, there is always an "Us versus Them" mentality. Sometimes it's even QA Engineers versus Automation Engineers! Regardless, this is one of the biggest reasons why companies fail at these Agile/DevOps transformations.

Takeaways

I want to show what I did at my current company. I will walk through my strategy, our structure, how we integrate QA successfully in an Agile environment, our test infrastructure and automation, and how we reached the point where testing is top-of-mind for every member of the team. I want to talk about our transformation, where leadership is excited and invested in QA, and why our devs love our automated tests and are 100% bought in to testing.

Carlos Kidman

Carlos Kidman is the QA Manager at Jane.com - but who would have thought that Magic: The Gathering would introduce him to QA in the first place? Now it’s an integral part of his life. He started in QA Engineering, but quickly moved into Test Automation and grew to appreciate each player and role in the development game. He wants to share the love and joy he’s found in QA and believes that a rising tide raises all ships. Carlos specializes in creating Test Automation Frameworks for UI, Integration, and Service tests, scaling tests and empowering developers and testers with CI/CD tools like Jenkins, Docker, and Kubernetes, and works closely with Infrastructure and DevOps organizations.

Although he is currently a QA Manager at Jane.com, he is also very active in the community. He is the founder of QA at the Point and QA Utah and is also a board member of DevOpsDays.


In the era of DevOps and continuous deployment, more and more organisations are demanding a move from lengthy release cycles to shorter deployments - occurring weekly and sometimes even daily. To accomplish this, test automation is not only required, but is now an integral piece of the continuous integration pipeline. This is a stark contrast to how test automation was viewed not very long ago. In the past, teams treated test automation as a side project - a stepchild like Cinderella. But with its newly discovered importance, test automation is now the “belle of the ball”.

So, how does this change how we develop test automation? In this talk, Niranjani will share her experiences driving test automation from rags to riches at companies such as Lyft and Pinterest. She’ll discuss the practices of building a team and culture to support test automation, as well as the failures and mishaps they endured along the way. She’ll also share lessons learned about preparing tests and infrastructure for this new and richer lifestyle as part of CI/CD.

Join Niranjani on this magical journey to transform your test automation from rags to riches. You’ll learn how to dress up your test automation with design patterns that improve CI efficiency, and embark on a whimsical coach ride by wrapping your tests in containers to simplify your build process.

Niranjani Manoharan

Niranjani is an enthusiastic engineer passionate about writing code to break applications! She has worked at both startups and well-established companies like eBay, Twitter and Pinterest, balancing the challenges of both environments.

She strives to strike a balance between being a workaholic and a wanna-be-traveler, who is openminded but still likes to believe unicorns are real!


As more companies come to understand the importance and necessity of building accessibility into their applications, it’s critical for testers to implement testing that truly captures issues that can inhibit access and usability for users with disabilities and those without. This session is for those new to accessibility testing and will cover basic testing techniques that can be applied to most applications. It will also help attendees to understand the basics of web/mobile accessibility and discover tools and features to help them in their testing.
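
As a tiny, hypothetical preview of what an automated check can look like (my sketch, not the session's material), here is a Selenium script in Python that flags images with no alt attribute at all, one of the most common accessibility failures.

    from selenium import webdriver
    from selenium.webdriver.common.by import By


    def find_images_missing_alt(url: str) -> list[str]:
        """Return the src of every <img> that has no alt attribute at all.
        (Purely decorative images should carry an explicit empty alt="".)"""
        driver = webdriver.Chrome()
        try:
            driver.get(url)
            return [
                img.get_attribute("src")
                for img in driver.find_elements(By.TAG_NAME, "img")
                if img.get_attribute("alt") is None
            ]
        finally:
            driver.quit()


    # Hypothetical usage inside a test:
    # offenders = find_images_missing_alt("https://example.com")
    # assert not offenders, f"Images missing alt text: {offenders}"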

Takeaways

  • Learn the basics of digital accessibility
  • Explore techniques and best practices for accessibility testing
  • Overview of manual and automated accessibility testing tools
Crystal Preston-Watson

Crystal Preston-Watson is a QA Engineer at Spruce Labs in Denver, Colorado. With a decade of experience working with tech companies in the U.S. and Canada, she has developed a passion for building quality, inclusive products. Over the years Crystal has worked as an accessibility engineer supporting and advising engineering teams, a front-end developer in fintech, and an interactive producer at a daily newspaper. Outside of work, you can find her playing backseat detective while watching old episodes of Forensic Files, photographing her two cats, Shadowmere and Ms. Etta James, and getting really weird with life.


After day 1 of TestBash San Francisco, conference ticket holders will be treated to a Food Truck Party right outside the conference venue! We'll have 4 food trucks from around the Bay Area serving up some wonderful food and beverages. This is included in your ticket; it won't cost you extra.

Sit back with other attendees and speakers. Soak up the views and enjoy some great food while we reflect on a fantastic day and anticipate what the next day of talks will bring. Bring warm clothing as this is right by the water.

Simply respond "Yes" to the Food Truck Party question once you have completed your ticket purchase.

We'll cater to dietary requirements too!

Please note that this is only available to people who have purchased a ticket to the conference.

Ash Coleman

Ash, a former chef, put recipes aside when she began her career in software development, falling back on the engineering skills she acquired as a kid building computers with her brother. A progressive type, Ash has focused her efforts within technology on bringing awareness to the inclusion of women and people of colour, especially in the Context-Driven Testing and Agile communities. An avid fan of matching business needs with technological solutions, you can find her doing her best work on whichever coast is the sunniest. Having helped teams build out testing practices, formulate Agile processes and redefine culture, she now works as an Engineering Manager in Quality for Credit Karma and continues consulting based out of San Francisco.


Angie Jones
Angie Jones is a Senior Developer Advocate who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blogs on angiejones.tech. As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.

The term “hot-fix” is sometimes used incorrectly by teams that want to release a feature or small fix outside the regular release cycle. When out-of-cycle releases happen, they risk introducing issues because changes are not properly tested or have impact in areas that are not anticipated, which is why any out-of-cycle release should be done with extreme caution.

With every project team having its own feature priorities and deadlines, the challenge is making an unbiased decision about whether an out-of-cycle release should happen, based on data rather than goal completion, with little to no user impact.

In this talk I’ll discuss how teams can take back the term “hot-fix” by weighing quantitative against qualitative data, and by moving away from release cadences based on feature readiness to set release cycles. This discussion will include how to navigate teams trying to release their features to meet their own goals and deadlines, the process of finding a solution to consistent out-of-cycle releases, and the process of creating a scalable, unbiased solution.
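
As a purely hypothetical sketch of what codifying such an unbiased decision might look like (my illustration, not Elizabeth's actual process), release criteria can be expressed as thresholds agreed in advance, so the answer is the same no matter whose feature or deadline is involved.

    from dataclasses import dataclass


    @dataclass
    class ReleaseMetrics:
        """Quantitative signals gathered after a release (all fields invented)."""
        crash_rate: float        # fraction of sessions crashing
        affected_users: int      # users hitting the defect
        workaround_exists: bool  # can users self-serve around the problem?


    def should_hotfix(m: ReleaseMetrics) -> bool:
        """Apply pre-agreed thresholds rather than judgment under deadline pressure."""
        if m.crash_rate > 0.01:  # more than 1% of sessions crashing
            return True
        if m.affected_users > 10_000 and not m.workaround_exists:
            return True
        return False             # otherwise, the change waits for the next cycle


    # Example: a noisy but low-impact bug with a workaround does not qualify.
    print(should_hotfix(ReleaseMetrics(0.002, 15_000, True)))  # False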

Elizabeth Turner
Elizabeth is on the Quality Engineering team at Credit Karma, where she is the driving force in testing efforts and release management for the company’s native applications. As an advocate for lifelong learning and growth, sharing her experiences with others has become one of her passions. Whether hosting webinars, meetups or having 1:1 discussions, she frequently pairs with organizations to help teams and individuals at all levels of their careers set themselves up for success, not only in goal-setting strategies but also in following through with the goals they set. With a passion for the outdoors, when she’s not working, she spends time dispersed camping (or planning her next camping trip) with her husband and two pups in the national forests throughout California.