TestBash Australia 2019

October 24th 2019 08:00 - October 25th 2019 18:00

TestBash, our software testing conference, is heading back down under to Sydney, Australia. This time we're bringing workshops too! The venue for the week is the very central Fishburners in Sydney.

We'll start with a 3-day Automation in Testing workshop with Richard Bradshaw and Mark Winteringham running from 21st to 23rd October.

Then we'll have a workshop day on Thursday 24th consisting of eight half-day workshops on a huge variety of topics.

We'll wrap up the week with our single-track conference day on Friday 25th consisting of eight talks.

You can expect a wonderful community to come together in a friendly, professional and safe environment on all days. We think you will feel at home when you arrive! (We might even throw something on the barbie for you).

Tickets start at $700 for the workshops and $500 for the conference.

Training
Monday, 21st October 2019:

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing and, in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we've created our own namespace which provides a holistic, experience-driven view on how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and it's showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we are going to use our sample product and go on an automation journey. The product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore those tests: why they exist, the decisions behind the tools we chose to implement them in, why that design and why those assertions. Then there are tools: we'll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

Online
To maximise our face-to-face time, we've created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation, why AiT is important, and all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reverse-engineering the decisions we've made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company's usage of automation with that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We'll do this at many interfaces of the application.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We're going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to those required to build automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach the difference to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and visual interfaces (see the sketch below)
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts
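
To make "checks at different interfaces" concrete, here is a minimal sketch of an automated check at the API layer, written in Java with JUnit and the JDK's built-in HTTP client. The endpoint URL, response shape and assertions are illustrative assumptions, not part of the course's sample product.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class BookingApiCheck {

    // Hypothetical endpoint; the course's sample product has its own API.
    private static final String BOOKING_URL = "https://example.com/api/bookings/1";

    @Test
    public void existingBookingCanBeRetrieved() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BOOKING_URL))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A focused check: one clear intention and a small number of assertions.
        assertEquals(200, response.statusCode());
        assertTrue("Response should contain a booking id",
                response.body().contains("\"bookingid\""));
    }
}
```

The same intention could be expressed at the UI or visual interface instead; a large part of the course is deciding which interface gives the fastest, most reliable answer to the question the check is asking.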

Day Three
We’ll start day three by concluding our exploration of toolsmithing. Creating some new tools for the test app and discussing the potential for tools in the attendee's companies. The middle part of day three will be spent talking about how to talk about automation.

It's commonly said that testers aren't very good at talking about testing; the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AiT strategy

What You Will Need To Bring

Please bring a laptop (OS X, Linux or Windows) with all the prerequisites installed; the prerequisites will be sent to you in advance.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of the skill in automation is about risk identification, strategy and test design, and that you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face-to-face class. Full support will also be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based; however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory you can apply to any product/language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes, one of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JS. Importantly though, we focus more on the thinking than the implementation, so while we'll be reading and writing code, the languages are just a vehicle for the context of the class.

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.
Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I've worked on award-winning projects across a wide variety of technology sectors, ranging from broadcast and digital to finance and the public sector, working with various web, mobile and desktop technologies.

I'm an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I'm also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Workshops
Thursday, 24th October 2019:
Morning Sessions
As co-organiser and coach of DevOps Girls, I will run you through a slightly-testing-tailored version of a basic AWS workshop we run with women trying to break into DevOps. 

This is a zero to hero session. It opens the door to AWS and DevOps literacy and establishes some basics, making it easier to start researching your own way through the myriad of options that this field presents. Just like you needed a few basic words and procedures to find your feet in testing initially, this will help establish those basics for you to explore the ever-growing world of DevOps, and how you could learn to add value there too.

Setup steps will be emailed to you, and I will be available by email before the workshop, and in person on the day, to answer setup questions.

Takeaways

  • Not being spooked by some buzzwords anymore
  • Establishing how easy it is to use AWS
  • Learning to deploy your own website (and test it!)
  • Helping find your feet in the DevOps conversation
 
Theresa Neate
Theresa Neate is a senior developer advocate, lead quality analyst and test consultant - with several years of leadership experience - who loves lean and agility and advocates for holistic system quality and systems thinking. Since June 2018 she has been doing developer advocacy for an internal developer cohort (supporting an internal Application Platform), underpinned by over 2 decades of hands-on testing and QA experience and leadership; the last decade has been spent between the ThoughtWorks consultancy stable, Australia Post's Digital Delivery Centre and, since early 2016, digital media icon REA Group. In her spare time Theresa studies a Diploma of Networking, freelance blogs for TechTarget DevOpsAgenda, sits on their Advisory Board, and co-organises and contributes at DevOps Girls. She's a lifelong and eternally curious sceptic and learner.

Web applications and APIs are FULL of issues to find. The question is, how do we decide what to test, and how to test it?

One technique we can use is Fuzzing. It's a fantastic way of targeting a specific field, form, URI, endpoint or request with a huge range of data. Whether it is a host of nasty strings, injections, buffer overflows, integers or floats, we will learn how to hammer our applications under test!

Penetration testing experts use Fuzzing to discover potential security flaws and ways to exploit applications. As testers, we can use the same technique to help support the successful development of our products.

In this workshop, we will discover how Fuzzing can become an extremely useful part of your toolset. It takes the best approaches to security and error testing and blends them with a targeted approach to automation. 

We will explore ways to identify what features and functions to test with Fuzzing, as well as useful tools, techniques and approaches to support your learning.

In addition, we will explore how we can extend the technique and build it into your continuous integration and development strategies. 

Fuzzing can add huge value to your existing testing, by driving huge amounts of exciting data through your applications. Together we will discover how useful it can be!
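
To give a flavour of the technique before the workshop, here is a minimal fuzzing sketch in Java that fires a handful of nasty strings at a single search endpoint and flags any response that looks like a crash. The endpoint, payload list and pass/fail heuristic are illustrative assumptions only; in the workshop we will use purpose-built tools and much richer data sets.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class SearchFieldFuzzer {

    public static void main(String[] args) throws Exception {
        // A tiny sample of "nasty" inputs; real fuzzing uses large curated lists.
        List<String> payloads = List.of(
                "'; DROP TABLE users;--",            // SQL-injection style
                "<script>alert(1)</script>",         // XSS style
                "A".repeat(100_000),                 // very long string
                "\u202Eevil.txt",                    // awkward unicode
                "-99999999999999999999999999");      // numeric overflow attempt

        HttpClient client = HttpClient.newHttpClient();

        for (String payload : payloads) {
            String url = "https://example.com/search?q="
                    + URLEncoder.encode(payload, StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // Crude heuristic: 5xx responses or leaked stack traces deserve a closer look.
            if (response.statusCode() >= 500 || response.body().contains("Exception")) {
                System.out.printf("Possible issue for payload %.30s -> HTTP %d%n",
                        payload, response.statusCode());
            }
        }
    }
}
```

Real fuzzers add mutation, coverage feedback and far better oracles, but the shape of the loop - generate, send, watch for trouble - stays the same.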

Takeaways

  • Develop and then add Fuzzing to your existing repertoire of testing skills.
  • Be able to apply Fuzzing to a variety of contexts - fields, forms, APIs etc
  • Utilise and explore a number of useful tools to support the use of Fuzzing.
  • Understand how to communicate about the issues you will find when Fuzzing to the rest of your teams.
  • Begin to explore how Fuzzing can be incorporated into your development, integration and deployment strategies.
Dan Billing
Dan has been a tester for 18 years, working within a diverse range of development organisations, mostly in London and the south-west of England. He is now a freelance test consultant, coach and trainer, and has worked within some complex industries and contexts. His skills also include mentoring, supporting and training team members to develop their security skills. Dan's love of testing drives him to be an active member of the testing community. He has organised international events and workshops in the testing community, and is a speaker at various international Agile, technology and testing conferences. He is also a co-host of the Screen Testing podcast, alongside Neil Studd.

Learn how to program the fun way by making random sandwiches! Even if you have zero programming experience, by the end of this workshop you will be writing your own programs in Ruby with confidence.

This hands-on course skips all the boring stuff and gets right to the fun part - making your first program in Ruby. You will be writing a simple program that generates randomly created sandwich recipes - so handy for lunchtime! Specifically, in this course you will:

  • Use variables - local, global
  • Use arrays, strings
  • Use loops
  • Use if, then, else, unless
  • Use objects
  • Use classes, add methods and call them
  • Run commands from a command line
  • Learn what to do when you get stuck

The skills you learn will let you generate test data, learn test automation, and more!
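
To give a flavour of where the exercise ends up: the workshop itself teaches Ruby, but the same idea, sketched here in Java purely for illustration (with made-up ingredient lists), boils down to a class, a few lists, a loop and some random picks.

```java
import java.util.List;
import java.util.Random;

public class SandwichGenerator {

    // Illustrative ingredient lists; in the workshop you invent your own.
    private static final List<String> BREADS   = List.of("rye", "sourdough", "ciabatta");
    private static final List<String> FILLINGS = List.of("cheese", "ham", "falafel", "avocado");
    private static final List<String> EXTRAS   = List.of("pickles", "rocket", "hot sauce");

    private final Random random = new Random();

    // Pick one random element from a list.
    private String pick(List<String> options) {
        return options.get(random.nextInt(options.size()));
    }

    // Build a single random recipe as a sentence.
    public String randomRecipe() {
        return pick(FILLINGS) + " on " + pick(BREADS) + " with " + pick(EXTRAS);
    }

    public static void main(String[] args) {
        SandwichGenerator generator = new SandwichGenerator();
        // Loop to print a few lunch ideas.
        for (int i = 0; i < 3; i++) {
            System.out.println("Sandwich idea: " + generator.randomRecipe());
        }
    }
}
```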

Trish Khoo

Trish Khoo is a software development consultant and international keynote speaker. She has over 15 years of experience in the software industry, specialising in software testing, infrastructure and automation. Her journey has taken her from Microsoft to Google, from London to San Francisco, and many places in between. Now she helps companies all over the world with their software needs from her home base of Brisbane, Australia. She also dedicates time towards fostering a strong local tech startup community and mentoring other technologists. When she’s not doing this, she’s working on her creative pursuits – artwork, singing and writing. Learn more about Trish at her website http://trishkhoo.com

Regardless of whether or not you have any "direct reports" in your organisation, you may well find yourself responsible for mentoring other colleagues on testing techniques. The problem is: how do you teach skills that have become second nature to you? We're often so busy focusing on improving our testing that we don't develop techniques and soft skills for passing on those learnings to junior testers, non-testing colleagues or new starters.
 
We want you to leave this tutorial with confidence that you can successfully mentor colleagues and a number of techniques to facilitate your mentoring, such as:
 
  • Where's my motivation: it's not just film star divas that need to find their motivation before acting out a scene. Learning will always be more successful if a mentee is motivated, and to be brutally honest there are some aspects of testing (actually some aspects of any discipline) that are dull and not in any way motivating.
  • Don't feed the mentee: it is all too easy to fall into the trap of providing answers or simply telling a mentee about a topic; unfortunately, humans are very bad at retaining that kind of information, and we will demonstrate this with a quick game that you can take back to work with you.
  • Imitate, Assimilate & Innovate: Our mentees will go through various stages of learning and our mentoring will need to change tack accordingly. Using role play techniques we will encourage you to frame the subject matter appropriately to maximise learning potential.
  • Role reversal: Encouraging the good grace needed when your mentee succeeds and becomes more knowledgeable than you in a subject area - time to start mentoring them on mentoring.
We will use a combination of role play, games and written exercises to work through these learnings. Activities will be varied to work with the variety of learning styles that humans display.
 

Takeaways

  • Improved empathy in dealing with challenging situations
  • Discovery of, and immersion in, different learning styles (seeing/hearing/reading/doing)
  • Practice of practical communication skills that work in fast-paced and agile environments
  • Use of clean language to explore the underlying situation behind a mentee's request for help
  • Finding and maintaining common ground for a 2-way mentor-mentee relationship
Nicola Sedgwick
Nicola is one of those testers who 'fell into the role' after working as a Support Engineer and Trainer for a small software house dealing predominantly with the construction sector, whose focus is continually on design and quality (no-one wants to construct anything that collapses and causes injury). However, 'fell' implies the role focus was not chosen deliberately, which is incorrect. Having worked in the IT industry for 15 years, Nicola feels she can (and should) share some of her experiences to help others, and to help them help their colleagues & teammates.
Afternoon Sessions

I have come across some extreme examples of businesses and organisations that have all their eggs in one basket, relying purely on analytics to:

  • understand their consumers (engagement, usage, patterns, etc.),
  • understand usage of product features, and
  • do all revenue-related book-keeping.

Hence, the saying "Business runs on Analytics, and it may be OK for some product or user features to not work correctly, but Analytics should always work" is not a myth!

What this means is that analytics is more important now than ever before.

In this workshop, we will not assume anything. We will discuss and learn, by example and practice, the following:

  • How does analytics work (for web and mobile)?
  • Testing analytics manually in different ways
  • Testing analytics via the final reports
  • Why some automation strategies will work, and some WILL NOT WORK (based on my experience)!
  • A demo of the automation running for the same (a small illustrative sketch follows this list)
  • Time permitting, setting up and running some automation scripts on your machine to validate the same
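
To hint at what these checks look like in practice, most analytics testing comes down to capturing the tracking request a page or app fires and asserting on its parameters. Below is a minimal, illustrative sketch in Java: it parses a hypothetical captured beacon URL (the URL, parameter names and expected values are invented for the example) and checks the fields a business report would depend on. In a real setup the URL would come from a proxy, the browser's network tab or a capture library.

```java
import java.net.URI;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class AnalyticsBeaconCheck {

    // Split a captured beacon's query string into name/value pairs.
    static Map<String, String> queryParams(String url) {
        Map<String, String> params = new HashMap<>();
        for (String pair : URI.create(url).getRawQuery().split("&")) {
            String[] parts = pair.split("=", 2);
            String name = URLDecoder.decode(parts[0], StandardCharsets.UTF_8);
            String value = parts.length > 1
                    ? URLDecoder.decode(parts[1], StandardCharsets.UTF_8) : "";
            params.put(name, value);
        }
        return params;
    }

    // Compare one expected parameter against what was actually sent.
    static void check(String name, String expected, Map<String, String> params) {
        String actual = params.get(name);
        System.out.printf("%-10s expected=%-12s actual=%-12s %s%n",
                name, expected, actual, expected.equals(actual) ? "OK" : "MISMATCH");
    }

    public static void main(String[] args) {
        // Hypothetical beacon captured from the network while adding an item to the cart.
        String beacon = "https://analytics.example.com/collect"
                + "?event=add_to_cart&product_id=SKU-42&value=19.99&currency=AUD";

        Map<String, String> params = queryParams(beacon);

        check("event", "add_to_cart", params);
        check("currency", "AUD", params);
        check("value", "19.99", params);
    }
}
```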

Takeaways

We will learn by practice the following:

  • What is Analytics?
  • Techniques to test analytics manually.
  • How to automate the validation of analytics, via a demo, and if time permits, run the automation from your machine as well.
Anand Bagmar
Anand is a Software Quality Evangelist with 20+ years in the software testing field. He is passionate about shipping a quality product, and specialises in Product Quality strategy & execution, and also building automated testing tools, infrastructure and frameworks. Anand writes testing related blogs and has built open-source tools related to Software Testing – WAAT (Web Analytics Automation Testing Framework), TaaS (for automating the integration testing in disparate systems) and TTA (Test Trend Analyzer). You can follow him on Twitter @BagmarAnand, connect with him on LinkedIn at https://in.linkedin.com/in/anandbagmar or visit essenceoftesting.com.
Creating a testing approach for a brand new project is daunting, but it gets easier with practice ... so let's practice!

In this workshop you'll team up with other testers, and look at a product, creating plans for how you can approach testing it.

You won't be completely alone; I'll mentor you through some of the approaches I use to tackle such projects - how to collect your ideas, cluster them, find holes and reach out.

Most of all, you'll have a lot of fun getting there - yes, test planning can be fun! See you there!

Takeaways

  • What are we trying to achieve with a test strategy
  • Collecting your own ideas, clustering them, and bringing in others' ideas
  • Advocating for testability
Mike Talks
Mike Talks works as a test manager around Wellington. He loves strategy in all shapes and sizes, having helped to deliver projects of all shapes and sizes. However, challenge him to a game of chess, and he might need clarification of "how does the horsey move again?".

Functional test automation is a wonderful way to frequently and expeditiously execute regression testing. However, the test scripts that we write are limited to the few assertions we’ve considered. Many times, these assertions only cover the tip of the iceberg and account for a small fraction of what a human being would have subconsciously verified.

For example, a test automation script can verify that when adding 2 and 2 via a calculator app, the sum that is returned on screen is 4. But does the 4 appear correctly? Is it upside down? Or sideways? Is it the right color? Are there errors that appear on other areas of the screen? These are all things that the human eye would notice, but an automated regression test would not. The test would continue to pass, even with all of the aforementioned errors. This is where visual validation comes in!

Visual validation is a relatively new concept to add to your test automation toolbox. Applitools, a sophisticated visual validation tool, uses AI to mimic the human eye and brain to verify the look and feel of your application.

In this workshop, you will develop automated UI tests using Java, Selenium WebDriver, JUnit, and Applitools. You’ll learn when and where to add visual assertions, how to work with various match levels including ones suitable for dynamic content, and how to evaluate and resolve visual test results.

Existing familiarity with test automation (even if with different tools) will be helpful for this workshop.
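
As a taste of the exercises, here is a minimal sketch of a visual check using Selenium WebDriver, JUnit and the Applitools Eyes Java SDK. The calculator URL and element locators are placeholders, and you will need your own Applitools API key; the workshop provides the real project setup.

```java
import com.applitools.eyes.selenium.Eyes;
import org.junit.After;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class CalculatorVisualTest {

    private final WebDriver driver = new ChromeDriver();
    private final Eyes eyes = new Eyes();

    @Test
    public void addingTwoAndTwoLooksRight() {
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

        // Start a visual test; the app and test names label the results dashboard.
        eyes.open(driver, "Calculator", "Add 2 + 2");

        driver.get("https://example.com/calculator");    // placeholder URL
        driver.findElement(By.id("two")).click();        // placeholder locators
        driver.findElement(By.id("plus")).click();
        driver.findElement(By.id("two")).click();
        driver.findElement(By.id("equals")).click();

        // One visual assertion stands in for many functional ones:
        // layout, colour, orientation and unexpected on-screen errors are all compared.
        eyes.checkWindow("Result screen");

        eyes.close();
    }

    @After
    public void tearDown() {
        // Close the browser and abort the visual test if it never completed.
        driver.quit();
        eyes.abortIfNotClosed();
    }
}
```

Much of the workshop then builds on this skeleton: choosing where visual assertions add value, configuring match levels for dynamic content, and triaging the results.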

Angie Jones
Angie Jones is a Senior Developer Advocate who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blogs on angiejones.tech. As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.
"Testing and Quality is everyone's responsibility" - this philosophy has changed the tester's role in the agile delivery teams. Our testers are expected to act as a quality hub rather than just bug hunters. In this new role, they need to learn many new skills to exceed their role expectations.
Facilitation is one such skill. Our testers are expected to facilitate different types of discussions and meetings: test strategy discussions, brainstorming sessions on better monitoring and alerts for new features, different types of retrospective meetings, and workshops.
Using their facilitation skills, our testers can guide their delivery teams to learn, plan, brainstorm and solve testing or quality issues in a collaborative and effective way. It is one of the key skills needed to succeed in a quality coach or advocate role.
In this workshop, you will:
  • Learn why facilitation skills are important, and how facilitation differs from chairing a meeting
  • Learn the basic principles of facilitation
  • Learn how to facilitate
  • Take part in a practice session

Takeaways

By the end of this session, you will know how to encourage and guide everyone in your team to participate in testing discussions, and how to assist your team in planning for and solving testing issues.
Divya Konnur
Divya is a passionate quality advocate who coaches agile delivery teams to develop a testing mindset and build confidence in the quality of their deliverables. With 14 years of testing experience, she has tested a wide variety of products - from printers to microservices and mobile apps. She loves asking questions, creating workshops for developers to build a quality culture, and experimenting with new tools to get faster feedback.
Conference
Friday, 25th October 2019
In an ideal world your newly hired programmers are being mentored into a good testing and TDD approach. The teams are regularly collaborating as pairs or mobs. And test quality is considered of more importance than code quality, because if we keep the tests we could quickly rewrite the code from scratch.
 
I don't get to live in that world. Even experienced programmers I meet haven't had the mentoring and experience to write good tests. Often their peers give a detailed critique of the implementation code while ignoring the test code. Seb Rose says testers should take part in code reviews. I say testers should teach programmers to write better tests.
 
In this talk I will cover why testers should teach programmers. How to practice and learn TDD together. The role tests serve as requirements and drive code creation. The knowledge testers bring to understand that not all data in our tests is equal. Why we believe testing behaviours is more valuable than testing functions. As well as the role tests serve as documentation and regression protection. And I'll explain how, when we've helped programmers with all these testing-centric concerns, they get the added benefit of tests that aid - and not hinder - refactoring.
 
I've been teaching programmers testing techniques across different companies over the last eight years. I see testing as a key skill Agile teams need to master to succeed. You don't need to self-identify as a technical tester to help make that happen.

Takeaways

  • How to approach aiding programmers to learn testing.
  • A specific path to evolve their test thinking.
  • The clear benefits programmers can gain from the knowledge we share.
Geoffrey Dunn
Geoff has been working as a tester with embedded software teams at ResMed for over 6 years. In that role he brings testing and software quality practices closer to the code being created. Prior to that Geoff worked as a software engineer for over 10 years before deciding testing was the biggest challenge for Agile teams to master. So he made a career change and worked as a Test Manager with the startup Building IQ for a year before moving on to ResMed, where he is now a Lead Test Engineer. Outside of work Geoff helps officiate Roller Derby games. At home he is supported by his wife and two cats.
These days, by far the most common testing we hear about tends to be for web or mobile applications. Automation frameworks are plentiful and there are blogs on just about every topic you could hope to find. What happens when you take a software tester with a background in enterprise software development and throw them into the world of hardware product development?
 
Virtual reality remains quite a niche market, limited for the most part to enthusiasts and high-tech enterprises. As such, there are few voices speaking on what it’s like as a tester in this market. In this presentation come with me on my journey moving from testing enterprise software to being part of a team developing VR hardware. 

How do you test in a domain where available information and tooling is harder to find or non-existent? 

What challenges are there in testing VR and what similarities are there to application testing?

Takeaways

  • Learn how past experiences can be adapted to a new context
  • An understanding of how VR differs from other software testing and an insight into some of the unique challenges presented in testing VR hardware
  • Practical advice on staying afloat when you're thrown in the deep end
Nick Pass
Nick has been involved in software development since 2004, first as a developer, then ops, a scrum master, and finally tester. In 2017 he moved to the UK and joined DisplayLink as the first tester in the product team working on what would eventually become the HTC Vive Wireless VR adapter. As the first tester in the team Nick worked to establish testing practices and processes for what was a new domain to the company, as well as helping the team evolve quality-focussed development practices as the project team grew. Nick was also regularly involved with the Cambridge Ministry of Testing meetup group and was a regular facilitator and mentor in the inaugural season of the Software Testing Clinic in Cambridge. After 2 years in the UK Nick returned to Australia where he now lives in Canberra with his family, working as a QA Analyst for Xero.

Hey N00bs!

Let me take you back in time to the 1980s and tell you a story. A story of an 8-year-old boy, seaside arcades, chunky cartridges, tape drives, floppy discs, pokes, cheats, and magazines full of code! This is the start of how I became a tester.

Video gaming is a massive world, financially, technically and also in terms of skills and development. Both the hardware and software aspects of the industry are in rapid states of change and flux, with huge competition for the loyalty of the customer base.

Gaming itself is often written off as a wasteful pastime. Hours of time are sunk into playing the latest games, just to get kudos amongst other fellow gamers. Some do take it to extremes and ignore all other aspects of their lives.

I would argue that gaming is a productive activity. Gamers test all the time. Gamers learn, fail and fail again, explore, model, tweak, mod, exploit and play to win! In this talk, we will discover how gaming can be a route to learning and better testing!

What models can be applied to games that can also add value when testing? Similarly, what testing and other learning models can be applied when we solve gaming problems? What can open world games teach us about exploratory testing? What heuristics do we use when playing different genres of game, and how do we learn them? How does recognising these heuristics in games help us to strengthen our skills for when we are on the job?

So, don't be a camper. Don't get pwned. Let's discover together how gaming can really add huge XP to your testing!

Takeaways

  • Discover how aspects of gaming can assist in developing core testing skills, such as learning and exploration.
  • Develop an understanding of the heuristics of gaming, and how they can enhance our approaches when testing other kinds of software.
  • Be able to apply models to games that complement testing, and vice versa.
  • Understand how new skill acquisition, such as programming and other tech skills, can be encouraged by exploring the world of gaming.
  • Explore the stories of fellow testers who have tested video games, and how their careers have developed from this work.
Dan Billing
Dan has been a tester for 18 years, working within a diverse range of development organisations, mostly in London and the south-west of England. He is now freelance test consultant, coach and trainer, but has worked within some complex industries and contexts. His skills include mentoring, supporting and training members of the team to develop their security skills also. Dan’s love of testing drives him to become an active member of the testing community. He has organised international events and workshops in the testing community, and is a speaker at various international Agile, technology and testing conferences. He is also a co-host of the Screen Testing podcast, alongside Neil Studd.

The application is really very simple: it's a retro guessing game. The computer simulates rolling two dice and adding them together. You guess a number, and it will tell you if you're too high, too low or correct.

Simple right? So what could go wrong?
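
Quite a lot, as it turns out. For reference, the entire system under test amounts to a few lines of logic; here is a rough sketch (in Java, purely for illustration; the actual exercise uses an online website with a number of builds to test):

```java
import java.util.Random;
import java.util.Scanner;

public class DiceGuessingGame {

    public static void main(String[] args) {
        Random random = new Random();
        // Simulate rolling two six-sided dice and adding them together.
        int target = (random.nextInt(6) + 1) + (random.nextInt(6) + 1);

        Scanner input = new Scanner(System.in);
        int guess;
        do {
            System.out.print("Guess the total of the two dice (2-12): ");
            guess = input.nextInt();

            if (guess < target) {
                System.out.println("Too low!");
            } else if (guess > target) {
                System.out.println("Too high!");
            } else {
                System.out.println("Correct!");
            }
        } while (guess != target);
    }
}
```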

Using an online website, the audience will attempt to test from their seats a number of builds. Will they find the bugs?

This is a fun workshop Mike Talks has run as part of the Wellington Summer of Test as well as at Datacom Test Camps. He will be accompanied on stage by his brother Simon for the first time ever!

This session has helped people think about the patterns they use in testing, how they observe and, most importantly, how they verbalise when they encounter an issue. It also covers what we as testers need to champion.

Takeaways

  • Patterns used in testing
  • How you observe (and the importance of notes to support you)
  • How you verbalise your observation so that others know there's an issue
  • Asking for items which will support you
Mike Talks
Mike Talks works as a test manager around Wellington. He loves strategy in all shapes and sizes, having helped to deliver projects of all shapes and sizes. However, challenge him to a game of chess, and he might need clarification of "how does the horsey move again?".

This session will cover the origins of DevOps, the problems it initially tried to solve, and debunk some of the myths that have since been generated regarding it.

This is a lighthearted but realistic conversation where I challenge some of the buzzwords, including DevOps itself, and look at what cargo-culting looks like in this space.

I will then further discuss how Testers are and have always been members of DevOps, and what this looks like practically.

The voice that I bring is a different perspective of keeping things sensible, away from extremism or “cargo culting” or fanaticism. My focus will be on pragmatism and finding your place in this conversation.

Takeaways

  • Not being spooked by some buzzwords anymore
  • Not thinking of DevOps as a fad but as the solution it was intended to be
  • Realising that you can be highly relevant in the DevOps conversation
Theresa Neate
Theresa Neate is a senior developer advocate, lead quality analyst and test consultant - with several years of leadership experience - who loves lean and agility and advocates for holistic system quality and systems thinking. Since June 2018 she has been doing developer advocacy for an internal developer cohort (supporting an internal Application Platform), underpinned by over 2 decades of hands-on testing and QA experience and leadership; the last decade has been spent between the ThoughtWorks consultancy stable, Australia Post's Digital Delivery Centre and, since early 2016, digital media icon REA Group. In her spare time Theresa studies a Diploma of Networking, freelance blogs for TechTarget DevOpsAgenda, sits on their Advisory Board, and co-organises and contributes at DevOps Girls. She's a lifelong and eternally curious sceptic and learner.
Security issues can be identified using the stock-in-trade critical thinking skills of a tester.
 
Some time ago I had the pleasure of taking part in a security bug hunt for a new financial product. This was a product ready to go to market, a product that had passed all penetration tests and was now being handed to a crowd of external testers for a final attempt to 'hack' the product.
 
Against all their confidence I was able to 'hack' that product and use funds to which I should not have had access. However, once I reported the vulnerability, I wasn't believed and I was asked to repeat the 'hack' multiple times until the 'experts' believed I was achieving what I was reporting - they simply couldn't believe that their penetration test result was wrong.
 
Like many security talks, I will tell you all about the tool I used to perform this 'hack'. Unlike many security talks, this is not a tool you can install, rent or purchase, because it's my brain. Your brain is capable of doing the same.
 

Takeaways

  • security issues can be identified by all team members not just security experts
  • critical thinking and the human brain are amazing tools for finding security issues
  • security testing should take place throughout development and not just pre-release
Nicola Sedgwick
Nicola is one of those testers who 'fell into the role' after working as a Support Engineer and Trainer for a small software house dealing predominantly with the construction sector, whose focus is continually on design and quality (no-one wants to construct anything that collapses and causes injury). However, 'fell' implies the role focus was not chosen deliberately, which is incorrect. Having worked in the IT industry for 15 years, Nicola feels she can (and should) share some of her experiences to help others, and to help them help their colleagues & teammates.

Adding visual validation to existing automation frameworks is known to catch cosmetic bugs caused by misaligned or disappearing elements. Many use it to make sure their applications are as beautiful as they intend them to be. While this is certainly a beneficial usage of visual testing, it's only scratching the surface.

As toolsmiths, let’s explore how else we might be able to use visual testing tools to meet our regression testing needs.

In this talk, Angie will share three unexpected benefits of visual test automation:

  • The removal of boilerplate code and subpar assertions
  • The ability to perform localization regression testing more effectively
  • The ability to execute cross-platform testing faster and more reliably
Angie Jones
Angie Jones is a Senior Developer Advocate who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blogs on angiejones.tech. As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.

Security and infrastructure compliance is a critical aspect of all modern business platforms. With the DevOps movement pushing teams towards faster software delivery cycles, developers are also releasing security vulnerabilities and non-compliant applications more quickly. Organisations must learn how to keep shipping software quickly, but with higher efficiency and lower risk. What if we automated our compliance audits so they could be 'shifted left' as part of the application and infrastructure development lifecycle?

This talk focuses on how to address these aspects and incorporate infrastructure compliance testing into a software delivery lifecycle. I will demonstrate using the open-source ‘Inspec’ framework (https://inspec.io) which provides an extensible pattern for building compliance into continuous delivery pipelines.
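
InSpec controls themselves are written in a small Ruby-based DSL, but the underlying idea - an executable compliance check that runs on every pipeline build - can be sketched with any test framework. As a language-neutral illustration (Java with JUnit, a placeholder host and a purely invented policy, not the framework demoed in the talk), a 'shifted-left' compliance check might look like this:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class WebServerComplianceTest {

    // Placeholder target; in a pipeline this points at the freshly provisioned environment.
    private static final String BASE_URL = "https://staging.example.com/";

    private HttpResponse<String> get(String url) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NEVER)
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    @Test
    public void responsesMustSendSecurityHeaders() throws Exception {
        HttpResponse<String> response = get(BASE_URL);

        // Illustrative policy: HSTS and a content-type sniffing guard must be present.
        assertTrue("Strict-Transport-Security header missing",
                response.headers().firstValue("Strict-Transport-Security").isPresent());
        assertEquals("nosniff",
                response.headers().firstValue("X-Content-Type-Options").orElse(""));
    }
}
```

Failing such a check breaks the build, which is exactly the point: the compliance audit moves from a periodic document review to a test that runs on every commit.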

Takeaways

The audience will leave the room with the following learnings:

  • The importance of security and infrastructure compliance testing
  • The concept of shift-left infrastructure compliance testing
  • A technical demo of compliance-as-code using the open source ‘Inspec’ framework
  • How this can be incorporated as part of a continuous delivery lifecycle
Mrinal Mukherjee
Lead Engineer at ANZ. Passionate about automation and all things DevOps