Mobile Test Automation at the BBC: Then, Now and Next
This is a case study of how the Mobile Platform team's test automation has evolved over the last five years: from rounds of manual regression testing, to automated UI testing, and now to isolated code-level tests.
I'll go into what worked for the team, what didn't, and how we overcame the problems we faced.
This talk will also detail what we plan to do for the future of testing within the Mobile Platform team.
- Testing: What has and hasn't worked, from manual to automated testing and everything in between
- Pairing: How we pair testers and developers to collaboratively write code-level tests
- Team structure: How our teams are organised and why
- Risk vs value: How we balance the risk of not testing against the value of testing
- Feedback: What new feedback mechanisms we are looking at to understand the quality of our products and feed this back into development
Jit has 15 years' experience working with a wide variety of companies, from mobile manufacturers to OS builders and app developers.
He is currently the Test Team Lead for the BBC Mobile Media Playback Team, defining their test strategy. He also helps other teams across the BBC adopt DevOps practices by showing them that it takes more than just tools to make teams successful.
Pipeline Architectures to Fit Your Software Architecture
The journey a code change takes from idea to benefiting the end user depends on many things: some technological, like investment in automation and the application tech stack, and some business-driven, like organisational structure and risk profile. Technologists around the world point to their deployment pipelines to allay any fears their business stakeholders may have about the risks of changing the software. The thing is, merely having an automated pipeline does not guarantee confidence in releasing changes, in large part because too few teams examine the implementation and architecture of their pipeline.
This talk will look at the complexities that arise from different software architectures. Do you have a monolith? Or perhaps your application deploys as a monolith but is actually spread across multiple repositories? How does an architecture of independently deployable services impact your delivery pipeline? For all the job titles, working groups, and decision-making that go into software architecture, this talk will explore the implications for your deployment pipeline and the support system it requires, balancing contextual needs, the pros and cons of different choices, and good practices from the wider industry.
Three main takeaways will be:
- Examples of how to apply the same architectural awareness and evolution of software to delivery pipelines
- Commonly used patterns to build confidence in software that interacts with other systems, such as third-party applications and custom libraries
- How to incorporate clean code methodologies and practices to the creation of delivery pipelines
Abby Bangser has been an excited member of the Ministry of Testing family for three years. After attending in 2014, she took the stage for the first time in 2015 as part of the 99-second talks in Brighton, volunteered at TestBash NY in the fall, and co-hosted a workshop in Brighton in 2016. Outside of TestBash, Abby has spoken on the DevOps track at Agile20xx and at Agile Testing Days in 2015, as well as at European Testing Conference and Nordic Testing Days in 2016.
At ThoughtWorks, Abby has worked in a variety of domains, countries, and team dynamics. While the technical challenges of each domain and tech stack have been interesting, she has realised that team practices and team ownership have a much deeper impact on the end deliverable.
What's that Smell? Tidying Up Our Test Code
We are often reminded by those experienced in writing test automation that code is code. The sentiment being conveyed is that test code should be written with the same care and rigor that production code is written with.
However, many people who write test code have never written production code, so it's not entirely clear to them what this sentiment means. And even those who do write production code may be unaware of the design patterns and code smells that are unique to test code.
Given a smelly test automation code base littered with bad coding practices, we will walk through each smell, discuss why it is considered a violation, and demonstrate a cleaner approach.
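The talk doesn't specify a language, so here is a minimal, hypothetical sketch in Python of one smell the walk-through might cover: conditional logic inside a test. The `apply_discount` function and its magic numbers are invented for illustration.

```python
def apply_discount(total, code):
    """Toy production function: 10% off with code 'SAVE10'."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

# Smelly: the assertion only runs on one branch, so the test can
# silently pass without checking anything, and the magic numbers
# obscure what behaviour is actually under test.
def test_discount_smelly():
    total = apply_discount(100, "SAVE10")
    if total != 100:
        assert total == 90.0

# Cleaner: one behaviour per test, no branching, intent-revealing names.
def test_valid_code_applies_ten_percent_discount():
    assert apply_discount(100, "SAVE10") == 90.0

def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100, "BOGUS") == 100
```

The smelly version passes even if `apply_discount` is broken and returns the original total; the clean versions fail loudly.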
Key takeaways include how to:
- Identify code smells within test code
- Understand the reasons why an approach is considered problematic
- Implement clean coding practices within test automation
Angie Jones is a Consulting Automation Engineer who advises several scrum teams on automation strategies and has developed automation frameworks for countless software products. As a Master Inventor, she is known for her innovative and out-of-the-box thinking style which has resulted in 22 patented inventions in the US and China. Angie shares her wealth of knowledge by speaking and teaching internationally at software conferences, serving as an Adjunct College Professor of Computer Programming, and teaching tech workshops to young girls through TechGirlz and Black Girls Code.
You Can Become a Toolsmith Too!
With testers becoming embedded in development teams, and those teams adopting practices such as DevOps, Continuous Delivery and Lean Agile, the need to create tools that assist testing becomes ever more important. Anything you can do to speed up your testing and build a greater understanding of code, architecture and systems can be very beneficial. While developers are often better placed to build tools, they are not always motivated to learn the technologies that most benefit testing. It's therefore very handy to build your own skills in using more technical tools and in coding, so you can build your own tools or bend existing ones to your needs.
In this talk I hope to share my experiences coming into testing as a competent programmer, the challenging testing situations I have faced and how I've created or used tools to assist my testing. These range from the common tools such as Postman, Selenium, browser dev tools and server logs to the more bespoke or specific examples such as data generators for message queues, automating SIP phone calls and complex data queries against tech such as Elasticsearch. From my experiences teaching and mentoring, I also hope to share my observations of the challenges of learning programming, the common stumbling blocks and tips and tricks for getting started.
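As a flavour of the "data generators for message queues" idea, here is a minimal sketch in Python. The message schema is entirely invented for illustration; a real generator would mirror your queue's actual contract, and the final loop would publish to the queue rather than print.

```python
import json
import random

def generate_order_message(rng):
    # Hypothetical schema: a real one would match your system's messages.
    return {
        "order_id": rng.randrange(10_000, 99_999),
        "sku": rng.choice(["A-100", "B-200", "C-300"]),
        "quantity": rng.randint(1, 5),
    }

def generate_batch(count, seed=42):
    """Seeded RNG so generated test data is reproducible across runs."""
    rng = random.Random(seed)
    return [json.dumps(generate_order_message(rng)) for _ in range(count)]

if __name__ == "__main__":
    for msg in generate_batch(3):
        print(msg)  # in real use, publish this to the message queue
```

Seeding the generator is the key design choice: a failing test can be re-run with the exact same data.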
Attendees will take away:
- Plenty of ideas for tools they could create or aspects of testing they could automate in future.
- A feeling that programming is something they can learn and isn't as scary as it may seem.
- Encouragement that they absolutely have a place in a very technical environment and there are plenty of ways they can be useful and adapt.
Matthew has been testing software for 7 years, starting as a video games tester and is currently a Test Team Lead. Having graduated in Computer Games Technology, he originally wanted to become a developer but quickly discovered a deep passion for testing. His career has followed the trend of the software industry, going from testing a long distance away from developers and code to pairing with developers and helping them test as they write code. Along the way he has gained a great variety of experience testing telephony exchanges, analytics systems, websites, video games (including motion controls, 3DTVs, augmented reality) and mobile apps.
Through this background in computer science and his experience as a tester, Matthew is keen to break down technical subjects and jargon for testers and expand their arsenal of test techniques!
Of Spies, Fakes and Friends - Help Your Code Lead a Double Life!
You’ve started writing unit tests for your applications but aren't quite sure what mocks and spies are about? You sometimes run into trouble because you have so many dependencies in your tests? You don’t know how to test your code that calls an API? Well, this session could help you out! Find out how test doubles come in handy when you’re test driving your code.
In this talk you’ll learn about the different types of test doubles and their purpose. I’ll demonstrate how they can help you make your test driven life a lot easier. There will be code examples for rolling your own test doubles and also for using doubles provided by one of the popular testing frameworks.
Attendees will learn how test doubles can simplify their tests and make the untestable testable. They will also learn to distinguish between the different types of test doubles, which ones to use in which situation and why what we colloquially call a “mock” isn’t always a mock.
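As a taste of that distinction, here is a small self-contained Python sketch using the standard library's `unittest.mock`. The `PriceService` class and its dependency are invented for illustration.

```python
from unittest.mock import Mock

class PriceService:
    """Code under test: depends on a client with a get_price(sku) method."""
    def __init__(self, client):
        self.client = client

    def price_with_vat(self, sku):
        return round(self.client.get_price(sku) * 1.2, 2)

# Stub: supplies a canned answer; we only assert on the returned state.
stub = Mock()
stub.get_price.return_value = 10.0
assert PriceService(stub).price_with_vat("A-1") == 12.0

# Spy/mock: the same double, but now we verify the *interaction* -
# that the dependency was called correctly. Verifying behaviour like
# this is what makes a "mock" in the strict sense of the word.
spy = Mock()
spy.get_price.return_value = 10.0
PriceService(spy).price_with_vat("A-1")
spy.get_price.assert_called_once_with("A-1")
```

The same `Mock` object can play either role, which is exactly why the word "mock" is used so loosely: the difference lies in whether the test asserts on state or on the interaction.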
Rabea works as a software developer at 8th Light, a software consulting company that follows the Software Craftsmanship principles. She has worked as a software developer for almost four years now, after changing careers from digital marketing - one of the best decisions she has ever made. Rabea is passionate about encouraging women to join the tech industry and is a voluntary instructor at Code First:Girls.
Digging In: Getting Familiar With The Code To Be A Better Tester
Maybe you’ve been testing the same application for a while, and your rate of finding new bugs has slowed. Or you’re trying to find more ways to figure out what your devs are doing day to day. You have the tools at your disposal, you just need to dig in!
In this talk, Hilary Weaver-Robb shares tools and techniques you can use to take your testing to the next level. See everything the developers are changing, and learn to find the most vulnerable parts of the code. These tools and techniques can help you focus your testing, and track down those pesky bugs!
Attendees will learn:
- Tools for performing static analysis on the code
- How to use those tools to find potential bugs
- How to use commit logs to figure out what's being changed
- Why it's helpful to dig into the code of the application under test
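As one concrete illustration of mining commit logs, here is a self-contained Python sketch that parses sample `git log --numstat` output to rank files by churn; frequently changed files are often the riskiest and most worth focused testing. The sample data and file paths are invented, and a real script would run `git` itself.

```python
from collections import Counter

# Sample of what `git log --numstat --format=` emits:
# lines-added <TAB> lines-deleted <TAB> path
SAMPLE_NUMSTAT = """\
12\t3\tsrc/payment/checkout.py
5\t1\tsrc/payment/checkout.py
2\t2\tREADME.md
40\t7\tsrc/payment/refunds.py
"""

def churn_by_file(numstat_text):
    """Sum added + deleted lines per file as a simple churn metric."""
    churn = Counter()
    for line in numstat_text.splitlines():
        added, deleted, path = line.split("\t")
        churn[path] += int(added) + int(deleted)
    return churn

# The highest-churn files are candidates for extra testing attention.
hotspots = churn_by_file(SAMPLE_NUMSTAT).most_common(2)
print(hotspots)
```

Pointing this at real history (`git log --numstat --format= --since="3 months ago"`) quickly surfaces the "hot" areas of a code base.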
Hilary Weaver-Robb is a software quality architect at Detroit-based Quicken Loans. She is a mentor to her fellow testers, makes friends with developers, and helps teams level-up their quality processes, tools, and techniques. Hilary has always been passionate about improving the relationships between developers and testers, and evangelizes software testing as a rewarding, viable career. She runs the Motor City Software Testers user group, working to build a community of quality advocates. Hilary tweets (a lot) as @g33klady, and you can find tweet-by-tweet recaps of conferences she’s attended, as well as her thoughts and experiences in the testing world, at g33klady.com.
Scalable XCUITests within iOS Pipelines
When it comes to iOS app development, Swift is becoming the top choice among iOS developers because of its speed, type safety and simplicity. Apple also launched the Xcode UI testing framework, a.k.a. XCUITest, to test apps written in Swift. XCUITest is an extension of XCTest, Apple's unit, network and performance testing framework. Using XCUITest, we can write UI tests in Swift and keep the UI test code in the same repository as the application code, which makes collaboration with developers and CI/CD practices much smoother. Traditional tools like Appium and Calabash don't fit well with native app development in Swift. Although XCUITest has a recorder to get you started with UI testing, we need to apply some design pattern, much like the Page Object or Screenplay patterns in web testing, to organise XCUITests and make them scalable.
Swift is designed as a protocol-oriented programming language, with features like protocols, extensions and enumerations. Patterns like Page Objects or Screenplay may work to a degree, but they don't fit the protocol-oriented style of Swift. We can instead use a protocol-oriented approach to architect XCUITests that scale easily within iOS CI/CD pipelines. In this talk, we will discuss:
- A protocol-oriented architecture for XCUITest, using Swift features like protocols, extensions and enumerations
- How to organise XCUIElements using Swift extensions for better reuse
- How to architect XCUITests for both iPhone and iPads without code duplication
- Setting up XCUITests within iOS CI/CD pipelines
- Tips for writing CI-friendly XCUITests, e.g. stubs, accessibility identifiers, real-device tests, Xcode scheme strategy for UI tests
Shashikant is passionate about DevOps, CI/CD and test automation practices for iOS apps. He uses native Apple developer tools to automate iOS release pipelines with solid test automation. His current toolbox includes Swift, XCTest, Xcode Server, Fastlane and various native Apple developer tools. He blogs regularly on iOS DevOps and test automation on his personal blog (XCBlog), Medium and DZone.
The Use and Abuse of Selenium
Like many power tools, Selenium has a host of features that are designed to be used one way and end up being used another. In this technically focused talk, we'll cover the intended use, and the observed abuse, of some of these features: from the proper way to wait for, find and interact with elements, to how to make your test runs fast and stable.
- A better understanding of how Selenium works
- Knowledge of how the waiting strategies interact
- How to use the Actions APIs
- Extending Selenium to better support your testing