Mobile App Testing Mnemonic: Reminders & Tips For Testing Mobile Apps

By Daniel Knott

A quick search on the Internet for "list of testing mnemonics" turns up more than 30 mnemonics that can support your testing activities. One of the best-known heuristics or mnemonics in software testing is SFDPO (San Francisco Depot), created by James Bach. Mnemonics not only help software testers remember important areas to cover during their testing activities, they can also trigger new testing ideas.

MOBILE APP TESTING As A Mnemonic

For my mobile testing activities, I've mainly used I SLICED UP FUN, created by Jonathan Kohl. It helped me focus my testing activities on different mobile areas. Today I want to share my own mnemonic, which I have created over the last months. It's called "MOBILE APP TESTING"! Easy to remember, right? What follows is a detailed description of the mnemonic with explanations, tips, and resources to help you with mobile testing.

M - Mobile Device

In the early stages of the software development lifecycle, testing on simulators or emulators is fine, but I recommend testing on real devices. The closer you get to the rollout of your app, the more the team must test on real devices.

I always recommend selecting devices for testing that are used by the target audience. If the target audience is unknown, the team should perform some market research based on the target customers. If you don't have market research handy, you can use the Digital Test Coverage Report from Perfecto Mobile to get more insight into the most used devices in different countries. Once you have a list of devices to focus on, they can be grouped for better prioritization based on your target audience or target customers.

O - Orientation

Mobile apps can be used in two orientations: portrait and landscape mode. If an app supports both, a tester should perform some orientation checks, for example:

  • Rotate the device from landscape to landscape (a 180 degree rotation) and back.
  • Rotate the device from portrait to landscape (a 90 degree rotation to the left or to the right) and back.

Perform these kinds of tests multiple times in a short timeframe to check whether the app is able to redraw the UI and present everything correctly. I wrote a detailed blog post about orientation testing, and QATestlab also wrote a great post on this topic.
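
If the app is driven by a test automation tool, these repeated rotations can also be scripted. Below is a minimal sketch using the Appium Python client, assuming a local Appium server and a hypothetical Android app; all capabilities and the rotation count are placeholders.

    # A minimal sketch using the Appium Python client; all capabilities are placeholders.
    # Depending on the client version, the capabilities may need to be wrapped in an
    # options object (e.g. UiAutomator2Options) instead of a plain dictionary.
    from appium import webdriver

    caps = {
        "platformName": "Android",
        "automationName": "UiAutomator2",
        "deviceName": "Pixel",                 # placeholder device name
        "appPackage": "com.example.myapp",     # hypothetical package under test
        "appActivity": ".MainActivity",
    }

    driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)

    # Rotate between portrait and landscape several times in a short timeframe
    # and check that the app reports the expected orientation after each rotation.
    for _ in range(5):
        driver.orientation = "LANDSCAPE"
        assert driver.orientation == "LANDSCAPE"
        driver.orientation = "PORTRAIT"
        assert driver.orientation == "PORTRAIT"

    driver.quit()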

B - Mobile Browsers

Not every mobile app is a native implementation. Many "apps" out there are responsive web pages or hybrid apps built with web technologies. In order to test a mobile web app, the team or the tester must perform testing on various mobile browsers, screen sizes, and pixel densities to be sure that the app works as expected.

In the early stages of development, the Google Chrome developer tools can be very useful for changing the device type or the screen resolution in a desktop Chrome browser. The final testing should be done on a real device with the browser most used by the target audience or customers. Check out "Mobile Testing 101" by Stephen Janaway, which explains how to use the Chrome developer tools in mobile testing.
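
Chrome's device emulation can also be driven from an automated check. The following sketch assumes Selenium with ChromeDriver; the viewport metrics, user agent, and URL are placeholder values.

    # A minimal sketch with Selenium and Chrome's mobile emulation.
    # The viewport metrics, user agent, and URL are placeholder values.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    mobile_emulation = {
        "deviceMetrics": {"width": 360, "height": 640, "pixelRatio": 3.0},
        "userAgent": "Mozilla/5.0 (Linux; Android 10; Mobile) AppleWebKit/537.36",
    }

    options = Options()
    options.add_experimental_option("mobileEmulation", mobile_emulation)

    driver = webdriver.Chrome(options=options)
    driver.get("https://www.example.com")   # placeholder URL

    # Basic sanity check: the page renders a title in the emulated mobile viewport.
    print(driver.title)
    driver.quit()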

I - Interrupts

Interruptions on a mobile device can come in several forms. Examples of interrupts are system or app notifications, alarms, or phone calls. These kinds of interrupts can have a huge impact on your app, and testing them is crucial for app performance.

A simple interrupt test for a mobile app could be calling the device while the application is active. While testing this scenario, a mobile tester should take a look at how the app handles the incoming call and, furthermore, how the app handles the end of the incoming call.

  • Is the app on the same view as before?
  • Is the data still the same and shown correctly?

This is only one scenario that could affect your app. Depending on the use case, a mobile tester should define possible interrupt scenarios and test them during feature development.

To get a first, and fast, impression of the robustness of the app under test, a mobile tester can use tools like the Android monkey tool (run via adb) to generate random inputs and interrupts and see how the app handles the situation. Antoine Merle has written a great guide on how to set up and use a monkey tool.
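
As an illustration, the monkey tool can be started through adb from a small script. The sketch below is a minimal example with a hypothetical package name and an arbitrary number of events.

    # A minimal sketch: start the Android monkey tool through adb from a Python script.
    # The package name and the number of pseudo-random events are placeholders.
    import subprocess

    package = "com.example.myapp"   # hypothetical package under test
    events = 500                    # number of random events to fire at the app

    subprocess.run(
        ["adb", "shell", "monkey", "-p", package, "-v", str(events)],
        check=True,
    )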

Besides the software interrupts on a mobile device that need to be tested, there are also hardware interrupts to consider. Here are some of the hardware interrupts you should test (a small scripted sketch follows the list):

  • Changing the volume via the hardware buttons.
  • Pressing the standby button while the app is active.
  • Pressing the volume buttons in rapid succession.
  • Pressing the standby button and activating the phone again to see how the app handles this scenario.
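
Some of these hardware interrupts can be scripted as well, for example by sending key events through adb on Android. The following minimal sketch uses arbitrary repetition counts and pauses.

    # A minimal sketch: fire hardware key events through adb while the app is in the foreground.
    # Key names follow Android's "input keyevent" command; the pause length is arbitrary.
    import subprocess
    import time

    def key_event(name: str) -> None:
        subprocess.run(["adb", "shell", "input", "keyevent", name], check=True)

    # Change the volume in rapid succession.
    for _ in range(5):
        key_event("KEYCODE_VOLUME_UP")
        key_event("KEYCODE_VOLUME_DOWN")

    # Send the device to standby and wake it up again.
    key_event("KEYCODE_POWER")
    time.sleep(2)
    key_event("KEYCODE_POWER")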

L - Look

The "look" refers to having a well-presented application in the app stores. It is important to have beautiful screenshots of the app that represent the features inside it, and descriptive texts about the app that give the user as much information upfront as possible. It's also a good idea to record a screencast video of the app, or of the product itself.

Think of the millions of apps available in the different stores. Presenting the app in the best way a company can is key to winning new customers; if the store presentation fails, the user will likely not install your app. If the company has a corporate design guide, the mobile development team should check that the store presentation aligns with the guide and/or the guidelines for the application rollout.

Once the app is live in the app stores, make sure to update the available information from time to time, for example to mention new features or when features have been removed. Release notes are also important for the overall look and feel of the application. For each app release, provide as much detail as possible so the user can follow what's new in the release. There are many apps out there writing things like "Minor bug fixes" or "General improvements" as release notes; notes with so few details are not helpful to the user.

E - Energy Consumption

The battery life of modern smartphones is roughly around 8 hours of usage (it differs from user to user). This is not much, considering that the main use case for mobile apps is while users are on the move, with no possibility to charge the phone (unless an external battery is available). It's really important that an app does not waste the battery life of the user's phone, so battery consumption should be checked during the development phase.

How to test the battery consumption of an app

An easy, but not ideal, way of checking battery consumption is to charge a phone to 100 percent, install the app under test, start the app, and leave the phone for a couple of hours. Afterwards, check how the battery state of the phone has changed and whether the app is using the largest percentage of the battery. iOS and Android offer battery usage data in the system settings section under "Battery". If the app appears high up in that list, the mobile team should investigate to find the problem.

This is only a really simple check for battery drainage, and it can be influenced by other factors, like system apps or other apps installed on the phone. To get more realistic insights into the battery consumption of the app, Apple provides an Energy Diagnostics tool inside Instruments. A similar tool and guide are also provided by Google for Android.
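
On Android, a rough per-app view of the collected battery statistics can also be pulled from the command line with dumpsys. The sketch below assumes a hypothetical package name; the output format differs between Android versions.

    # A minimal sketch: collect per-app battery statistics on Android with dumpsys.
    # The package name is a placeholder; the output format differs between Android versions.
    import subprocess

    package = "com.example.myapp"   # hypothetical package under test

    # Reset the collected statistics, exercise the app for a while, then dump them.
    subprocess.run(["adb", "shell", "dumpsys", "batterystats", "--reset"], check=True)
    input("Use the app for a while, then press Enter to collect the statistics...")

    result = subprocess.run(
        ["adb", "shell", "dumpsys", "batterystats", package],
        check=True, capture_output=True, text=True,
    )
    print(result.stdout)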

A - Automation

As in any software project, test automation is a tool to support the testing. If the automation scripts are written and treated like production code, they will free up a lot of time for the software testers to focus on more complex testing. In the mobile testing world, there are plenty of test automation tools available, like Appium, Calabash, Espresso, XCUITest, and EarlGrey, just to name a few. It's important for each team to take the time to evaluate the different tools on the market and to find the right fit for their own environment to get the most out of it. More information about mobile test automation is available in chapter 5 of the book Hands-On Mobile App Testing.
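
As a small illustration of treating automation code like production code, the sketch below structures an Appium check with a pytest fixture so the session setup is shared between tests. The capabilities and the element locator are hypothetical placeholders.

    # A minimal sketch: an Appium check structured with pytest fixtures, so that the
    # setup code is shared and the checks themselves stay small and readable.
    # Capabilities and locators are placeholders; depending on the client version,
    # the capabilities may need to be passed as an options object instead.
    import pytest
    from appium import webdriver
    from appium.webdriver.common.appiumby import AppiumBy

    CAPS = {
        "platformName": "Android",
        "automationName": "UiAutomator2",
        "deviceName": "Pixel",                # placeholder device name
        "appPackage": "com.example.myapp",    # hypothetical package under test
        "appActivity": ".MainActivity",
    }

    @pytest.fixture
    def driver():
        drv = webdriver.Remote("http://localhost:4723/wd/hub", CAPS)
        yield drv
        drv.quit()

    def test_login_button_is_visible(driver):
        # Hypothetical resource id of a button on the start screen.
        button = driver.find_element(AppiumBy.ID, "com.example.myapp:id/login")
        assert button.is_displayed()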

P - Performance

The performance of an app has a huge impact on user satisfaction. Mobile users have high expectations and expect an app to load within two to three seconds; after that, the app must be responsive and usable. However, testing for performance is time consuming and not easy to do.

A typical mobile app relies on a backend system for its data. The app's requests and the responses are transferred over a mobile network to and from the backend system. Even for a simple request, there are four parts that can impact the overall performance of an app:

  1. The backend server.
  2. The mobile data network.
  3. The mobile device.
  4. The other installed apps on a device.

A mobile team might only be able to test the performance of the backend systems and the mobile app itself. The mobile data network is not in their control and might only be simulated during a testing session. Looking only at the app's performance, a mobile team can add some measurements to look for performance bottlenecks in scenarios like the following (a small launch-time sketch follows the list):

  • App launch time.
  • Time the app needs to display the first screen after login (if there is one).
  • Scroll performance.
  • Transitions between screens.
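
For the first scenario, the launch time of an Android app can be sampled from the command line via the activity manager. The minimal sketch below parses the reported TotalTime; the package and activity names are placeholders.

    # A minimal sketch: measure the cold start time of an Android app with "am start -W".
    # The package and activity names are placeholders; the output format can vary slightly
    # between Android versions.
    import re
    import subprocess

    package = "com.example.myapp"                 # hypothetical package under test
    component = f"{package}/.MainActivity"        # hypothetical launch activity

    # Stop the app first so the measurement covers a cold start.
    subprocess.run(["adb", "shell", "am", "force-stop", package], check=True)

    result = subprocess.run(
        ["adb", "shell", "am", "start", "-W", "-n", component],
        check=True, capture_output=True, text=True,
    )

    match = re.search(r"TotalTime:\s*(\d+)", result.stdout)
    if match:
        print(f"Cold start took {match.group(1)} ms")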

Apple and Google provide tools and guides on how to optimize an app for performance. Optimizing for performance is not easy and takes time. If a mobile team wants to improve performance after an app is released, there are performance SDKs on the market that can be used to get real performance data from your customers, for example the Firebase Performance SDK or FLOWUP.

P - Personas aka Users

When a team is developing and testing an app, the team should know the target group for the app. If this knowledge is missing, it's very likely that the wrong product will be developed and tested, and the app will most likely not fulfill the needs of the customers. It's important to gather data about the target group, which can be done by creating personas. Once personas are in place, a mobile team should find real users (or have a way of getting this information) and ask them specific questions about their needs and how the app will help them solve a problem.

T - Time & Date

If the mobile app relies on time and date, a mobile tester should test how the app reacts to modifications of both on a mobile device. If time and date are also important for the backend system the app relies on, time changes should be tested to see how the whole system reacts. A mobile tester should also change time zones on a mobile device to check for possible problems with an application's reliance on date and time, especially concerning where and when data is recorded.
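
On Android, one way to prepare such tests is to switch off automatic time synchronisation from the command line so that the time, date, and time zone can be changed manually. The sketch below is a minimal example; the settings keys may behave differently across Android versions.

    # A minimal sketch: switch off automatic date/time synchronisation on Android so the
    # time, date, and time zone can be changed manually for a test session.
    # The settings keys may behave differently across Android versions and vendors.
    import subprocess

    def put_global(key: str, value: str) -> None:
        subprocess.run(["adb", "shell", "settings", "put", "global", key, value], check=True)

    put_global("auto_time", "0")        # stop syncing time/date from the network
    put_global("auto_time_zone", "0")   # stop syncing the time zone from the network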

E - Ergonomics

The user interaction design of a mobile app is really important. The ergonomics of your app should be tested because people use their phones in many different ways. Mobile testers should look for poor interactions and ergonomics, such as needing multiple taps to navigate to a search result.

If the font size is too small or the colour is a poor choice for the screen, it can make an app really hard to use. Does the user have to provide frequent inputs via the soft keyboard in order to get to the right result? If so, you might want to suggest simplifying the application workflow. Are the UI elements placed in an ergonomic way on the display? Test usability for users with smaller hands, or users who are unable to use their hands to navigate.

It's recommended to conduct some interviews and show possible new features to customers to get real user feedback about the application and how they use it. The mobile team can observe customers (if possible), or create a persona of the customer from their feedback, to identify potential interaction and ergonomic problems.

S - Security

Security is critical for a software product and the business that owns it. If hackers steal customer data, the customer will most likely never use the product again. Security testing should be performed by experts; it is a broad field, and a single person cannot oversee all aspects of this kind of testing. If a mobile team wants to take the first steps towards mitigating possible attacks, they can take a look at the mobile security testing guide from the Open Web Application Security Project (OWASP). It's a great starting point, but the business should still consider hiring experts to perform security testing.

T - Tracking

Almost every software product, whether it's web or mobile, has some kind of tracking implemented. Tracking is used to gather insights into how the product is used, and user flows can be tracked to improve the way users work with the product. However, while user tracking can be useful, it can also be misused.

If a mobile app uses tracking, the mobile team should ask themselves: is this really necessary to track, and is it helpful for the business? There are laws in many countries that protect user data and regulate how it's used. The EU General Data Protection Regulation (GDPR) is a law that aims to unify the data privacy laws across the EU. Companies have to inform their users about the implemented tracking and about why and how the data is used. If a company does not follow the law, it can be sued and fined for the violation. Whenever tracking is implemented, mobile teams should know the regulations for user tracking and data so they do not inadvertently open their company to liability.

I - Inputs

A mobile device has more than one interface through which the user can enter data. The most prominent interface is the touch screen. Users can perform different gestures, with more than one finger, to enter data into an app. A mobile tester should know all the gestures that are possible and which ones are supported, in order to test them on a particular device. Luke Wroblewski has written a nice overview of possible gestures per platform.
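
For simple single-finger gestures on Android, inputs can also be replayed from the command line during exploratory sessions. The sketch below uses placeholder coordinates that depend on the screen resolution; multi-finger gestures need a driver such as Appium instead.

    # A minimal sketch: replay simple single-finger gestures on Android via "adb shell input".
    # The coordinates are placeholders and depend on the device's screen resolution.
    import subprocess

    def adb_input(*args: str) -> None:
        subprocess.run(["adb", "shell", "input", *args], check=True)

    # Single tap at a screen coordinate (x, y).
    adb_input("tap", "540", "960")

    # Swipe from bottom to top (x1, y1, x2, y2, duration in ms) to scroll a list.
    adb_input("swipe", "540", "1600", "540", "400", "300")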

User inputs via touch are not the only way to communicate with an app. Each mobile device has buttons, a camera, and a microphone. With the help of buttons, a user can change the volume of a song while it’s playing. The power button is used to switch off the phone or end a call. On some mobile devices, a double click on the power button can start the camera, as well as perform other functions. The camera and the microphone can be used to interact with the app and to enter data. An example of data entry could be changing a photo on a profile, or sending a voice message. The camera and the microphone can have a huge impact when a mobile team has to test an app using different device vendors. The different hardware chips can lead to different results and problems that a team might need to tackle.

N - Network

Depending on where a phone is located, it can be connected to different data networks to fetch data from and send data to the Internet. Each data network can offer a different speed to a given user, and that speed can impact a mobile app. The following networks are available to different users: GPRS (slowest), EDGE, 3G, LTE/4G, and, coming soon, 5G. A mobile phone can also operate over Wi-Fi connections, which can also vary in availability and speed.

Using an app on a slow network connection like EDGE can impact the overall experience of the app. The UI is most likely not as responsive as on 4G because the data transfer is slower, and a slow connection can force the app to show error messages due to timeouts from the backend system. These are just two example scenarios that could impact the usage of an app.

A mobile team should test the app in real-life conditions; performing testing in the wild can provide valuable information. It's important to test the app on the data networks the user will most likely use. When checking data networks, a team must also check how the app reacts to connection drops or to the internet being unavailable (e.g. usage in airplane mode). If a mobile team can't leave the office for testing, they can use tools such as Charles Proxy to throttle the network connection down to a slower network and check the behaviour of the app. If you are using the Chrome Developer Tools, you can also simulate a throttled network connection.
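
If the web part of the app is checked with Selenium and Chrome, the Developer Tools throttling mentioned above can also be scripted via ChromeDriver's network conditions. The sketch below uses arbitrary example values for latency and throughput.

    # A minimal sketch: throttle the network for a mobile web check via ChromeDriver's
    # network conditions (the same mechanism the Chrome Developer Tools use).
    # The latency and throughput values are arbitrary examples of a slow connection.
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.set_network_conditions(
        offline=False,
        latency=300,                      # additional round-trip latency in ms
        download_throughput=250 * 1024,   # bytes per second
        upload_throughput=250 * 1024,     # bytes per second
    )

    driver.get("https://www.example.com")   # placeholder URL
    print(driver.title)

    # Simulate a complete connection drop, e.g. the app being used in airplane mode.
    driver.set_network_conditions(offline=True, latency=0,
                                  download_throughput=0, upload_throughput=0)
    driver.quit()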

G - Platform Guidelines

The user interface is the central element of an app. If the user interface is too complicated and/or overloaded with UI elements, mobile users can get confused and might uninstall the app. Therefore, each app should follow the UI guidelines of the respective mobile platform.

If a mobile development team has no dedicated design person on the team, a mobile tester should know where to check the design guidelines (Android & iOS) to see if the implemented UI elements conform to the platform. If the company has corporate design guidelines, a mobile team should check how those guidelines fit with the mobile platform guidelines. An absolute no-go is mixing in design elements from other mobile platforms; in other words, an Android app should not look like an iOS app or use iOS design elements.

Mnemonics Are Fun!

This is my mnemonic, MOBILE APP TESTING. I hope it's useful and adds benefit to your testing. If you want more mobile testing ideas, print this mobile testing cheat sheet and put it up in the team space so that everybody can use it.

Author Bio

Daniel Knott is a mobile testing expert working as Lead Software Test Engineer in XING's mobile platform team. He started his software testing career in 2003 as a trainee at IBM. After his time at IBM, Daniel studied computer science with a focus on software development and testing. Since 2009, Daniel has worked for companies such as Accenture, AOE and XING, testing web, desktop and mobile applications in several agile development projects. Mobile testing became his passion, and he has been working in the mobile development and testing industry since the beginning of 2011. He works with several mobile test automation tools such as Robotium, Calabash for iOS/Android, Espresso and Keep It Functional, and with the help of these tools he developed a fully automated testing environment for Android and iOS.

Daniel likes to share his knowledge, which is why he writes about his experience on his blog www.adventuresinqa.com as well as in several testing magazines. Daniel is a well-known mobile expert and a speaker at various conferences in Europe, and since 2014 he has been the author of the book "Hands-On Mobile App Testing". You can follow Daniel on Twitter.
