
Mobile Device and Mobile Application Usability Testing: Top 5 FAQs

Written by Pamela Gay | Dec 8, 2011 7:26:41 PM

This article is re-posted from User Centric’s blog.

Having tested mobile devices and applications with over 5,000 participants and partnered with more than 25 mobile clients, User Centric (now GfK's User Experience group) is a global leader in mobile usability testing. Across these projects, clients tend to ask the same questions. To mark the completion of our 250th project in the mobile space, we have compiled the top five questions clients ask during usability testing of a mobile device or application:

1. Does a fully functional prototype have to be built before user testing?

It is not necessary to develop a fully functional prototype before usability testing. Building one takes time and money, and the more fully built out an application is, the more difficult it is to make changes. Primary use cases and paths should be built out, but secondary areas can be addressed by probing expectations: if a user tries to go down a path that is not yet built out, the moderator simply asks what they would expect to see after clicking on that item.

The key is to test designs with users early and often. Test stimuli can range from:

• Low fidelity paper prototypes or HTML mockups

• High fidelity prototypes, such as digital or touch prototypes on a computer

• Interactive simulators on a touch screen handset

• Beta or post-launch applications

At GfK, we often conduct rapid iteration studies: one to two days of testing are completed, the stimuli are revised based on initial findings, and another one to two days of testing are conducted to validate those changes. This is a great way to get quick feedback on design changes within a single study.

If needed, we can also build the prototype(s) for testing and iterate on them between fieldwork days.

2. Should the user’s device be tested or is it better to provide a device?

The goal is to recreate the actual user experience as much as possible; therefore, it is important that learning a new device does not interfere with the mobile experience. There are a few ways to recreate a natural experience in a lab setting, but there are tradeoffs to each.

Option 1: Participants use their own device during the test. The benefits include device familiarity (no hardware learning curve), and because it is their own device, it already contains their applications, settings, and data, so there is no need to create dummy accounts. There are risks, however. A participant may forget to bring their device to the testing session, so backup devices are necessary or sessions could be lost; participants are reminded to bring their devices when they are recruited and confirmed, but there is always a chance they will forget. Participants may also have different settings on their devices, which can affect their experience, so additional time may need to be built into the beginning of each session to bring device settings into alignment. Finally, if data usage is required during testing and a participant does not have an unlimited data plan, they may not want to use their data during the session.

Option 2: Provide the testing devices and recruit participants who are familiar with those devices. For example, if testing with the Android OS, recruit users who own an Android device (paying attention, of course, to major differences such as touch screen vs. non-touch screen). The benefits of this option: device settings are easier to control, no sessions are lost if a participant forgets their phone, there is minimal to no learning curve, and participants do not have to expose their actual data (e.g., bank account information, passwords, Twitter/Facebook feeds). The risks: differences in experience caused by individual user settings will not be captured, tasks that require personalized data (e.g., Facebook, Twitter) require dummy accounts to be created beforehand, and key experiences may be missed because the data is not the participant's own.

3. What devices and user groups should be tested?

A representative sample of users or intended users should be included. Consider some of the following: Do users and/or intended users fall into certain age groups? Do they own specific types of devices? Do they exemplify specific characteristics, such as being an early adopter or owning other devices (tablets, DVRs, video cameras)? We work with clients to develop a screener (a list of questions used during the recruitment process) to ensure the intended target users are being tested.
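To make the idea concrete, here is a minimal sketch of how screener rules and recruiting quotas fit together. The criteria, quota targets, and field names are hypothetical examples for illustration only, not GfK's actual screener.

```python
# Hypothetical screener logic: inclusion rules plus simple recruiting quotas.
# All criteria and quota targets below are illustrative, not from the article.

from dataclasses import dataclass

@dataclass
class Candidate:
    age: int
    device_os: str          # e.g., "iOS", "Android", "BlackBerry"
    owns_tablet: bool
    early_adopter: bool

def qualifies(c: Candidate) -> bool:
    """Inclusion rules: who counts as an intended target user for this made-up study."""
    in_age_range = 18 <= c.age <= 55
    owns_target_device = c.device_os in {"iOS", "Android"}
    return in_age_range and owns_target_device

# Quotas keep the recruited sample representative (illustrative targets).
quota_targets = {"iOS": 6, "Android": 6}

def recruit(candidates: list[Candidate]) -> list[Candidate]:
    """Accept qualifying candidates until each platform quota is filled."""
    recruited: list[Candidate] = []
    counts = {os_name: 0 for os_name in quota_targets}
    for c in candidates:
        if qualifies(c) and counts[c.device_os] < quota_targets[c.device_os]:
            recruited.append(c)
            counts[c.device_os] += 1
    return recruited

if __name__ == "__main__":
    pool = [
        Candidate(age=29, device_os="Android", owns_tablet=True, early_adopter=True),
        Candidate(age=41, device_os="iOS", owns_tablet=False, early_adopter=False),
        Candidate(age=63, device_os="Android", owns_tablet=False, early_adopter=False),  # excluded by age
    ]
    print([c.device_os for c in recruit(pool)])
```

In practice the screener is administered as interview or survey questions, but the same logic (inclusion criteria plus quotas) determines who ends up in the sessions.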

4. Are there differences between iPhone and Android users? Do both need to be included in a study?

In general, there tends not to be much difference between iPhone and Android users in terms of task success. Differences are more subtle and revolve around expectations of OS specific interactions (e.g., swipe vs. long press; menu key on Android). Greater differences are seen between non-touch screen BlackBerry users and touch screen users, regardless of OS. This is because the screen size is much smaller on non-touch screen BlackBerry devices (which means more scrolling) and the input method is different (e.g., touching directly on a target vs. scrolling down to select a target).

When determining what devices to include in the study, consider the following:

• What devices do the users own?

• Is the test stimulus an application being built for a specific OS? If yes, then focus on that specific OS. Include a few owners of other devices if there is interest in determining if potential users can successfully and easily use the application.

• Is the test stimulus a website that will be used on a variety of platforms? If yes, then include a representative sample of device types that current or intended users own as browsers and interactions may differ across different devices.
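As a rough illustration of what a representative sample of device types can mean in practice, the sketch below splits a participant sample across platforms in proportion to ownership shares. The shares and sample size are made up for this example; a largest-remainder split keeps per-platform counts whole while still summing to the total.

```python
# Hypothetical example: splitting a usability-test sample across device platforms
# in proportion to ownership among current or intended users.
# The ownership shares and sample size below are illustrative only.

def allocate_sample(ownership_share: dict[str, float], total_participants: int) -> dict[str, int]:
    """Largest-remainder allocation so the per-platform counts sum to the total."""
    raw = {platform: share * total_participants for platform, share in ownership_share.items()}
    counts = {platform: int(value) for platform, value in raw.items()}
    remainder = total_participants - sum(counts.values())
    # Hand the leftover slots to the platforms with the largest fractional parts.
    for platform in sorted(raw, key=lambda p: raw[p] - counts[p], reverse=True)[:remainder]:
        counts[platform] += 1
    return counts

if __name__ == "__main__":
    shares = {"iOS": 0.45, "Android": 0.40, "BlackBerry (non-touch)": 0.15}  # illustrative
    print(allocate_sample(shares, total_participants=12))
    # e.g., {'iOS': 5, 'Android': 5, 'BlackBerry (non-touch)': 2}
```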

5. When is lab testing versus remote testing appropriate?

Lab-based research is best suited for testing core phone features and applications and for competitive studies. If the study seeks to understand pain points, how task flows can be improved, and the like, the lab is a good place to capture that type of data. Field studies, or studies outside of the lab, are best suited for understanding how consumers actually use certain features or applications in their day-to-day lives. Data can be collected outside of the lab in a variety of ways, including contextual interviews, diary methods (text messages, picture messages, blogs, online surveys, emails, etc.), phone surveys, and remote logging tools on the devices.

Ultimately, experience is essential when designing a test plan for a mobile usability project to ensure project goals are met. We have a deep-seated passion for the mobile industry: approximately 40% of our user experience research work is in mobile, and seven of the top eight mobile manufacturers in the country have trusted us with their usability testing, user-centered design, and user research.

Read more about our mobile UX design and research solutions.