Testing in-car infotainment systems forces us to rethink the way we evaluate new features. Not only should new features be easy to learn and use, they also need to be safe to use while driving. But how can we test a feature for its impact on safety?
When considering the user experience (UX) of a product, we usually evaluate qualities like usability, learnability, and efficiency. With cars, however, the primary quality we focus on is whether a new feature can be safely operated while driving. If not, that feature should be implemented so that the driver can only use it while the car is in park or the engine is off.
How can we define and measure the level of “distraction”?
In the United States, two sets of guidelines address this question: those of the National Highway Traffic Safety Administration (NHTSA) and those of the Alliance of Automobile Manufacturers (AAM). Both agree that “eyes off-road time” is an essential factor when measuring distraction. Both documents also consider the number of lane exceedances and the variation in headway distance to a lead vehicle as another viable set of measurable data. The guidelines further agree that the data should be gathered while respondents drive in a simulator and complete a task involving the new feature. The same respondents then perform a specially selected reference task under the same conditions, and these two sets of data are compared.
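To make the comparison above concrete, here is a minimal sketch of how “eyes off-road time” could be aggregated from simulator gaze data. The sampling rate, data format, and function name are illustrative assumptions, not something either set of guidelines prescribes.

```python
def eyes_off_road_time(gaze_samples, samples_per_second=10):
    """Total time (in seconds) the driver's gaze was off the road.

    gaze_samples: sequence of booleans recorded at a fixed rate
    (assumed 10 Hz here); True = gaze on road, False = off road.
    """
    off_road_count = sum(1 for on_road in gaze_samples if not on_road)
    return off_road_count / samples_per_second

# Example: 5 seconds of driving, with the gaze off the road for 12 samples.
samples = [True] * 20 + [False] * 12 + [True] * 18
print(eyes_off_road_time(samples))  # 1.2 (seconds)
```

The same measure would be computed for both the new-feature task and the reference task, and the two totals compared.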
But that is where the common ground ends. There are crucial differences between the two sets of guidelines regarding several aspects of the testing, such as the choice of reference task. The reference task involves a time-tested feature whose effects on driving performance are well researched. While the NHTSA recommends comparing driving performance by having test respondents enter a destination into a GPS, the AAM considers tuning the radio sufficient for this purpose. Should operating the new feature distract drivers significantly more than the reference task, it fails the test.
Another difference: while the NHTSA recommends an even distribution of respondents across all age segments, the AAM guidelines suggest a focus on older drivers. Additionally, when it comes to deciding whether a feature passes or fails the test, the guidelines disagree on how much “eyes off-road time” is still acceptable and on how a lane exceedance should be defined. It is evident how these differences can have quite an impact on the outcome of a distraction study.
First-hand insights on the AAM guidelines
Although we are in no position to decide whose guidelines are more sensible, we can share some insights based on our past experience with the AAM guidelines. While the guidelines sound very clear and specific, there are actually quite a few loopholes and gaps that need to be filled by the individual researcher.
A couple of areas where the guidelines remain vague are the definition of the simulated test track and the number of practice trials respondents should go through during testing. Interestingly, the AAM guidelines also do not specify how exactly the performance numbers are to be calculated, again leaving room for interpretation. However, choosing one way of calculating the numbers over another can produce significantly different results.
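A small, entirely hypothetical illustration of why the calculation method matters: the glance durations and the 2.0-second limit below are invented for the example, and the AAM guidelines prescribe neither of the two aggregations shown.

```python
# Per-glance off-road durations (seconds) for one respondent on one task.
glances = [0.8, 1.1, 0.9, 3.5, 1.0]

# Two plausible ways to reduce the glances to a single performance number:
mean_glance = sum(glances) / len(glances)  # roughly 1.46 s
longest_glance = max(glances)              # 3.5 s

# Against a hypothetical 2.0 s limit, the two methods disagree:
LIMIT = 2.0
print(mean_glance <= LIMIT)     # True  -> the feature would pass
print(longest_glance <= LIMIT)  # False -> the feature would fail
```

Depending on which number a researcher chooses to report, the very same data can lead to a pass or a fail.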
An ongoing debate
While we found our own ways to solve these various issues (mainly by developing special methods to clean, aggregate and analyze the data), we are still closely following the current discussion and certainly look forward to any progress on the topic.
A possible consensus between organizations would be particularly intriguing considering the demand for the guidelines to become (in part) legally binding for all auto manufacturers.
One thing is certain – we’ve got some interesting times ahead in the automotive industry!
Philipp von Kiparski works as a User Experience Consultant at GfK SirValUse in Munich. He specializes in the psychology of learning, perception and communication.