
A perfect data paradox in the digital age

by Arno Hummerston , 09.02.2015
Seven essential digital ad effectiveness research considerations

Pretty much everything generates data these days. You get up in the morning and your Fitbit records your first step. You walk down the road and your footfall is measured in at least two ways (most likely by CCTV if you live in the UK, and by your mobile). Even when you go to the toilet (sorry), the water consumption of the flush is recorded somewhere (I don’t want to know where). Great. Loads of data. As a market researcher, I should love it.

When it comes to digital advertising, the same thing happens: every little interaction with a brand, product or service creates a data point that someone owns, records and (hopefully) uses. As a result, near-infinite data points are generated by so many devices and interested parties. Surely, you would think, we could use all of this data to create a paradise of perfect insight into effectiveness, efficiency and return? Yes, we could. But we need the right technology, budget and plenty of will.

In the early 2000s, digital ad effectiveness research, as pioneered by Rex Briggs, Nick Nyhan (and me), relied on pop-up (or rather, pop-under) surveys fielded after exposure to an ad, with complicated ways of keeping those invites from biasing the results. This was “new stuff” and not reliant on any data other than that collected by the digital, advertising or research agency. Our clients, and indeed publishers, had to place great trust in us to execute without impact on their business. In one instance in early 2001, an early version of one of the solutions I was working on almost brought down Hotmail (as it was then) when we lost control of the invite frequency. Our agency servers nearly caught fire with the load placed on them. We have come a long way since then and we have much more data from other places: places that we, as an agency, don’t control and that don’t necessarily align with each other. Incidentally, that’s more data than nearly took down one of the most heavily visited sites in 2001.

There is an assumption that, because all this data exists, market research agencies can use it all to provide perfect insights. I am sure there are some companies that claim they can, but I urge caution. At GfK we have some of the best minds in the business and we are constantly innovating to analyze increasing amounts of data effectively. We need to go beyond today’s commercialized methodologies and incorporate smartphone and tablet environments, all the while acknowledging the change in consumer behavior.

Let’s move on 13 years to the current challenges we face and the evolved ecosystem we must now deal with. An ethical, considerate and credible agency should be considering privacy, multi-device ad exposure, multi-delivery environments on mobile/tablet, cookie deletion, cookie privileges, browser clash and even sample volume for the trackers that make the ad effectiveness world tick.

Of these seven must-do considerations in digital ad effectiveness research, two are easy to address today. With the right technology and budget we can address a further three. That leaves two that we cannot properly and robustly address practically and at scale. That is not to say that these are not possible on their own; they are. However, combining them all to deliver a holistic, full and accurate picture of an individual’s exposure to a campaign is where the real challenge rests. The paradox is that the data exists, but it has yet to be collected, used and made sense of holistically.


Two we can address today


Privacy

We need people’s permission to use their data. Using ad exposure data with full permission requires panels, or explicit permission granted in surveys. Clearly this is no problem, but you still need to do it.

Cookie deletion

Some people delete their cookies, and some of them do so regularly. Deletion can mean inaccurate data and incorrect attribution to exposed or control groups, as their history is lost. This can skew results: control groups can include exposed people, which reduces the test vs. control delta and makes the impact appear smaller than it actually was.

This is easy to account for: use sufficient sample sizes and only include people who have not deleted their cookies. This means it is essential that you have a way of knowing who they are, and a sufficient number of them.
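To see why contaminated control groups matter, here is a minimal sketch of the dilution effect. All rates and the contamination level are hypothetical, for illustration only; real studies would model this with proper survey weighting.

```python
# Sketch: how control-group contamination (e.g. exposed people whose
# cookies were deleted landing in the control group) dilutes the
# measured ad effect. All numbers are hypothetical.

def measured_delta(exposed_rate, control_rate, contamination):
    """Brand-metric delta when a fraction of the 'control' group
    was actually exposed to the campaign."""
    # The contaminated control group's observed rate drifts toward
    # the exposed group's rate.
    observed_control = (1 - contamination) * control_rate \
                       + contamination * exposed_rate
    return exposed_rate - observed_control

true_delta = measured_delta(0.30, 0.20, 0.00)   # clean control
diluted = measured_delta(0.30, 0.20, 0.25)      # 25% contaminated

print(round(true_delta, 3), round(diluted, 3))  # prints: 0.1 0.075
```

With a quarter of the control group secretly exposed, the measured lift shrinks from 10 to 7.5 percentage points, understating the campaign's true impact.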

Three we can address with budget and technology

Browser clash

Many people use two or more browsers on their devices. Cookies sit in one browser only, so ad exposure across different browsers is recorded in more than one cookie. We can unify this by getting panelists to use all their browsers, either at the point of cookie drop or of the survey. This is difficult to do but definitely possible. It does become slightly more challenging when combined with an effective solution for cookie deletion (see above), but with enough budget and a clever technical approach it can be done.

Cookie privileges

This refers to the actual capability to drop cookies and is the first of our issues that is directly relevant to mobile ads. Currently, on iOS (iPhone and iPad), only first-party cookies can be set. It is not possible to set third-party cookies (though you can read existing ones later). This means that, to set a cookie, the visitor needs to actually be on your site, not just served a pixel. As long as you use a panel for mobile ad effectiveness, this is not a huge problem. However, the same could happen on PCs: Mozilla was planning to implement something similar in its browser in 2014, but it has been delayed.

Sample volume

For low-volume ad campaigns, sample can be a problem: not enough people within the panel or the interview sample base have been exposed to any part of the campaign. This is normally caused by rotation on large sites, which reduces the propensity to be exposed still further, and by niche media plans that mean panelists are unlikely to have had the chance to see an ad.

The only way to deal with this in a robust way is to leverage bigger panels. This clearly requires budget, commitment and possibly even multiple panel service providers.

Two that cause me sleepless nights

Multi-device ad exposure

People own more and more connected devices. This means more places to serve ads and more gadgets on which advertisers must track delivery. Unfortunately, the usual way of doing this online is with cookies, and cookies do not recognize ownership or share data across devices.

To attribute exposures correctly, therefore, we would need to reconcile the different cookies across all the devices an individual uses. This is really difficult to do: not impossible, but expensive. To date, the approaches that have tried to address the issue have all fallen short in one way or another. One way or another, this has to be resolved soon.
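A toy example makes the reconciliation problem concrete. The cookie-to-person mapping below is hypothetical panel data; in the wild, that mapping is exactly what is missing.

```python
# Sketch: why unreconciled cookies overstate campaign reach.
# All identifiers and the cookie-to-person map are hypothetical.

exposure_log = [  # (cookie_id, campaign) events from ad-server logs
    ("cookie_laptop_A", "spring_push"),
    ("cookie_phone_B", "spring_push"),
    ("cookie_tablet_C", "spring_push"),
    ("cookie_phone_D", "spring_push"),
]

# Naive "reach": count distinct cookies seen in the logs.
naive_reach = len({cookie for cookie, _ in exposure_log})

# With a (panel-supplied) cookie-to-person map, the same events collapse:
# person_1 owns three of the four devices.
cookie_to_person = {
    "cookie_laptop_A": "person_1",
    "cookie_phone_B": "person_1",
    "cookie_tablet_C": "person_1",
    "cookie_phone_D": "person_2",
}
true_reach = len({cookie_to_person[c] for c, _ in exposure_log})

print(naive_reach, true_reach)  # prints: 4 2
```

Four cookies, but only two people: without reconciliation, reach is doubled and frequency halved, and any per-person exposure analysis built on top inherits the error.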

Multi-delivery environments on mobile and tablet

Adding to the multi-device headache is the fact that mobile devices and tablets have given the world a new phenomenon: the app. Ads on a single mobile or tablet are delivered both in browsers and in apps, which are completely different environments. In fact, they are almost like different devices. The core of the issue is that apps don’t allow cookies, so cookies cannot be used to identify a single user across the two environments. Apps, on the other hand, use very handy device identifiers that the browser does not know. So one handset can look like two people. The same ad can be delivered in two places (a mobile web page and an app on the same device) and even the ad server will not know. For a research agency, addressing this is a step further still. Approaches to combine these two worlds are very cost-intensive, which raises the question of whether they could fit any campaign’s research budget. However, we do see light at the end of the tunnel.

The only way to deal with either of these two issues at the moment is through analysis and that means choosing research agencies that truly understand the nature of digital, the technical limitations and the marketing research implications.

In conclusion, this paper has outlined the seven must-do digital ad effectiveness research considerations. I have tried to lay out the discussion with an appreciation of both the opportunities and the challenges. If, in some small way, it helps clients and research agency suppliers align their understanding, it will have served its purpose. In the meantime, if you buy research, a word of warning: be sure to ask your intended provider lots of questions and ensure they have robust answers to the challenges above.

Research is a blend of art and science. By deploying deep digital expertise and a clear appreciation of the implications, challenges and caveats, we can deliver the insight that ensures great campaigns.

For more information contact Arno Hummerston arno.hummerston@gfk.com.