Measuring Attribution: What’s working?

“Half my advertising spend is wasted; the trouble is, I don’t know which half,” is a well-worn lament from frustrated marketers. However, as Graham Medcalf discovers, today the science of marketing attribution is attempting to make this quote redundant.

Marketing campaigns are often complex and seldom the same. Some elements will keep leads in the funnel and drive conversions, while others might cause them to fall out. The only way you can know which of your marketing elements are effective is by using marketing attribution.

The goal of attribution is to determine which channels and messages had the greatest impact on the decision to convert or take the desired next step. In the era of 20th century direct marketing, constant testing of multiple elements of a campaign made it possible to attribute success to certain choices. In the digital age, attribution is fundamental to any campaign’s online, social media, and mobile advertising.

Tracking activity through a typical customer journey is essential to attributing success to individual elements of a marketing strategy. Digital ads have built-in trackers, but neither advertising in traditional media nor other marketing activities (be it trade shows, customer service or any other customer touchpoint) have those tools. The trick is to keep running traditional efforts while digitising them for attribution.

Good marketers regularly run reports on web traffic in Google Analytics, HubSpot or one of the plethora of marketing attribution software options currently available. They also take inventory of their various touchpoints and score them based on available data.

The best place to commence attribution is with Google Analytics, and those with hefty budgets may prefer to build their own custom attribution models. But in between, there is a lot of easily available attribution software to help along the customer journey. 

The problem is that attribution models are historically based: they can only tell you what has succeeded in the past, not what might work in the future. It’s a bit like rugby pundits telling you how the All Blacks won two World Cups but being unable to tell you how to get the team out of its recent malaise. This is because competitors are constantly upgrading their strategies to defeat the strategy that worked for you in the past.

Be that as it may, it’s still helpful to learn from your previous mistakes and your prior successes. Data is the key, but data alone doesn’t solve your problems. There is still a need for creativity and planning one step ahead of the pack. Marketing attribution should be considered as the foundation upon which future success can be built.

Many companies are continuing to evolve their approach to marketing attribution (and the models used), given privacy policy changes and the declining reliability of third-party cookies and device IDs.

Gone are the days when one could extract granular log file data (along with a user identifier) from ad-serving platforms to model the performance of campaigns. We’ve entered an era where there are gaps in the data sets available for analysis, which limits advertisers’ ability to query this data.

Whilst this has led to some disruption, it has also pushed the industry to better understand the methodologies used to measure campaigns, and to question where it could be doing better. A key example has been the shift away from digital pathway models, which assume a linear sequence of digital media exposures on the path to conversion.

“From a GroupM perspective,” says Christophe Spencer, Chief Digital Officer of GroupM NZ, “‘last view/click’ and ‘data-driven attribution’ models will continue to play a role across parts of clients’ marketing programmes, to help guide short-term campaign optimisation. These are also being accompanied by a strong focus on making Marketing Mix Modelling easily accessible to our client base to ensure we are able to provide all clients with a more complete picture of marketing performance.”

Christophe Spencer.

Simon Bird, PHD’s Chief Strategy and Product Officer, believes attribution modelling was created by people who think advertising is a direct response mechanism, when most of the time it is not. “We typically do not use that methodology,” he says, “because it is so fundamentally flawed – it misses out really important variables such as non-digital channels, competitor spending, distribution, pricing, consumer confidence, seasonality, weather etc.”

In fact, Simon is appalled that it still gets used as widely as it does: “You would not get published in a scientific journal if you used attribution methodology to ‘prove’ your work. Furthermore, attribution modelling can’t even conclude your marketing isn’t working, because it’s built on the assumption that it is working.”

Simon points to the example of Uber, which turned off two-thirds of its annual advertising budget – around $100m – but, because of ad fraud, and attribution fraud in particular, the ride-share company found basically no change in the effectiveness of its advertising campaigns.

Christophe admits the industry’s initial attempts at attribution modelling could at times be clumsy. “It sometimes involves analysis of a digital pathway to conversion, without taking into account marketing drivers such as non-digital media presence, competitor activity, seasonality and pricing.”

The approach of analysing data ‘in silo’, without understanding a client’s broader marketing programme, is a major driver of correlation bias, but GroupM’s approach to overcoming this is through three key areas: Strong collaboration between data science and broader comms planning/media teams to help ensure the right balance between interpretation of a data model and the context of clients’ businesses; building out the breadth of data points being used to model the relationship between media and sales; and having a global database of client econometric model outcomes that can be used to validate against insights. 

Independent media, data and technology agency, Together, has been shifting clients towards measurement designs, such as a probabilistic modelling approach, which Bridget Bucknell-Whalley, Head of Strategy and Planning, believes are more likely to remove bias.

Bridget Bucknell-Whalley.

“These models use statistical methodologies to calculate the probable impact of each event on the final conversion. Because this works on a macro level and doesn’t need log-level data from ad platforms, we avoid any bias in the data itself from it being digitally focused or, even riskier, from manually weighting data.”
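The macro-level, probabilistic approach Bridget describes can be illustrated with a toy “removal effect” calculation: credit a channel by how many observed conversions would disappear if every journey touching that channel were removed. This is a simplified sketch, not Together’s actual methodology, and the channel paths and conversion counts below are invented:

```python
# Toy "removal effect" attribution over aggregated (macro-level) path data.
# No user-level log files are needed, only counts of converting journeys.

paths = {  # (touchpoint sequence): number of converting journeys observed
    ("social", "search"): 40,
    ("search",): 30,
    ("display", "social", "search"): 20,
    ("email",): 10,
}

total = sum(paths.values())  # 100 conversions in this toy data set
channels = {ch for path in paths for ch in path}

removal_effect = {}
for ch in sorted(channels):
    # Conversions that would be lost if journeys touching this channel vanished.
    lost = sum(n for path, n in paths.items() if ch in path)
    removal_effect[ch] = lost / total  # share of conversions at risk

print(removal_effect)
# search touches 90 of 100 conversions (0.9), social 0.6, display 0.2, email 0.1
```

Production-grade versions of this idea (such as Markov-chain attribution) work on transition probabilities rather than raw path counts, but the principle of crediting channels by their incremental contribution is the same.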

PHD uses econometrics to look at macro drivers and optimisation, probabilistic models for shorter-term optimisation, and A/B testing in other circumstances. Its approach to analytics is guided by the British statistician George E.P. Box’s dictum that “all models are wrong, but some are useful”, meaning the modelling is interpreted and discussed with a broad team of people, so that understanding of the category and buyer behaviour contributes as much to the analysis as the person who did the analytics or built the model.

“We have been doing this in house for a number of years,” reports Simon. “But we also work with Analytic Partners, with some of our clients who have a direct relationship with them. Some of the easy wins have come from seeing a very low (compared to other countries) contribution from digital and social channels and realising it was not a channel issue per se, it was a ‘not fit for purpose’ creative issue.”

PHD also had value from understanding product and brand halo effects onto other products, which made the decisions of where to cut budget much more informed and, similarly, it has been able to show clients that there is room for an increase in spend in many circumstances. 

“We always model in long term brand contribution, and we also model direct and indirect effects of advertising. We can also then put client specific response curves into our media planning tools,” says Simon. “We build our models in house using R (the statistical programming language) most of the time, and we have a global community of analysts who are constantly sharing new techniques.”
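One common building block behind the response curves Simon mentions is “adstock”, the carry-over effect of advertising into later weeks, where this week’s sales respond partly to previous weeks’ spend. A minimal sketch in Python (rather than R, PHD’s tool of choice) with an invented decay rate and spend series:

```python
# Geometric adstock: each week's effective ad pressure is this week's spend
# plus a decayed fraction of last week's carried-over pressure.

def adstock(spend, decay=0.5):
    """Return the adstocked series: carried[t] = spend[t] + decay * carried[t-1]."""
    carried, out = 0.0, []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

weekly_spend = [100, 0, 0, 50]  # an invented burst, then a smaller flight
print(adstock(weekly_spend, decay=0.5))  # [100.0, 50.0, 25.0, 62.5]
```

In an econometric model, the adstocked series (often combined with a saturation curve) replaces raw spend as the explanatory variable, which is how long-term and indirect effects of advertising get captured.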

Getting measurement right can be a huge competitive advantage for clients, but it starts with good KPI setting, good (and stable) data, and the right team of people building and analysing the model(s).

Marketing Mix Models (MMM), or econometric modelling, became widely used in the 2000s by consumer packaged-goods companies to help them determine the ideal media mix and marketing investment levels. Today, with the development of Multi-Touch Attribution (MTA), the digital industry has steadily moved beyond last-click or first-click attribution and is factoring in the different touchpoints along the customer journey and their impact on sales.
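The difference between last-click, first-click and multi-touch crediting can be made concrete with a toy example. The journey and conversion value below are invented, and the “linear” rule shown is only the simplest of the MTA approaches the industry is moving towards:

```python
# Splitting one conversion's value across touchpoints under different models.

def attribute(journey, value, model):
    """Return {channel: credited value} for a single converting journey."""
    if model == "last_click":
        return {journey[-1]: value}   # all credit to the final touchpoint
    if model == "first_click":
        return {journey[0]: value}    # all credit to the first touchpoint
    if model == "linear":             # simple multi-touch: equal credit
        share = value / len(journey)
        credit = {}
        for ch in journey:
            credit[ch] = credit.get(ch, 0.0) + share
        return credit
    raise ValueError(f"unknown model: {model}")

journey = ["display", "social", "search", "email"]  # invented touchpoints
print(attribute(journey, 100.0, "last_click"))  # {'email': 100.0}
print(attribute(journey, 100.0, "linear"))      # 25.0 to each channel
```

Under last-click, upper-funnel channels like display receive no credit at all, which is exactly the distortion that multi-touch and econometric approaches try to correct.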

However, while a number of MarTech firms or global brands are selling the holy grail of fully integrated marketing and customer data attribution, for most New Zealand marketers it will remain out of reach and too complex to implement. Many local brands just don’t have the scale of marketing activity and spend to justify this level of tracking and marketing optimisation.  

Together advocates for picking the most suitable measurement solutions for the task, which means using different models and methods across a brand or campaign to gain valuable insight.

“We are working to remove reliance on last-click attribution in isolation, for example, and towards using econometric models that look to determine the true impact of the channel, placement, and strategy,” says Bridget. “This is work we are scaling with our newly formed Data and Technology team, creating bespoke solutions for our clients.”

Some of these models – such as MMMs and probabilistic models – are being scaled up to answer bigger, chunkier client questions, such as the longer-term impact of brand building on conversion.

Media and digital agency, MBM, uses a combination of econometric modelling, matched market testing (geo experimentation), and digital attribution modelling depending on the use case. A recent client example saw MBM undertake causal inference analysis (a type of matched market testing). This highlighted the relationship between YouTube and paid search on business outcomes, despite YouTube showing relatively poor return metrics when measured in isolation. 

“As a result of this analysis,” says Alysha Delany, MBM’s Managing Partner – Digital, “we upweighted the YouTube investment and decreased the overall campaign budget. The subsequent campaign had a 20 percent lower total budget but achieved a significant (5 percent+) lift in key business metrics.”

Alysha Delany.

When Media Labs’ Antony Young was in the US, he worked with budgets in the tens and hundreds of millions of dollars, so the upside of measuring all media and making adjustments could be significant. Here in New Zealand, he found many brands focusing on a handful of media channels and tactics, so testing and refining marketing plans presented fewer upsides.

“Companies will need to make a substantial strategic investment in resources and expertise to collect, structure, analyse and activate their customer databases to the point that they could be integrated back into marketing,” says Antony. “Few in my view, beyond a handful of the biggest firms, would have the appetite to put that in place, particularly given we are heading into a challenging business environment.”

Domestically, the vast majority of consumer products and retail sales still take place in a physical store rather than online. This dynamic makes it harder to connect customer data and marketing. Does McDonald’s have any meaningful customer data on the people coming in and out of its restaurants? Food brands sell 90 percent of their product through supermarkets, which are the ones that actually own the customer data. Attributing the value of branding on TV, out-of-home and radio versus promotions versus digital channels is still a long way off. That level of granularity is going to be beyond a lot of marketers.

Marketing attribution is going to continue to be centred on digital channels and e-commerce-heavy sectors, and currently digital is about half of the media budgets of national advertisers. With the deprecation of cookies, most notably Google shutting them down next year, brands are not going to be able to rely on site data and will need to capture and work with their first-party (i.e. customer) data more effectively to be smarter with their digital marketing.

Google Analytics 4 (GA4), introduced earlier this year and set to replace Google Analytics Universal Analytics, is going to make Multi-Touch Attribution (MTA) more accessible and enable advertisers to organise their digital touchpoints and some limited customer data. More personalisation of digital advertising by incorporating customer data will be where a lot of the focus goes.

Trying to achieve correct attribution continues to be problematic, and the issues faced by marketers like Sean Wiggans, General Manager at auto retail company Turners, are typical.

“My team and I spent a huge amount of time trying to get to a decent level of confidence in our attribution between 2018 and 2020 before we made a call to move on. The data is not there to do it remotely accurately, and the effort required to try and do even the digital aspects accurately was severely impacting other much more productive work that we needed to do. We’re a small team that has to cover a lot of ground – not unusual in New Zealand. And then of course there is our digital spend. It’s just not that big for all the effort required to fine tune it.”  

Sean Wiggans.

Media and platform fragmentation, a vast array of (sometimes contradictory) proxy and business metrics to optimise against, and declining visibility into marketing performance, given changes made by Google and Apple, mean there is a greater risk today of wasted advertising spend – if one were to rely on marketing attribution software and tools alone.

Undoubtedly, data science will continue to be a hugely important part of clients’ marketing programmes. That said, marketing attribution will need to be accompanied by a strong understanding of how models are built, a shift away from black-box solutions provided by third-party vendors and, most importantly, a view as to how data and marketing theory should come together to generate strong outcomes.

Following Google’s initial announcement in 2020 regarding deprecation of cookies, GroupM received a really interesting piece of feedback from a client along the lines of: ‘It feels as though we need to go back to being marketers again.’ Never a truer word spoken.

Together, for example, is focusing its time and energy with a new Data and Technology team to build a product suite in this space. “There is no silver bullet,” says Bridget. “So, it is more about knowing the suitable model for the client’s requirements and building bespoke. In many cases, the key is to set the measurement framework and design the solution as early as possible in campaign planning versus scrambling to resolve questions late in the piece.”

Those marketers who focus on macro data collection and use models to both measure and predict outcomes over time will be well placed to understand the drivers impacting their marketing investment and will also be well placed to capitalise on understanding the impact of new and emerging external factors.  

This article was originally published in the September/October 2022 issue of NZ Marketing.
About Graham Medcalf

Graham Medcalf is a former Editor of NZ Marketing and a regular contributor.