
Simon Bird: Let’s not measure ourselves into obscurity

Simon Bird, group strategy director at PHD.

Over the last few years in advertising, media and marketing, it has felt as though something of a civil war has been going on.

That sounds a bit dramatic, but it has been a pretty constant battle: art versus science, creative versus big data, digital versus traditional, mass versus targeted, salience versus optimisation, emotion versus information.

Generalising a little, there’s basically a ‘pro-digital side’ that believes mass advertising is inefficient and full of wastage, and that marketing today is all about right message/right place/right time, or DM at scale. And an ‘anti-digital side’ that believes digital advertising is guilty of spurious numbers and spurious models of marketing, and is the online version of the tactical retail stuff from your letterbox that goes straight in the bin.

There hasn’t been much in the middle, not least because the middle ground doesn’t tend to make good headlines.

Both sides of the argument have some valid points. However, one side is winning this war. One side has the lion’s share of the press articles, the lion’s share of new media spend and all of the big technology companies on its side. It has all the modern language too: real-time optimisation, dynamic trading, always on, programmatic trading etc.

Many people will say this is a good thing and that it is the industry modernising – ‘Advertising 2.0’ or suchlike. Of course, some level of change is good and completely necessary to help us exploit the new digital ecosystems, but big data may sound a lot more precise than it should and, for the good of our industry, this marketing civil war may be one that we cannot afford for one side to win.

A Harvard Business Review article from late last year helps highlight the issues that will likely occur if we continue to bias the ‘real-time measurement and optimisation’ side over the other.

The article starts by pointing out the staggering fact that public companies, typically those with all the advantages of scale, are disappearing faster than ever. The current rate is six times faster than only 40 years ago, and listed companies now have a one in three chance of being delisted within the next five years.

It then explains how some of this precariousness comes down to three fundamental changes in the business environment: the speed of technological change, a much more interconnected world, and a business environment that is more diverse than ever. All of which is pretty understandable. However, the next part of the article explains how companies are inadvertently magnifying the effects of these new forces. That is, companies are actually complicit in creating an environment that makes them more likely to fail, not less.

The core point is that as the business environment gets more and more complex it also gets less predictable; yet, the corporate structures and processes, designed for more stable and predictable times, are preventing companies from adapting to the complexity of their environment. As the system gets more complex the negative effects of this get larger.

Towards the end of the article, the authors suggest some solutions: being realistic about what firms can really predict and control, expecting more unpredictable events, looking more at events outside the firm or category, and being less reliant on statistical models.

An article in the Wall Street Journal, also last year, mirrors the general sentiment of the HBR piece.

They’re both largely based on the work of a theoretical physicist called Geoffrey West. It sounds a little odd for a scientist to be writing so insightfully about corporate survival, but it turns out businesses are ‘complex adaptive systems’, of which the sciences have many – cities, bacteria, evolution and economies, to name just a few.

Unlike companies, which are getting less stable, most of the above examples become more stable as they adapt. When West was asked why cities are so stable and companies so fragile (despite both being full of people whose behaviour is typically tricky to predict) he answered simply: “It’s easy, cities tolerate crazy people and companies don’t.” In essence, this is the central point of both the HBR and WSJ articles: an inability to understand how best to operate with unpredictability (crazy people) is one of the key reasons companies are becoming less stable.

The above is obviously an extremely abridged version of what is a long and complicated field of study; however, these points provide clear warnings for our own industry and perhaps how we should and shouldn’t behave moving forward.

Marketing (and advertising/media) exists in the aforementioned complex adaptive business ecosystem. One could argue ours is more complex still, owing to the even stronger influence of new technology in our industry and the constant stream of new media channels that our target customers are using and adapting to.

Yet much of our industry is currently obsessed with looking for more and more precision from models. Are we really sure our models are as good as we think? Perhaps we are also guilty of thinking we can measure more accurately than we should.

We are not the only field that has grappled with the philosophy of how much to measure and how precisely. Physics went through this around 100 years ago when it moved on from the precision of Newtonian physics into the far less precise field of quantum physics.

Indeed, its most famous principle is aptly named the Heisenberg uncertainty principle. It boldly states that we can know precisely where a particle is or how fast it’s going, but not both at the same time. Quantum physics’ lack of precision has by no means made the field any less important or scientific. Many would say quite the opposite is true.

Economics has been battling with this area too. Its models have become more and more sophisticated over the last 50 years, to the point where half the syllabus of most economics degrees is statistics and only 10 to 15 percent is conceptual thinking and real-world examples. Despite using extremely advanced mathematics, the models have regularly failed to predict economic crises – indeed, they were complicit in exacerbating the global financial crisis because of so much misplaced confidence in what they could measure and predict. Economists would do well to remember John Maynard Keynes’ description of economics as being “the science of thinking in terms of models joined to the art of choosing models that are relevant to the real world”.

Interestingly, one of the most profound recent advancements in economics, behavioural economics, came out of psychology departments, which are far less reliant on advanced measurement techniques. One of the giants in this field, Gerd Gigerenzer, wisely pointed out that if you work with complex systems you should use simple models, and if you work with simple systems, complex models. It is clear we work in an extremely complex system, but it’s not so clear that we are always using the right models.

The ‘big data approach’ requires fast, direct feedback loops to work. But mass, traditional, emotion-based brand advertising doesn’t provide very good fast, direct feedback loops. However, the fact that it is less precise to measure, like quantum physics, or slower to provide feedback, should not devalue its role. Yet this seems to be exactly what has been happening.

One of the metrics that short-term campaigns most focus on, for good reason, is ROI. However, one of the often forgotten issues with ROI is that it is mostly an efficiency metric. When people say they’ve improved ROI by 30 percent, it rarely means they’ve sold 30 percent more product. It could, but more often it means they’ve reached customers who are more efficient to reach. This is the trap P&G fell into when they targeted their fabric cleaner at a ‘better’ audience (households with pets), driving ROI up but sales down. Overconfidence in what they were measuring resulted in them ignoring the less efficient to reach, but still important, ‘households without pets’.
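
To make the arithmetic concrete, here is a minimal sketch with entirely invented figures (the spend and revenue numbers are hypothetical, not taken from the P&G case) showing how a more ‘efficient’ campaign can post a higher ROI while generating less total revenue.

```python
# Hypothetical illustration: ROI can rise while total sales fall.
# All figures are invented for the example.

def roi(revenue, spend):
    """Return on investment as (revenue - spend) / spend."""
    return (revenue - spend) / spend

# Broad campaign: reaches everyone, including 'inefficient' buyers.
broad_spend = 1_000_000
broad_revenue = 3_000_000   # includes sales from the harder-to-reach majority

# Narrow campaign: only the cheapest-to-reach, most responsive segment.
narrow_spend = 300_000
narrow_revenue = 1_200_000  # fewer total sales, but cheaper to generate

print(f"Broad  campaign: ROI = {roi(broad_revenue, broad_spend):.0%}, revenue = ${broad_revenue:,}")
print(f"Narrow campaign: ROI = {roi(narrow_revenue, narrow_spend):.0%}, revenue = ${narrow_revenue:,}")
# Broad  campaign: ROI = 200%, revenue = $3,000,000
# Narrow campaign: ROI = 300%, revenue = $1,200,000
# ROI 'improves' by 50 percent while total revenue drops by 60 percent.
```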

It’s not hard to see how this could happen when measurement approaches like last-click attribution are used. It’s easy to make it appear effective, but all it is measuring is correlation. The oft-used analogy is the football team that associates all the goals with the goal scorer and, at worst, concludes the other 10 players aren’t doing anything (and gets rid of them) or, at best, employs a team full of strikers.

The last click will always be correlated with sales or website engagement because the audience of ‘clickers’ is a biased data set of people already actively interested in a product or service. We cannot and should not infer from last-click data that the last click is more important than any other marketing channel (especially the ones not being measured, i.e. non-digital channels). Mistaking correlation for causation is almost inevitable if we focus too much on measuring and not enough on meaning or thinking. The other ten players on the pitch are probably quite important, and measuring their output by goals scored is likely not that helpful.
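
As a toy illustration of how last-click logic throws away everything upstream, the sketch below (the journeys and channel names are invented for the example) gives 100 percent of the credit to the final touchpoint, even though an upper-funnel channel appears in every converting path.

```python
from collections import Counter

# Invented customer journeys: ordered channel touches ending in a conversion.
journeys = [
    ["TV", "online_video", "search", "paid_search_click"],
    ["TV", "social", "paid_search_click"],
    ["online_video", "TV", "paid_search_click"],
    ["TV", "paid_search_click"],
]

# Last-click attribution: all of the credit goes to the final touch.
last_click_credit = Counter(journey[-1] for journey in journeys)

# A simple 'participation' view: how often each channel appears anywhere
# in a converting journey.
participation = Counter(channel for journey in journeys for channel in set(journey))

print("Last-click credit:", dict(last_click_credit))
# {'paid_search_click': 4}  -> the 'goal scorer' gets everything
print("Appears in converting journeys:", dict(participation))
# TV appears in all four journeys yet receives zero last-click credit.
```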

This error is one we need to be ever more vigilant of as the world continues to produce more data. More data inevitably means more correlations, but just because deaths from being tangled in bed sheets and US per capita cheese consumption were highly correlated over a period of ten years (yes, this is actually true) obviously does not mean one caused the other, regardless of how strong the correlation is.
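
A quick sketch makes the point that two series which merely trend in the same direction will correlate almost perfectly. The numbers below are invented to mimic the shape of that famous spurious correlation, not the real data.

```python
import numpy as np

# Two invented series that simply trend upward over ten 'years'.
# They have nothing to do with each other, yet correlate almost perfectly.
years = np.arange(10)
cheese_kg_per_capita = 13.0 + 0.3 * years + np.array(
    [0.1, -0.05, 0.12, 0.0, -0.08, 0.05, 0.1, -0.02, 0.07, 0.0])
bedsheet_deaths = 320 + 15 * years + np.array(
    [4, -3, 6, -2, 1, 5, -4, 2, 3, -1])

r = np.corrcoef(cheese_kg_per_capita, bedsheet_deaths)[0, 1]
print(f"Correlation: {r:.2f}")  # close to 1.0, despite no causal link
```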

This is also a good reminder of the value of human intelligence – artificial intelligence is great at identifying data patterns but, at least currently, not good at ‘thinking’ about them.

None of this is to say that last click data has no value or that it is not worth measuring, just that it isn’t measuring as much or as precisely as it is so often presented. And if we overestimate how much and what we are measuring we often end up being more efficient but less effective. Inadvertently doing our bit to help businesses become even more unstable.

The types of data that the digital half of our civil war tends to use – click-through rates, sales, ROI and so on – are all short-term metrics and are much easier to measure than long-term metrics. This is because, in the short term, marketing operates in a fairly stable, not too complex system. There is a certain number of people in the market at any one time looking for a product, and the marketing is designed to capture as many of them as possible.

While measuring what is easy to measure is seductive, it isn’t necessarily good for marketers or agencies in the long term. Many clients will end up selling less and/or selling product at a lower margin. If this happens, clients will make less money, marketing budgets will drop, and then agency budgets and profits will drop too.

Longer-term metrics are much harder to measure because the marketing system becomes more complex over longer timeframes. In most categories, we don’t know with any great precision when the vast majority of potential customers might come into market (one reason click-through rates from banner ads remain so low) but when they do we want them to think our product is better and worth paying more for. This is a far more complex and unstable system than short-term sales and, accordingly, we should not expect to be able to measure it the same way or with the same precision.

One of the main things that mass, emotion-based brand marketing tends to do better than anything else is justify a price premium (over time). Consumers, despite what they might say in focus groups, tend to feel better about products and services that are better branded (unless the product is terrible, which is rare in 2017) and are happy to pay a premium. They believe these products taste better, look better and function better. The big issue is that this type of marketing takes a long time to work and is hard to measure, so it can often look like it has done nothing at all, especially if it is measured over short time frames.

So, for all the ‘this versus that’ in the industry, the civil war is really one of measurability. Of what we can measure precisely and what we can’t, and over what time frames we measure it. If we think we need to measure everything precisely and quickly, then this will continue to limit the value the wider marketing industry delivers to businesses. It will make us complicit in adding to the growing instability of our clients’ businesses and ultimately we will become less relevant, less useful and less well paid.

This is not to say that we shouldn’t be using all the digital tools available to us and optimising what can be optimised, but we need to understand that this philosophy must not be applied to all marketing. The business ecosystem is complex and the behaviour of the people we are trying to influence is more complex still – as David Ogilvy once wryly observed, “People don’t think what they feel, don’t say what they think and don’t do what they say”.

To end the industry war, we need to find the middle ground. To get there we need to remember that statistics is primarily about the estimation and measurement of uncertainty not certainty. Ultimately to be as valuable as we can be to our clients we need to move toward a framework where we can use complex models to measure, analyse and optimise as much as possible in the short term but also be content with using simple, far less precise and unoptimisable models in the mid and longer term.  

For those who remain unconvinced and think it is merely a matter of computing power, consider America’s national weather service, which measures weather, another complex adaptive system. It has over $60 million worth of supercomputers (more than any agency group or even holding company). It also has around 12,000 analysts, many of whom have PhDs (again, more than any agency group or holding company). Yet its advanced statistical weather models can only predict temperature more accurately than a simple long-term average within a nine-day window. After that, the error in the model gets too big for it to be useful. It still looks fancy and produces a predicted temperature; it’s just that the predictions aren’t all that accurate. So it uses its complex models in the short term and the (extremely) simple ‘long-term average’ model over longer time frames – the best of both worlds.
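
The logic is easy to sketch: use the complex forecast only while its expected error beats the simple average, then switch. The error figures below are invented purely to illustrate the crossover, not taken from any real forecasting system.

```python
# Toy sketch of the 'crossover' logic: use the complex forecast while its
# expected error is lower than the simple long-term average, then switch.
# The error curves below are invented for illustration only.

def forecast_error(lead_days: int) -> float:
    """Hypothetical model error that grows with lead time (degrees C)."""
    return 0.8 + 0.35 * lead_days

CLIMATOLOGY_ERROR = 4.0  # hypothetical constant error of the long-term average

for day in range(1, 15):
    use_model = forecast_error(day) < CLIMATOLOGY_ERROR
    source = "complex model" if use_model else "long-term average"
    print(f"Day {day:>2}: use the {source}")
# With these invented numbers, the switch happens after day 9,
# mirroring the nine-day window described above.
```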

Much as physics could only develop the less precise but extremely valuable field of quantum physics after accepting that it could not measure everything, perhaps we need a marketing version of this acceptance. To paraphrase the Heisenberg uncertainty principle, it could be something like this: we can build precise models that tell us a lot about a few people for a short period of time, or general models that tell us a little about a lot of people over a long time frame, but we can’t do both. This would free us to use as much ‘big data’ analysis, segmentation and optimisation as we need in the short term, but also allow us to use far less precise, ‘small data’, non-optimisable frameworks in the longer term.

At the very least let’s not measure ourselves out of our jobs. The crazy people in the city need a place to work.

Simon Bird is the group strategy director at PHD. 
