
Are we automating discrimination?

Let’s face it: humans have been known to make some highly questionable decisions. It’s no wonder ostensibly infallible artificial intelligence, promising to remove our human margin of error, has been seeping into every industry, ours being no exception. But interested parties should proceed with caution.

The nature of AI means it inherently amplifies the voices that feed it. Those are the only voices it knows how to use. What happens, then, to the voices that don’t get a chance to feed in? The potential of their input is lost, creating a rift between society and the AI designed to reflect it. The debate is already unfolding in courts and HR departments, where AI systems dedicated to predicting an individual’s propensity to re-offend, or to perform in a role, are displaying unacceptable racial and gender biases.

Work is already happening to minimise the silencing of marginal voices. In the United States, Y&R is sending strategists from its coastal offices to discover ‘Middle America’ in a series of live-in secondments, in order to better understand the communities they serve. And the work is better for it. So why are we actively working backwards, towards a homogeneous view of our audiences: an inhuman blur of amalgamated data?

Obviously, there are ethical implications in further silencing groups of society that media is already liable to ignore. But as with any fundamental cultural and societal shift, the tension between marginalised voices and AI will inevitably impact brands too. Marketers drawn by the prospect of handing key decision-making to an AI should be aware of who is really making those decisions. It will quickly become easy to alienate large groups of your audience, especially when those audiences differ in even the slightest way from those who feed your AI.

To give a worryingly mundane example of how this might play out, imagine a non-luxury car brand that builds a path-to-purchase map from historical data, but the majority of its dealerships are in Auckland. The Auckland voices overwhelm those from the regions, and especially those from rural areas.

Sure, there’s a big market in Auckland, but the way an urban buyer thinks about and selects a new car could be worlds apart from how a farmer goes about buying a new ute. The shiny new AI-generated path-to-purchase journey doesn’t speak to the farmer buying a ute, and you fail to connect as a result. Farmers don’t represent the majority of your customers, but they previously accounted for 30 percent of your sales. Your business is now in serious trouble.

Is your marketing team capable of raising a dissenting voice against an omniscient AI that presumes to speak on behalf of every one of your customers? If not, you’re losing the uniquely human curiosity to ask why over, and over, and over, until you find and follow the less obvious paths. It’s down those paths that human insights live. The other paths lead toward alienation or, worse, to that place no brand should go: irrelevance.

The bad news is that there’s no panacea. AI may shortcut complicated decision-making, but it needs to be kept in check by a breathing human with the ability to consider, reflect, compare, deliberate, debate. The good news is, that’s what effective marketers do.

We can’t outsource our humanity. Trying to do so may end up being our most questionable decision of all.

Katy Holden is a planner at Track NZ.
