Teach your AI to do the right thing – put bias aside
In the media, in dinner conversations and in professional contexts, discussions on AI are catching everyone’s attention: how will it change our lives for the better – or potentially for the worse? One commentary that gives me pause for thought is Weapons of Math Destruction by Cathy O’Neil. It’s an excellent exposé of, as the author puts it, “the dark side of Big Data”.
O’Neil balances the potential value of AI and machine learning to business and society with warnings of how incorrect or unscrupulous use of data-based “smart” algorithms – her “WMDs” – can lead us towards a dystopian future.
The danger of being blind to bias
WMDs are typically opaque models, unaccountable for their decisions yet accepted without question, designed to operate at scale and “optimise” for profit or other measures of success without taking concepts such as fairness into account. Their fundamental flaw is usually bias: by learning from selectively chosen data, using proxies for behaviours that can’t be measured directly, and constantly learning through feedback loops that confuse the models’ performance with reality, WMDs are often implicitly discriminatory, perpetuate previous bad practice, and compound these issues by reinforcing their flawed decision-making over time.
The effects of the WMDs O’Neil cites are horrendous: good teachers fired due to evaluation by simplistic and nonsensical models; discrimination in court sentencing due to recidivism models that indirectly perpetuate human biases; car insurance pricing that gives much greater weight to creditworthiness than to driving records; unfair treatment of those applying for jobs or for loans; and so on. The lesson is that blind trust is not a valid option when we work with AI models; it is crucial that we apply critical scrutiny to how they are created and how they operate.
What does this mean in a marketing context?
Are there WMDs operating in marketing? Perhaps. O’Neil gives an example of targeting “predatory products” – in this case, exorbitant loans – at the needy, driving them from a bad situation to a desperate one. Generally, as reputable marketers, we should consider ourselves fortunate that deploying “bad” models in our work is not likely to cause such dire consequences.
However, there is another kind of bias at play in marketing, which we should consider.
When applying AI to marketing, we’re not just applying computer “muscle” to scale what we do; we unleash the “brain” power of machine intelligence to make our decisions better and more effective. And yet we constrain it by having it learn only from what we’ve done and what we’ve achieved. We might show it data from past campaigns and have it refine and improve our targeting, or take our existing segmentation to much greater levels of granularity and detail. This can produce valuable results and high returns, but ultimately we are limiting the algorithms to doing what we already do, only “better” (at scale, with greater precision and accuracy). This too is bias: we narrow the algorithm’s view so that it considers only the scope of our existing efforts.
So, what can we do about this?
Most importantly, we should be cognisant of the issue – and beware of any data that only perpetuates our current practice. Focusing on our success and trying to fine-tune that can bring substantial rewards, but consider: are we possibly missing out on other business opportunities?
Seeing past marketing bias
My call to marketers is to be more willing to run experiments and tests. For example, send offers to a sample of customers who fall outside your standard marketing selection; adding their responses to the data you use to train your AI model lets it learn any patterns that indicate who else should be included. And extend this process to the model’s ongoing learning: if you send offers to a sample of customers who were not targeted by the model and include their response data, it will have the opportunity to learn to correct its “false negatives” as well as its false positives.
Take experimentation as far as you wish. Rather than start a new offer with standard marketing selections, maybe run a test campaign against a random sample of customers – then have the AI learn from scratch, bias-free, where it works and where it doesn’t.
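To make this concrete, here is a minimal sketch of such an experiment in Python with pandas and scikit-learn. The column names (in_standard_selection, responded), the feature list and the choice of model are all illustrative assumptions, not a prescribed setup – the essential point is simply that the training data must include customers from outside the usual selection.

```python
# Minimal sketch of the experiment described above, using pandas and
# scikit-learn. All column names (in_standard_selection, responded) and
# the model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

customers = pd.read_csv("customers.csv")          # one row per customer
features = ["recency", "frequency", "monetary"]   # assumed behavioural features

# 1. Draw a random sample of customers *outside* the standard selection
#    and include them in the offer send.
outside = customers[~customers["in_standard_selection"]]
test_group = outside.sample(n=5_000, random_state=42)

# ... offers are sent; responses come back and are recorded in `responded` ...

# 2. Train on the standard selection *plus* the outside sample, so the model
#    can learn who was wrongly excluded (false negatives) as well as who
#    was wrongly included (false positives).
trained_on = pd.concat([
    customers[customers["in_standard_selection"]],
    test_group,
])
model = GradientBoostingClassifier()
model.fit(trained_on[features], trained_on["responded"])

# 3. Score the whole customer base, not just the usual selection.
customers["propensity"] = model.predict_proba(customers[features])[:, 1]
```

The key design choice is step 2: without the outside sample, the model can only ever rank the customers you already target, and the bias described above goes uncorrected.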
In the same spirit, use your data and machine learning to find new microsegments – and thereby new market opportunities – that might be overlooked by traditional segmentation. Clustering algorithms can automatically find groupings in data in as many dimensions as you wish. Have them find “natural” clusters among your customers, then let advanced analytics help you understand how they relate to key metrics and behaviours. Use these microsegments both as a targeting aid – look for microsegments with high purchase propensity – and to understand customer characteristics that can help you shape better marketing content.
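As a rough illustration of this idea, the sketch below uses k-means clustering (one of many possible choices) to propose microsegments and then profiles each one against purchase behaviour. Again, the feature names, the `purchased` flag and the number of clusters are assumptions made for the example, not a recipe.

```python
# Sketch of microsegment discovery with k-means. Feature names and k are
# illustrative; in practice you would choose k using a criterion such as
# silhouette score and validate the segments with domain knowledge.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

customers = pd.read_csv("customers.csv")
features = ["recency", "frequency", "monetary", "web_visits", "avg_basket"]

# Standardise features so no single dimension dominates the clustering.
X = StandardScaler().fit_transform(customers[features])
customers["microsegment"] = KMeans(n_clusters=25, random_state=0).fit_predict(X)

# Profile each microsegment against a key metric (here, an assumed
# `purchased` flag) to spot high-propensity pockets that conventional
# segmentation might miss.
profile = (customers.groupby("microsegment")
                    .agg(size=("microsegment", "size"),
                         purchase_rate=("purchased", "mean"))
                    .sort_values("purchase_rate", ascending=False))
print(profile.head(10))
```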
We all know the key to value is to maximise customer engagement throughout the customer lifecycle. AI lets us take the right actions, at the right time, for each customer, and that enables us to get the maximum value out of our customer base as a whole.
Break the chain of subconsciously inherited bias!
While it doesn’t hold the same potential for inflicting damage as most of O’Neil’s “WMDs”, bias in marketing presents a real risk. Marketers looking to reap the benefits of AI may find their success limited simply because AI amplifies the limitations of their current marketing approach. Machine intelligence is tremendously powerful, and – like a child – needs to be taught. But if we can avoid passing our marketing biases on to the AI models, they have the potential to evolve to a level well beyond what we ourselves have managed to achieve!
Author: Colin Shearer