A company is suing its own AI chatbot for providing misleading information to a client. I wish I could make up a headline from the future that was better than this – it’s actually a real story from February 2024 involving Air Canada.

The practical lesson is that we need to be careful about automating our workforces rather than augmenting them. People who use AI are better than people who don’t. AI, though, is better when it is used by people than when it is left on its own.

Learn from Air Canada’s mistake.


TRANSCRIPT

A company is suing its own AI Chatbot for giving incorrect information and bad advice to its customers.

That’s a news headline from the future. This is ThrowForward Thursday, I am Graeme Codrington, and we jump into the future every week and find out what’s going on there and what it means for us today.

In the future, when a company provides an AI Chatbot as a way of interacting with its clients, what will it do if that AI Chatbot goes a little bit rogue and gives bad information and incorrect advice to its customers? The customers then get upset, and maybe they even sue the company because of that advice. The company’s response might be: oh, the Chatbot made its own decisions; the Chatbot is a separate legal entity; sue the Chatbot, not us. Or maybe the company even sues the Chatbot itself.

This falls under the headline of “you can’t make it up, because it has already happened”. Yes, this is Air Canada. Those who fly regularly will know that Air Canada is not one of the world’s greatest airlines in terms of customer service, but recently they took it to a new high, or shall we call it a low, when a certain Mr. Moffatt attempted to find out what the policies are around bereavement flying.

Air Canada, like some airlines, provides special fares for people who have had family members die and who need to travel urgently. He went onto the website, and the Chatbot engaged with him and gave him bad advice. On the basis of that advice, he purchased a ticket and flew to his grandmother’s funeral, and he was later told that the Chatbot had given him incorrect information and that he was not going to be able to get a refund.

He decided to sue Air Canada, and Air Canada’s response was to say that its Chatbot is a separate entity with a mind of its own, and that he should have double-checked the website rather than trusting the Chatbot. Of course, the courts did the right thing. I say “of course” because it seems obvious to me that the Chatbot is part of Air Canada’s own systems: Air Canada has to take responsibility for what its AI Chatbot said, and it was forced to give the refund and do the right thing. Air Canada has since shut down its Chatbot.

Companies need to realise what they are dealing with if they are going to use a system like a large language model. We call it AI, but it isn’t; we don’t have true artificial intelligence yet. We have large language models that basically make things up as they go along. And if companies are going to replace their call centre operatives with these large language models, they are going to have to take responsibility for the things that these models say, even if, and especially when, they make things up.

This is not really a news bulletin from the future; it’s more of a public service announcement. Don’t rely on large language models until you are absolutely certain that they are only trained on your data, will never make anything up, and know when to refer somebody to a human operator. Until you’ve got those things in place, large language models should only be used as an augmentation of human capacity, not as an automation thereof.

There endeth the lesson — really a lesson from yesterday rather than the future, but one that a lot of people still need to learn. Thank you, as always, for joining me at ThrowForward Thursday. I look forward to seeing you again next week in the future, or maybe just for another wake-up call.


At TomorrowToday Global, we help clients around the world analyse major global trends, developing strategies and frameworks to help businesses anticipate and adapt to market disruption in an ever-changing world.

Subscribe to our team’s weekly newsletter filled with insights and practical resources to help you succeed in the future of work.

For all enquiries, please use this email: [email protected]

Graeme Codrington is an internationally recognized futurist, specializing in the future of work. He helps organizations understand the forces that will shape our lives in the next ten years, and how we can respond in order to confidently stay ahead of change. Chat to us about booking Graeme to help you Re-Imagine and upgrade your thinking to identify the emerging opportunities in your industry.

For the past two decades, Graeme has worked with some of the world’s most recognized brands, travelling to over 80 countries in total, and speaking to around 100,000 people every year. He is the author of 5 best-selling books, and on faculty at 5 top global business schools.