ChatGPT, Copilot, Llama, Bard, Grok and all the other generative AI large language models have demonstrated that we can engage with our technology using natural language. Now we need our devices to understand the heuristics behind the requests and actions we ask of them.

The Rabbit R1 was launched to much fanfare in December 2023 as an early example of a device that can do this. I doubt it can deliver yet, but it’s definitely heading in the right direction. And I love their coining of the term LAM: Large Action Model.

I think that Apple’s AI play later this year will integrate Siri with early algorithmic capabilities, taking another step down this path of getting our devices to understand how the world actually works and to do a lot of the repetitive and mundane tasks for us.

Your company should already be working out which tasks could be given to LAMs, building the algorithmic systems to make these work seamlessly, and deciding how humans and machines will continue to become more bionic as we work together in the future. If you’re only thinking about automation, you’re not thinking about this correctly. It’s so much more than that!

 

TRANSCRIPT

Welcome to the Future. This is ThrowForward Thursday, my name is Graeme Codrington, and come with me…

Well, normally we go to the future, but I have to take you to the past for a moment, to December 2023, when a little product called the Rabbit R1 was launched. This is a little handheld device, and the promise of this device is what is called a Large Action Model.

Now, ChatGPT and generative AI are large language models. These are models that are based on databases of words, and they can take your words and give back words that sound beautiful. And we call it AI. It’s not really. It’s just manipulation of language.

But we want to now go a step further. We now know that we can use natural language to communicate with machines, and machines can communicate back to us in natural language. We want to now add a layer of action and insight on top of that language. And that’s what a large action model is, an L-A-M. What Rabbit intended to do was take a little handheld device, a nice little red square that fits in the palm of your hand, and let you operate it using nothing but natural language. There’s no keyboard or anything; it just listens to you talking. You can give quite detailed instructions about something you want to do in your life, and Rabbit will use language processing to work out the steps involved in making that happen.

So, for example, say I want to attend a sporting event: the Olympics are coming up and I want to get tickets. If I’m thinking that in my head, I’m also thinking at the same time that I’m going to have to get accommodation in Paris if I want to go and watch live. I’m thinking that there must be thousands of options across the sporting codes on different days for me to see the Olympics. But I know in my head which ones I really like, so I’m not even going to bother getting tickets for the fencing or the show jumping, whatever the case may be. So, there’s a whole lot of things I already know about myself. And I know that it’s in June, in Paris, and so on. And then I’m going to have to get plane tickets, book a hotel, and get tickets for each of those events.

I’m probably not going to be able to do all of that without my wife, either; that would be a little bit marriage-limiting. So a system (my brain, at the moment) is going to alert me to probably 20 or 30 decisions, many of them in structured steps. We call this a heuristic or an algorithm. And the promise of the large action model is that it will actually have a database of those steps available for almost anything you want to ask.

You could point a camera at your fridge and say: this is what’s in my fridge, what’s the best meal I could have for dinner this evening? The system would then be able to analyse what’s in your fridge, work out which recipes are most appropriate, and then give you the right recipe, step by step.

And so, it’s the next step in our engagement with our devices: firstly, we remove the keyboard, because it’s all voice driven; and secondly, we rely on a system that is able to break more complicated tasks up into their step-by-step heuristics, algorithms and component parts. And then, ideally, we’d have access to various apps and systems that would be able to perform each of those steps autonomously without us getting involved.
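To make that a little more concrete, here is a rough, purely illustrative sketch in Python of the “break the request into steps, then hand each step to an app” idea. None of these function names or connectors are real; they are not the Rabbit R1’s API or any actual LAM, just the shape of the concept:

```python
# Illustrative sketch only: a hypothetical "plan the steps, then act on them" loop.
# Every name here (plan_steps, CONNECTORS, the step actions) is made up for this example.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    action: str   # e.g. "book_flights"
    params: dict  # details extracted from the spoken request


def plan_steps(request: str) -> list[Step]:
    """Stand-in for the language model that decomposes a request into steps."""
    # A real LAM would infer this plan; here it is hard-coded for one example request.
    if "olympics" in request.lower():
        return [
            Step("search_events", {"city": "Paris", "sports": ["athletics", "swimming"]}),
            Step("book_flights", {"destination": "Paris", "travellers": 2}),
            Step("book_hotel", {"city": "Paris", "nights": 5}),
            Step("buy_tickets", {"events": "the sessions chosen above"}),
        ]
    return []


# Hypothetical connectors to the apps and systems that would perform each step autonomously.
CONNECTORS: dict[str, Callable[[dict], str]] = {
    "search_events": lambda p: f"Found sessions in {p['city']} for {p['sports']}",
    "book_flights":  lambda p: f"Booked {p['travellers']} flights to {p['destination']}",
    "book_hotel":    lambda p: f"Reserved {p['nights']} nights in {p['city']}",
    "buy_tickets":   lambda p: "Tickets purchased",
}


def run(request: str) -> None:
    for step in plan_steps(request):
        print(CONNECTORS[step.action](step.params))


run("Get my wife and me to the Olympics in Paris")
```

The point isn’t the code itself, it’s the division of labour: language understanding produces a structured plan, and separate connectors do the actual work.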

So, we can just say: book tickets for this, make the meal, or at least tell me which meal I can make myself. And if I have a robot assistant, I can tell the robot to take instructions from the Rabbit and make sure my meal is ready by 6:30 PM.

So why did we go backwards today? Because that is what Rabbit R1 promised in December last year. Now, I’m pretty certain that’s a little bit of vaporware at the moment. I’m pretty certain they’re not going to be able to deliver on the promise, but they’ve given us the promise. They’ve shown us what it might look like, and we know that it will come. So, all the best to them in getting the system delivered and making sure that it works and integrates with our existing systems.

If I am going to take you to the future, I’m pretty sure that this is what Apple is working on. Apple is a bit behind the game at the moment with AI; it hasn’t made its move yet, while Microsoft has got Copilot and has worked with OpenAI and ChatGPT since the beginning. Apple is often late to the party in developing something, but when it does release it, it’s integrated and big. I suspect that sometime in the second half of this year, normally September, at their launch events, we might see something that is a significant jump forward for what Apple currently calls Siri. And it will give us one of these large action models, or at least move quite quickly in that direction.

Personally, I think what we probably need first is the ability to programme these algorithms ourselves. So, we will be the ones who say: right, if I want to book tickets for the Olympics, the following nine steps are required. We programme those steps, and then it becomes a function or an algorithm that we can call on. And then that starts to build a massive ecosystem database that other people can learn from, and we get generalised algorithms for these large action models in the future.
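Here’s a tiny, hypothetical sketch of that idea: a user records their own routine once, it becomes a named “algorithm” in a shared library, and from then on anyone (or any device) can call it. The routine name, the nine steps and the library itself are all invented for illustration:

```python
# Illustrative only: users teach the system routines, which become reusable by name.

ROUTINE_LIBRARY: dict[str, list[str]] = {}  # the shared "ecosystem database"


def teach_routine(name: str, steps: list[str]) -> None:
    """A user records the steps once; the routine becomes callable by name."""
    ROUTINE_LIBRARY[name] = steps


def call_routine(name: str) -> None:
    """Later, the routine can be invoked like a function."""
    for i, step in enumerate(ROUTINE_LIBRARY[name], start=1):
        print(f"{i}. {step}")


# The nine hypothetical steps one user decides booking an Olympics trip requires.
teach_routine("book_olympics_trip", [
    "Choose which sports and dates we actually want to see",
    "Check ticket availability for those sessions",
    "Agree the shortlist with my wife",
    "Buy the event tickets",
    "Book flights to Paris",
    "Book a hotel near the venues",
    "Arrange airport transfers",
    "Add everything to the shared calendar",
    "Set reminders to download the tickets",
])

call_routine("book_olympics_trip")
```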

A little bit more of a technical ThrowForward Thursday today, but I’m really excited about a future in which our devices become properly intelligent: able to work out, in the real world, the step-by-step automations and actions that help us live better, more streamlined lives. It is coming, and it’s called an L-A-M.

What you can start thinking about today for your business is in what ways you can begin to build the algorithms and heuristics of the key functions in your business. At the moment, you probably rely on individual human beings to do this, based on their experience, abilities and expertise. But you should begin documenting decision trees, algorithms and processes in such a way that the parts that can be automated are ready for whenever the technology arrives. I think it’s too early to automate people out of the system, and I think there will always be things for people to do while the machines do the mundane, repetitive work that’s boring for most of us anyway. So, something to think about.
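One rough, made-up example of what that documentation could look like in practice: write each process down as structured steps and flag which ones are candidates for automation. The process, the flags and the owners below are invented purely to show the idea:

```python
# Illustrative sketch: documenting a business process so the automatable parts
# are machine-readable and ready. The example process and flags are made up.

from dataclasses import dataclass


@dataclass
class ProcessStep:
    description: str
    automatable: bool  # could a machine do this today (or soon)?
    owner: str         # who does it in the meantime


CUSTOMER_REFUND_PROCESS = [
    ProcessStep("Receive and log the refund request", automatable=True, owner="support agent"),
    ProcessStep("Check the purchase against the refund policy", automatable=True, owner="support agent"),
    ProcessStep("Decide on goodwill exceptions for edge cases", automatable=False, owner="team lead"),
    ProcessStep("Issue the refund and notify the customer", automatable=True, owner="finance"),
]

# Once documented like this, it is easy to see which steps a future LAM could take over.
for step in CUSTOMER_REFUND_PROCESS:
    tag = "machine-ready" if step.automatable else "keep with people"
    print(f"[{tag}] {step.description} (currently: {step.owner})")
```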

As always, if our team at Tomorrow Today can help your team to think these things through and make sure that you are future-proofed and future-ready, well, please contact us. This is what we do, and we’d be happy to help you. Otherwise, have a great week. I’ll see you next week in the future again.

 

 

At TomorrowToday Global, we help clients around the world analyse major global trends, developing strategies and frameworks to help businesses anticipate and adapt to market disruption in an ever-changing world.

Subscribe to our team’s weekly newsletter filled with insights and practical resources to help you succeed in the future of work.

For all enquiries, please use this email: [email protected]

 

Graeme Codrington is an internationally recognised futurist, specialising in the future of work. He helps organisations understand the forces that will shape our lives in the next ten years, and how we can respond in order to confidently stay ahead of change. Chat to us about booking Graeme to help you Re-Imagine and upgrade your thinking to identify the emerging opportunities in your industry.

For the past two decades, Graeme has worked with some of the world’s most recognised brands, travelling to over 80 countries in total, and speaking to around 100,000 people every year. He is the author of 5 best-selling books, and on faculty at 5 top global business schools.

TomorrowToday Global