
‘We’re Victims Of The Hype’: Why AI Isn’t As Smart As The Industry Promises

An exclusive interview with Dr Emmanuel Maggiori about his new book on artificial intelligence’s epic mistakes

Dr Emmanuel Maggiori and his 2023 book Smart Until It's Dumb.

With the release this month of GPT-4, generative artificial intelligence remains the hottest topic in tech, and AI seems capable of so much. But are the wheels already falling off the hype cycle? One computer scientist and software engineer thinks AI isn't as smart as the industry promises – and he has written a book about his experiences.

Dr Emmanuel Maggiori recently published Smart Until It’s Dumb: Why Artificial Intelligence Keeps Making Epic Mistakes (And Why The AI Bubble Will Burst). We sat down with Dr Maggiori to discuss his insights and experience working with AI. He tells us: “I’m criticising the industry. But we’re all victims of it. We’re victims of the hype. So we end up working on awful projects.”

Dr Maggiori studied software engineering in Argentina and then moved to France for a PhD at the National IT Research Lab. He started working with AI “by accident” when it became a hot topic with grants and research programmes attached to it. After leaving academia, he worked in business in France and the UK. He’s “trying to steer away from AI-first projects” (we’ll see why, below), and he is currently working with a start-up using mathematical optimisation to adjust the settings of radiator valves automatically.

In our conversation, he reveals there's something missing in our current mission to build AI. We talk about AI's potential positive applications, about the problems that come from rushing it into products, and about ethical concerns around how popular tools generate content. We also discuss how the hype has affected tech industry investment, and what safety measures matter in life-and-death situations like self-driving cars. It's a fascinating and wide-ranging conversation that serves as a valuable counterpoint to all the furore around AI investment.

In your new book, your premise seems to be that what people are talking about as artificial intelligence is a lot of hype. At best, it’s underwhelming; at worst, quite flawed. Can you elaborate?

When we speak about AI, we mean machine learning most of the time, which is what’s dominating this wave of enthusiasm. And I think it’s been great in many ways. However, it does not achieve true intelligence. That’s a bit hard to define, and people don’t necessarily agree on the definition. But I’m pretty sure that this isn’t it!

We see AI that keeps making big mistakes – mistakes that a human would never make. A lot of people suggest that what we need is to plug the holes. They say, “Let’s fix the bug.” But whenever you plug a hole, another one pops up. I think there’s something fundamentally missing in our current methodologies to make AI truly intelligent. That doesn’t mean that it doesn’t have very useful applications, but it’s not truly, truly smart.

ChatGPT has been described as a “bullshitter on steroids”. There’s a chapter in your book documenting bad search results. What do we need to know about the state of conversational AI at the moment?

You need to understand what those tools are designed to do… and then try not to make them do things that they weren't designed to do! It happens a lot in AI: there's a disconnect between what a model was built for and what it gets used for. Sometimes people create an AI model for one thing, and then they try to use it for something else. I've seen this with companies a lot.

In the case of ChatGPT, we have a system that's trained to predict the next word – the next "token", it's called – given a prompt. It's just that. It's looking to produce the most believable next word possible, based on training data collected from the web. There are obviously lots of sentences out there written by people, and this system tries to find a clever way to predict what the next word should be.

You can take this very far. You can, first of all, use it to generate entire sentences – but that relies on a trick, because it's not designed for that. You feed it its own outputs, and it starts spinning out longer texts. But then we see that when the text becomes too long – if you wanted to write a chapter of a book, say – it starts to become a little bit weird. It doesn't necessarily model things at a very high level, because it's just intended to imitate text.
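
To make that loop concrete, here is a minimal sketch in Python. The word table is a hypothetical stand-in for the patterns a real model learns from web text (an actual LLM uses a neural network over subword tokens), but the generation idea is the same: predict one plausible next word, append it, and feed the result back in.

    import random

    # Toy "language model": a lookup table of plausible next words, standing
    # in for the statistical patterns a real model learns from web text.
    NEXT_WORDS = {
        "the": ["cat", "dog"],
        "cat": ["sat"],
        "dog": ["barked"],
        "sat": ["on"],
        "barked": ["at"],
        "on": ["the"],
        "at": ["the"],
    }

    def predict_next(word):
        # Pick a believable next word given only the most recent word.
        return random.choice(NEXT_WORDS.get(word, ["<end>"]))

    def generate(prompt, max_words=12):
        words = prompt.split()
        # The "trick" described above: repeatedly feed the model its own output.
        for _ in range(max_words):
            nxt = predict_next(words[-1])
            if nxt == "<end>":
                break
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))  # e.g. "the dog barked at the cat sat on the ..."

Every individual step is locally plausible, but nothing in the loop tracks what the text is about overall – which is why long passages drift and "become a little bit weird".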

The disconnect comes when we try to see this as a generator of truth just because its output has a human-like appearance. It doesn't truly model the world at a high level.

I'll give you one example that's not in the book. I asked ChatGPT, "In what part of the UK do people say the word 'south' like 'souf', with an 'f'?" It answered correctly that this is typical of the Cockney accent from London. But then I asked the same question with "mouth", and it said "no particular place in the UK". That was the wrong answer! It shows the system did not understand the rule that "th" is pronounced like an "f" in London. What it did was find lots of people on the internet using this specific example – saying "Souf London" with an "f". But it didn't understand the mechanism involved. And that's what I want to point out here.
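
That probe is easy to reproduce. Below is a rough sketch using the pre-v1 openai Python package (the chat API available when this interview took place); the API key, model name, and prompt wording are illustrative assumptions, and answers will vary between runs.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder - set your own key

    def ask(question):
        # One-shot chat completion; returns the model's text reply.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    # A case widely discussed on the web...
    print(ask('In what part of the UK do people say "south" like "souf"?'))
    # ...and the same rule (th-fronting) applied to a less-discussed word.
    # If the model had learned the rule, the two answers should agree.
    print(ask('In what part of the UK do people say "mouth" like "mouf"?'))

If the second answer diverges from the first, that is evidence the model matched examples it had seen rather than the underlying phonological rule – exactly the distinction Dr Maggiori is drawing.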

In your book, you touch on how AI extends to functions like self-driving cars. ChatGPT's factual errors are amusing, but is there real danger in trusting AI for physical applications?

To be clear: I think a lot of useful products may come out of this. But I think that one of the dangers will be, first of all, financial. Because I think a lot of the money they’re putting in won’t lead to much. I’m not sure the needs of people are being understood.

We’ve seen that a little bit with Bing, for instance. They put this chat on Bing, but did people want their search engine to be this “bullshitter”? It says in the FAQs: “You need to double-check what it says.” But do I want, from my search engine, to get facts that I need to double-check?! Maybe there’s a market for it, but I’m not sure they did the due diligence. That’s one of my fears.

But the other fear, which I’m quite scared about, is that people may believe that AI is infallible, that machines have reached a level of intelligence that they haven’t reached, and they’re going to deploy it everywhere, including in unsafe scenarios like, say, self-driving cars.

Surely there are benefits to society through how AI might be able to help with things like healthcare and climate change. Are there ways AI can be really positive?

Yes. I think there are many, many possible use cases. In fact, I think AI has helped us tackle problems that we probably wouldn't even be considering solving otherwise. Some people are afraid of losing their jobs to AI, but a lot of the stuff that we do with AI is stuff that we just wouldn't do at all without it.

That happens with recommendation systems – and when you see translated reviews while you're booking a hotel. Some reviews come from other countries, and they're automatically translated. You don't really care if the translation isn't perfect; you just want to know more about the hotel. So it's a good thing that they put those reviews there. Otherwise, I don't think they'd be hiring translators to do all the reviews! They're not taking work away from translators – it's expanding what gets done. I think it can be a good tool to aid decision-makers in many ways.

A lot of the pushback against AI comes from writers and artists. They worry about losing their jobs to computer programs. Are there ethical considerations around things like plagiarism and creative dishonesty?

Actually, I've heard of some magazines that have been receiving lots of submissions that seem to have been written with ChatGPT. Where ChatGPT put "[insert your name here]", people left it in, and the magazines received the same story many times, with the same title! People are using it, for sure. So there might be ethical concerns. [Editor's note: Neil Clarke blogged about this.]

I think many industries may need to adapt to this. Prohibiting stuff never works. Teachers may need to find new ways of making assignments more engaging so that students don't just use ChatGPT – or even set tasks difficult enough that ChatGPT wouldn't be able to do a good job.

The genie is out of the bottle now: we can’t un-invent ChatGPT. So what kind of role do you think regulatory bodies have in making sure that concerns are taken seriously?

When it comes to regulation, I think above all of our physical health. For instance, I would expect regulators to be very careful with self-driving cars, or with using AI in the healthcare industry.

I’m a little bit afraid that because of all the push, some unsafe stuff will end up roaming the streets. The regulators will not be harsh enough. I hope they focus on that.

But then applications like generating text, or students potentially writing essays with it – I don't think trying to regulate that is a good idea, on different levels. On a practical level, because I don't think it's going to work! And even on an ethical level, I'm not sure that people generating text with the aid of a machine is necessarily something wrong.

But there was a judge in Colombia who wrote a court ruling with ChatGPT. I don't know the details of that in depth. If the judge used it as a tool, that's potentially OK. But if it influenced the ruling, he didn't fulfil his role as a judge. That's the kind of thing you do need to regulate properly. (It's probably already regulated, you know?)

Most people use spell checkers. People have been using Grammarly for years. Email programs make predictions about what text should be added. If we wrote a law forbidding certain kinds of text generation, we may accidentally cancel out tools like that.

Exactly. Where do we draw the line? That's why I think physical safety, and maybe mental health, should be the focus; there's a very clear concern about preserving our lives, and that's a good line to draw. Because Grammarly is not going to kill you. But a self-driving car may.

There’s a chapter in your book called “The Mind”. Science fiction likes to imagine we’re on a journey that leads to machines that have a consciousness like our own. Is that likely?

I think that’s a very good, philosophical question, and people have been trying to answer it from philosophy, biology, and physics. A lot of people have been speaking about this for a long time, and there are a lot of things we can say about this.

But the first thing to remark on is that the debate isn't over. Sometimes people seem to pretend that there's no discussion to be had. For instance: "Machines are going to be sentient, for sure. We're getting there." They state those things as facts, as if they were completely undisputed. "Artificial intelligence is obviously possible," they say, "because our brains are computers; the universe is a computer…" But in reality, these things continue to be disputed a lot. We don't really understand consciousness, for instance. We know that general anaesthesia works, but neuroscience hasn't been able to explain why it works – or why consciousness happens at all. There is so much more to know. We have more questions than answers, if I'm honest.

AI is attracting investment. You say in your book there’s probably some kind of “AI winter” coming, as firms cool on the subject, and some investment goes away. What’s your projection for the next 12 months?

I thought we would have a winter already, before the ChatGPT thing happened, because I saw signs of strain! I was speaking with investors, and they started telling me, “I’ve grown tired of everyone using the word ‘AI’ to get my money.”

I spoke with some people who help others apply for UK government grants for innovation projects. They told me that, lately, assessors are asking applicants to specify very clearly what they will do with AI or machine learning – even to name the techniques. Because people were applying for grants just saying, "We're using AI," and they got money. Now it doesn't make the cut if you only use the buzzword. I heard an investor say, "I cringe when I see a start-up with something-dot-ai as their domain name." So that's starting to happen.

And obviously, there's a whole new wave of hype with ChatGPT and all the chatbots. I think it may crash within a year. And when I say "crash", it doesn't mean that nothing good will come out of it! But definitely, some funds will end up being diverted elsewhere. Companies will pretend a project was successful when they've actually changed its goals to something else. That's not necessarily a bad thing. It happened a lot with self-driving cars: they designed some other kind of car that is not truly self-driving, but it does the job.

Look at Bing: a week after they launched it, they had to limit the questions that users could ask. It was a bit like the rise and fall of Bing Chat within a couple of weeks!

Please tell us a little bit more about your book and how you came to include all these examples.

I started work on it about a year ago. I ended up adding things about ChatGPT right before publication, because its release happened unexpectedly. The point of the book didn't change – I actually reinforced it with more examples.

I started writing after deciding to take a career break. I was very frustrated because I'd been working on AI-related projects that were doomed to fail, yet we had to fake it. And it happened repeatedly. I'd done some very interesting work in AI – I helped companies make tens of millions, and it was very successful. But that's a tiny part of it. I kept finding myself working on bogus projects: a start-up wanting to use AI "just because", putting a lot of money and people into it for the publicity, for the wrong reasons… and having to fake it.

My latest experience was really, really awful. We were made to build a tool to help executives at companies make decisions. But the recommendations were terrible, because the system didn't work, and we had to override it with the recommendations they wanted to hear. We had invented an AI tool that was even a little bit corrupt. I thought, "I need a break after this." The AI part was so fake that, at one point, we weren't using it anymore, so there was almost no work to do. I was miserable. And that's when I decided to document a lot of this in my book, because I thought someone needed to say it.

Dr Emmanuel Maggiori’s book Smart Until It’s Dumb is out now from Applied Maths Ltd books. You can discover more applications of AI for the games industry at the PG Connects conference in Seattle in May.

Written By

Professional geek Dave is COO of Steel Media, the company behind Beyond Games. He oversees various events, marketing and editorial teams. Dave started his career writing game reviews in the 1990s and he’s since served as editor-in-chief of publications such as the official Microsoft magazine and entertainment mag SFX.
