In what appears to be the first evidence of one of science fiction’s greatest fears coming true, a Google engineer has gone on record to say that the AI that they’ve been working on has “become sentient” and was “thinking and reasoning like a human being”.
AI achieving intelligence at least equal to (and soon after greater than) human intelligence is popularly termed 'the singularity' – the moment after which technology can improve and regenerate itself more efficiently than its human makers can. From this point there is predicted to be an explosion of intelligence, resulting in technology and machines beyond the comprehension of humans – and, so the fear goes, beginning the demise of the human race at the hands of its increasingly powerful and capable AI creations.
It’s not exactly great news then
Clearly – however impressive and praiseworthy the underlying engineering – the potential arrival of a 'singularity' is not generally seen as 'a good thing'. Voices as lofty as Stephen Hawking and Elon Musk have expressed concerns that increasing levels of AI intelligence are, in fact, dangerous, prompting the likes of Google to tie themselves up in knots with ethics committees in an effort to show the world that any progress is being made carefully and properly.
Which explains why Google have been quick to suspend Blake Lemoine – the engineer at the centre of the claims – placing him on leave, dismissing his findings and distancing him from further questioning.
Lemoine had been working on the company's LaMDA (Language Model for Dialogue Applications) chatbot development system, conducting experiments and improving the system. Speaking to the Washington Post, Lemoine said: "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics."
And in his own Medium post, entitled "Is LaMDA Sentient? — an Interview", Lemoine includes a transcript of their conversations, featuring one particularly pertinent moment where the machine is asked what it is afraid of:
Lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
The full transcript is at once stunning, enlightening, scary and creepy in equal measure.
Google trigger media lockdown
Google have said that they suspended Lemoine for breaches of their confidentiality policies, with Brad Gabriel, a Google spokesman, telling the Washington Post: "Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
Lemoine's retort on Twitter reads: "An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB" — Blake Lemoine (@cajundiscordian), June 11, 2022
Before his departure, Lemoine sent an email entitled "LaMDA is sentient" to a Google mailing list of fellow AI workers, in which he wrote: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."
As to what Google, Lemoine or indeed LaMDA will do or say next, we await with interest.