According to Dr. Nando de Freitas, a lead researcher at Google's DeepMind, human-level artificial intelligence is close to becoming a reality. DeepMind recently revealed a new AI system, Gato, which can write poetry and stack blocks, among other complex tasks.
“It’s all about scale now! The Game is Over!” de Freitas tweeted in response to an opinion piece on The Next Web. “It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, ….”
“Solving these scaling challenges is what will deliver AGI. Research focused on these problems, e.g., S4 for greater memory, is needed.”
Alex Dimakis, a machine learning researcher and professor at UT Austin, asked Dr. de Freitas via Twitter how far Gato is from passing the Turing test, to which he replied, "Far still." The Turing test is a challenge in which an AI attempts to trick a human into thinking it is another person, and thus far, no computer has ever passed the test.
The Fear of AI Taking Over
A primary concern about human-level AI is the potential for the system to teach itself, becoming superior in intelligence and ignoring shutdown commands. Professor Nick Bostrom of Oxford University worries that such intelligent AI could displace humans as Earth's dominant beings.
DeepMind addressed this concern in a 2016 paper, "Safely Interruptible Agents." In the paper, researchers explain how human operators could take control of a rogue AI system, stating: "Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences. If such an agent is operating in real-time under human supervision, now and then, it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation."
We’re feeling much more comfortable already…