Meta’s president of global affairs, Nick Clegg, has compared the current risks of artificial intelligence to the ‘moral panic’ over video games in the 1980s, offering a note of caution to the international politicians and regulators who attended the two-day AI Safety Summit in the UK.
Clegg said that AI is stuck in a ‘great hype cycle’ and cautioned that new technologies can provoke both extreme excitement and extreme negativity. British officials used the summit, held at Bletchley Park, to kickstart a regulatory process that saw 28 countries sign the Bletchley Declaration.
According to the former UK deputy prime minister, “New technologies always lead to hype. They often lead to excessive zeal amongst the advocates and excessive pessimism amongst the critics.
“I remember the 80s. There was this moral panic about video games. There were moral panics about radio, the bicycle, the internet,” said Clegg. Turning to predictions that a powerful AI system could soon have significant consequences, Clegg said, “These predictions about what’s going to happen next, what’s going to happen just around the corner, often doesn’t quite turn out as those who are most steeped in it believe.”
Make safe AI
However, many disagree, including Stuart Russell, a professor of computer science at the University of California, Berkeley, who attended the summit alongside Clegg and warned that the current approach to building AI systems is unsafe.
“We need to stop thinking about making AI safe, and start thinking about making safe AI,” said Russell. “We build the AI and then we have a safety team to stop it from behaving badly – that hasn’t worked and it’s never going to work.”
Similarly, Connor Leahy, CEO of the AI safety research company Conjecture, called for an end to the freedom that privately owned AI companies have to develop ‘deadly machines’ that could take control of the future.
“There is nothing more important than people knowing the truth of a small group of unelected, unaccounted, private companies are running a deadly experiment on you and your families, without your consent or your knowledge,” said Leahy.
While many have called for a pause on AI development beyond GPT-4 and urged AI labs to scale back their efforts to build the most advanced models, Clegg disagreed, saying such restraint would be a challenge.
“In this area, it is really important to allow innovators, builders – people who are ingenious in the way in which they ultimately entrepreneurially develop these technologies – to do so without immediately assuming that whatever they do next is going to pose some existential risk,” said Clegg.