The scientist behind the open letter calling for a halt to the development of powerful artificial intelligence systems has said that tech executives did not stop their work because they are locked in a “race to the bottom.”
In March this year, Max Tegmark, one of the co-founders of the Future of Life Institute, coordinated an open letter advocating for a six-month pause in the advancement of massive AI systems.
Despite garnering support from over 30,000 signatories, including notable figures like Elon Musk and Apple co-founder Steve Wozniak, the initiative was unable to secure a pause in the development of the most ambitious AI systems.
The chatbot wars
Six months after the letter’s release, Tegmark told The Guardian newspaper that he had not expected it to stop tech companies from pursuing AI models more powerful than GPT-4, given the intensifying competition in the field. “I felt that privately a lot of corporate leaders I talked to wanted [a pause] but they were trapped in this race to the bottom against each other. So no company can pause alone,” he said.
The letter, which asked “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”, called on governments to step in if leading AI companies, including Google, OpenAI, and Microsoft, could not agree among themselves on a moratorium on building systems more capable than GPT-4.
However, Tegmark, who is also a professor of physics and an AI researcher at the Massachusetts Institute of Technology, said he considered the letter a success. “The letter has had more impact than I thought it would,” said Tegmark, adding that it helped raise political awareness of AI, contributing to hearings in the US Senate involving tech executives and the UK government’s organisation of a global summit on AI safety scheduled for November this year.
Concerns around AI advancement
“I felt there was a lot of pent-up anxiety around going full steam ahead with AI, that people around the world were afraid of expressing for fear of coming across as scare-mongering luddites. The letter legitimised talking about it; the letter made it socially acceptable. So you’re getting people like [letter signatory] Yuval Noah Harari saying it, you’ve started to get politicians asking tough questions,” added Tegmark.
Concerns about AI advancement span a broad spectrum, from immediate issues such as deepfake videos and the spread of misinformation to existential fears about super-intelligent AIs that could elude human oversight or make irreversible and profoundly consequential decisions.
Tegmark believes a pause is still necessary
Tegmark also cautioned against characterising the development of digital “god-like general intelligence” as a distant-future concern, highlighting that some AI experts believe it could materialise in just a few years.
The scientist welcomed next month’s UK AI safety summit at Bletchley Park as a positive development. His think tank has outlined three key objectives for the summit: fostering a shared understanding of the gravity of AI-related risks, acknowledging the need for a united global response, and embracing the urgency of government intervention.
Tegmark remains supportive of pausing the development of AI systems beyond GPT-4 until safety standards are put in place. “Making models more powerful than what we have now, that has to be put on pause until they can meet agreed-upon safety standards,” he said.