OpenAI CEO Tells Congress, ‘All Jobs Will Be Taken Over By AI’

Sam Altman disagrees with US senators on pausing artificial intelligence innovation, says his company is not currently working on a newer AI model

OpenAI CEO Sam Altman has expressed support for increased regulation to mitigate the potential negative impacts of AI. Altman specifically emphasises the need for oversight of large language models (LLMs) and other generative AI systems.

Speaking to the US Congress, as reported by the Evening Standard, Altman said, “My worst fear is that we, the industry, cause significant harm to the world. I think, if this technology goes wrong, it can go quite wrong and we want to be vocal about that and work with the government on that.”

Aside from Altman, two additional individuals testified before the senators: Professor Gary Marcus from New York University, and Christina Montgomery, the chief privacy and trust officer at IBM.

AI could be the perfect storm

With the US elections drawing closer, Professor Marcus spoke about the potential danger of AI systems unintentionally persuading individuals to alter their beliefs, citing a recent Wall Street Journal article.

The professor says, “We don’t know what ChatGPT 4 is trained on… how these systems might lead people about very much depends on what it is trained on. We need scientists doing analysis on what the political impact of these systems might be.”

“We have this issue of potential manipulation… hyper-targeting is definitely going to come. Maybe it will be open-source models, I don’t know, but the tech is able to do that and will certainly get there.”

When questioned by senators on Professor Marcus’s point, Altman responded, “Given we’re going to face an election next year… this is a concern. I do think some regulation on this would be wise. We need some rules on what’s expected in terms of disclosure from a company providing these models. I’m nervous about it.”

Professor Marcus added that he was concerned about people’s safety when it comes to AI, warning that the US was now facing “the perfect storm” and cautioning lawmakers to learn from what happened with social media.

Regulating AI

When questioned about how AI can be regulated, Professor Marcus proposed the establishment of an ‘international agency’ that would involve collaboration among multiple governments to oversee and monitor the development of AI.

“Some genies are out of the bottle, some are not — we don’t have machines that can self-improve yet, for example. But there are other genies to be concerned about. We need to have some meetings very soon about how you build international agencies very quickly,” said Marcus.

When asked what she thought about the EU’s proposed AI Act, IBM’s Montgomery said, “Absolutely that approach makes a ton of sense. Guardrails need to be in place. We don’t want to slow down regulation to address real risks right now. We have existing regulatory authorities right now… a lot of the issues we’re talking about span multiple domains.”

OpenAI’s Altman emphasised the importance of training AI models on a shared set of ‘values’ established by people worldwide.

The CEO also advocates for the establishment of an independent commission composed of experts who can assess the compliance of AI models with regulations. This commission would also possess the authority to grant and revoke licenses as necessary.

“Groups of people who have historically been underrepresented or not had much access to this technology — this technology seems to have the ability to lift them up,” said Altman.

Stringent procedures for potential AI harms

One of the biggest negative impacts of AI adoption is likely to be job automation and displacement. When asked by senators whether AI will take over all our jobs, Altman responded, “Eventually, all jobs will be taken over by AI, but we are not that close to AI general intelligence now.”

‘AI general intelligence’, more commonly known as artificial general intelligence (AGI), refers to a form of AI that does not yet exist but could exhibit cognitive capabilities comparable to human thinking.

Senator Josh Hawley asked, “Why don’t we just let people sue you?” in response to concerns voiced by other senators about the understaffing of US regulators.

He proposed that any individual or company that believes it has suffered harm due to AI should be able to seek justice through the court system, thereby reducing the workload on regulators.

Another proposal, highlighted by senators, was for the technology industry to temporarily halt all AI development until it becomes feasible to thoroughly assess the risks and their solutions. Altman opposed this idea, but confirmed that OpenAI is not currently training a new AI model.

Both IBM’s Montgomery and Senator Richard Blumenthal also disagreed with the idea of pausing AI innovation. “Safeguards and protections yes, but a flat stop sign, sticking our heads in the sand, I would advocate against that,” said Blumenthal.

You can read more about Altman’s thoughts on the special treatment and coordination that superintelligence will require to mitigate the risks of AI technology in this recent blog post on the OpenAI site.

Written By

Isa Muhammad is a writer and video game journalist covering many aspects of entertainment media including the film industry. He's steadily writing his way to the sharp end of journalism and enjoys staying informed. If he's not reading, playing video games or catching up on his favourite TV series, then he's probably writing about them.
