
ChatGPT Rival Claude 2 Aims To Be Safer And More Ethical

The chatbot is designed to summarise big blocks of text such as novels and extensive reports

Artificial intelligence company Anthropic has developed a new AI chatbot called Claude 2, which can summarise large blocks of text such as novels. The new chatbot is designed around safety principles drawn from sources including the Universal Declaration of Human Rights.

Claude 2 is well positioned to be a ChatGPT competitor, as few other AI chatbots besides OpenAI’s popular chatbot can summarise novels. Anthropic describes Claude 2’s safety method as ‘Constitutional AI’, which involves using a set of principles to make judgements about the text it produces.

The company trained the AI on principles garnered from documents such as the 1948 UN declaration as well as Apple’s terms of service, which cover current issues like data privacy and impersonation. One example of a principle based on the UN declaration is: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.”

Novel-summarising chatbot

Dr Andrew Rogoyski, a researcher at the University of Surrey’s Institute for People-Centred AI, compared Anthropic’s approach to science fiction author Isaac Asimov’s Three Laws of Robotics, which instruct robots not to harm humans.

“I like to think of Anthropic’s approach bringing us a bit closer to Asimov’s fictional laws of robotics, in that it builds into the AI a principled response that makes it safer to use,” Rogoyski said.

Anthropic’s CEO, Dario Amodei, met with the UK’s prime minister Rishi Sunak and US vice president Kamala Harris to discuss the safety of AI models. These meetings were part of a series of discussions with senior tech delegations held at Downing Street and the White House.

Amodei is also one of the signatories of a statement by the Center for AI Safety calling for the mitigation of the risk of extinction from AI to be treated as a global priority. The statement puts the potential risks of AI on a par with pandemics and nuclear war.

The company has made Claude 2 publicly available in the US and UK. While the chatbot can summarise up to 75,000 words, it remains prone to hallucinations and factual errors.

Written By

Isa Muhammad is a writer and video game journalist covering many aspects of entertainment media including the film industry. He's steadily writing his way to the sharp end of journalism and enjoys staying informed. If he's not reading, playing video games or catching up on his favourite TV series, then he's probably writing about them.

