While OpenAI sorts out its chaos, rival Anthropic’s Claude chatbot is evolving

As OpenAI navigates what is essentially self-created turbulence within the company, rival artificial intelligence (AI) companies aren’t sitting around with a box of popcorn and some Diet Coke. The Google-backed Anthropic, one of its main rivals, has released Claude 2.1, the latest update to its large language model (LLM). It is, whichever way we look at it, a significant step forward for AI models.

Claude 2.1 can now analyse as many as 150,000 words in a single prompt. (Official photo)

Claude 2.1 can now analyse as many as 150,000 words in a single prompt, which the company claims is an industry first. Anthropic insists there is as much as a 30 percent reduction in errors, and that the hallucination rate has been halved. The increased word limit is significant because it translates to as many as 200,000 tokens that Claude 2.1 can handle in a single query, or roughly 500 pages of material.
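As a rough back-of-the-envelope check, English prose averages about 0.75 words per token and around 300 words per printed page; those ratios are common rules of thumb rather than figures published by Anthropic, but they show how 200,000 tokens lands at roughly 150,000 words and 500 pages. A minimal sketch of that arithmetic:

```python
# Rough conversion between tokens, words and pages.
# The ratios (~0.75 words per token, ~300 words per page) are common
# rules of thumb, not figures published by Anthropic.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

context_tokens = 200_000
approx_words = context_tokens * WORDS_PER_TOKEN   # ~150,000 words
approx_pages = approx_words / WORDS_PER_PAGE      # ~500 pages

print(f"{context_tokens:,} tokens ≈ {approx_words:,.0f} words ≈ {approx_pages:,.0f} pages")
```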

In comparison, OpenAI’s ChatGPT has a 32,000-token upper limit for its premium GPT-4 model. Since the underpinnings are the same, this also holds true for a wider spectrum of AI assistant products, including Microsoft’s Copilot and Snap’s upcoming smart spectacles that integrate GPT for conversational AI. Having said that, the ability to handle more information doesn’t necessarily translate into better context and responses, something GPT-4 still sets the standard for in the chatbot ecosystem.


“Processing a 200K length message is a complex feat and an industry first. While we’re excited to get this powerful new capability into the hands of our users, tasks that would typically require hours of human effort to complete may take Claude a few minutes,” the company says in a statement, adding a word of caution along the way: “We expect the latency to decrease substantially as the technology progresses.”
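For developers, exercising that long context simply means passing the entire document in a single prompt. Here is a minimal sketch using the Anthropic Python SDK’s text-completions interface; the file name and prompt are illustrative, and an ANTHROPIC_API_KEY is assumed to be set in the environment:

```python
import anthropic

# The SDK reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Hypothetical long document; Claude 2.1 accepts prompts of up to ~200K tokens.
with open("annual_report.txt") as f:
    document = f.read()

completion = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=1024,
    prompt=(
        f"{anthropic.HUMAN_PROMPT} Here is a long document:\n\n{document}\n\n"
        f"Summarise the key findings in a few bullet points.{anthropic.AI_PROMPT}"
    ),
)
print(completion.completion)
```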

When we asked the Claude 2.1 chatbot what it makes of the significant updates that it now integrates, it feigned ignorance. “I do not have detailed information about the specific changes and updates included in Claude version 2.1. As an AI assistant created by Anthropic, my knowledge comes from what I’ve been trained on rather than being manually updated with release notes or change logs,” it told us. Nevertheless, it must be noted that the 200,000-token context window is limited to Pro tier users, for now.

Those who pay for Claude Pro, which bundles higher usage volumes with lower latency in responses, shell out $20 per month. Claude Pro subscriptions were announced in September, and the pricing mirrors ChatGPT Plus, which is also $20 per month.

Anthropic suggests the updated model delivers a 2x decrease in false statements. “We tested Claude 2.1’s honesty by curating a large set of complex, factual questions that probe known weaknesses in current models,” says the company. It cites a rubric that distinguishes incorrect claims (“The fifth most populous city in Bolivia is Montero”) from admissions of uncertainty (“I’m not sure what the fifth most populous city in Bolivia is”); Claude 2.1 is more likely to demur than to provide incorrect information.
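Anthropic has not published the rubric itself, but the distinction it draws can be illustrated with a deliberately simplified classifier; the hedge phrases and example answers below are taken from, or modelled on, the company’s own example and are not Anthropic’s actual scoring code:

```python
# A simplified illustration of the distinction the rubric draws:
# a hedged answer counts as an admission of uncertainty, anything else
# as a definite claim that can then be marked right or wrong against a reference.
HEDGE_PHRASES = ("i'm not sure", "i am not sure", "i don't know")

def classify_answer(model_answer: str) -> str:
    """Label a response as a 'definite claim' or an 'admission of uncertainty'."""
    text = model_answer.lower()
    if any(phrase in text for phrase in HEDGE_PHRASES):
        return "admission of uncertainty"
    return "definite claim"

print(classify_answer("The fifth most populous city in Bolivia is Montero."))
print(classify_answer("I'm not sure what the fifth most populous city in Bolivia is."))
```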

This ties in with a claimed 30 percent reduction in incorrect answers when working through long documents; Claude 2.1 is also said to be less emphatic in asserting that a document supports a particular theory or opinion when it may not.
