A French AI start-up may have quietly begun an AI revolution

Days ago, Google announced its new Gemini artificial intelligence (AI) model amid much fanfare, alongside a heavily edited video that made the model seem more capable at multimodal tasks than it actually was.

Mistral AI is also releasing beta versions for Mistral 7B and Mixtral 8x7B models, which will be available in three sizes (Mistral AI)

A French AI start-up, Mistral, has taken exactly the opposite approach. It quietly put out a post on X, formerly Twitter, sharing a download link for its latest model (which, it turns out, is incredibly capable), followed by an official post detailing the features of the curiously named Mixtral 8x7B. This model is open-source, which should worry some of its AI peers.


It’s been a somewhat intriguing week for the Paris-based Mistral, which also saw the start-up raise $415 million, or around €385 million, in Series A funding. This, according to estimates, puts the valuation of the company in the vicinity of $2 billion. The funding will provide a springboard for Mistral’s commercial progression, which begins with Mixtral 8x7B.


This model follows Mistral 7B, released in September. That was just the beginning. In the benchmark comparison numbers shared by Mistral AI, Mixtral 8x7B is significantly superior to its immediate rivals, including Meta’s Llama 2 family and OpenAI’s GPT-3.5. On the MMLU, or Massive Multitask Language Understanding, benchmark, Mixtral 8x7B scores 70.6%, ahead of GPT-3.5 (70%) and Llama 2 (69.9%). At the launch of its troika of Gemini models, Google claimed Gemini was the first AI model to cross the 90% mark on this test.

It is largely a similar theme across the rest of the benchmarks too, with Mixtral 8x7B taking the lead on the ARC Challenge (the AI2 Reasoning Challenge, a common sense reasoning test), MBPP, or Mostly Basic Python Problems (a set of programming problems to be solved), and GSM-8K, in which AI models must solve diverse grade school math word problems. While Llama 2 scored highest on the WinoGrande large-scale common sense reasoning benchmark and GPT-3.5 won the MT-Bench test of multi-turn questions, Mistral’s model wasn’t far behind in either.

Mixtral’s context size is 32,000 tokens per query, similar to GPT-3.5’s. Last month, Google-backed Anthropic released the Claude 2.1 large language model (LLM), which can analyse as many as 150,000 words in a single prompt, roughly translating to an ability to handle as many as 200,000 tokens in a query.
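As a rough illustration of the word-to-token arithmetic above, here is a heuristic sketch in Python; the ~0.75 words-per-token ratio is a common approximation for English text, not a property of any one model’s tokenizer.

```python
# Back-of-the-envelope conversion between words and tokens, using the
# common ~0.75 words-per-token heuristic for English text. This is an
# approximation; the real ratio depends on the tokenizer and language.

WORDS_PER_TOKEN = 0.75  # heuristic assumption, not tied to a specific model

def words_to_tokens(words: int) -> int:
    return round(words / WORDS_PER_TOKEN)

def tokens_to_words(tokens: int) -> int:
    return round(tokens * WORDS_PER_TOKEN)

print(words_to_tokens(150_000))  # ~200,000 tokens: Claude 2.1's stated window
print(tokens_to_words(32_000))   # ~24,000 words: what fits in Mixtral's window
```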


Using the TruthfulQA, BBQ and BOLD benchmarks to measure a model’s tendency towards hallucination and bias, the French start-up says Mixtral 8x7B is more truthful (73.9%, against 50.2% for Meta’s Llama 2) and shows more positive sentiment in its responses. Mistral said Mixtral 8x7B has mastered French, German, Spanish, Italian, and English.

The fact that such a powerful and capable AI model is available open-source to developers must worry the likes of OpenAI, Microsoft, Meta and Google.

“We release Mixtral 8x7B Instruct alongside Mixtral 8x7B. This model has been optimised through supervised fine-tuning and direct preference optimisation (DPO) for careful instruction following. On MT-Bench, it reaches a score of 8.30, making it the best open-source model, with a performance comparable to GPT3.5,” said Mistral AI in a statement.

The open-source position isn’t one that Mistral AI is likely to change anytime soon. “Since the creation of Mistral AI in May, we have been pursuing a clear trajectory: that of creating a European champion with a global vocation in generative artificial intelligence, based on an open, responsible and decentralised approach to technology,” Arthur Mensch, co-founder and CEO of Mistral AI, said in the statement, announcing the latest round of funding.


Mistral AI is also releasing beta access to its Mistral 7B and Mixtral 8x7B models in three sizes, depending on the use case and available computing power – Mistral-tiny (based on Mistral 7B), Mistral-small (based on the new Mixtral 8x7B) and Mistral-medium.

“We have worked on consolidating the most effective alignment techniques (efficient fine-tuning, direct preference optimisation) to create easy-to-control and pleasant-to-use models. We pre-train models on data extracted from the open Web and perform instruction fine-tuning from annotations,” the company said.

Mistral-tiny and Mistral-small are free downloads for developers. In contrast, Mistral-medium can only be accessed via a paid API, or application programming interface, which other companies and developers can use to plug the model into their own products and services.
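For developers, the hosted sizes are reached over an HTTP API. Below is a minimal sketch of such a call in Python; the endpoint URL, payload shape and model names follow Mistral AI’s published chat-completions API at launch, but treat them as assumptions and check the current documentation before relying on them.

```python
# Minimal sketch of calling one of Mistral AI's hosted model sizes over
# its chat-completions API. Endpoint, payload shape and model name are
# assumptions based on the API as published at launch.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]  # issued with API (beta/paid) access

payload = {
    "model": "mistral-tiny",  # or "mistral-small" / "mistral-medium"
    "messages": [
        {"role": "user", "content": "Summarise Mixtral 8x7B in one line."}
    ],
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```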
