Mistral CEO confirms ‘leak’ of new open source AI model nearing GPT-4 performance


The open source AI community has been abuzz over a new large language model (LLM) known as “miqu-1-70b,” which was posted to HuggingFace by a user named “Miqu Dev.” The model, which shares a prompt format with Mistral’s Mixtral 8x7b, has shown exceptional performance, approaching that of OpenAI’s GPT-4. Speculation quickly arose that “Miqu” could be a quantized version of a Mistral model, either leaked or released covertly. Quantization makes it possible to run AI models on less powerful hardware by storing their weights at lower numerical precision.
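For readers unfamiliar with the technique, the sketch below shows the basic idea behind weight quantization: mapping 32-bit floating-point values onto 8-bit integers plus a scale factor, so each weight takes a quarter of the memory at the cost of some precision. This is a minimal illustrative example, not the actual quantization pipeline used for the leaked miqu-1-70b files.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric int8 quantization: map float weights onto [-127, 127]."""
    # Per-tensor scale factor; guard against an all-zero tensor.
    scale = max(np.abs(weights).max() / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Toy example: a weight matrix shrinks from 4 bytes to 1 byte per value,
# with a small reconstruction error.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)
print("max reconstruction error:", np.abs(w - w_approx).max())
```

Production quantization schemes (such as the low-bit formats commonly used to run 70B-parameter models on consumer GPUs) are more sophisticated, but the trade-off is the same: smaller, faster models in exchange for a modest loss in fidelity.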

The mystery was partially resolved when Mistral’s CEO, Arthur Mensch, acknowledged on social media that an overzealous employee of one of the company’s customers had leaked a quantized version of an older Mistral model. Mensch also hinted that Mistral is progressing toward a model that could match or surpass GPT-4’s performance. This development could mark a significant moment for open source AI, potentially challenging OpenAI’s dominance and its subscription-based model as open source alternatives gain traction. OpenAI may still hold advantages with its specialized models, but the open source community is catching up rapidly, posing a competitive threat to the current leader in LLMs.
Read more at VentureBeat…