Ollama 0.1.16 adds Mixtral support

The latest release of Ollama, v0.1.16, adds support for Mixtral and other models built on the Mixture of Experts (MoE) architecture. The update introduces new models including Mixtral, a high-quality mixture-of-experts model, and Dolphin Mixtral, an uncensored model tuned for coding tasks. Note that these models require at least 48GB of memory. The release also fixes an issue with the load_duration field in the /api/generate response. For full details, visit the GitHub release page.
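As a quick illustration, here's a minimal sketch of calling the /api/generate endpoint from Python and reading the load_duration field mentioned in the fix. It assumes a local Ollama server running on the default port (11434), the requests library installed, and that the model has already been downloaded (e.g. with `ollama pull mixtral`):

```python
import requests  # assumes the `requests` package is installed

# Assumes a local Ollama server on the default port and that the
# mixtral model has already been pulled with `ollama pull mixtral`.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mixtral",
        "prompt": "Why is the sky blue?",
        "stream": False,  # return one JSON object instead of a stream
    },
)
response.raise_for_status()
data = response.json()

print(data["response"])
# load_duration reports the time spent loading the model, in
# nanoseconds; this is the field the release's fix concerns.
print("load_duration:", data.get("load_duration"))
```

The same request works against any installed model, so swapping `"mixtral"` for `"dolphin-mixtral"` tries the coding-oriented variant instead.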
