DBRX: A New State-of-the-Art Open LLM


Databricks has unveiled DBRX, a new open large language model (LLM) that sets a new state of the art among established open LLMs. DBRX is strong at both general-purpose tasks and coding, outperforming specialized models such as CodeLLaMA-70B on programming benchmarks despite being a general-purpose model. It uses a fine-grained mixture-of-experts (MoE) architecture, which makes the model both smaller and faster: inference is up to twice as fast as LLaMA2-70B, and DBRX is about 40% of the size of Grok-1 in total and active parameter counts. DBRX is also more FLOP-efficient to train than dense models.
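
To make the architecture concrete, the sketch below shows token-level top-k expert routing, the core mechanism of an MoE feed-forward layer, in PyTorch. It is a minimal illustration rather than DBRX’s actual implementation: the layer sizes are placeholders, and only the 16-expert / 4-active-per-token configuration reflects what Databricks has reported for DBRX.

```python
# Minimal sketch of token-level top-k routing in a mixture-of-experts layer.
# Illustrative only; not DBRX's implementation. Hidden sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    def __init__(self, d_model: int, n_experts: int, top_k: int, d_ff: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). The router scores every expert for each token,
        # but only the top_k experts are actually evaluated per token.
        scores = self.router(x)                              # (tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)    # (tokens, top_k)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                  # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


# Toy usage: 8 tokens through a block with 16 experts, 4 active per token.
moe = TopKMoE(d_model=64, n_experts=16, top_k=4, d_ff=256)
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

Because only a fraction of the experts run for any given token, the active parameter count per token stays well below the total parameter count, which is where the inference-speed and training FLOP-efficiency gains come from.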

The model’s weights are available on Hugging Face under an open license, and Databricks customers can use DBRX via APIs or train their own DBRX-class models using Databricks’ tools. DBRX has already been integrated into Databricks’ GenAI-powered products, with promising early results in applications such as SQL.
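
For readers who want to try the open weights, a minimal loading sketch with Hugging Face transformers might look like the following. The repository id, generation settings, and `trust_remote_code` flag are assumptions for illustration; consult the model card on Hugging Face for the exact usage, license terms, and hardware requirements.

```python
# Sketch of loading the openly released weights with Hugging Face transformers.
# The repo id and settings below are assumptions; see the model card for details.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-instruct"  # assumed repo id; access may require accepting the license

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 16-bit weights; the full model still needs multi-GPU memory
    device_map="auto",            # shard across available GPUs
    trust_remote_code=True,       # may be needed on older transformers versions
)

messages = [{"role": "user", "content": "Write a SQL query that counts orders per day."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```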

DBRX’s training process leveraged Databricks’ suite of tools, including Apache Spark™ and MLflow, and was conducted on NVIDIA H100 GPUs. The model’s efficiency extends to inference: optimized serving infrastructure delivers up to 150 tokens per second per user. Databricks emphasizes the importance of enterprises controlling their own data and AI destiny, and positions DBRX as a key component of its next-generation GenAI products. The development of DBRX was a collaborative effort, drawing on contributions from across the Databricks team and the wider AI community.
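
As a hypothetical example of that serving path, Databricks model-serving endpoints can typically be queried through an OpenAI-compatible client. The endpoint name, workspace URL, and environment variables below are illustrative assumptions rather than details from this announcement; check the Databricks serving documentation for your workspace’s exact base URL and authentication.

```python
# Rough sketch of querying a DBRX serving endpoint via an OpenAI-compatible client.
# Endpoint name, base URL, and env vars are assumptions for illustration.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],                          # personal access token
    base_url=f"{os.environ['DATABRICKS_HOST']}/serving-endpoints",   # e.g. https://<workspace-host>
)

response = client.chat.completions.create(
    model="databricks-dbrx-instruct",  # assumed endpoint name
    messages=[{"role": "user", "content": "Summarize what a mixture-of-experts model is."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```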
Read more at Databricks…