FireFunction V1 – Fireworks’ GPT-4-level function calling model – 4x faster than GPT-4 and open weights


Fireworks has unveiled FireFunction-v1, an enhanced function calling model for integrating external knowledge into large language model (LLM) applications. Building on the success of its alpha version, FireFunction-v1 offers substantial improvements, including optimized performance for structured output generation and decision-making, better accuracy on multilingual input, and the ability to force function calls. It is based on the high-quality Mixtral 8x7B model and is available both as open weights and via the Fireworks platform.

FireFunction-v1 outperforms its predecessor and rivals GPT-4 in accuracy on real-world use cases, with significantly faster response times and the flexibility of open weights. The model excels at structured output, adhering to complex JSON specifications, and at routing decisions, where it dynamically chooses among multiple functions based on the input.
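To make the routing idea concrete, here is a minimal sketch of what a function calling request looks like in the OpenAI-compatible format that Fireworks exposes. The tool name (`get_stock_price`), its schema, and the exact model id are illustrative assumptions, not details taken from the announcement; the `tool_choice` field shows how a specific function call can be forced.

```python
import json

# Hypothetical tool definition in the OpenAI-style "tools" schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",  # assumed example tool, not from the article
            "description": "Look up the latest price for a ticker symbol.",
            "parameters": {
                "type": "object",
                "properties": {
                    "ticker": {"type": "string", "description": "e.g. 'AAPL'"}
                },
                "required": ["ticker"],
            },
        },
    }
]

payload = {
    "model": "accounts/fireworks/models/firefunction-v1",  # assumed model id
    "messages": [{"role": "user", "content": "What is Apple trading at?"}],
    "tools": tools,
    # Forcing a specific function call, one of the features the article
    # mentions; the field name follows the OpenAI-style "tool_choice".
    "tool_choice": {"type": "function", "function": {"name": "get_stock_price"}},
}

print(json.dumps(payload, indent=2))
```

Given several entries in `tools`, the model's routing behavior amounts to picking which function (if any) to call and emitting arguments that conform to its JSON schema.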

Developers can access FireFunction-v1 on Hugging Face and use it on the Fireworks platform; switching over from OpenAI's API requires only a one-line code change. The model is currently free during its beta period, and Fireworks encourages developer feedback and participation to shape future iterations of its function calling models.
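The "one-line change" refers to pointing an OpenAI-compatible client at Fireworks' endpoint instead of `api.openai.com`. Below is a stdlib-only sketch that builds (but does not send) such a request; the base URL and model id reflect Fireworks' documented conventions but should be treated as assumptions here. With the official `openai` Python SDK, the equivalent change is passing `base_url="https://api.fireworks.ai/inference/v1"` when constructing the client.

```python
import json
import urllib.request

FIREWORKS_BASE = "https://api.fireworks.ai/inference/v1"  # assumed endpoint

def build_chat_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request.

    The request body is identical to what api.openai.com expects;
    only the base URL differs -- hence the "one-line change".
    """
    return urllib.request.Request(
        f"{FIREWORKS_BASE}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request(
    "YOUR_FIREWORKS_API_KEY",  # placeholder credential
    {
        "model": "accounts/fireworks/models/firefunction-v1",  # assumed model id
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(req.full_url)
```

Sending the request (e.g. via `urllib.request.urlopen(req)`) would require a valid Fireworks API key.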
