OpenOrca's fine-tuned Llama2-13B surpasses Microsoft Research's Orca paper

OpenOrca has fine-tuned the Llama2-13B model on its own dataset using OpenChat's sequence packing, surpassing the results reported in Microsoft Research's Orca paper. It achieved this with less than one tenth of the compute and less than 20% of the dataset size used in the original work, and the resulting model is expected to top both the HuggingFaceH4 Open LLM Leaderboard and the GPT4ALL Leaderboard for 13B models. Much of the training efficiency comes from the OpenChat MultiPack algorithm.
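The actual OpenChat MultiPack implementation is more sophisticated, but the general idea behind sequence packing can be illustrated with a minimal sketch: greedily bin-pack tokenized examples into fixed-length training blocks so that far less compute is wasted on padding. The function name, the token lengths, and the 2048-token block size below are illustrative assumptions, not details taken from the OpenOrca or OpenChat code.

```python
# Illustrative sketch only -- not the real OpenChat MultiPack algorithm.
# Greedy first-fit-decreasing packing of tokenized examples into blocks.

from typing import List


def pack_sequences(lengths: List[int], max_len: int) -> List[List[int]]:
    """Return groups of example indices whose total token count fits in max_len."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    bins: List[List[int]] = []   # example indices per packed block
    room: List[int] = []         # remaining capacity per block
    for i in order:
        for b, free in enumerate(room):
            if lengths[i] <= free:
                bins[b].append(i)
                room[b] -= lengths[i]
                break
        else:
            # No existing block has room: start a new one.
            bins.append([i])
            room.append(max_len - lengths[i])
    return bins


if __name__ == "__main__":
    # Hypothetical token lengths of individual training examples.
    lengths = [812, 310, 95, 1530, 640, 220, 1990, 405]
    blocks = pack_sequences(lengths, max_len=2048)
    unpacked = len(lengths) * 2048   # one (padded) example per block
    packed = len(blocks) * 2048      # packed blocks
    print(f"packed blocks: {blocks}")
    print(f"token budget: {unpacked} unpacked vs {packed} packed "
          f"({unpacked / packed:.1f}x fewer tokens processed)")
```

With packing, several short examples share one training sequence, so the same dataset is covered in fewer forward/backward passes, which is the kind of saving that makes the reported reduction in compute plausible.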
