Unsloth: 5x faster, 50% less memory LLM finetuning


Unsloth is a new library that makes local QLoRA finetuning 80% faster while using 50% less memory. It is built on OpenAI's Triton language, runs on NVIDIA GPUs released in 2018 or later, requires no hardware changes, and preserves accuracy for both 4-bit and 16-bit LoRA finetuning. Training on Slim Orca fully locally drops from 1301 hours to 260 hours. The open-source version offers 5x faster training, while Unsloth Pro and Max reach up to 30x.
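For a sense of what this looks like in practice, here is a minimal sketch of 4-bit QLoRA finetuning with the unsloth package, assuming the FastLanguageModel API described in the project's README; the model checkpoint and LoRA hyperparameters below are illustrative, not prescriptive.

    # Minimal sketch of 4-bit QLoRA finetuning with unsloth.
    # Assumes the FastLanguageModel API from the project's README;
    # checkpoint name and hyperparameters are illustrative.
    from unsloth import FastLanguageModel

    # Load a base model in 4-bit; unsloth patches it with Triton kernels.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-2-7b",  # hypothetical example checkpoint
        max_seq_length=2048,
        load_in_4bit=True,  # set to False for 16-bit LoRA instead
    )

    # Attach LoRA adapters; only these low-rank weights are trained.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,           # LoRA rank
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )

    # From here, train as usual, e.g. with trl's SFTTrainer on a
    # dataset such as Slim Orca.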
Read more at GitHub…