DeepSeek Coder is a series of code language models delivering state-of-the-art coding performance among open-source models. Trained on 2T tokens, the models range from 1.3B to 33B parameters and support project-level code completion and infilling. They outperform existing open-source code LLMs on multiple benchmarks, including HumanEval, MultiPL-E, MBPP, DS-1000, and APPS.
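As a rough illustration of the completion capability, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name deepseek-ai/deepseek-coder-1.3b-base, the prompt, and the generation settings are assumptions for illustration, not details taken from the text above; verify the exact model IDs on the Hub.

```python
# Minimal left-to-right code-completion sketch.
# Assumption: the base checkpoint is published as deepseek-ai/deepseek-coder-1.3b-base.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-1.3b-base"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Prompt the model with a comment and let it complete the code.
prompt = "# write a quick sort algorithm in Python\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Infilling uses a fill-in-the-middle prompt format with model-specific sentinel tokens rather than plain left-to-right prompting; the exact token strings are documented in the repository, so consult its README before relying on them.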
Read more at GitHub…