StableLM: Stability AI Language Models


GPT-4: Stability AI introduces the StableLM series of language models, with the Alpha release available in 3B and 7B parameters and larger models, up to 175B, planned. Trained on a new dataset three times the size of The Pile, these models support a context length of 4096 tokens. The fine-tuned variants, StableLM-Tuned-Alpha, combine five recent datasets for conversational agents. Users can interact with the 7B model, StableLM-Tuned-Alpha-7B, on HuggingFace Spaces and explore its capabilities in chit-chat, formal writing, creative writing, and code generation.
Read more at GitHub…
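
For readers who want to try the checkpoints outside the hosted demo, here is a minimal sketch using the Hugging Face `transformers` library. It assumes the published `stabilityai/stablelm-tuned-alpha-7b` model repository and the `<|USER|>`/`<|ASSISTANT|>` chat format documented in the StableLM repo; the generation settings are illustrative, not prescribed by Stability AI.

```python
# Minimal sketch: querying StableLM-Tuned-Alpha-7B via transformers.
# Assumes `transformers`, `torch`, and `accelerate` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-tuned-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a single GPU
    device_map="auto",          # let accelerate place the weights
)

# The tuned checkpoints expect role markers like <|USER|> and <|ASSISTANT|>.
prompt = "<|USER|>Write a haiku about open-source language models.<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```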
