Update README.md
README.md CHANGED
@@ -5,4 +5,15 @@ datasets:
 base_model:
 - HuggingFaceTB/SmolLM2-360M
 pipeline_tag: text-generation
----
+---
+
+
+# Model Summary
+
+SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
+
+SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using UltraFeedback.
+
+The instruct model additionally supports tasks such as text rewriting, summarization, and function calling, thanks to datasets developed by Argilla such as Synth-APIGen-v0.1.
+
+For more details, refer to https://github.com/huggingface/smollm, where you will find pre-training, post-training, evaluation, and local inference code.
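
The card tags the model with `pipeline_tag: text-generation`, so a minimal usage sketch with the `transformers` library follows. The checkpoint id is taken from the `base_model` field above; the prompt and generation parameters are illustrative assumptions.

```python
# Minimal sketch: load SmolLM2-360M and generate a text continuation.
# The checkpoint id comes from the card's base_model field; the prompt
# and generation parameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-360M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Tokenize a short prompt and generate a continuation.
inputs = tokenizer("Gravity is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At 360M parameters the checkpoint should fit comfortably in memory on a single CPU, which matches the on-device claim in the summary.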
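For the instruct variant mentioned in the summary, a chat-template sketch is shown below. The checkpoint id `HuggingFaceTB/SmolLM2-360M-Instruct` is an assumption inferred from the base model's naming, and the rewriting prompt is illustrative of the tasks the card describes.

```python
# Hedged sketch for the instruct variant: the checkpoint id below is an
# assumption inferred from the base model's naming, and the prompt is an
# example of the text-rewriting task described in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Build a chat prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Rewrite this sentence to be more formal: the model runs fine on my laptop."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Generate a reply and decode only the newly generated tokens.
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```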