IMU-1: Sample-Efficient Pre-training of Small Language Models
Paper: [arXiv:2602.02522](https://arxiv.org/abs/2602.02522)
This dataset contains the pre-tokenized training data for Stage 1 (the stable phase) of IMU-1, a sample-efficient 430M-parameter language model.
| Property | Value |
|---|---|
| Tokens | ~29B |
| Format | Memory-mapped NumPy (.npy) |
| Tokenizer | SmolLM2-360M |
| Vocab size | 49,152 |
The corpus consists of high-quality filtered web data.

Download the data with the Hugging Face CLI:

```bash
huggingface-cli download thepowerfuldeez/1218_imu1_base_stable_corpus --repo-type=dataset
```
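A minimal sketch of how a downloaded shard might be inspected, assuming the shards are flat NumPy arrays of token IDs and that one of them is named something like `tokens_000.npy` (the actual filenames may differ); the tokenizer is loaded from the `HuggingFaceTB/SmolLM2-360M` Hub repository:

```python
# Sketch: inspect one pre-tokenized shard (filename is hypothetical).
import numpy as np
from transformers import AutoTokenizer

# Memory-map the shard so the token stream is never fully loaded into RAM.
tokens = np.load("tokens_000.npy", mmap_mode="r")
print(tokens.dtype, tokens.shape)

# Decode a small window back to text with the SmolLM2-360M tokenizer.
tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-360M")
print(tok.decode(tokens[:64].tolist()))

# All IDs should fall inside the 49,152-entry vocabulary.
assert int(np.max(tokens[:1_000_000])) < 49_152
```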
To reproduce Stage 1 training:

```bash
# Clone training framework
git clone https://github.com/thepowerfuldeez/sample_efficient_gpt
cd sample_efficient_gpt

# Install dependencies
export UV_TORCH_BACKEND=auto
uv pip install setuptools uv_build maturin
uv sync

# Train Stage 1
uv run torchrun --nproc_per_node 8 train.py \
    --config configs/imu1_base.yaml \
    --config-key stable
```
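For orientation only, a sketch of how the memory-mapped token stream could be served as fixed 768-token windows in PyTorch; this is not the data pipeline of `sample_efficient_gpt`, and the shard filename is again hypothetical:

```python
# Sketch: fixed-length windows over a memory-mapped token shard.
# Not the repository's actual loader; the filename is hypothetical.
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class TokenWindowDataset(Dataset):
    def __init__(self, path: str, context_length: int = 768):
        self.tokens = np.load(path, mmap_mode="r")   # flat stream of token IDs
        self.context_length = context_length

    def __len__(self):
        # Number of non-overlapping windows in the stream.
        return len(self.tokens) // self.context_length

    def __getitem__(self, idx):
        start = idx * self.context_length
        window = np.array(self.tokens[start : start + self.context_length], dtype=np.int64)
        x = torch.from_numpy(window)
        # Next-token prediction: inputs are the window minus its last token,
        # targets are the window shifted left by one.
        return x[:-1], x[1:]

loader = DataLoader(TokenWindowDataset("tokens_000.npy"), batch_size=384, shuffle=True)
```

The stable-phase run uses the following hyperparameters: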
| Parameter | Value |
|---|---|
| Schedule | WSD (stable phase) |
| Iterations | 100,000 |
| Batch size | 384 |
| Context length | 768 |
| Muon LR | 1.1e-2 |
| Warmup | 2,500 steps |
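These settings are consistent with the ~29B-token figure above; a quick back-of-the-envelope check, assuming every step consumes a full batch of full-length sequences:

```python
# Rough token budget implied by the Stage 1 hyperparameters.
iterations = 100_000
batch_size = 384        # sequences per optimizer step
context_length = 768    # tokens per sequence
total_tokens = iterations * batch_size * context_length
print(f"{total_tokens / 1e9:.1f}B tokens")  # ~29.5B, in line with the ~29B corpus size
```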
If you use this dataset, please cite the paper:

```bibtex
@misc{grigorev2026imu1sampleefficientpretrainingsmall,
      title={IMU-1: Sample-Efficient Pre-training of Small Language Models},
      author={George Grigorev},
      year={2026},
      eprint={2602.02522},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.02522},
}
```