
# IMU-1 Stage 1 Training Corpus (Stable Phase)

Pre-tokenized training data for Stage 1 (the stable phase) of IMU-1, a sample-efficient 430M-parameter language model.

## Dataset Details

| Property | Value |
|---|---|
| Tokens | ~29B |
| Format | Memory-mapped NumPy (`.npy`) |
| Tokenizer | SmolLM2-360M |
| Vocab size | 49,152 |
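
For orientation, here is a minimal sketch of inspecting the corpus. It assumes the token IDs are stored as a flat array in a `.npy` shard; the filename `train.npy` is hypothetical, and the exact shard layout may differ.

```python
# Minimal inspection sketch. The filename "train.npy" is hypothetical;
# the actual shard names/layout in the repo may differ.
import numpy as np
from transformers import AutoTokenizer

# Memory-map the shard so the tokens are paged in lazily, never fully loaded.
tokens = np.load("train.npy", mmap_mode="r")
print(tokens.shape, tokens.dtype)

# Decode a short prefix with the SmolLM2-360M tokenizer (vocab size 49,152).
tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-360M")
print(tok.decode(tokens[:768].tolist()))
```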

## Data Sources

High-quality filtered web data including:

- DCLM-edu (educational content filtered from DCLM)
- FineWeb-edu
- Curated web sources

## Download

```bash
huggingface-cli download thepowerfuldeez/1218_imu1_base_stable_corpus --repo-type=dataset
```
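
The same download can be done from Python via `huggingface_hub`:

```python
# Equivalent download from Python using huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="thepowerfuldeez/1218_imu1_base_stable_corpus",
    repo_type="dataset",
)
print(local_dir)  # path to the downloaded dataset files
```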

## Usage with `sample_efficient_gpt`

```bash
# Clone the training framework
git clone https://github.com/thepowerfuldeez/sample_efficient_gpt
cd sample_efficient_gpt

# Install dependencies
export UV_TORCH_BACKEND=auto
uv pip install setuptools uv_build maturin
uv sync

# Train Stage 1
uv run torchrun --nproc_per_node 8 train.py \
    --config configs/imu1_base.yaml \
    --config-key stable
```
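
To make the data layout concrete, here is an illustrative sketch of sampling batches of shape `(384, 768)` from a memory-mapped shard. This is not the framework's actual data loader, and `train.npy` is again a hypothetical filename.

```python
# Illustrative batch sampling from a memory-mapped token shard.
# NOT the actual sample_efficient_gpt loader; "train.npy" is hypothetical.
import numpy as np
import torch

def sample_batch(tokens, batch_size=384, context_len=768):
    # Draw random start offsets; the +1 leaves room for the shifted targets.
    starts = np.random.randint(0, len(tokens) - context_len - 1, size=batch_size)
    x = np.stack([tokens[s : s + context_len] for s in starts])
    y = np.stack([tokens[s + 1 : s + context_len + 1] for s in starts])
    return torch.from_numpy(x.astype(np.int64)), torch.from_numpy(y.astype(np.int64))

tokens = np.load("train.npy", mmap_mode="r")
x, y = sample_batch(tokens)
print(x.shape, y.shape)  # torch.Size([384, 768]) twice
```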

## Training Configuration (Stage 1)

| Parameter | Value |
|---|---|
| Schedule | WSD (stable phase) |
| Iterations | 100,000 |
| Batch size | 384 |
| Context length | 768 |
| Muon LR | 1.1e-2 |
| Warmup | 2,500 steps |
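
As a sanity check, the per-step token budget is 384 × 768 = 294,912 tokens, so 100,000 iterations consume about 29.5B tokens, consistent with the ~29B figure in the dataset details above.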

## Related Resources

- Training framework: https://github.com/thepowerfuldeez/sample_efficient_gpt
- Paper: [IMU-1: Sample-Efficient Pre-training of Small Language Models](https://arxiv.org/abs/2602.02522)

## Citation

```bibtex
@misc{grigorev2026imu1sampleefficientpretrainingsmall,
      title={IMU-1: Sample-Efficient Pre-training of Small Language Models},
      author={George Grigorev},
      year={2026},
      eprint={2602.02522},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.02522},
}
```