MathSmith-HC-Problem-Synthesizer-Qwen3-8B

MathSmith: Towards Extremely Hard Mathematical Reasoning by Forging Synthetic Problems with a Reinforced Policy


Overview

MathSmith is a framework for synthesizing challenging mathematical problems to enhance LLM reasoning. This model is a reinforced policy-based synthesizer optimized to generate novel, Olympiad-level mathematical problems from scratch.

The model generates <rationale><problem> pairs, where:

  • <rationale>: structured reasoning describing concept integration and difficulty design strategies.
  • <problem>: a single Olympiad-level mathematical question that admits a verifiable numeric or symbolic answer.

MathSmith-HC (High Consistency) combines complexity and consistency as difficulty rewards during reinforcement learning, producing more stable problems than the version optimized solely for complexity.


MathSmith Pipeline

The MathSmith framework consists of four main stages:

  1. Concept Collection: Randomly sample concept–explanation pairs from PlanetMath to ensure data independence and avoid benchmark contamination.
  2. Supervised Fine-tuning (SFT): Train the model on collected concept–explanation pairs to establish foundational understanding of problem generation.
  3. Reinforcement Learning (RL): Optimize the model using GRPO with rewards based on structural validity, reasoning complexity (trace length), and answer consistency.
  4. Weakness-Focused Self-Improvement: Iteratively identify and address model weaknesses by generating targeted problem variants for specific mathematical concepts.
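The RL stage combines several reward signals. The exact weighting and scoring functions are not given in this card, so the following is only a hypothetical sketch of how such a combined reward could look (all function names, the trace-length normalization, and the weights are assumptions):

```python
import re

def structural_validity(output: str) -> float:
    """1.0 iff the output contains exactly one <rationale> and one <problem> block."""
    one_rationale = len(re.findall(r"<rationale>.*?</rationale>", output, re.S)) == 1
    one_problem = len(re.findall(r"<problem>.*?</problem>", output, re.S)) == 1
    return float(one_rationale and one_problem)

def complexity_reward(trace_len: int, max_len: int = 4096) -> float:
    """Longer solver reasoning traces proxy for difficulty (capped at 1.0)."""
    return min(trace_len / max_len, 1.0)

def consistency_reward(answers: list[str]) -> float:
    """Fraction of sampled solver answers agreeing with the majority answer."""
    if not answers:
        return 0.0
    majority = max(set(answers), key=answers.count)
    return answers.count(majority) / len(answers)

def total_reward(output: str, trace_len: int, answers: list[str],
                 w=(1.0, 0.5, 0.5)) -> float:
    """Weighted sum of the three signals; invalid structure gets no credit."""
    validity = structural_validity(output)
    if validity == 0.0:
        return 0.0
    return (w[0] * validity
            + w[1] * complexity_reward(trace_len)
            + w[2] * consistency_reward(answers))
```

In this sketch, MathSmith-HC corresponds to keeping both the complexity and consistency terms active, whereas a complexity-only variant would zero out the consistency weight.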

Dependencies

  • Transformers 4.52.4
  • PyTorch 2.7.0+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1
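The pinned versions above can be installed with pip. This is a sketch, assuming the package names match their PyPI distributions; the CUDA 12.6 build of PyTorch is typically pulled from the matching PyTorch wheel index:

```shell
pip install "transformers==4.52.4" "datasets==3.6.0" "tokenizers==0.21.1"
pip install "torch==2.7.0" --index-url https://download.pytorch.org/whl/cu126
```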

Citation

If you find this work useful, please cite:

@article{zhan2025mathsmith,
  title={MathSmith: Towards Extremely Hard Mathematical Reasoning by Forging Synthetic Problems with a Reinforced Policy},
  author={Zhan, Shaoxiong and Lai, Yanlin and Lu, Ziyu and Lin, Dahua and Yang, Ziqing and Tan, Fei},
  journal={arXiv preprint arXiv:2508.05592},
  year={2025}
}