YongganFu committed (verified) · Commit ceab131 · 1 Parent(s): 81a7d5f

Update README.md

Files changed (1): README.md (+82 / -29)
README.md CHANGED
@@ -1,18 +1,44 @@
 ---
 library_name: transformers
-tags: []
 ---

-# Nemotron-Flash-3B-Instruct

-Nemotron-Flash is a new hybrid SLM model family that outperforms Qwen models in accuracy (math, coding, and commonsense), batch-size-1 latency, and throughput. More details are in our NeurIPS 2025 [paper](https://drive.google.com/drive/folders/17vOGktwUfUpRAJPGJUV6oX8XwLSczZtv?usp=sharing).

-Docker path: `/lustre/fsw/portfolios/nvr/users/yongganf/docker/megatron_py25_fast_slm.sqsh` on NRT.


-## Chat with Nemotron-Flash-3B-Instruct

-We wrap the model into CUDA Graph for fast generation:

 ```
 from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -24,9 +50,6 @@ tokenizer = AutoTokenizer.from_pretrained(repo_name, trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained(repo_name, trust_remote_code=True)
 model = model.cuda().to(torch.bfloat16)

-
-max_new_tokens = 256
-
 print('Initializing generation state...')
 generation_state = model.init_cuda_graph_generation(
     max_new_tokens=max_new_tokens,
@@ -34,25 +57,55 @@ generation_state = model.init_cuda_graph_generation(
     device='cuda',
 )

-while True:
-    prompt = input("User:")
-    if prompt.lower() == "exit":
-        break
-
-    inputs = tokenizer(prompt, return_tensors="pt").to('cuda')
-
-    print(f"Generating with CUDA graph acceleration...")
-    outputs = model.generate_with_cuda_graph(
-        input_ids=inputs["input_ids"],
-        generation_state=generation_state,
-        max_new_tokens=max_new_tokens,
-        temperature=0,
-        top_k=50,
-        eos_token_id=tokenizer.eos_token_id,
-        profiling=False,
 )

-    response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
-
-    print(f"Response: {response}")
-```
 ---
 library_name: transformers
+license: other
+license_name: cc-by-nc-4.0
+pipeline_tag: text-generation
 ---

+# Nemotron-Flash-3B Instruct Model

+<p align="center">
+🗞️ <a href="https://arxiv.org/pdf/2511.18890">Paper</a>&nbsp;&nbsp;| &nbsp;&nbsp;🤗 <a href="https://huggingface.co/nvidia/Nemotron-Flash-1B">Nemotron-Flash-1B</a> | &nbsp;&nbsp;🤗 <a href="https://huggingface.co/nvidia/Nemotron-Flash-3B">Nemotron-Flash-3B</a> | &nbsp;&nbsp;🤗 <a href="https://huggingface.co/nvidia/Nemotron-Flash-3B-Instruct">Nemotron-Flash-3B-Instruct</a>
+</p>

+## Model Overview

+Nemotron-Flash is a new hybrid small language model family designed around real-world latency rather than parameter count. It features latency-optimal depth-width ratios, hybrid operators discovered through evolutionary search, and training-time weight normalization. See our <a href="https://arxiv.org/pdf/2511.18890">NeurIPS 2025 paper</a> for more technical details.

+The models achieve SOTA accuracy in math, coding, and commonsense reasoning at the 1B and 3B scales, while also delivering lower small-batch latency and higher large-batch throughput than comparable Qwen models. For example, Nemotron-Flash-1B achieves +5.5% accuracy, 1.9× lower latency, and 45.6× higher throughput compared with Qwen3-0.6B; Nemotron-Flash-3B achieves +2% / +5.5% accuracy over Qwen2.5-3B / Qwen3-1.7B with 1.3× / 1.7× lower latency and 6.4× / 18.7× higher throughput, respectively.

+<div align="center">
+  <img src="https://huggingface.co/nvidia/Nemotron-Flash-3B/resolve/main/images/nemotron_flash_result.png" alt="Compare with SOTA SLMs" width="800">
+</div>

+## Environment
+```bash
+torch<=2.9.1
+transformers<=4.56.2
+causal-conv1d
+flash-attn<=2.7.3
+mamba-ssm
+flash-linear-attention
+```
+We provide a <a href="https://huggingface.co/nvidia/Nemotron-Flash-3B/resolve/main/setup.sh">script</a> to build the conda environment: `bash setup.sh`.
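+
+As a quick sanity check after installation, you can verify the pinned packages and GPU visibility from Python before loading the model. This is a minimal sketch; the import names probed for the optional kernel packages are assumptions inferred from the pip package names above.
+
+```
+# Environment sanity check (sketch): prints the core library versions, confirms a
+# CUDA device is visible, and probes the optional kernel packages by import name.
+import importlib
+
+import torch
+import transformers
+
+print(f"torch {torch.__version__}, transformers {transformers.__version__}")
+print(f"CUDA available: {torch.cuda.is_available()}")
+
+# Import names below are assumptions inferred from the pip package names.
+for module in ["causal_conv1d", "flash_attn", "mamba_ssm", "fla"]:
+    try:
+        importlib.import_module(module)
+        print(f"{module}: OK")
+    except ImportError as err:
+        print(f"{module}: missing ({err})")
+```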

+## Chat with Nemotron-Flash

+We integrated the attention kernel from <a href="https://nvidia.github.io/TensorRT-LLM/torch/auto_deploy/auto-deploy.html">TRT-LLM AutoDeploy</a> to enable generation with CUDA Graph:

 ```
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch

 repo_name = "nvidia/Nemotron-Flash-3B-Instruct"
 tokenizer = AutoTokenizer.from_pretrained(repo_name, trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained(repo_name, trust_remote_code=True)
 model = model.cuda().to(torch.bfloat16)

 max_new_tokens = 256

 print('Initializing generation state...')
 generation_state = model.init_cuda_graph_generation(
     max_new_tokens=max_new_tokens,
     device='cuda',
 )

+prompt = input("User:")
+prompt = "User: " + prompt + "\nAssistant:"
+inputs = tokenizer(prompt, return_tensors="pt").to('cuda')
+
+print("Generating with CUDA graph acceleration...")
+outputs = model.generate_with_cuda_graph(
+    input_ids=inputs["input_ids"],
+    generation_state=generation_state,
+    max_new_tokens=max_new_tokens,
+    temperature=0,
+    eos_token_id=tokenizer.eos_token_id,
+)
+
+response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
+print(f"Response: {response}")
+```
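+
+For interactive use, the prompt/generate/decode steps above can be wrapped in a simple loop that reuses the pre-initialized `generation_state`. This is a minimal sketch: it assumes the generation state can be reused across turns, and each turn is answered independently without carrying conversation history.
+
+```
+# Interactive loop (sketch): reuses the generation_state initialized above and
+# applies the same "User: ... \nAssistant:" prompt format on every turn.
+while True:
+    user_input = input("User: ")
+    if user_input.lower() == "exit":
+        break
+
+    prompt = "User: " + user_input + "\nAssistant:"
+    inputs = tokenizer(prompt, return_tensors="pt").to('cuda')
+
+    outputs = model.generate_with_cuda_graph(
+        input_ids=inputs["input_ids"],
+        generation_state=generation_state,
+        max_new_tokens=max_new_tokens,
+        temperature=0,
+        eos_token_id=tokenizer.eos_token_id,
+    )
+    response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
+    print(f"Assistant: {response}")
+```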

+Another option is to perform generation without CUDA Graph:
+
+```
+outputs = model.generate_with_cache(
+    input_ids=inputs["input_ids"],
+    max_new_tokens=256,
+    temperature=0,
+    eos_token_id=tokenizer.eos_token_id,
+)
+```

+## Finetune Nemotron-Flash
+
+To finetune Nemotron-Flash models, switch the attention kernel to FlashAttention2 when loading the model:
+
+```
+import torch
+from transformers import AutoConfig, AutoModelForCausalLM
+
+repo_name = "nvidia/Nemotron-Flash-3B-Instruct"
+
+config = AutoConfig.from_pretrained(repo_name, trust_remote_code=True)
+setattr(config, "attention_implementation_new", "flash_attention_2")
+model = AutoModelForCausalLM.from_pretrained(repo_name, config=config, torch_dtype=torch.bfloat16, trust_remote_code=True)
+```
+
101
+ ## Citation
102
+ ```
103
+ @misc{fu2025nemotronflash,
104
+ title={Nemotron-Flash: Towards Latency-Optimal Hybrid Small Language Models},
105
+ author={Yonggan Fu and Xin Dong and Shizhe Diao and Matthijs Van keirsbilck and Hanrong Ye and Wonmin Byeon and Yashaswi Karnati and Lucas Liebenwein and Hannah Zhang and Nikolaus Binder and Maksim Khadkevich and Alexander Keller and Jan Kautz and Yingyan Celine Lin and Pavlo Molchanov},
106
+ year={2025},
107
+ eprint={2511.18890},
108
+ archivePrefix={arXiv},
109
+ primaryClass={cs.LG},
110
+ url={https://arxiv.org/abs/2511.18890},
111
+ }