Add pipeline_tag and library_name metadata

#1
opened by nielsr (HF Staff)
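Context for the change: `pipeline_tag: text-generation` lets the Hub list these checkpoints under the text-generation task filter, and `library_name: transformers` enables the automated "Use in Transformers" snippet. Below is a minimal usage sketch of what that snippet would roughly correspond to; it assumes the released NOSA checkpoints (repo ids taken from the Models table in the README) load through the standard `AutoModelForCausalLM` API and ship custom modeling code, hence `trust_remote_code=True`.

```python
# Minimal sketch, not part of the PR: load a NOSA checkpoint via transformers,
# which is what the added `library_name: transformers` metadata advertises.
# The repo id comes from the Models table; trust_remote_code=True is an
# assumption, in case the repo ships custom NOSA modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openbmb/NOSA-1B"  # also released: openbmb/NOSA-3B, openbmb/NOSA-8B

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",    # keep the dtype stored in the checkpoint
    device_map="auto",     # requires `accelerate`; places weights on available devices
    trust_remote_code=True,
)

prompt = "NOSA is a trainable sparse attention mechanism that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```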
Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -1,11 +1,14 @@
 ---
-license: apache-2.0
 datasets:
 - openbmb/InfLLM-V2-data-5B
 language:
 - en
 - zh
+license: apache-2.0
+pipeline_tag: text-generation
+library_name: transformers
 ---
+
 <div align="center">
 
 <h1>NOSA: Native and Offloadable Sparse Attention</h1>
@@ -33,26 +36,23 @@ language:
 
 **NOSA** is a trainable sparse attention mechanism designed for KV-cache offloading with an explicit locality constraint, paired with an inference system (**NOSI**) to realize its efficiency. It improves long-context/long-generation quality over prior offloading baselines while boosting decoding throughput by up to **5.04×** vs **FullAttn**, **1.92×** vs **InfLLMv2**, and **1.83×** vs **ShadowKV** on **1B/3B/8B** LLMs.
 
-
+For more details, please refer to the paper: [NOSA: Native and Offloadable Sparse Attention](https://arxiv.org/abs/2510.13602).
 
 ## Models
 
-We train 1B, 3B, and 8B models FullAttn, InfLLMv2, DMA, and NOSA, resulting in a total of 12 models. The following models have been released on Hugging Face.
+We train 1B, 3B, and 8B models with FullAttn, InfLLMv2, DMA, and NOSA, resulting in a total of 12 models. The following models have been released on Hugging Face.
 
 |Model|Link|
 |:-:|:-:|
-|NOSA-1B | [NOSA-1B](huggingface.co/openbmb/NOSA-1B) |
-|NOSA-3B | [NOSA-3B](huggingface.co/openbmb/NOSA-3B) |
-|NOSA-8B | [NOSA-8B](huggingface.co/openbmb/NOSA-8B) |
+|NOSA-1B | [NOSA-1B](https://huggingface.co/openbmb/NOSA-1B) |
+|NOSA-3B | [NOSA-3B](https://huggingface.co/openbmb/NOSA-3B) |
+|NOSA-8B | [NOSA-8B](https://huggingface.co/openbmb/NOSA-8B) |
 
 Please reach out to us if additional baseline models (FullAttn, InfLLMv2, or DMA) are needed. You may open an issue or contact us directly via email (our email addresses are provided in the paper).
 
-
-
-
 ## Citation
 
-```
+```bibtex
 @article{huang2025nosa,
   title={NOSA: Native and Offloadable Sparse Attention},
   author={Huang, Yuxiang and Wang, Pengjie and Han, Jicheng and Zhao, Weilin and Su, Zhou and Sun, Ao and Lyu, Hongya and Zhao, Hengyu and Wang, Yudong and Xiao, Chaojun and Han, Xu and Liu, Zhiyuan},