---

# LLaDA2.0-flash-CAP

**LLaDA2.0-flash-CAP** is an enhanced version of LLaDA2.0-flash that incorporates **Confidence-Aware Parallel (CAP) Training** for significantly improved inference efficiency. Built upon the 100B-A6B Mixture-of-Experts (MoE) diffusion architecture, this model achieves faster parallel decoding while maintaining strong performance across diverse benchmarks. Experience the models at [ZenMux](https://zenmux.ai).

---