LG-AI-EXAONE committed
Commit 9ceb270 · 1 Parent(s): e3f42b1

Update arXiv link & citation

Files changed (1): README.md (+10 −3)
README.md CHANGED
@@ -25,7 +25,7 @@ library_name: transformers
 
 We introduce EXAONE Deep, which exhibits superior capabilities in various reasoning tasks including math and coding benchmarks, ranging from 2.4B to 32B parameters developed and released by LG AI Research. Evaluation results show that 1) EXAONE Deep **2.4B** outperforms other models of comparable size, 2) EXAONE Deep **7.8B** outperforms not only open-weight models of comparable scale but also a proprietary reasoning model OpenAI o1-mini, and 3) EXAONE Deep **32B** demonstrates competitive performance against leading open-weight models.
 
-For more details, please refer to our [documentation](https://lgresearch.ai/data/upload/EXAONE_Deep__Model_Card.pdf), [blog](https://www.lgresearch.ai/news/view?seq=543) and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep).
+For more details, please refer to our [documentation](https://arxiv.org/abs/2503.12524), [blog](https://www.lgresearch.ai/news/view?seq=543) and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep).
 
 <p align="center">
 <img src="assets/exaone_deep_overall_performance.png", width="100%", style="margin: 40 auto;">
@@ -103,7 +103,7 @@ else:
 
 ## Evaluation
 
-The following table shows the evaluation results of reasoning tasks such as math and coding. The full evaluation results can be found in the [documentation](https://lgresearch.ai/data/upload/EXAONE_Deep__Model_Card.pdf).
+The following table shows the evaluation results of reasoning tasks such as math and coding. The full evaluation results can be found in the [documentation](https://arxiv.org/abs/2503.12524).
 
 <table>
 <tr>
@@ -268,7 +268,14 @@ The model is licensed under [EXAONE AI Model License Agreement 1.1 - NC](./LICEN
 
 ## Citation
 
-TBU
+```
+@article{exaone-deep,
+  title={EXAONE Deep: Reasoning Enhanced Language Models},
+  author={{LG AI Research}},
+  journal={arXiv preprint arXiv:2503.12524},
+  year={2025}
+}
+```
 
 ## Contact
 LG AI Research Technical Support: [email protected]