Tags: Text Generation, Transformers, Safetensors, qwen3, turkish, türkiye, reasoning, ai, lamapi, gemma3, next, next-x1, open-source, 14b, large-language-model, llm, transformer, artificial-intelligence, machine-learning, nlp, multilingual, instruction-tuned, chat, generative-ai, optimized, trl, sft, cognitive, analytical, enterprise, conversational, text-generation-inference
Update README.md
README.md CHANGED
@@ -97,29 +97,29 @@ Unlike vision-based models, **Next 14B focuses on pure cognitive performance**,
       <td><strong>Next 14B</strong></td>
       <td><strong>94.6</strong></td>
       <td><strong>83.2</strong></td>
-      <td><strong>
-      <td
+      <td><strong>98.8</strong></td>
+      <td>92.7</td>
     </tr>
     <tr>
       <td>Next 12B</td>
-      <td>
-      <td>
-      <td>
-      <td>
+      <td>92.7</td>
+      <td>84.4</td>
+      <td>95.3</td>
+      <td>87.2</td>
     </tr>
     <tr>
-      <td>GPT-
-      <td>
-      <td>
-      <td>
-      <td>
+      <td>GPT-5</td>
+      <td>92.5</td>
+      <td>87.0</td>
+      <td>98.4</td>
+      <td><strong>96.0</strong></td>
     </tr>
     <tr>
-      <td>Claude
-      <td>~
-      <td>
-      <td>
-      <td>
+      <td>Claude Opus 4.1 (Thinking)</td>
+      <td>~92.0</td>
+      <td>87.8</td>
+      <td>84.7</td>
+      <td>95.4</td>
     </tr>
   </tbody>
</table>
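The card's tags (Transformers, Safetensors, text-generation) imply the standard Transformers loading path for trying the model behind this table. The sketch below is a minimal, unofficial example; the repository id `Lamapi/next-14b` is an assumption inferred from the `lamapi`, `next`, and `14b` tags and is not confirmed by this diff.

```python
# Minimal sketch: run a text-generation model with Hugging Face Transformers.
# NOTE: the repo id below is an assumption inferred from the model card tags,
# not confirmed by this commit; replace it with the actual repository name.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Lamapi/next-14b",   # hypothetical repo id
    torch_dtype="auto",        # let Transformers pick a suitable dtype
    device_map="auto",         # place the 14B weights across available devices
)

messages = [
    {"role": "user", "content": "Summarize the strengths of a 14B reasoning model."}
]
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"])
```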