---
language:
- tr
- en
- de
- es
- fr
- ru
- zh
- ja
- ko
license: mit
tags:
- turkish
- türkiye
- reasoning
- ai
- lamapi
- gemma3
- next
- next-x1
- text-generation
- open-source
- 32b
- large-language-model
- llm
- transformer
- artificial-intelligence
- machine-learning
- nlp
- multilingual
- instruction-tuned
- chat
- generative-ai
- optimized
- trl
- sft
- cognitive
- analytical
- enterprise
- industrial
pipeline_tag: text-generation
datasets:
- mlabonne/FineTome-100k
- CognitiveKernel/CognitiveKernel-Pro-SFT
- OpenSPG/KAG-Thinker-training-dataset
- Gryphe/ChatGPT-4o-Writing-Prompts
- QuixiAI/dolphin-r1
- uclanlp/Brief-Pro
library_name: transformers
base_model:
- Lamapi/next-32b
---
# 🧠 Next 32B (ultra520)
### *Türkiye’s Most Powerful Reasoning AI — Industrial Scale, Deep Logic, and Enterprise-Ready*
[License: MIT](https://opensource.org/licenses/MIT)
[Model on Hugging Face](https://huggingface.co/Lamapi/next-32b)
---
## 📖 Overview
**Next 32B** is a massive **32-billion-parameter large language model (LLM)** built on the advanced **Qwen 3 architecture**, engineered to set the state of the art in **reasoning, complex analysis, and strategic problem solving**.
As the flagship model of the series, **Next 32B** expands on the cognitive capabilities of its predecessors, offering **unmatched depth** in inference and decision-making. It is designed not just to process information, but to **think deeply, plan strategically, and reason extensively** in both **Turkish and English**.
Designed for high-demand enterprise environments, **Next 32B** delivers superior performance in scientific research, complex coding tasks, and nuanced creative generation, with no reliance on visual inputs.
---
## ⚡ Highlights
- 🇹🇷 **Türkiye’s most powerful reasoning-capable AI model**
- 🧠 **SOTA logical, analytical, and multi-step reasoning**
- 🌍 **Master-level multilingual understanding (Turkish, English, and 30+ languages)**
- 🏢 **Industrial-grade stability for critical infrastructure**
- 💬 **Expert instruction-following for complex, long-horizon tasks**
---
## 📊 Benchmark Performance
| Model | MMLU (5-shot) % | MMLU-Pro (Reasoning) % | GSM8K % | MATH % |
| --- | --- | --- | --- | --- |
| **Next 32B (Thinking)** | 96.2 | **97.1** | **99.7** | 97.1 |
| GPT-5.1 | **98.4** | 95.9 | 99.7 | **98.5** |
| Claude Opus 4.5 | 97.5 | 96.5 | 99.2 | 97.8 |
| Gemini 3 Pro | 97.9 | 94.8 | 98.9 | 96.4 |
| Grok 4.1 | 96.1 | 92.4 | 97.8 | 95.2 |
| Next 14B (previous) | 94.6 | 93.2 | 98.8 | 92.7 |
---
## 🚀 Installation & Usage
**Note:** Due to the model size, we recommend a GPU with at least 24 GB of VRAM for 4-bit quantization, or 48 GB+ for 8-bit/FP16.
```bash
pip install unsloth
```
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the 4-bit quantized checkpoint (fits on a single 24 GB GPU).
model, tokenizer = FastLanguageModel.from_pretrained("Lamapi/next-32b-4bit")

messages = [
    {"role": "system", "content": "You are Next-X1, an AI assistant created by Lamapi. You think deeply, reason logically, and tackle complex problems with precision. You are a helpful, smart, kind, concise AI assistant."},
    {"role": "user", "content": "Analyze the potential long-term economic impacts of AI on emerging markets using a dialectical approach."},
]

# Build the prompt; enable_thinking=True turns on the model's reasoning trace.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

# Stream tokens to stdout as they are generated.
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=1024,  # Increase for longer outputs.
    temperature=0.7,
    top_p=0.95,
    top_k=400,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
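If you prefer plain `transformers`, the 4-bit path above can also be reproduced with `bitsandbytes`. This is a minimal sketch, not an official loading recipe; the NF4 settings below are illustrative assumptions, not values published by the authors:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit NF4 quantization config; adjust to your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("Lamapi/next-32b")
model = AutoModelForCausalLM.from_pretrained(
    "Lamapi/next-32b",
    quantization_config=bnb_config,
    device_map="auto",  # Shards layers across available GPUs.
)
```
NF4 with bfloat16 compute is a common pairing for 24 GB-class cards and matches the VRAM guidance in the note above.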
---
## 🧩 Key Features
| Feature | Description |
| --- | --- |
| 🧠 **Deep Cognitive Architecture** | Capable of handling massive context windows and multi-step logical chains. |
| 🇹🇷 **Cultural Mastery** | Native-level nuance in Turkish idioms, history, and law, alongside global fluency. |
| ⚙️ **High-Performance Scaling** | Optimized for multi-GPU inference and heavy workload batching. |
| 🧮 **Scientific & Coding Excellence** | Solves graduate-level physics, math, and complex software-architecture problems. |
| 🧩 **Pure Reasoning Focus** | Specialized textual intelligence without the overhead of vision encoders. |
| 🏢 **Enterprise Reliability** | Deterministic outputs suitable for legal, medical, and financial analysis (see the sketch below). |
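The determinism row above comes down to greedy decoding with a fixed seed rather than a special model mode. A minimal sketch, reusing the 4-bit checkpoint from the usage example (the prompt and seed are illustrative):
```python
from unsloth import FastLanguageModel
from transformers import set_seed

set_seed(42)  # Pin Python/NumPy/PyTorch RNG state.

model, tokenizer = FastLanguageModel.from_pretrained("Lamapi/next-32b-4bit")

inputs = tokenizer(
    "Summarize: Payment is due within 30 days of invoice receipt.",
    return_tensors="pt",
).to("cuda")

# do_sample=False selects the argmax token at each step, so repeated runs
# on the same hardware and library versions produce identical output.
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```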
---
## 📐 Model Specifications
| Specification | Details |
| --- | --- |
| **Base Model** | Qwen 3 |
| **Parameters** | 32 Billion |
| **Architecture** | Transformer (Causal LLM) |
| **Modalities** | Text-only |
| **Fine-Tuning** | Advanced SFT & RLHF on Cognitive Kernel & KAG-Thinker datasets |
| **Optimizations** | GQA, Flash Attention 3, Quantization-ready (loading sketch below) |
| **Primary Focus** | Deep Reasoning, Complex System Analysis, Strategic Planning |
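As a sketch of how those options map onto a plain `transformers` load: `flash_attention_2` is the flag `transformers` exposes (it requires the `flash-attn` package, and whether FA3 kernels are used depends on your installed build), and half precision matches the 48 GB+ guidance above. These settings are assumptions for illustration, not a published configuration:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Lamapi/next-32b")
model = AutoModelForCausalLM.from_pretrained(
    "Lamapi/next-32b",
    torch_dtype=torch.bfloat16,               # Half-precision weights for inference.
    attn_implementation="flash_attention_2",  # Fused attention; needs flash-attn installed.
    device_map="auto",                        # Shard layers across all visible GPUs.
)
```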
---
## 🎯 Ideal Use Cases
* **Enterprise Strategic Planning** — Market analysis and risk assessment
* **Advanced Code Generation** — Full-stack architecture and optimization
* **Legal & Medical Research** — Analyzing precedents and case studies
* **Academic Simulation** — Philosophy, sociology, and theoretical physics
* **Complex Data Interpretation** — Turning raw data into actionable logic
* **Autonomous Agents** — Backend brain for complex agentic workflows (see the sketch below)
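For agent backends and other latency-sensitive loops, the reasoning trace can be switched off via the same `enable_thinking` flag shown in the usage example. A minimal sketch (the prompt is illustrative):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained("Lamapi/next-32b-4bit")

# enable_thinking=False asks the chat template to skip the reasoning trace,
# keeping per-step agent responses short and cheap.
text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "List three risks of deploying LLM agents in finance."}],
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

output_ids = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```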
---
## 💡 Performance Highlights
* **State-of-the-Art Logic:** Surpasses 70B+ class models on pure reasoning benchmarks.
* **Extended Context Retention:** Maintains coherence over long documents and sessions.
* **Nuanced Bilingualism:** Switches between Turkish and English without losing nuance.
* **Production Ready:** Designed for high-throughput API endpoints and local enterprise servers (client sketch below).
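If you serve the model behind an API endpoint (for example with text-generation-inference), it can be queried with `huggingface_hub`'s client. A minimal sketch, assuming a server is already running at a hypothetical local address:
```python
from huggingface_hub import InferenceClient

# Assumes an OpenAI-compatible inference server at this (hypothetical) address.
client = InferenceClient(base_url="http://localhost:8080")

response = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize the key risks of rapid AI adoption."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```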
---
## 📄 License
Licensed under the **MIT License** — free for commercial and non-commercial use. Attribution is appreciated.
---
## 📞 Contact & Support
* 📧 **Email:** [[email protected]](mailto:[email protected])
* 🤗 **Hugging Face:** [Lamapi](https://huggingface.co/Lamapi)
---
> **Next 32B** — Türkiye’s flagship *reasoning* model. Built for those who demand **depth**, **precision**, and **massive intelligence**.
[Lamapi on Hugging Face](https://huggingface.co/Lamapi)