---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Elita-1
pipeline_tag: text-generation
library_name: transformers
tags:
- open-llm
- math
- jolt
- text-generation-inference
- jolt-v0.1
model-index:
- name: Jolt-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 50.92
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FJolt-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 50.03
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FJolt-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 35.88
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FJolt-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 17.34
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FJolt-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 20.49
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FJolt-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 48.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FJolt-v0.1
      name: Open LLM Leaderboard
---
# **Jolt-v0.1**

Jolt-v0.1 is built on the Qwen 2.5 14B architecture and is designed to strengthen the reasoning capabilities of 14B-parameter models. It has been fine-tuned on a synthetic dataset derived from math and chain-of-thought (CoT) corpora, further improving its CoT reasoning and logical problem-solving abilities. The model shows significant gains in context understanding, structured data processing, and long-context comprehension, making it well suited for complex reasoning tasks, instruction following, and text generation.

### **Key Improvements**
1. **Enhanced Knowledge and Expertise**: Improved mathematical reasoning, coding proficiency, and structured data processing.
2. **Fine-Tuned Instruction Following**: Optimized for precise responses, structured outputs (e.g., JSON), and generating long texts (8K+ tokens).
3. **Greater Adaptability**: Better role-playing capabilities and resilience to diverse system prompts.
4. **Long-Context Support**: Handles up to **128K tokens** and generates up to **8K tokens** per output.
5. **Multilingual Proficiency**: Supports over **29 languages**, including Chinese, English, French, Spanish, Portuguese, German, and more.

### **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Jolt-v0.1"

# Load the model and tokenizer; device_map="auto" spreads weights across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-formatted prompt from system and user messages.
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are an advanced AI assistant with expert-level reasoning and knowledge."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response, then strip the prompt tokens from the output.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

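For interactive use, the response can also be streamed token by token rather than returned all at once. The snippet below is a minimal sketch that reuses the `model` and `tokenizer` loaded in the Quickstart above; `TextStreamer` is a standard `transformers` utility, and the example prompt and `max_new_tokens` value are illustrative choices rather than values prescribed by this card.

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Explain chain-of-thought prompting in two sentences."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Tokens are printed incrementally; the return value still holds the full sequence.
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=256)
```
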
### **Intended Use**
- **Advanced Reasoning & Context Understanding**: Designed for logical deduction, multi-step problem-solving, and complex knowledge-based tasks.
- **Mathematical & Scientific Problem-Solving**: Enhanced capabilities for calculations, theorem proving, and scientific queries.
- **Code Generation & Debugging**: Generates and optimizes code across multiple programming languages.
- **Structured Data Analysis**: Processes tables, JSON, and structured outputs, making it well suited for data-centric tasks (see the JSON sketch after this list).
- **Multilingual Applications**: High proficiency in over 29 languages, enabling global-scale applications.
- **Extended Content Generation**: Supports detailed document writing, research reports, and instructional guides.

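To illustrate the structured-data use case, the sketch below asks the model to return JSON and parses the reply. It reuses `model` and `tokenizer` from the Quickstart; the prompt wording, the key names, and the parsing fallback are illustrative assumptions, and valid JSON output is not guaranteed by the model.

```python
import json

# Ask for a machine-readable answer; a JSON-only instruction in the system prompt usually helps.
messages = [
    {"role": "system", "content": "You are a data assistant. Respond with valid JSON only."},
    {"role": "user", "content": (
        "Summarize this record as JSON with keys 'name', 'score', and 'passed': "
        "Alice scored 87 out of 100 on the exam; the passing mark is 60."
    )},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Drop the prompt tokens before decoding the reply.
reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)

try:
    record = json.loads(reply)   # happy path: the reply is clean JSON
except json.JSONDecodeError:
    record = None                # fall back to the raw text otherwise
print(record if record is not None else reply)
```
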
### **Limitations**
1. **High Computational Requirements**: Due to its **14B parameters** and **128K context support**, it requires powerful GPUs or TPUs for efficient inference (see the quantized-loading sketch after this list).
2. **Language-Specific Variability**: Performance may vary across supported languages, especially for low-resource languages.
3. **Potential Error Accumulation**: Long-form generation can introduce inconsistencies over extended outputs.
4. **Limited Real-World Awareness**: Knowledge is restricted to the training data and may not reflect recent world events.
5. **Prompt Sensitivity**: Output quality can depend on the specificity and clarity of the input prompt.

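If full-precision weights do not fit in memory, 4-bit quantized loading is one possible workaround. The snippet below is a sketch that assumes the `bitsandbytes` package is installed; quantization trades some accuracy for a much smaller memory footprint and is not an official recommendation from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Jolt-v0.1"

# 4-bit NF4 quantization via bitsandbytes; compute runs in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
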
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Jolt-v0.1-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FJolt-v0.1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!

| Metric              | Value (%) |
|---------------------|----------:|
| **Average**         |     37.23 |
| IFEval (0-Shot)     |     50.92 |
| BBH (3-Shot)        |     50.03 |
| MATH Lvl 5 (4-Shot) |     35.88 |
| GPQA (0-shot)       |     17.34 |
| MuSR (0-shot)       |     20.49 |
| MMLU-PRO (5-shot)   |     48.74 |