Qwen2.5-Coder-1.5B LoRA Fine-tuned (DEEP Dataset)

This model was fine-tuned with LoRA on the DEEP dataset, starting from the Qwen2.5-Coder-1.5B-Instruct base model, and the resulting adapter was merged back into the base model.

🎯 Model Description

  • Base Model: Qwen/Qwen2.5-Coder-1.5B-Instruct
  • Dataset: Naholav/CodeGen-DEEP-5K
  • Training Step: 891
  • Method: LoRA (Low-Rank Adaptation)
  • Merge Status: Merged into the base model

📊 Training Hyperparameters

Learning Rate: 2e-4
LoRA Rank: 16
LoRA Alpha: 32
LoRA Dropout: 0.05
Target Modules: None
Batch Size: 8
Epochs: 3
Context Length: 1024
Optimizer: paged_adamw_8bit
Scheduler: Cosine
Weight Decay: 0.01
Warmup Ratio: 0.03
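As a rough illustration of what the LoRA rank and alpha above mean: LoRA freezes the base weight matrix W and learns a low-rank update ΔW = (alpha / r) · B·A, where A has shape r×d_in and B has shape d_out×r. A minimal numpy sketch (the layer dimensions here are made up for illustration, only r and alpha come from the table):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 64, 64  # hypothetical layer dimensions
r, alpha = 16, 32     # LoRA rank and alpha from the table above

W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

# Effective weight after applying the LoRA update, scaled by alpha / r
W_eff = W + (alpha / r) * (B @ A)

x = rng.normal(size=(d_in,))
y = W_eff @ x
# With B initialized to zero, the update is a no-op at the start of training
assert np.allclose(y, W @ x)
```

Merging the adapter, as was done for this model, simply bakes `W_eff` into the checkpoint so no extra parameters are needed at inference time.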

Usage

Basic Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "MehmetDORA/20251201-202706_deep_full_e3",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("MehmetDORA/20251201-202706_deep_full_e3")

# Generate code
prompt = "Write a Python function to calculate the factorial of a number"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
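Note that `generate` returns the prompt token ids followed by the newly generated ids, so the decoded string above repeats the prompt. To print only the completion, slice off the prompt length first. A pure-Python sketch of the pattern, with made-up token ids (in the real snippet they come from `tokenizer(...)` and `model.generate(...)`):

```python
# Hypothetical token ids, purely to illustrate the slicing pattern
prompt_ids = [151644, 872, 198]         # ids of the encoded prompt
output_ids = prompt_ids + [7265, 1837]  # generate() echoes the prompt first

# Keep only the newly generated part before decoding
completion_ids = output_ids[len(prompt_ids):]
print(completion_ids)  # → [7265, 1837]
```

With the snippet above, this corresponds to `tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)`.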

Usage with a System Prompt

messages = [
    {"role": "system", "content": "You are an expert Python programmer. Please read the problem carefully before writing any Python code."},
    {"role": "user", "content": "Write a function to check if a string is a palindrome"}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

📈 Evaluation Results

  • Validation Loss: 0.XXX
  • Test Loss: 0.XXX
  • Pass@1: XX%

💾 Model Size

  • Parameters: ~1.5B
  • Size: ~3GB (FP16)

⚠️ Limitations

  • The model was trained with a 1024-token context length
  • It is optimized for Python code generation only
  • It does not include reasoning traces (only the solution field was used during training)