# MTU Chatbot - Qwen Fine-tuned
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct, trained specifically on Mother Teresa University (MTU) data.
## Model Details
- Base Model: Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Data: MTU institutional documents and information
- Use Case: University chatbot for student and staff inquiries
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Indrifazliji/mtu-chatbot-qwen")
model = AutoModelForCausalLM.from_pretrained("Indrifazliji/mtu-chatbot-qwen")

# Generate a response
input_text = "Tell me about MTU programs"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens so the prompt is not echoed back
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```
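Since the base model is an Instruct (chat-tuned) variant, it generally responds best to prompts in Qwen's ChatML format. The sketch below shows what that format looks like for a single-turn exchange; the system message is an illustrative assumption, and in practice you would let `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` build this string for you.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt as used by Qwen2.5 chat models.

    This mirrors the output of tokenizer.apply_chat_template(); prefer that
    method in real code so the template always matches the tokenizer.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example system message (an assumption, not part of the model card):
prompt = build_chatml_prompt(
    "You are a helpful assistant for Mother Teresa University.",
    "Tell me about MTU programs",
)
print(prompt)
```

Passing a prompt shaped like this to `tokenizer(...)` in the snippet above keeps the input consistent with how the instruct base model was trained to converse.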
## Training Configuration
- LoRA Rank: 16
- LoRA Alpha: 32
- Learning Rate: 0.0002
- Epochs: 1
- Batch Size: 4
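The hyperparameters above map onto a `peft` `LoraConfig` roughly like the following sketch. The rank, alpha, and learning rate come from the list above; `target_modules` and `lora_dropout` are typical choices for Qwen2-style models and are assumptions, not confirmed details of this model's training run.

```python
from peft import LoraConfig

# Sketch of the LoRA setup implied by the card's hyperparameters.
lora_config = LoraConfig(
    r=16,                # LoRA rank (from the card)
    lora_alpha=32,       # scaling factor (from the card)
    lora_dropout=0.05,   # assumed; not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
# Attach to the base model with peft.get_peft_model(model, lora_config),
# then train with learning rate 2e-4, batch size 4, for 1 epoch.
```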
## Limitations
This model is specifically trained for MTU-related queries and may not perform well on general topics outside the university domain.