# llama400m-climblab-reasoning-5k-bm25s-dora-merged

A LLaMA 400M model fine-tuned with DoRA using LMFlow on 5k examples from the reasoning_eval dataset, selected by BM25S filtering (`bm25s_filtered`).
## Model Details
This model is a DoRA fine-tuned version of `data4elm/Llama-400M-12L`. The standalone adapter is available at `leonzc/llama400m-climblab-reasoning-5k-bm25s-dora-adapter`.
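The exact LMFlow training configuration is not published with this card. As a rough illustration, the sketch below shows how a DoRA adapter for this base model can be set up with the PEFT library, which enables DoRA through `LoraConfig(use_dora=True)`. All hyperparameter values (rank, alpha, target modules) are assumptions for illustration, not the settings actually used in training.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative only: the actual LMFlow run's hyperparameters are not published.
base = AutoModelForCausalLM.from_pretrained("data4elm/Llama-400M-12L")

# DoRA is enabled in PEFT by setting use_dora=True on a LoRA config.
config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling factor
    target_modules=["q_proj", "v_proj"],   # assumed target modules
    use_dora=True,                         # weight-decomposed low-rank adaptation (DoRA)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```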
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Option 1: Load the merged model directly
model = AutoModelForCausalLM.from_pretrained("leonzc/llama400m-climblab-reasoning-5k-bm25s-dora-merged")
tokenizer = AutoTokenizer.from_pretrained("leonzc/llama400m-climblab-reasoning-5k-bm25s-dora-merged")

# Option 2: Load the standalone adapter on top of the base model
base_model = AutoModelForCausalLM.from_pretrained("data4elm/Llama-400M-12L")
tokenizer = AutoTokenizer.from_pretrained("data4elm/Llama-400M-12L")
model = PeftModel.from_pretrained(base_model, "leonzc/llama400m-climblab-reasoning-5k-bm25s-dora-adapter")

# Example generation
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
# Passing **inputs forwards the attention mask along with the input IDs
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
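If you load the adapter (Option 2) and want a single standalone checkpoint like the merged variant in this repo, PEFT can fold the adapter weights back into the base model with `merge_and_unload()`. The snippet below is a sketch of that export step; the output path is illustrative, and whether the published merged checkpoint was produced exactly this way is an assumption.

```python
# Merge the DoRA adapter into the base weights and drop the PEFT wrapper.
merged = model.merge_and_unload()

# Save a self-contained checkpoint (path is illustrative).
merged.save_pretrained("./llama400m-dora-merged")
tokenizer.save_pretrained("./llama400m-dora-merged")
```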