In this experiment, I distilled a 1B Llama model, with training samples drawn from the Wikimedia, MiniPile, and FineWeb-Edu datasets. Training used the SM3 optimizer with a cosine learning-rate scheduler.
I've released this initial experimental checkpoint as a foundation for further exploration. I plan to conduct more experiments with different optimization strategies (https://github.com/HomebrewML/HeavyBall) and will update the model weights accordingly.
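For reference, below is a minimal sketch of what a distillation step with SM3 and a cosine schedule can look like. It is not the exact training code: the teacher path, learning rate, temperature, and step count are illustrative placeholders, and it uses the `SM3` implementation from the third-party `torch-optimizer` package rather than HeavyBall. Padding-token masking and other details are omitted for brevity.

```python
import torch
import torch.nn.functional as F
import torch_optimizer  # pip install torch-optimizer (provides SM3)
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical paths: the actual run distilled from a 1B Llama teacher.
teacher = AutoModelForCausalLM.from_pretrained("path/to/1b-llama-teacher").eval()
student = AutoModelForCausalLM.from_pretrained("aloobun/minini-140m-base")
tokenizer = AutoTokenizer.from_pretrained("aloobun/minini-140m-base")
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

# SM3 optimizer with a cosine learning-rate schedule (hyperparameters are illustrative).
optimizer = torch_optimizer.SM3(student.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10_000)

def distillation_step(batch_texts, temperature=2.0):
    inputs = tokenizer(batch_texts, return_tensors="pt", padding=True, truncation=True)

    # Teacher runs in inference mode; only the student is updated.
    with torch.no_grad():
        teacher_logits = teacher(**inputs).logits
    student_logits = student(**inputs).logits

    # Soft-target KL divergence between temperature-scaled distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
    return loss.item()
```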
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "aloobun/minini-140m-base"

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and sample a continuation
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=100,
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )

output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
