Flex-Judge: Think Once, Judge Anywhere
Paper: arXiv:2505.18601
Based on Flex-Judge
📖 Usage Instructions:
============================================================
To use this LoRA adapter:
import torch
from transformers import AutoProcessor, AutoTokenizer, Qwen2_5_VLForConditionalGeneration
from peft import PeftModel

# Load the base model
base_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-32B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base_model, "sungnyun/Flex-VL-32B-LoRA")

# Load the tokenizer; the processor (from the base model) handles image/video inputs
tokenizer = AutoTokenizer.from_pretrained("sungnyun/Flex-VL-32B-LoRA")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-32B-Instruct")
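For inference, Qwen2.5-VL consumes chat-style messages whose content is a list of typed parts mixing images and text. A minimal sketch of such a message (the judge wording and image URL here are illustrative placeholders, not the paper's official prompt template):

import json

# Qwen2.5-VL chat format: each turn's "content" is a list of typed parts.
# The instruction text and image URL below are illustrative only.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/sample.jpg"},
            {
                "type": "text",
                "text": "Evaluate the candidate response to the question about "
                        "this image. Reason step by step, then give a verdict.",
            },
        ],
    }
]

print(json.dumps(messages, indent=2))

These messages would then be rendered with the processor's chat template (add_generation_prompt=True) and passed to model.generate, as in the standard Qwen2.5-VL pipeline.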
Base model: Qwen/Qwen2.5-VL-32B-Instruct