# naver/xprovence-reranker-bgem3-v2 converted to FP16

## How to use

```python
from transformers import AutoModel, XLMRobertaConfig

# trust_remote_code is required: XProvence ships custom modeling code on the Hub.
config = XLMRobertaConfig.from_pretrained("mamei16/xprovence-reranker-bgem3-v2-fp16")
xprovence = AutoModel.from_pretrained("mamei16/xprovence-reranker-bgem3-v2-fp16",
                                      trust_remote_code=True, config=config)

# Usage sketch (hedged): the upstream naver Provence reranker cards document a
# `process(question, context)` method on the loaded model; if this conversion
# preserves that interface, a call would look like:
# out = xprovence.process("What is FP16?", "Half precision (FP16) is a 16-bit float format...")
```
Model size: 0.6B params · Tensor type: F16 (Safetensors)