For the T2T task of the Workshop on Asian Translation (2025), these are models fine-tuned from the NLLB-200-XB base model on WAT data plus 100k Samanantar pairs.
Debasish Dhal (DebasishDhal99)
AI & ML interests: None yet
Recent Activity
- Liked a model: rednote-hilab/dots.ocr (about 7 hours ago)
- Upvoted a paper: mHC: Manifold-Constrained Hyper-Connections (15 days ago)
- Upvoted an article: The Optimal Architecture for Small Language Models (22 days ago)