Model Card for nllb-600m-formosan-all-finetune
Model Details
A fine-tune of nllb-200-distilled-600M on all Formosan-language data (klokah, fb ilrdf dict, formosan_db, formosan_org, ithuan_formosan_text, and formosan_bible), excluding samples that consist of only a single word.
Training Details
- learning rate: 0.0001
- batch size per GPU: 4
- grad accumulation steps: 1
- epochs: 5
- warmup ratio: 0.1
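The card does not say which training script was used. If training went through the standard Hugging Face Trainer, the hyperparameters above would map onto `Seq2SeqTrainingArguments` roughly as follows (a sketch only; `output_dir` is a hypothetical placeholder):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported hyperparameters as Seq2SeqTrainingArguments.
# output_dir is hypothetical; the actual training script is not documented.
training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-600m-formosan-all-finetune",
    learning_rate=1e-4,             # learning rate: 0.0001
    per_device_train_batch_size=4,  # batch size per GPU: 4
    gradient_accumulation_steps=1,  # grad accumulation steps: 1
    num_train_epochs=5,             # epochs: 5
    warmup_ratio=0.1,               # warmup ratio: 0.1
)
```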
Uses
Please refer to https://huggingface.co/docs/transformers/model_doc/nllb for usage details.
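Below is a minimal inference sketch following standard NLLB usage in transformers. The Formosan language codes (e.g. `ami_Xiug`) are taken from the evaluation results on this card; whether the tokenizer resolves them as language tokens should be verified against the published tokenizer.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ithuan/nllb-600m-formosan-all-finetune-v2"

# src_lang follows standard NLLB usage; zho_Hant is a stock NLLB code.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="zho_Hant")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("你好嗎？", return_tensors="pt")  # "How are you?" in Traditional Chinese

# ami_Xiug is one of the Formosan codes from the evaluation results; if the
# tokenizer does not know it, convert_tokens_to_ids returns the <unk> id.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("ami_Xiug"),
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

To translate in the opposite direction, set `src_lang` to the Formosan code and force `zho_Hant` as the target.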
Base model
- facebook/nllb-200-distilled-600M
Evaluation results
All scores are self-reported, on ithuan/klokah_asr_eval; the second column is the sacreBLEU tokenizer used for each direction.

| Direction | Tokenizer | Score |
| --- | --- | --- |
| ami_Xiug -> zho_Hant | zh | 8.06 |
| zho_Hant -> ami_Xiug | 13a | 7.35 |
| trv_Tegu -> zho_Hant | zh | 9.18 |
| zho_Hant -> trv_Tegu | 13a | 9.29 |
| trv_Truk -> zho_Hant | zh | 13.11 |
| zho_Hant -> trv_Truk | 13a | 13.74 |
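The card does not name the metric, but "zh" and "13a" are sacreBLEU tokenizer names, which suggests the scores are BLEU computed along these lines (a sketch with toy strings, not the actual evaluation script):

```python
import sacrebleu

# Toy hypotheses/references; the real evaluation uses ithuan/klokah_asr_eval.
hyps_zh = ["這是一個測試。"]
refs_zh = [["這是一個測試。"]]
hyps_formosan = ["toy hypothesis"]
refs_formosan = [["toy reference"]]

# "zh" tokenizer for directions into Traditional Chinese,
# "13a" (sacreBLEU's default) for directions into the Formosan languages.
bleu_into_zh = sacrebleu.corpus_bleu(hyps_zh, refs_zh, tokenize="zh")
bleu_into_formosan = sacrebleu.corpus_bleu(hyps_formosan, refs_formosan, tokenize="13a")
print(bleu_into_zh.score, bleu_into_formosan.score)
```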