Model Card for nllb-600m-formosan-all-finetune

Model Details

A fine-tune of nllb-200-distilled-600M on all Formosan data (klokah, fb ilrdf dict, formosan_db, formosan_org, ithuan_formosan_text, and formosan_bible), excluding samples that consist of only one word.

Training Details

  • learning rate: 0.0001
  • batch size per GPU: 4
  • grad accumulation steps: 1
  • epochs: 5
  • warmup ratio: 0.1
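
These hyperparameters map onto 🤗 Transformers Seq2SeqTrainingArguments roughly as follows. This is a minimal sketch, not the original training script; the output directory is a placeholder and any argument not listed above is left at its default.

```python
# Minimal sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# output_dir is a hypothetical path; unlisted arguments keep their defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-600m-formosan-all-finetune",  # hypothetical path
    learning_rate=1e-4,                 # learning rate: 0.0001
    per_device_train_batch_size=4,      # batch size per GPU: 4
    gradient_accumulation_steps=1,      # grad accumulation steps: 1
    num_train_epochs=5,                 # epochs: 5
    warmup_ratio=0.1,                   # warmup ratio: 0.1
)
```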

Uses

Please refer to the NLLB documentation in 🤗 Transformers: https://huggingface.co/docs/transformers/model_doc/nllb
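
The snippet below is a minimal translation sketch, assuming the fine-tune keeps the standard NLLB interface and that the Formosan codes used in the evaluation (e.g. ami_Xiug, trv_Tegu) are registered as language tokens in the tokenizer. The repo id is taken from the card title and may need adjusting.

```python
# Minimal sketch: zho_Hant -> ami_Xiug translation with 🤗 Transformers.
# Assumes the Formosan language codes (e.g. "ami_Xiug") exist as tokens in
# this fine-tuned tokenizer; the repo id is assumed from the card title.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ithuan/nllb-600m-formosan-all-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="zho_Hant")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("你好，今天天氣很好。", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("ami_Xiug"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```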

Demo

https://huggingface.co/spaces/ithuan/formosan-translation

Evaluation results

All scores are self-reported on ithuan/klokah_asr_eval.

  • ami_Xiug -> zho_Hant (zh): 8.060
  • zho_Hant -> ami_Xiug (13a): 7.350
  • trv_Tegu -> zho_Hant (zh): 9.180
  • zho_Hant -> trv_Tegu (13a): 9.290
  • trv_Truk -> zho_Hant (zh): 13.110
  • zho_Hant -> trv_Truk (13a): 13.740
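
The (zh) and (13a) labels appear to be sacreBLEU tokenizer settings (zh for Chinese-target directions, 13a otherwise). Below is a minimal scoring sketch under that assumption, with placeholder sentences rather than klokah_asr_eval data.

```python
# Minimal sketch, assuming the reported numbers are BLEU scores computed with
# sacreBLEU, using tokenize="zh" for zho_Hant targets and tokenize="13a" for
# Formosan targets. Hypotheses and references here are placeholders.
import sacrebleu

hypotheses = ["這是模型的翻譯輸出。"]
references = [["這是參考翻譯。"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references, tokenize="zh")
print(f"BLEU: {bleu.score:.3f}")
```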