yhavinga committed on
Commit bca8a7a · 1 Parent(s): 5b6c617

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -31,10 +31,10 @@ For a demo of the model, head over to the Hugging Face Spaces for the [Netherfor
 1. **CNN DailyMail** translated to Dutch with MarianMT.
 2. **XSUM** translated to Dutch with MarianMt.
 3. News article summaries distilled from the nu.nl website.
-
+
 ## Training
 
-The pre-trained model [t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) was fine-tuned with a constant learning rate of 0.0005, a batch size of 64, for 10.000 steps.
+The pre-trained model [t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) was fine-tuned with a constant learning rate of 0.0005 and a batch size of 64 for 10.000 steps.
 The performance of this model can be improved with longer training. Unfortunately due to a bug, an earlier training script would not save intermediate checkpoints, and had been started for 6 epochs, and would finish past the availability of the TPU-VM. Since there was limited time left, the fine-tuning was restarted without evaluation and for only half an epoch (10.000 steps).
 
 
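For context on the data preparation step listed above, the snippet below is a minimal sketch of translating an English article to Dutch with MarianMT via 🤗 Transformers. The README only says "MarianMT"; the `Helsinki-NLP/opus-mt-en-nl` checkpoint and the example sentence are assumptions.

```python
# Minimal sketch of the MarianMT translation step mentioned in the README.
# The exact checkpoint is an assumption; the README only names "MarianMT".
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-nl"  # assumed English->Dutch checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = ["The government announced new measures on Tuesday."]  # hypothetical input
batch = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```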
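The fine-tuning hyperparameters changed in this commit (constant learning rate of 0.0005, batch size 64, 10.000 steps) translate roughly into the setup sketched below. This is not the authors' training script: the optimizer choice (AdamW via optax) is an assumption, since the README only states the schedule, batch size, and step count.

```python
# Rough sketch of the fine-tuning setup described in the README: constant
# learning rate 0.0005, batch size 64, 10.000 (10k) steps. The optimizer
# (AdamW via optax) is an assumption; only the schedule is stated.
import optax
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

base_model = "flax-community/t5-base-dutch"  # pre-trained checkpoint from the README
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = FlaxT5ForConditionalGeneration.from_pretrained(base_model)

learning_rate = 5e-4       # 0.0005, constant (no warmup or decay)
batch_size = 64
num_train_steps = 10_000   # roughly half an epoch on the combined dataset

schedule = optax.constant_schedule(learning_rate)
optimizer = optax.adamw(learning_rate=schedule)
# The actual training loop (data loading, loss, pmap'd update step) is omitted here.
```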