# 1ea3e77587a769d98bb55cbf63fbb152
This model is a fine-tuned version of google/umt5-base on the Helsinki-NLP/opus_books [es-fi] dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):
- Loss: 2.9909
- Data Size: 1.0
- Epoch Runtime: 22.6085
- BLEU: 3.0276
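Since the sections below are unfilled, here is a minimal inference sketch. It assumes the checkpoint is the repository this card belongs to (contemmcm/1ea3e77587a769d98bb55cbf63fbb152) and that no task prefix was used during fine-tuning; neither point is documented here.

```python
# Minimal inference sketch (assumptions: repo id as below; no task prefix,
# since the training setup does not document one).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "contemmcm/1ea3e77587a769d98bb55cbf63fbb152"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "La vida es sueño."  # Spanish source sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # Finnish output
```

Given the final BLEU of about 3 reported above, outputs should be treated as rough drafts rather than reliable translations.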
## Model description

More information needed
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
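Although this section is unfilled, the summary above names the dataset. A minimal loading sketch, assuming the standard opus_books configuration naming:

```python
# Loading sketch for the dataset named in the summary (assumption: the
# "es-fi" config of Helsinki-NLP/opus_books; it ships a single train split,
# so any validation set was presumably carved out of it).
from datasets import load_dataset

ds = load_dataset("Helsinki-NLP/opus_books", "es-fi")
print(ds["train"][0]["translation"])  # {'es': '...', 'fi': '...'}
```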
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
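A hedged reproduction of the settings above using transformers' Seq2SeqTrainingArguments; this is not the authors' actual script, output_dir is a placeholder, and predict_with_generate is an assumption (something like it is needed to produce the BLEU numbers below).

```python
# Sketch mirroring the listed hyperparameters (not the published training
# script; output_dir is a placeholder, predict_with_generate an assumption).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="umt5-base-opus-books-es-fi",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # x4 GPUs = total train batch size 32
    per_device_eval_batch_size=8,    # x4 GPUs = total eval batch size 32
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=50,
    optim="adamw_torch",             # betas=(0.9, 0.999), eps=1e-8 are the defaults
    predict_with_generate=True,      # assumption: required for BLEU evaluation
)
```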
### Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | BLEU |
|---|---|---|---|---|---|---|
| No log | 0.0 | 0 | 14.1670 | 0 | 2.2825 | 0.0400 |
| No log | 1.0 | 83 | 13.9621 | 0.0078 | 2.7123 | 0.0444 |
| No log | 2.0 | 166 | 13.8706 | 0.0156 | 2.9007 | 0.0483 |
| No log | 3.0 | 249 | 14.0058 | 0.0312 | 3.9104 | 0.0487 |
| 0.578 | 4.0 | 332 | 13.5798 | 0.0625 | 4.6265 | 0.0465 |
| 0.578 | 5.0 | 415 | 13.2520 | 0.125 | 6.1050 | 0.0427 |
| 0.578 | 6.0 | 498 | 12.4261 | 0.25 | 8.1859 | 0.0465 |
| 3.0514 | 7.0 | 581 | 11.4465 | 0.5 | 12.8473 | 0.0810 |
| 11.0976 | 8.0 | 664 | 7.6243 | 1.0 | 21.9064 | 0.1305 |
| 8.3957 | 9.0 | 747 | 4.5427 | 1.0 | 21.2292 | 0.9970 |
| 5.6798 | 10.0 | 830 | 3.9434 | 1.0 | 22.1189 | 0.8173 |
| 4.9084 | 11.0 | 913 | 3.5901 | 1.0 | 22.2857 | 1.1404 |
| 4.6296 | 12.0 | 996 | 3.4043 | 1.0 | 21.2481 | 1.5943 |
| 4.2624 | 13.0 | 1079 | 3.3149 | 1.0 | 23.0058 | 1.8751 |
| 4.0565 | 14.0 | 1162 | 3.2505 | 1.0 | 23.5358 | 2.0493 |
| 3.9554 | 15.0 | 1245 | 3.1943 | 1.0 | 24.1467 | 2.1809 |
| 3.7762 | 16.0 | 1328 | 3.1451 | 1.0 | 21.6642 | 2.4197 |
| 3.6523 | 17.0 | 1411 | 3.1145 | 1.0 | 22.2990 | 2.4832 |
| 3.6047 | 18.0 | 1494 | 3.0874 | 1.0 | 23.6836 | 2.5480 |
| 3.4779 | 19.0 | 1577 | 3.0682 | 1.0 | 24.3783 | 2.6775 |
| 3.3998 | 20.0 | 1660 | 3.0451 | 1.0 | 22.1055 | 2.6308 |
| 3.3042 | 21.0 | 1743 | 3.0210 | 1.0 | 23.4489 | 2.6541 |
| 3.2199 | 22.0 | 1826 | 3.0148 | 1.0 | 24.0244 | 2.7412 |
| 3.1597 | 23.0 | 1909 | 3.0017 | 1.0 | 22.3515 | 2.7230 |
| 3.1211 | 24.0 | 1992 | 2.9940 | 1.0 | 22.4391 | 2.7134 |
| 3.0261 | 25.0 | 2075 | 2.9855 | 1.0 | 22.3969 | 2.8133 |
| 2.9905 | 26.0 | 2158 | 2.9877 | 1.0 | 22.3190 | 2.8429 |
| 2.9268 | 27.0 | 2241 | 2.9913 | 1.0 | 23.0228 | 2.8606 |
| 2.8712 | 28.0 | 2324 | 2.9972 | 1.0 | 22.1533 | 2.8658 |
| 2.8487 | 29.0 | 2407 | 2.9827 | 1.0 | 21.6873 | 2.8885 |
| 2.8161 | 30.0 | 2490 | 2.9852 | 1.0 | 23.1436 | 2.8758 |
| 2.7398 | 31.0 | 2573 | 2.9970 | 1.0 | 23.1308 | 2.9793 |
| 2.7125 | 32.0 | 2656 | 2.9911 | 1.0 | 24.0320 | 3.0432 |
| 2.6314 | 33.0 | 2739 | 2.9909 | 1.0 | 22.6085 | 3.0276 |
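The Data Size column appears to report the fraction of the training set used per epoch, doubling each epoch until the full set is reached at epoch 8. The card does not state how BLEU was computed; a common choice in transformers fine-tuning scripts is sacrebleu via the evaluate library, sketched below with illustrative strings.

```python
# Hedged BLEU sketch (sacrebleu via `evaluate` is an assumption; the card
# does not name the implementation). Strings here are illustrative only.
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["Elämä on unta."]      # model outputs (Finnish)
references = [["Elämä on unelma."]]   # one or more gold references each
print(bleu.compute(predictions=predictions, references=references)["score"])
```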
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1