---
library_name: transformers
license: mit
base_model: deepset/gbert-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: flausch_span_gbert-large
  results: []
---

# flausch_span_gbert-large

This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6725
- Model Preparation Time: 0.0057
- Precision: 0.5075
- Recall: 0.6548
- F1: 0.5718

## Model description

More information needed

## Intended uses & limitations

More information needed; a hedged usage sketch is provided under "How to use" at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch translating them into `TrainingArguments` appears under "Reproducing the training setup" at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: ADAMW_TORCH (AdamW via PyTorch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:------:|:------:|
| 0.5882        | 1.0   | 517  | 0.5842          | 0.0057                 | 0.4539    | 0.6280 | 0.5270 |
| 0.3879        | 2.0   | 1034 | 0.5720          | 0.0057                 | 0.5194    | 0.6646 | 0.5831 |
| 0.2560        | 3.0   | 1551 | 0.6075          | 0.0057                 | 0.4985    | 0.6445 | 0.5622 |
| 0.1477        | 4.0   | 2068 | 0.6725          | 0.0057                 | 0.5075    | 0.6548 | 0.5718 |

### Framework versions

- Transformers 4.52.2
- PyTorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
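
## How to use

The card does not document how the model should be invoked. The snippet below is a minimal usage sketch, assuming the model is a token-classification (span-labeling) head on gbert-large published under the repo id `flausch_span_gbert-large`; the repo id, the German example sentence, and the label set are assumptions, not documented facts.

```python
# A minimal usage sketch. The Hub repo id and the example sentence are
# assumptions; the span labels returned depend on the (undocumented)
# fine-tuning label set.
from transformers import pipeline

span_tagger = pipeline(
    "token-classification",
    model="flausch_span_gbert-large",  # assumed repo id; adjust to the real path
    aggregation_strategy="simple",     # merge word pieces into contiguous spans
)

print(span_tagger("Das hast du wirklich toll gemacht, weiter so!"))
```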
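
## Reproducing the training setup

The following is a sketch, not the authors' training script: it only mirrors the hyperparameters listed above using the standard `Trainer` API. Dataset loading, tokenization, label alignment, and the number of labels are omitted or placeholders, because the training data is not documented in this card.

```python
# A minimal sketch of the listed hyperparameters via TrainingArguments.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "deepset/gbert-large"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=3 is a placeholder; the real label set is undocumented.
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=3)

args = TrainingArguments(
    output_dir="flausch_span_gbert-large",
    learning_rate=2e-5,               # learning_rate: 2e-05
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=16,    # eval_batch_size: 16
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    optim="adamw_torch",              # AdamW with default betas/epsilon
    fp16=True,                        # "Native AMP" mixed precision
)

# train_dataset / eval_dataset must be tokenized, label-aligned datasets;
# they are left out because the training data is not documented.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```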