# Model Card for new-finetune-flant5-base-model
This model is a fine-tuned version of google/flan-t5-base. It was trained in a multi-task setup, likely for job-assistance and question-answering tasks as part of the JobEase project.
## Model Details
- Model Type: Text-to-Text Generation (Seq2Seq)
- Base Model: google/flan-t5-base
- Library: Transformers
- Language: English
## Training Hyperparameters
The model was fine-tuned with the following hyperparameters (see the configuration sketch after the list):
- Optimization Goal: Multi-task learning
- Epochs: 10
- Batch Size: 8
- Learning Rate: 0.0003
- Transformers Version: 4.57.1
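The training script itself is not published with this checkpoint. As a rough, hypothetical sketch, the hyperparameters above could be expressed with `Seq2SeqTrainingArguments` as follows; the output directory name, dataset, and multi-task preprocessing are assumptions and are not shown here:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical sketch mapping the reported hyperparameters onto
# Hugging Face Seq2SeqTrainingArguments; the actual training script
# and dataset for this checkpoint are not published.
training_args = Seq2SeqTrainingArguments(
    output_dir="new-finetune-flant5-base-model",  # assumed output directory
    num_train_epochs=10,             # Epochs: 10
    per_device_train_batch_size=8,   # Batch Size: 8
    learning_rate=3e-4,              # Learning Rate: 0.0003
)
```

These arguments would then be passed to a `Seq2SeqTrainer` together with the tokenized multi-task dataset, which is not distributed with this model.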
## Usage
You can use this model directly with the Hugging Face transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "MMohammad/new-finetune-flant5-base-model"

# Load the fine-tuned checkpoint and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Tokenize the prompt, generate a response, and decode it back to text
input_text = "Your input text here"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
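Alternatively, the same checkpoint can be loaded through the `pipeline` API. A minimal sketch is shown below; the prompt is a placeholder, not a format this model was necessarily trained on:

```python
from transformers import pipeline

# text2text-generation wraps the tokenizer and Seq2Seq model shown above
generator = pipeline(
    "text2text-generation",
    model="MMohammad/new-finetune-flant5-base-model",
)

# Placeholder prompt; adapt it to the model's intended multi-task inputs
result = generator("Your input text here", max_new_tokens=64)
print(result[0]["generated_text"])
```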
## Installation

```bash
pip install transformers torch
```