
Model Card for new-finetune-flant5-base-model

This model is a fine-tuned version of google/flan-t5-base, trained for multi-task use in the JobEase project, likely covering job-assistance and question-answering tasks.

Model Details

  • Model Type: Text-to-Text Generation (Seq2Seq)
  • Base Model: google/flan-t5-base
  • Model Size: ~0.2B parameters (F32, Safetensors)
  • Library: Transformers
  • Language: English
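Because FLAN-T5 models are instruction-tuned, inputs are typically phrased as natural-language instructions rather than raw text. The exact prompt format used during this fine-tune is not documented in the card, so the helper below is only an illustrative sketch of the FLAN-style convention; the function name and separator are assumptions.

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Combine an instruction and optional context into one
    text-to-text input, following the FLAN-style convention of
    stating the task as a natural-language instruction."""
    if context:
        return f"{instruction}\n\n{context}"
    return instruction


prompt = build_prompt(
    "Answer the question based on the job posting.",
    "Posting: Senior Python developer, remote. Question: Is this role remote?",
)
```

If the model was fine-tuned with a different prompt template, matching that template at inference time will generally give better results.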

Training Hyperparameters

The model was fine-tuned with the following hyperparameters:

  • Optimization Goal: Multi-task learning
  • Epochs: 10
  • Batch Size: 8
  • Learning Rate: 0.0003
  • Transformers Version: 4.57.1
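As a rough sanity check, the epoch count and batch size above determine the number of optimizer steps once the dataset size is known. The card does not state the training-set size, so the figure below is a placeholder assumption used only to show the arithmetic.

```python
import math

# Hyperparameters from the card
batch_size = 8
epochs = 10

# Assumed dataset size -- NOT stated in the model card
num_examples = 10_000

steps_per_epoch = math.ceil(num_examples / batch_size)
total_steps = steps_per_epoch * epochs
print(total_steps)  # 12500
```

With gradient accumulation the effective batch size would be larger and the step count proportionally smaller; the card does not mention accumulation, so none is assumed here.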

Usage

You can use this model directly with the Hugging Face transformers library:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "MMohammad/new-finetune-flant5-base-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

input_text = "Your input text here"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# generate() defaults to a short maximum length; raise max_new_tokens
# if outputs are being cut off.
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Installation

pip install transformers torch
