Model Card for smolvlm2-256m-FoodExtract-Vision-v2-without-peft-stage-2
This model is the Stage 2 (Final) version of a full fine-tuning pipeline based on HuggingFaceTB/SmolVLM2-256M-Video-Instruct. It is specialized in structured food extraction, returning valid JSON that contains a food/not-food classification, an image title, and lists of the visible food and drink items.
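For illustration, the output for a typical meal photo follows the schema below. The values shown here are hypothetical and only illustrate the expected shape; the actual title and items depend on the input image.
```json
{
  "is_food": 1,
  "image_title": "Roasted chicken dinner platter",
  "food_items": ["roasted chicken", "potatoes", "green beans"],
  "drink_items": []
}
```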
Training Strategy: This model was trained using a Full Fine-Tuning (Without PEFT) approach.
- Stage 1 (previous step): The vision encoder was kept frozen while the LLM was aligned to the JSON output structure.
- Stage 2 (this model): The vision encoder was unfrozen and the entire model (vision + LLM) was fine-tuned, significantly improving recognition of specific food items and ingredients.
Quick start
This model relies on a specific system prompt and user prompt structure to output the correct JSON format.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import requests
import io
# 1) Load fine-tuned model and processor
model_id = "berkeruveyik/smolvlm2-256m-FoodExtract-Vision-v2-without-peft-stage-2"
print("Loading model and processor...")
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()
print("Model ready!")
# 2) Prompts
SYSTEM_MESSAGE = """You are an expert food and drink image extractor.
You provide structured data to visual inputs classifying them as edible food/drink or not.
as well as titling the image with a simple simple food/drink related caption.
Finally you extract any and all visible food/drink items to lists."""
USER_PROMPT = """Classify the given input image into food or not, and if edible food or drink items are present, extract them into lists. If no food/drink items are visible, return an empty list.
Only return valid JSON in the following form:
```json
{
  "is_food": 0,
  "image_title": "",
  "food_items": [],
  "drink_items": []
}
```"""
# 3) Load image
image_url = "https://img.freepik.com/free-psd/roasted-chicken-dinner-platter-delicious-feast_632498-25445.jpg"
print(f"\nLoading image from: {image_url}")
resp = requests.get(image_url, headers={"User-Agent": "Mozilla/5.0"})
resp.raise_for_status()
# Decode from the full response body; this avoids issues with compressed raw streams.
image = Image.open(io.BytesIO(resp.content)).convert("RGB")
# 4) Prepare inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": SYSTEM_MESSAGE + "\n\n" + USER_PROMPT},
        ],
    }
]
text = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)
inputs = processor(
    text=text,
    images=image,
    return_tensors="pt",
)
# Move tensors to model device and dtype
inputs = {k: v.to(model.device) for k, v in inputs.items()}
inputs = {
    k: (v.to(dtype=model.dtype) if torch.is_floating_point(v) and v.dtype == torch.float32 else v)
    for k, v in inputs.items()
}
# 5) Generate
print("\nGenerating output...")
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# 6) Decode only the newly generated tokens
prompt_len = inputs["input_ids"].shape[1]
output_text = processor.batch_decode(
    generated_ids[:, prompt_len:],
    skip_special_tokens=True,
)[0]
print("\n" + "="*60)
print("OUTPUT:")
print("="*60)
print(output_text)
print("="*60)
Training procedure
This model represents the second stage of a full fine-tuning process. By unfreezing the vision encoder in this stage, the model learns to correlate fine-grained visual features (textures, shapes of specific dishes) with the textual food items, resulting in higher accuracy than Stage 1 or frozen-encoder approaches.
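As a rough sketch of how the two stages differ in which weights are updated (this is not the exact training script; it assumes vision-encoder parameter names contain "vision", which holds for the SmolVLM architecture in transformers but should be verified via `model.named_parameters()` for other bases):
```python
def set_vision_encoder_trainable(model, trainable: bool) -> None:
    """Freeze (Stage 1) or unfreeze (Stage 2) the vision encoder weights."""
    for name, param in model.named_parameters():
        # Heuristic: vision-tower parameters carry "vision" in their name.
        if "vision" in name:
            param.requires_grad = trainable

# Stage 1: frozen vision encoder; only the language side learns the JSON structure.
set_vision_encoder_trainable(model, trainable=False)

# Stage 2 (this model): everything is trainable, including the vision encoder.
set_vision_encoder_trainable(model, trainable=True)

# Sanity check: count trainable vs. total parameters after toggling.
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
total_params = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable_params:,} / {total_params:,}")
```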
Framework versions
- TRL: 0.27.2
- Transformers: 5.0.0
- Pytorch: 2.9.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.2
Citations
Cite TRL as:
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}