# Qwen3-VL-4B Tool Calling Fine-tune
A LoRA adapter fine-tuned on top of `unsloth/qwen3-vl-4b-instruct-bnb-4bit` for tool-calling / function-calling tasks. The model emits structured `<tool_call>` / `<tool_response>` XML tags and can handle both text and image inputs.
## Model Details
| Property | Value |
|---|---|
| Base Model | unsloth/qwen3-vl-4b-instruct-bnb-4bit |
| Model Type | Vision-Language (Qwen3-VL), Causal LM |
| Fine-tune Method | LoRA (PEFT) |
| LoRA Rank (r) | 16 |
| LoRA Alpha | 16 |
| LoRA Dropout | 0 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Adapter Size | ~143 MB |
| PEFT Version | 0.18.1 |
| License | Apache 2.0 |
| Developed by | Mustafaege |
## Intended Use

This model is designed for agentic and tool-use pipelines where an LLM needs to:
- Select and invoke tools/functions based on user queries
- Parse tool responses and continue reasoning
- Handle multimodal inputs (text + images) alongside tool use
### Direct Use

Load the adapter and use it for tool-calling inference in any framework that supports PEFT/LoRA.
### Out-of-Scope Use
- General-purpose chat without tool schemas (use the base instruct model instead)
- Complex reasoning tasks that realistically require models larger than 4B parameters
## How to Get Started

### Installation

```bash
pip install transformers peft unsloth torch
```
### Basic Inference

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "unsloth/qwen3-vl-4b-instruct-bnb-4bit"
adapter_id = "Mustafaege/Qwen3-VL-4B-tool-calling-ft"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```
### Tool Calling Example

```python
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
]

messages = [
    {"role": "user", "content": "What's the weather like in Istanbul?"}
]

text = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
# Expected:
# <tool_call>
# {"name": "get_weather", "arguments": {"location": "Istanbul", "unit": "celsius"}}
# </tool_call>
```
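Once the model has responded, the `<tool_call>` payload still has to be pulled out of the raw text before it can be dispatched. A minimal stdlib-only sketch, assuming the tag format shown above; `extract_tool_calls` is a hypothetical helper, not part of this repository:

```python
import json
import re

def extract_tool_calls(text: str) -> list[dict]:
    """Return each JSON payload found inside <tool_call> ... </tool_call> tags."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(m) for m in pattern.findall(text)]

# Example output in the format this adapter was trained on
raw = (
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"location": "Istanbul", "unit": "celsius"}}\n'
    "</tool_call>"
)
calls = extract_tool_calls(raw)
print(calls[0]["name"])                   # get_weather
print(calls[0]["arguments"]["location"])  # Istanbul
```

A non-greedy regex with `re.DOTALL` keeps the parser tolerant of newlines inside the JSON body and of multiple calls in one response.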
### Using with Unsloth (Faster Inference)

```python
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    model_name="Mustafaege/Qwen3-VL-4B-tool-calling-ft",
    load_in_4bit=True,
)
FastVisionModel.for_inference(model)
```
## Tool Call Format

The model uses XML-style tags for tool interactions.

**Tool invocation (model output):**

```xml
<tool_call>
{"name": "function_name", "arguments": {"param1": "value1"}}
</tool_call>
```

**Tool response (user message):**

```xml
<tool_response>
{"result": "..."}
</tool_response>
```
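To complete a turn, the tool's result is wrapped in `<tool_response>` tags and sent back to the model as a user message before generating again. A sketch of that round-trip, where `run_get_weather` is a hypothetical stub standing in for a real weather API:

```python
import json

def run_get_weather(location: str, unit: str = "celsius") -> dict:
    # Stub: a real implementation would call a weather service here.
    return {"location": location, "temperature": 21, "unit": unit}

# A call parsed from the model's <tool_call> output
call = {"name": "get_weather", "arguments": {"location": "Istanbul"}}
result = run_get_weather(**call["arguments"])

# Wrap the result in <tool_response> tags as a user turn, append it to
# the conversation, and generate again for the model's final answer.
tool_msg = {
    "role": "user",
    "content": f"<tool_response>\n{json.dumps(result)}\n</tool_response>",
}
print(tool_msg["content"])
```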
## Training Details

### Training Data

Fine-tuned on a custom tool-calling dataset covering diverse function schemas and multi-turn tool-use conversations.

### Training Procedure

- Framework: Unsloth + PEFT
- Method: LoRA (Low-Rank Adaptation)
- Base quantization: 4-bit (BitsAndBytes NF4)
- Training regime: bf16 mixed precision
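For reference, the adapter configuration from the Model Details table can be reconstructed as a PEFT `LoraConfig`. This is a sketch from the published hyperparameters; the `task_type` value is an assumption for a causal LM fine-tune, not stated in this card:

```python
from peft import LoraConfig

# Values taken from the Model Details table above
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",  # assumed, not confirmed by the card
)
```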
## Limitations & Bias
- Performance depends heavily on the quality and coverage of the tool schemas provided
- May hallucinate function calls for tools not present in the schema
- Vision capabilities are inherited from the base model; visual grounding for tool calling is experimental
- Limited to the tool-calling patterns seen in training data
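A practical mitigation for the hallucinated-call failure mode above is to validate every generated call against the schemas that were actually provided before executing anything. A minimal sketch; `validate_call` and `KNOWN_TOOLS` are illustrative names, not part of this model:

```python
# Map of tool name -> required parameters, mirroring the get_weather
# schema from the example above.
KNOWN_TOOLS = {"get_weather": {"required": ["location"]}}

def validate_call(call: dict) -> bool:
    """Accept a call only if its name is in the schema and required args are present."""
    spec = KNOWN_TOOLS.get(call.get("name"))
    if spec is None:
        return False  # tool not in schema: likely hallucinated
    args = call.get("arguments", {})
    return all(param in args for param in spec["required"])

print(validate_call({"name": "get_weather", "arguments": {"location": "Istanbul"}}))  # True
print(validate_call({"name": "send_email", "arguments": {}}))                         # False
```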
## Citation

If you use this model, please cite the base model:

```bibtex
@misc{qwen3vl,
  title={Qwen3-VL Technical Report},
  author={Qwen Team},
  year={2025},
  publisher={Alibaba Cloud}
}
```
## Framework Versions
- PEFT 0.18.1
- Transformers ≥ 4.51.0