Distil text2sql
A fine-tuned Qwen3-4B model for converting natural language questions into SQL queries.
4-bit quantized GGUF version of distil-qwen3-4b-text2sql for efficient local inference. At only ~2.5 GB, it runs on most laptops and edge devices.
| Metric | DeepSeek-V3 (Teacher) | Qwen3-4B (Base) | This Model |
|---|---|---|---|
| LLM-as-a-Judge | 80% | 62% | 80% |
| Exact Match | 48% | 16% | 60% |
| ROUGE | 87.6% | 84.2% | 89.5% |
```bash
git lfs install
git clone https://huggingface.co/distil-labs/distil-qwen3-4b-text2sql-gguf-4bit
cd distil-qwen3-4b-text2sql-gguf-4bit

# Create the Ollama model (Modelfile is included)
ollama create distil-qwen3-4b-text2sql -f Modelfile

# Run the model
ollama run distil-qwen3-4b-text2sql
```

Example session:

```
>>> Schema:
... CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, department TEXT, salary INTEGER);
...
... Question: How many employees earn more than 50000?
SELECT COUNT(*) FROM employees WHERE salary > 50000;
```
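The model expects its input in the layout shown above: a `Schema:` block, a blank line, then a `Question:` line. If you are driving it programmatically, that format can be built with a small helper (a sketch; the `build_prompt` name is ours, not part of the model card):

```python
def build_prompt(schema: str, question: str) -> str:
    """Format a schema and question in the layout the model expects."""
    return f"Schema:\n{schema}\n\nQuestion: {question}"

schema = (
    "CREATE TABLE employees (id INTEGER PRIMARY KEY, "
    "name TEXT, department TEXT, salary INTEGER);"
)
prompt = build_prompt(schema, "How many employees earn more than 50000?")
print(prompt)
```

The same f-string appears in the Python example below; keeping it in one helper avoids the prompt format drifting between call sites.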
```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="EMPTY")

schema = """CREATE TABLE employees (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    department TEXT,
    salary INTEGER
);"""

question = "How many employees earn more than 50000?"

response = client.chat.completions.create(
    model="distil-qwen3-4b-text2sql",
    messages=[
        {
            "role": "system",
            "content": """You are given a database schema and a natural language question. Generate the SQL query that answers the question.

Rules:
- Use only tables and columns from the provided schema
- Use uppercase SQL keywords (SELECT, FROM, WHERE, etc.)
- Use SQLite-compatible syntax
- Output only the SQL query, no explanations""",
        },
        {
            "role": "user",
            "content": f"Schema:\n{schema}\n\nQuestion: {question}",
        },
    ],
    temperature=0,
)

print(response.choices[0].message.content)
# Output: SELECT COUNT(*) FROM employees WHERE salary > 50000;
```
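Because the model emits a bare SQL string, it is straightforward to sanity-check a query before trusting it, for example by running it against an in-memory SQLite database built from the same schema. A minimal sketch (the generated query and the sample rows below are hard-coded for illustration; in practice the query would come from `response.choices[0].message.content`):

```python
import sqlite3

schema = """CREATE TABLE employees (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    department TEXT,
    salary INTEGER
);"""

# Stand-in for the model's output
generated_sql = "SELECT COUNT(*) FROM employees WHERE salary > 50000;"

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
conn.executemany(
    "INSERT INTO employees (name, department, salary) VALUES (?, ?, ?)",
    [("Ada", "Eng", 95000), ("Bob", "Sales", 48000), ("Eve", "Eng", 72000)],
)
count = conn.execute(generated_sql).fetchone()[0]
print(count)  # 2 of the 3 sample rows have salary > 50000
conn.close()
```

An execution check like this catches references to nonexistent tables or columns and SQLite syntax errors, which is useful since exact-match accuracy is 60%, not 100%.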
| Property | Value |
|---|---|
| Format | GGUF (Q4_K_M) |
| Size | ~2.5 GB |
| Base Model | distil-labs/distil-qwen3-4b-text2sql |
| Parameters | 4 billion |
| Quantization | 4-bit |
| Model | Format | Size | Use Case |
|---|---|---|---|
| distil-qwen3-4b-text2sql | Safetensors | ~8 GB | Transformers, vLLM |
| distil-qwen3-4b-text2sql-gguf | GGUF (F16) | ~15 GB | Full precision GGUF |
| This model | GGUF (Q4_K_M) | ~2.5 GB | Recommended for local use |
This model is released under the Apache 2.0 license.
Base model: Qwen/Qwen3-4B-Base