Qwen2.5-Coder-3B-Vitest (GGUF)

A domain-specialized version of Qwen2.5-Coder-3B, fine-tuned to generate Vitest + React Testing Library (RTL) unit tests for React components.

The model focuses on behavior-faithful testing, avoiding hallucinated UI/state and preferring modern RTL best practices.

πŸ” What this model does

Generates Vitest tests using React Testing Library.

Prefers:

  • getByRole, getByLabelText, getByPlaceholderText
  • userEvent for interactions
  • vi.fn() for callback assertions

Avoids:

  • Hallucinated UI or state
  • Testing behavior not present in the component
  • Legacy fireEvent

It aims to produce minimal, behavior-faithful tests in the style shown below.
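
For example, given a hypothetical Counter component that renders an "Increment" button and accepts an onIncrement callback (component and prop names here are illustrative, not from this repo), the target output looks roughly like this sketch:

// Counter.test.tsx (illustrative sketch of the target style, not actual model output)
import { describe, it, expect, vi } from 'vitest'
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Counter } from './Counter'

describe('Counter', () => {
  it('calls onIncrement when the Increment button is clicked', async () => {
    const user = userEvent.setup()
    const onIncrement = vi.fn()
    render(<Counter onIncrement={onIncrement} />)

    // Role-based query and userEvent interaction, as preferred above
    await user.click(screen.getByRole('button', { name: /increment/i }))

    expect(onIncrement).toHaveBeenCalledTimes(1)
  })
})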

🧠 Training details

Detail                 Value
Base model             Qwen2.5-Coder-3B
Method                 LoRA fine-tuning using MLX (Apple Silicon)
LoRA rank              16
Trainable parameters   6.6M (0.216% of total)
Dataset                ~800 curated React component ↔ Vitest test pairs
Sequence length        2048 tokens
Training               3 epochs (450 steps)
Hardware               Apple Silicon (Mac mini M4 Pro, 64 GB RAM)

The dataset was curated and scored to:

  • Prefer robust RTL queries
  • Penalize async misuse, fireEvent usage, and focused/skipped tests
  • Enforce correct callback testing for callback-only components
  • Encourage clean, production-style tests
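
As a rough illustration of these criteria only (the actual curation and scoring pipeline is not published in this repo), a heuristic scorer over a candidate test file might look like this:

// score-test.ts: illustrative heuristic, not the real scoring pipeline
export function scoreGeneratedTest(source: string): number {
  let score = 0

  // Reward robust RTL queries
  if (/getByRole\(/.test(source)) score += 2
  if (/getByLabelText\(|getByPlaceholderText\(/.test(source)) score += 1

  // Reward userEvent interactions and vi.fn()-based callback assertions
  if (/userEvent\./.test(source)) score += 1
  if (/vi\.fn\(\)/.test(source) && /toHaveBeenCalled/.test(source)) score += 1

  // Penalize legacy fireEvent and focused/skipped tests
  if (/fireEvent\./.test(source)) score -= 2
  if (/\b(it|describe|test)\.(only|skip)\(/.test(source)) score -= 2

  // Penalize a common async misuse: awaiting synchronous getBy* queries
  if (/await\s+screen\.getBy/.test(source)) score -= 1

  return score
}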

📦 Files in this repo

  • qwen2.5-coder-3b-vitest.Q4_K_M.gguf – Quantized GGUF build (recommended for Ollama / llama.cpp)

🚀 Usage with Ollama

Create a Modelfile:

FROM qwen2.5-coder-3b-vitest.Q4_K_M.gguf
TEMPLATE """
You are an expert frontend test engineer.
{{ .Prompt }}
"""
PARAMETER temperature 0.2
PARAMETER top_p 0.9
PARAMETER num_ctx 4096

Then run:

ollama create vitest-coder -f Modelfile
ollama run vitest-coder "Write Vitest tests for this React component: ..."
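
You can also call the model programmatically through Ollama's local REST API. A minimal sketch, assuming the vitest-coder model created above and Ollama running on its default port (the component source here is a placeholder):

// generate-tests.ts: minimal sketch using Ollama's /api/generate endpoint
const componentSource = `
export function Greeting({ name }: { name: string }) {
  return <h1>Hello, {name}!</h1>
}
`

const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'vitest-coder',
    prompt: `Write Vitest tests for this React component:\n${componentSource}`,
    stream: false,
  }),
})

const { response } = await res.json()
console.log(response) // the generated test file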

⚠️ Limitations

  • May overuse async/await in simple sync cases
  • May generate redundant tests for trivial components
  • Best results when provided with clear component context
  • For large repos, using RAG (retrieving similar tests/components and adding them to the prompt) improves quality; see the sketch below
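
A minimal sketch of that retrieval step, using naive keyword overlap as a stand-in for a real embedding index (all names here are illustrative):

// retrieve-context.ts: naive keyword-overlap retrieval; a real setup would use embeddings
interface TestExample {
  componentPath: string
  testSource: string
}

function tokenize(code: string): Set<string> {
  return new Set(code.toLowerCase().match(/[a-z_$][a-z0-9_$]*/g) ?? [])
}

function overlap(a: Set<string>, b: Set<string>): number {
  let shared = 0
  for (const token of a) if (b.has(token)) shared++
  return shared / Math.max(1, Math.min(a.size, b.size))
}

export function buildPrompt(component: string, corpus: TestExample[], k = 2): string {
  const query = tokenize(component)
  const nearest = [...corpus]
    .sort((x, y) => overlap(query, tokenize(y.testSource)) - overlap(query, tokenize(x.testSource)))
    .slice(0, k)

  const examples = nearest
    .map((e) => `// Similar existing test (${e.componentPath}):\n${e.testSource}`)
    .join('\n\n')

  return `${examples}\n\nWrite Vitest tests for this React component:\n${component}`
}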

📜 License

Same license as the base model: Qwen2.5-Coder-3B.
