Moxin x llama.cpp Customized Quant for Qwen3.5-27B

We sincerely thank the open-source community, and in particular the developers and contributors at unsloth, for providing the BF16 version and the imatrix file.

We appreciate the community's interest, and we are happy to share additional quantization variants for everyone to try out and experiment with. We hope you enjoy them!
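For readers who want to try the quants, here is a minimal sketch of downloading and running one with llama.cpp. The `Q4_K_M` variant and its filename are assumptions for illustration; substitute whichever file actually appears in the repository's file list.

```shell
# Download a single quant from the repo (Q4_K_M is an assumed example;
# pick any variant listed under "Files and versions").
huggingface-cli download moxin-org/Qwen3.5-27B-GGUF \
    --include "*Q4_K_M*" --local-dir ./Qwen3.5-27B-GGUF

# Run a short generation with llama.cpp's CLI
# (filename is hypothetical; match it to the downloaded file).
llama-cli -m ./Qwen3.5-27B-GGUF/Qwen3.5-27B-Q4_K_M.gguf \
    -p "Hello" -n 128
```

Smaller quants (e.g. Q4 family) trade some accuracy for a much lower memory footprint, so pick the largest variant that fits your hardware.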


Model tree for moxin-org/Qwen3.5-27B-GGUF

Base model: Qwen/Qwen3.5-27B (this model is one of 94 quantized versions)
