EXL3 4.0bpw H6 quant (quantized with exllamav3 0.0.12)

Original: sam-paech/Mistral-Small-3_2-24B-Instruct-2506-antislop


A fine-tune of Mistral-Small-3_2-24B-Instruct-2506 using the antislop method described in this paper: https://arxiv.org/abs/2510.15061

The pipeline identifies the model's unique slop (words and phrases over-represented compared to human writing), generates a preference training set, and trains out the slop with the FTPO training algorithm.

https://github.com/sam-paech/auto-antislop
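
For intuition, here is a toy sketch of the identification step: rank n-grams by how over-represented they are in a model-generated corpus relative to a human baseline. This is illustrative only, using a simple frequency ratio with add-one smoothing; the function names and thresholds are made up here, and the actual auto-antislop pipeline linked above is considerably more careful about tokenization and statistics.

```python
from collections import Counter
import re

def ngram_freqs(text, n):
    # Crude lowercase word tokenization; the real pipeline is more careful.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(zip(*(words[i:] for i in range(n))))

def overrepresented(model_text, human_text, n=2, min_count=5, top_k=20):
    # Score each n-gram by (model frequency) / (smoothed human frequency).
    # High scores = candidate "slop" phrases the model overuses.
    m, h = ngram_freqs(model_text, n), ngram_freqs(human_text, n)
    m_total, h_total = sum(m.values()) or 1, sum(h.values()) or 1
    scores = {
        g: (c / m_total) / ((h[g] + 1) / (h_total + 1))
        for g, c in m.items() if c >= min_count
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
```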

This process makes the most common slop words & phrases much less frequent, with minimal impact on the model's other capabilities.

It won't remove slop entirely. The technique only targets over-represented words & phrases, not stylistic or thematic slop.

This model should serve as a good base for further fine-tuning.
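
To run this quant locally, something like the sketch below should work. It follows exllamav3's example scripts, but the exact class and method names (Config.from_directory, Model.from_config, Generator.generate) are assumptions that may differ across exllamav3 versions; check the exllamav3 README for the current API.

```python
# Sketch of loading this EXL3 quant with exllamav3.
# API names follow exllamav3's examples and may vary by version (assumption).
from exllamav3 import Config, Model, Cache, Tokenizer, Generator

config = Config.from_directory("path/to/local/copy/of/this/repo")
model = Model.from_config(config)
cache = Cache(model, max_num_tokens=8192)  # adjust to your VRAM budget
model.load()
tokenizer = Tokenizer.from_config(config)

generator = Generator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a short scene:", max_new_tokens=200))
```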

