Enthusiast Models
Models for 16GB+ VRAM
A writing & roleplay finetune of Qwen3.5 27B. The primary emphasis is on writing quality, since strong writing generalizes well across both domains. The model is trained from ConicCat/Qwen3.5-Antirep-27B to mitigate repetition issues.
The basic idea is a curriculum learning setup that compensates for the lack of high-quality roleplay data: first train on lower-quality roleplay data, then on higher-quality writing data. Starting from ConicCat/Qwen3.5-Antirep-27B, the model was trained for three epochs on a roughly equal mixture of instruct, roleplay, and writing data, then for eleven epochs on a smaller dataset of short-story anthologies by critically acclaimed authors.
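As a rough illustration of that recipe, here is a minimal two-stage sketch using Hugging Face `transformers`. The dataset filenames, sequence length, learning rates, and batch settings are assumptions made for the example; only the base checkpoint, the stage ordering, and the 3- and 11-epoch counts come from the description above. A real run at 27B scale would also need multi-GPU sharding or parameter-efficient finetuning, omitted here for brevity.

```python
# Minimal two-stage curriculum sketch. Dataset files and hyperparameters
# are illustrative assumptions; only the base checkpoint, the stage
# ordering, and the 3 / 11 epoch counts come from the model card.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "ConicCat/Qwen3.5-Antirep-27B"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=8192)

# Causal-LM collator: copies input_ids into labels for next-token loss.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

def run_stage(data_file, epochs, lr, out_dir):
    dataset = load_dataset("json", data_files=data_file)["train"]
    dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir=out_dir,
            num_train_epochs=epochs,
            per_device_train_batch_size=1,
            gradient_accumulation_steps=16,
            learning_rate=lr,
            bf16=True,
        ),
        train_dataset=dataset,
        data_collator=collator,
    )
    trainer.train()

# Stage 1: roughly equal instruct / roleplay / writing mix, three epochs.
run_stage("mixed_instruct_rp_writing.jsonl", epochs=3, lr=1e-5, out_dir="stage1")
# Stage 2: smaller short-story anthology set, eleven epochs at a lower LR
# (the lower LR for the long second stage is an assumption, not from the card).
run_stage("short_story_anthologies.jsonl", epochs=11, lr=5e-6, out_dir="stage2")
```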
Recommended settings:

- **Prefill:** `<think>\n\n</think>` or `{{char}}:`. Only non-thinking was trained, but thinking probably still works.
- **Samplers:** temperature 0.7, top-p 0.95, and a repetition penalty of 1.05 (or a moderate DRY setting) should suffice.
- **Context:** ~100k context on 24GB VRAM; 20-24k context with the Vulkan backend, although it's pretty tight and may require some fiddling with open programs etc.
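For reference, here is a minimal sketch of these settings applied through `llama-cpp-python`, one common way to run GGUF quants on the CUDA or Vulkan backends. The GGUF filename, the prompt, and the exact context size are assumptions; the empty think-block prefill, sampler values, and ChatML-style template follow the recommendations above.

```python
# Sketch only: applying the recommended samplers with llama-cpp-python.
# The model path and prompt are hypothetical placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-27b-writing-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=24576,      # ~24k context, per the Vulkan-backend guidance above
    n_gpu_layers=-1,  # offload all layers to the GPU
)

# Raw completion so the assistant turn can be prefilled with an empty
# think block, as recommended for non-thinking use.
prompt = (
    "<|im_start|>user\nWrite a short scene set in a lighthouse.<|im_end|>\n"
    "<|im_start|>assistant\n<think>\n\n</think>\n"
)

out = llm(
    prompt,
    max_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repeat_penalty=1.05,  # a moderate DRY setting can replace this where supported
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```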