
EXL3 6.66 bpw H8 quant, tested with 24k context and a Q8-quantized KV cache within 24 GB of VRAM.
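As a rough illustration, a TabbyAPI-style config for that setup might look like the sketch below. The field names follow TabbyAPI's sample config; the model_dir layout and exact values are assumptions, and EXL3/cache-mode support depends on your backend version.

model:
  model_dir: models          # directory containing the downloaded quant (assumed layout)
  model_name: knifeayumu_Cydonia-v4.1-MS3.2-Magnum-Diamond-24B_6.66bpw_H8_EXL3
  max_seq_len: 24576         # the 24k context tested above
  cache_mode: Q8             # quantized KV cache, to fit within 24 GB of VRAM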

Original: https://huggingface.co/knifeayumu/Cydonia-v4.1-MS3.2-Magnum-Diamond-24B

Foxgirl on Cydonia

Cydonia-v4.1-MS3.2-Magnum-Diamond-24B

Recipe based on knifeayumu/Cydonia-v1.2-Magnum-v4-22B. Just an update for those who are interested.

Image to Video Generation Info

Wan-AI/Wan2.2-I2V-A14B was used to turn this image from the previous merge (slightly cropped) into an animation, using lightx2v/Wan2.2-Lightning for faster generation and pollockjj/ComfyUI-MultiGPU nodes.

ComfyUI workflow


Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the SLERP merge method.
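For reference, SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which preserves the magnitude of the interpolated weights better than plain averaging. The standard formulation (implementations, including mergekit's, typically fall back to linear interpolation when the two tensors are nearly parallel) is:

$$\mathrm{slerp}(t;\,\theta_0,\theta_1) \;=\; \frac{\sin\!\big((1-t)\,\Omega\big)}{\sin\Omega}\,\theta_0 \;+\; \frac{\sin(t\,\Omega)}{\sin\Omega}\,\theta_1, \qquad \Omega=\arccos\!\left(\frac{\theta_0\cdot\theta_1}{\lVert\theta_0\rVert\,\lVert\theta_1\rVert}\right)$$

with t = 0 returning the base model's tensor unchanged and t = 1 returning the other model's.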

Models Merged

The following models were included in the merge:

  • TheDrummer/Cydonia-24B-v4.1
  • Doctor-Shotgun/MS3.2-24B-Magnum-Diamond

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: TheDrummer/Cydonia-24B-v4.1
  - model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
merge_method: slerp
base_model: TheDrummer/Cydonia-24B-v4.1
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
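Here t is the interpolation factor toward Doctor-Shotgun/MS3.2-24B-Magnum-Diamond, with TheDrummer/Cydonia-24B-v4.1 as the base. Given as a list, mergekit treats t as a gradient across the layer stack, so (assuming the usual gradient semantics) the first and last layers stay closest to Cydonia (t = 0.1) while the middle layers lean furthest toward Magnum Diamond (t = 0.6).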
