A Q4_K_M-Mixed GGUF version of MiniMaxAI/MiniMax-M2.5, generated with intel/auto-round, where the embedding and lm-head layers are kept at 8-bit precision.
Script for reproducing this model:
```shell
pip install transformers==4.56.0 torch==2.9.1 auto_round==0.9.4
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "MiniMaxAI/MiniMax-M2.5"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="cpu", trust_remote_code=True, dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Keep the embedding and lm-head layers at 8-bit precision;
# all other layers use the default 4-bit (Q4_K_M) scheme.
layer_config = {}
for n, m in model.named_modules():
    if n == "lm_head" or isinstance(m, torch.nn.Embedding):
        layer_config[n] = {"bits": 8}

autoround = AutoRound(
    model, tokenizer, iters=0, layer_config=layer_config,
    nsamples=512, disable_opt_rtn=False
)
autoround.quantize_and_save("/models/tmp_autoround", format="gguf:q4_k_m")
```
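The `layer_config` loop above marks every embedding module and the `lm_head` projection for 8-bit quantization, while all remaining layers fall back to the default Q4_K_M precision. A minimal sketch of that selection logic on a toy model (the toy module names are illustrative, not MiniMax's actual layer names):

```python
import torch

class ToyLM(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embed_tokens = torch.nn.Embedding(100, 16)  # token embedding
        self.layer = torch.nn.Linear(16, 16)             # an ordinary layer
        self.lm_head = torch.nn.Linear(16, 100)          # output projection

model = ToyLM()
layer_config = {}
for n, m in model.named_modules():
    # Select the output head by name and every embedding by type;
    # everything else is left to the default quantization scheme.
    if n == "lm_head" or isinstance(m, torch.nn.Embedding):
        layer_config[n] = {"bits": 8}

print(sorted(layer_config))  # ['embed_tokens', 'lm_head']
```

Only these two entries end up in `layer_config`; `layer` is absent, so AutoRound quantizes it at the default 4-bit precision.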
Model tree: Felladrin/gguf-Q4_K_M-Mixed-AutoRound-MiniMax-M2.5, derived from the base model MiniMaxAI/MiniMax-M2.5.