GPT-OSS-120B
GGUF conversion of unsloth/gpt-oss-120b
Unsloth's configs were selected over OpenAI's in order to incorporate their chat-template fixes.
This is essentially Unsloth's F16 quant, except the weights are stored in BF16, their native precision, instead of F16.
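Why BF16 rather than F16 for natively-BF16 weights: BF16 keeps FP32's 8-bit exponent (just a shorter mantissa), while F16 has only a 5-bit exponent with a maximum finite value of about 65504, so converting BF16 weights to F16 risks range overflow as well as rounding. A minimal pure-Python sketch of the difference (illustrative only, not part of the conversion tooling):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate a float32 to BF16: keep the top 16 bits of the encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    """Expand BF16 bits back to float32 by zero-filling the low mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# A large value is fine in BF16 (same exponent range as FP32)...
x = 1.0e30
roundtrip = bf16_bits_to_f32(f32_to_bf16_bits(x))
print(abs(roundtrip - x) / x < 0.01)   # relative error bounded by the 8-bit mantissa

# ...but cannot be represented in F16 at all ("<e" is IEEE-754 half precision).
try:
    struct.pack("<e", x)
except (OverflowError, struct.error):
    print("F16 overflows: max finite F16 value is ~65504")
```

Since gpt-oss-120b's weights are BF16 natively, storing them as BF16 in the GGUF is lossless, whereas an F16 copy could clip any value outside F16's range.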
Model tree for Valeciela/gpt-oss-120b-BF16-GGUF
- Base model: openai/gpt-oss-120b