Active filters: quantllm

- codewithdark/Llama-3.2-3B-4bit · 3B · Updated · 10
- codewithdark/Llama-3.2-3B-GGUF-4bit · 3B · Updated · 2
- codewithdark/Llama-3.2-3B-4bit-mlx · Text Generation · 3B · Updated · 43
- QuantLLM/Llama-3.2-3B-4bit-mlx · Text Generation · 3B · Updated · 15
- QuantLLM/Llama-3.2-3B-2bit-mlx · Text Generation · 3B · Updated · 12
- QuantLLM/Llama-3.2-3B-8bit-mlx · Text Generation · 3B · Updated · 39
- QuantLLM/Llama-3.2-3B-5bit-mlx · Text Generation · 3B · Updated · 47
- QuantLLM/Llama-3.2-3B-5bit-gguf · 3B · Updated · 3
- QuantLLM/Llama-3.2-3B-2bit-gguf · 3B · Updated · 5
- QuantLLM/functiongemma-270m-it-8bit-gguf · 0.3B · Updated · 11 · 1
- QuantLLM/functiongemma-270m-it-4bit-gguf · 0.3B · Updated · 14
- QuantLLM/functiongemma-270m-it-4bit-mlx · Text Generation · 0.3B · Updated · 33