---
license: gemma
datasets:
  - NbAiLab/aurora-sft-2512-filtered
language:
  - 'no'
  - nb
  - nn
base_model: NbAiLab/borealis-270m-instruct-preview
pipeline_tag: text-generation
library_name: transformers
tags:
  - conversational
  - instruct
  - experimental
  - llama-cpp
  - gguf-my-repo
---

# borealis-270m-instruct-preview

**Model creator:** NbAiLab
**Original model:** NbAiLab/borealis-270m-instruct-preview
**GGUF quantization:** provided by versae using llama.cpp

## Available Quantizations

- Q4_K_M
- Q8_0
- BF16

## Special thanks

🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

## Usage Examples

### Q4_K_M

**Ollama:**

```shell
ollama run "hf.co/NbAiLab/borealis-270m-instruct-preview-gguf:Q4_K_M"
```

**LM Studio:**

```shell
lms load "NbAiLab/borealis-270m-instruct-preview-gguf/borealis-270m-instruct-preview-Q4_K_M.gguf"
```

**llama.cpp CLI:**

```shell
llama-cli --hf "NbAiLab/borealis-270m-instruct-preview-gguf:Q4_K_M" -p "The meaning to life and the universe is"
```

**llama.cpp Server:**

```shell
llama-server --hf "NbAiLab/borealis-270m-instruct-preview-gguf:Q4_K_M" -c 4096
```

### Q8_0

**Ollama:**

```shell
ollama run "hf.co/NbAiLab/borealis-270m-instruct-preview-gguf:Q8_0"
```

**LM Studio:**

```shell
lms load "NbAiLab/borealis-270m-instruct-preview-gguf/borealis-270m-instruct-preview-Q8_0.gguf"
```

**llama.cpp CLI:**

```shell
llama-cli --hf "NbAiLab/borealis-270m-instruct-preview-gguf:Q8_0" -p "The meaning to life and the universe is"
```

**llama.cpp Server:**

```shell
llama-server --hf "NbAiLab/borealis-270m-instruct-preview-gguf:Q8_0" -c 4096
```

### BF16

**Ollama:**

```shell
ollama run "hf.co/NbAiLab/borealis-270m-instruct-preview-gguf:BF16"
```

**LM Studio:**

```shell
lms load "NbAiLab/borealis-270m-instruct-preview-gguf/borealis-270m-instruct-preview-BF16.gguf"
```

**llama.cpp CLI:**

```shell
llama-cli --hf "NbAiLab/borealis-270m-instruct-preview-gguf:BF16" -p "The meaning to life and the universe is"
```

**llama.cpp Server:**

```shell
llama-server --hf "NbAiLab/borealis-270m-instruct-preview-gguf:BF16" -c 4096
```
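### Querying a running server

Once `llama-server` is running (any of the quantizations above), it exposes an OpenAI-compatible chat completions endpoint. The sketch below shows one way to call it from Python using only the standard library; the host and port assume llama-server's default of `http://localhost:8080`, and the `max_tokens` value is just an illustrative choice — adjust both to your setup.

```python
import json
from urllib import request

# Assumes llama-server's default listen address; change if you
# started the server with --host/--port.
SERVER_URL = "http://localhost:8080/v1/chat/completions"


def build_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat(prompt: str) -> str:
    """POST the payload to the server and return the reply text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = request.Request(
        SERVER_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]


# Example (requires a running llama-server):
# print(chat("Hva er hovedstaden i Norge?"))
```

Since the model is instruction-tuned for Norwegian (`nb`, `nn`), Norwegian prompts are the natural test case.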