---
base_model: tiiuae/Falcon-H1-1.5B-Base
language:
- ar
- cs
- de
- en
- es
- fr
- hi
- it
- ja
- ko
- nl
- pl
- pt
- ro
- ru
- sv
- ur
- zh
library_name: transformers
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
- falcon-h1
inference: true
pipeline_tag: text-generation
paper: tiiuae/falcon-h1
---
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/falcon-h1-logo.png" alt="drawing" width="800"/>
**Falcon-H1** is a new series of large language models (LLMs) featuring hybrid architecture designs optimized for both high performance and efficiency across diverse use cases. Unlike earlier Falcon models built solely on Transformer or Mamba architectures, Falcon-H1 adopts a parallel hybrid approach that combines Transformer-based attention with State Space Models (SSMs), known for superior long-context memory and computational efficiency. These models excel across reasoning, mathematics, multilingual tasks, instruction following, and scientific knowledge.
* **Paper:** [Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance](https://huggingface.co/papers/2507.22448)
* **GitHub Repository:** [https://github.com/tiiuae/Falcon-H1](https://github.com/tiiuae/Falcon-H1)
* **Project Documentation:** [https://tiiuae.github.io/Falcon-H1/](https://tiiuae.github.io/Falcon-H1/)
# Table of Contents
0. [TL;DR](#tldr)
1. [Model Details](#model-details)
2. [Training Details](#training-details)
3. [Usage](#usage)
4. [Evaluation](#evaluation)
5. [Citation](#citation)
# TL;DR
Falcon-H1 is the latest evolution in the Falcon family of large language models, built upon an advanced hybrid architecture that integrates both State Space Models (SSMs) and Attention Mechanisms in each block. These models span from 500 million to 34 billion parameters, offering high performance and efficiency. They are optimized for diverse use cases, trained with support for 18 core languages (scalable to 100+), and achieve state-of-the-art multilingual and reasoning performances in instruction following, maths, coding, and scientific knowledge tasks.
# Model Details
## Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Architecture:** Hybrid Transformers + Mamba architecture
- **Language(s) (NLP):** English, Multilingual
- **License:** Falcon-LLM License
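For a quick look at how the hybrid attention + SSM design is exposed in `transformers`, you can inspect the model configuration. The snippet below is a minimal sketch: it only assumes the `transformers` library and prints the whole config rather than relying on specific field names.
```python
# Minimal sketch: inspect the Falcon-H1 configuration (attention and Mamba/SSM
# hyperparameters) without downloading the full model weights.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/Falcon-H1-1.5B-Base")
print(config)
```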
# Training Details
For more details about the training protocol of this model, please refer to the [Falcon-H1 technical blogpost](https://falcon-lm.github.io/blog/falcon-h1/) and [Technical Report](https://arxiv.org/abs/2507.22448).
# Usage
Currently, you can run this model with either the Hugging Face `transformers`, `vLLM`, or `llama.cpp` libraries.
## Inference
Make sure to install the latest version of `transformers` or `vllm`; if needed, install these packages from source:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
For vLLM, make sure to install `vllm>=0.9.0`:
```bash
pip install "vllm>=0.9.0"
```
### 🤗 transformers
Refer to the snippet below to run H1 models using 🤗 transformers:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-1.5B-Base"

# Load the tokenizer and the model in bfloat16 with automatic device placement
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Perform text generation
inputs = tokenizer("The hybrid attention-SSM design of Falcon-H1", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### vLLM
For vLLM, simply start a server by executing the command below:
```bash
# pip install "vllm>=0.9.0"
vllm serve tiiuae/Falcon-H1-1.5B-Instruct --tensor-parallel-size 2 --data-parallel-size 1
```
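Once the server is running, you can query it through vLLM's OpenAI-compatible API. The snippet below is a minimal sketch: it assumes vLLM's default address (`http://localhost:8000/v1`) and the `openai` Python client (`pip install openai`); adjust the model name to whatever you passed to `vllm serve`.
```python
# Minimal sketch: query the vLLM server started above through its
# OpenAI-compatible endpoint. Assumes vLLM's default address (localhost:8000)
# and that the `openai` client is installed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tiiuae/Falcon-H1-1.5B-Instruct",  # must match the model passed to `vllm serve`
    messages=[{"role": "user", "content": "Give a one-sentence summary of state space models."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```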
### `llama.cpp`
You can find all GGUF files compatible with `llama.cpp` under [our official collection](https://huggingface.co/collections/tiiuae/falcon-h1-6819f2795bc406da60fab8df). A sample invocation is sketched below.
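The command below is a minimal sketch, assuming you have built `llama.cpp` and downloaded one of the GGUF files from the collection above; the filename and quantization suffix are illustrative placeholders.
```bash
# Illustrative only: replace the GGUF filename with the file you actually downloaded.
llama-cli -m Falcon-H1-1.5B-Base-Q8_0.gguf -p "The hybrid attention-SSM design of Falcon-H1" -n 128
```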
# Evaluation
The Falcon-H1 series performs very well on a wide variety of tasks, including reasoning tasks.
| Tasks | Falcon-H1-1.5B | Qwen3-1.7B | Qwen2.5-1.5B | Gemma3-1B | Llama3.2-1B | Falcon3-1B |
| --- | --- | --- | --- | --- | --- | --- |
| **General** | | | | | | |
| BBH | **46.47** | 35.18 | 42.41 | 35.86 | 33.21 | 34.47 |
| ARC-C | 42.06 | 34.81 | 40.53 | 34.13 | 34.64 | **43.09** |
| TruthfulQA | 45.98 | **49.39** | 47.05 | 42.17 | 42.08 | 42.31 |
| HellaSwag | **63.33** | 49.27 | 62.23 | 42.24 | 55.3 | 58.53 |
| MMLU | **62.03** | 57.04 | 59.76 | 40.87 | 45.93 | 46.1 |
| **Math** | | | | | | |
| GSM8k | **74.98** | 69.83 | 57.47 | 42.38 | 44.28 | 44.05 |
| MATH-500 | **74.0** | 73.0 | 48.4 | 45.4 | 13.2 | 19.8 |
| AMC-23 | 43.59 | **46.09** | 24.06 | 19.22 | 7.19 | 6.87 |
| AIME-24 | 11.25 | **12.5** | 2.29 | 0.42 | 1.46 | 0.41 |
| AIME-25 | **9.58** | 8.12 | 1.25 | 1.25 | 0.0 | 0.21 |
| **Science** | | | | | | |
| GPQA | 26.34 | 27.68 | 26.26 | **28.19** | 26.59 | 26.76 |
| GPQA_Diamond | **35.19** | 33.33 | 25.59 | 21.55 | 25.08 | 31.31 |
| MMLU-Pro | **37.8** | 23.54 | 28.35 | 14.46 | 16.2 | 18.49 |
| MMLU-stem | **64.13** | 54.3 | 54.04 | 35.39 | 39.16 | 39.64 |
| **Code** | | | | | | |
| HumanEval | **68.29** | 67.68 | 56.1 | 40.85 | 34.15 | 22.56 |
| HumanEval+ | **61.59** | 60.96 | 50.61 | 37.2 | 29.88 | 20.73 |
| MBPP | **64.81** | 58.73 | **64.81** | 57.67 | 33.6 | 20.63 |
| MBPP+ | **56.35** | 49.74 | 56.08 | 50.0 | 29.37 | 17.2 |
| LiveCodeBench | **17.61** | 14.87 | 12.52 | 5.09 | 2.35 | 0.78 |
| CRUXEval | **39.57** | 18.88 | 34.76 | 12.7 | 0.06 | 15.58 |
| **Instruction Following** | | | | | | |
| IFEval | **80.66** | 70.77 | 45.33 | 61.48 | 55.34 | 54.26 |
| Alpaca-Eval | **28.18** | 21.89 | 9.54 | 17.87 | 9.38 | 6.98 |
| MTBench | **8.46** | 7.61 | 7.1 | 7.03 | 6.37 | 6.03 |
| LiveBench | 34.13 | **40.73** | 21.65 | 18.79 | 14.97 | 14.1 |
You can find more detailed benchmarks in [our release blogpost](https://falcon-lm.github.io/blog/falcon-h1/).
# Useful links
- View [our release blogpost](https://falcon-lm.github.io/blog/falcon-h1/).
- View [our technical report](https://arxiv.org/abs/2507.22448).
- Feel free to join [our discord server](https://discord.gg/trwMYP9PYm) if you have any questions or to interact with our researchers and developers.
# Citation
If the Falcon-H1 family of models was helpful for your work, feel free to cite us.
```bibtex
@article{falconh1,
title={Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance},
author={Jingwei Zuo and Maksim Velikanov and Ilyas Chahed and Younes Belkada and Dhia Eddine Rhayem and Guillaume Kunsch and Hakim Hacid and Hamza Yous and Brahim Farhat and Ibrahim Khadraoui and Mugariya Farooq and Giulia Campesan and Ruxandra Cojocaru and Yasser Djilali and Shi Hu and Iheb Chaabane and Puneesh Khanna and Mohamed El Amine Seddik and Ngoc Dung Huynh and Phuc Le Khac and Leen AlQadi and Billel Mokeddem and Mohamed Chami and Abdalgader Abubaker and Mikhail Lubinets and Kacper Piskorski and Slim Frikha},
journal = {arXiv preprint arXiv:2507.22448},
year={2025}
}
```