# Irodori-TTS-500M
Irodori-TTS-500M is a Japanese Text-to-Speech model based on a Rectified Flow Diffusion Transformer (RF-DiT) architecture. The architecture and training design largely follow Echo-TTS, using DACVAE continuous latents as the generation target. It supports zero-shot voice cloning from reference audio.
A unique feature of this model is emoji-based style and sound-effect control: by inserting specific emojis into the input text, you can control speaking styles, emotions, and even sound effects in the generated audio.
## Key Features
- Flow Matching TTS: Rectified Flow Diffusion Transformer over continuous DACVAE latents for high-quality Japanese speech synthesis.
- Voice Cloning: Zero-shot voice cloning from a short reference audio clip.
- Emoji-based Style Control: Control speaking styles, emotions, and sound effects by embedding emojis directly in the input text. See EMOJI_ANNOTATIONS.md for the full list of supported emojis and their effects.
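Because the annotations are plain Unicode characters embedded in the input string, they are easy to inspect programmatically. A minimal sketch of such a check — the `SUPPORTED` set below is an illustrative subset chosen for this example, not the official list, which lives in EMOJI_ANNOTATIONS.md:

```python
# Hypothetical helper: list which annotation emojis appear in an input
# string before synthesis. SUPPORTED is an illustrative subset only;
# EMOJI_ANNOTATIONS.md is the authoritative list.
SUPPORTED = ("😭", "🤧", "🥺", "😮‍💨")  # tuple keeps a stable order

def annotations_in(text: str) -> list[str]:
    # Substring search also matches multi-codepoint ZWJ sequences
    # such as 😮‍💨, which per-character iteration would split apart.
    return [e for e in SUPPORTED if e in text]

print(annotations_in("やだ…😭ひどいこと言わないで…😭"))  # → ['😭']
```

A check like this can catch typo'd or unsupported emojis before they silently fail to influence the generated audio.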
## Architecture
The model (approximately 500M parameters) consists of three main components:
- Text Encoder: Token embeddings initialized from llm-jp/llm-jp-3-150m, followed by self-attention + SwiGLU transformer layers with RoPE.
- Reference Latent Encoder: Encodes patched reference audio latents for speaker/style conditioning via self-attention + SwiGLU layers.
- Diffusion Transformer: Joint-attention DiT blocks with Low-Rank AdaLN (timestep-conditioned adaptive layer normalization), half-RoPE, and SwiGLU MLPs.
Audio is represented as continuous latent sequences via the DACVAE codec (128-dim), enabling high-quality 48kHz waveform reconstruction.
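As a rough illustration of rectified-flow generation over continuous latents, the sketch below Euler-integrates a velocity field from Gaussian noise (t = 0) to data latents (t = 1). This is a self-contained toy, not the repository's inference code: the closed-form `velocity` stands in for the text- and reference-conditioned DiT, and the sequence length and step count are arbitrary assumptions (only the 128-dim latent size comes from the card).

```python
import numpy as np

# Toy rectified-flow sampler over continuous latents.
# LATENT_DIM matches the 128-dim DACVAE latents; SEQ_LEN and STEPS are
# illustrative. The "model" here is a closed-form stand-in.
LATENT_DIM, SEQ_LEN, STEPS = 128, 16, 8

rng = np.random.default_rng(0)
target = rng.standard_normal((SEQ_LEN, LATENT_DIM))  # pretend clean latents

def velocity(x, t):
    # For a rectified flow ending at `target`, the marginal velocity is
    # (target - x) / (1 - t); the real model predicts this with a DiT
    # conditioned on the encoded text and reference latents.
    return (target - x) / (1.0 - t)

x = rng.standard_normal((SEQ_LEN, LATENT_DIM))  # t = 0: pure noise
dt = 1.0 / STEPS
for k in range(STEPS):
    t = k * dt
    x = x + dt * velocity(x, t)  # Euler step from noise toward data

print(float(np.abs(x - target).max()))  # → 0.0 (x lands exactly on target)
```

With the real model, `velocity` is a learned network and `target` is of course unknown at inference time; only the integration loop structure carries over. The resulting latents would then be decoded to a 48kHz waveform by the DACVAE decoder.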
## Audio Samples
### 1. Standard TTS
Basic Japanese text-to-speech generation (without reference audio).
| Case | Text | Generated Audio |
|---|---|---|
| Sample 1 | "お電話ありがとうございます。ただいま電話が大変混み合っております。恐れ入りますが、発信音のあとにご用件をお話しください。" | |
| Sample 2 | "この森には、古い言い伝えがあります。月が最も高く昇る夜、静かに耳を澄ませば、風の歌声が聞こえるというのです。私は半信半疑でしたが、その夜、確かに誰かが私を呼ぶ声を聞いたのです。" | |
### 2. Emoji Annotation Control
Examples of controlling speaking style and effects with emojis. For the full list of supported emojis, see EMOJI_ANNOTATIONS.md.
| Case | Text (with Emoji) | Generated Audio |
|---|---|---|
| Sample 1 | なーに、どうしたの？…え？もっと近づいてほしい？…😮‍💨😮‍💨さわるのが好きなんだ？ | |
| Sample 2 | やだ…😭そんなに酷いことを言わないで…😭 | |
| Sample 3 | 🤧🤧ごめんね、風邪引いちゃってて🤧…大丈夫、ただの風邪だから、すぐ治るよ🥺 | |
### 3. Voice Cloning (Zero-shot)
Examples of cloning a voice from a reference audio clip.
| Case | Reference Audio | Generated Audio |
|---|---|---|
| Example 1 | | |
| Example 2 | | |
## Usage
For inference code, installation instructions, and training scripts, please refer to the GitHub repository:
GitHub: Aratako/Irodori-TTS
## Limitations
- Japanese Only: This model currently supports Japanese text input only.
- Emoji Control: While emoji-based style control adds expressiveness, the effect may vary depending on context and is not always perfectly consistent.
- Audio Quality: Quality depends on training data characteristics. Performance may vary for voices or speaking styles underrepresented in the training data.
- Kanji Reading Accuracy: The model's ability to accurately read Kanji is relatively weak compared to other TTS models of a similar size. You may need to convert complex Kanji into Hiragana or Katakana beforehand.
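One way to work around the weak kanji reading is to pre-convert problem words into kana before synthesis. A minimal sketch using a hand-made reading dictionary — the entries and the `pre_read` helper are illustrative assumptions, not part of the model's tooling; libraries such as pykakasi or fugashi can automate this at scale:

```python
# Illustrative workaround: pre-convert words the model tends to misread
# into kana using a small hand-made dictionary (entries are examples).
READINGS = {
    "留守番電話": "るすばんでんわ",
    "発信音": "はっしんおん",
}

def pre_read(text: str) -> str:
    # Replace longest entries first so compounds win over their substrings.
    for word in sorted(READINGS, key=len, reverse=True):
        text = text.replace(word, READINGS[word])
    return text

print(pre_read("発信音のあとにお話しください"))
# → はっしんおんのあとにお話しください
```

The dictionary only needs to cover the words you observe being misread; everything else passes through unchanged.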
## License & Ethical Restrictions
### License
This model is released under the CC BY-NC 4.0 license.
### Ethical Restrictions
In addition to the license terms, the following ethical restrictions apply:
- No Impersonation: Do not use this model to clone or impersonate the voice of any individual (e.g., voice actors, celebrities, public figures) without their explicit consent.
- No Misinformation: Do not use this model to generate deepfakes or synthetic speech intended to mislead others or spread misinformation.
- Disclaimer: The developers assume no liability for any misuse of this model. Users are solely responsible for ensuring their use of the generated content complies with applicable laws and regulations in their jurisdiction.
## Acknowledgments
This project builds upon the following works:
- Echo-TTS – Architecture and training design reference
- DACVAE – Audio VAE
- llm-jp/llm-jp-3-150m – Tokenizer and embedding weight initialization
We would also like to extend our special thanks to Respair for the inspiration behind the emoji annotation feature.
## Citation
If you use Irodori-TTS in your research or project, please cite it as follows:
```bibtex
@misc{irodori-tts,
  author       = {Chihiro Arata},
  title        = {Irodori-TTS: A Flow Matching-based Text-to-Speech Model with Emoji-driven Style Control},
  year         = {2026},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/Aratako/Irodori-TTS-500M}}
}
```