Metadata-Version: 2.4
Name: neucodec
Version: 0.0.5
Summary: A package for NeuCodec, based on xcodec2.
License: MIT
License-File: LICENSE
Author: Neuphonic
Author-email: support@neuphonic.com
Requires-Python: >=3.10,<4.0
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Dist: local_attention (>=1.11.1)
Requires-Dist: numpy (>=2.0.2)
Requires-Dist: torch (>=2.5.1)
Requires-Dist: torchao (>=0.12.0)
Requires-Dist: torchaudio (>=2.5.1)
Requires-Dist: torchtune (>=0.3.1)
Requires-Dist: transformers (>=4.44.2)
Requires-Dist: vector-quantize-pytorch (==1.17.8)
Description-Content-Type: text/markdown

# NeuCodec 🎧

HuggingFace 🤗: [Model](https://huggingface.co/neuphonic/neucodec), [Distilled Model](https://huggingface.co/neuphonic/distill-neucodec)


[NeuCodec Demo](https://github.com/user-attachments/assets/c03745cd-a8c8-46ca-8f5d-ba3af091923f)

*Created by Neuphonic - building faster, smaller, on-device voice AI*

A lightweight neural codec that encodes audio at just 0.8 kbps - perfect for researchers and builders who need something that *just works* for training high-quality text-to-speech models.

# Key Features

🔊 Low bit-rate compression - a speech codec that compresses and reconstructs audio with near-inaudible reconstruction loss
<br>
🎼 Upsamples from 16kHz → 24kHz
<br>
🌍 Ready for real-world use - train your own SpeechLMs without needing to build your own codec
<br>
🏢 Commercial use permitted - use it in your own tools or products
<br>
📊 Released with large pre-encoded datasets - we’ve compressed Emilia-YODAS from 1.7TB to 41GB using NeuCodec, significantly reducing the compute required for training
<br>
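As a rough back-of-the-envelope check on the dataset figure above (treating 1.7TB as 1700GB), the pre-encoded release is about 41x smaller than the raw audio:

```python
# Approximate size reduction for the pre-encoded Emilia-YODAS release.
original_gb = 1700  # 1.7 TB, assuming 1 TB = 1000 GB
encoded_gb = 41
reduction = original_gb / encoded_gb
print(f"~{reduction:.0f}x smaller")  # ~41x smaller
```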

# Model Details

NeuCodec is a Finite Scalar Quantisation (FSQ) based 0.8kbps audio codec for speech tokenization.
It takes advantage of the following features:

* It uses both audio (BigCodec) and semantic ([Wav2Vec2-BERT](https://huggingface.co/facebook/w2v-bert-2.0)) encoders. 
* FSQ yields a single vector for the quantised output, which makes it ideal for downstream modelling with Speech Language Models.
* At 50 tokens/sec and 16 bits per token, the overall bit-rate is 0.8kbps.
* The codec takes in 16kHz input and outputs 24kHz using an upsampling decoder.
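The bit-rate in the bullets above follows directly from the token rate and token width:

```python
# Bit-rate sanity check: 50 tokens/sec at 16 bits per token.
tokens_per_second = 50
bits_per_token = 16
bitrate_bps = tokens_per_second * bits_per_token  # 800 bits/sec
print(f"{bitrate_bps / 1000} kbps")  # 0.8 kbps
```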

Our work is largely based on extending [X-Codec2.0](https://huggingface.co/HKUSTAudio/xcodec2).

- **Developed by:** Neuphonic
- **Model type:** Neural Audio Codec
- **License:** apache-2.0
- **Repository:** https://github.com/neuphonic/neucodec
- **Paper:** [arXiv](https://arxiv.org/abs/2509.09550)
- **Pre-encoded Datasets**:
  - [Emilia-YODAS-EN](https://huggingface.co/datasets/neuphonic/emilia-yodas-english-neucodec)

## Get Started

Use the code below to get started with the model.

To install from PyPI in a dedicated environment:

**Using conda + pip:**
```bash
conda create -n neucodec python=3.10
conda activate neucodec
pip install neucodec
```

**Using uv:**
```bash
uv venv neucodec --python 3.10
source neucodec/bin/activate  # On Windows: neucodec\Scripts\activate
uv pip install neucodec
```

If you would like to use the onnx decoder, also install `onnxruntime`:
```bash
pip install onnxruntime
```
Then, to use the regular codec in Python:

```python
import librosa
import torch
import torchaudio
from torchaudio import transforms as T
from neucodec import NeuCodec
 
model = NeuCodec.from_pretrained("neuphonic/neucodec")
model.eval().cuda()   
 
y, sr = torchaudio.load(librosa.ex("libri1"))
if sr != 16_000:
    y = T.Resample(sr, 16_000)(y)
y = y[None, ...]  # (B, 1, T_16)

with torch.no_grad():
    fsq_codes = model.encode_code(y)
    # fsq_codes = model.encode_code(librosa.ex("libri1")) # or directly pass your filepath!
    print(f"Codes shape: {fsq_codes.shape}")  
    recon = model.decode_code(fsq_codes).cpu() # (B, 1, T_24)

torchaudio.save("reconstructed.wav", recon[0, :, :], 24_000)
```

## Training Details

The model was trained using the following data: 
* Emilia-YODAS
* MLS
* LibriTTS
* Fleurs
* CommonVoice
* HUI
* Additional proprietary set

All publicly available data was covered by either the CC-BY-4.0 or CC0 license.

## Citation

To cite this project, use the following bibtex entry:

```bibtex
@article{julian2025fsq,
  title={Finite Scalar Quantization Enables Redundant and Transmission-Robust Neural Audio Compression at Low Bit-rates},
  author={Julian, Harry and Beeson, Rachel and Konathala, Lohith and Ulin, Johanna and Gao, Jiameng},
  journal={arXiv preprint arXiv:2509.09550},
  year={2025},
  url={https://arxiv.org/abs/2509.09550}
}
```

