Metadata-Version: 2.4
Name: much-segmenter
Version: 0.2.1
Summary: MUCH: A light text claim segmenter for hallucination detection.
Author: Jérémie Dentan, Alexi Canesse
Maintainer: Jérémie Dentan, Alexi Canesse
License: Apache-2.0
Project-URL: Homepage, https://github.com/orailix/much_segmenter
Project-URL: Issues, https://github.com/orailix/much_segmenter/issues
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: transformers>=4.47
Requires-Dist: nltk>=3.9.1
Dynamic: license-file

# MUCH-segmenter: A *fast* claim segmentation algorithm

This package implements `much_segmenter`, a fast, deterministic, and compute-efficient claim segmentation algorithm designed for English, French, Spanish, and German. This algorithm was introduced in our paper:

> **MUCH: A Multilingual Claim Hallucination Benchmark**
>
> Jérémie Dentan<sup>1</sup>, Alexi Canesse<sup>1</sup>, Davide Buscaldi<sup>1, 2</sup>, Aymen Shabou<sup>3</sup>, Sonia Vanier<sup>1</sup>
>
> <sup>1</sup>LIX (École Polytechnique, IP Paris, CNRS), <sup>2</sup>LIPN (Université Sorbonne Paris Nord), <sup>3</sup>Crédit Agricole SA
>
> [https://arxiv.org/abs/2511.17081](https://arxiv.org/abs/2511.17081)

## Usage and example

The main function of this package is `much_segmentation`, which segments an LLM generation into claims, returned as chunks of token indices.

### Example

In this example, the LLM generation contains 12 tokens. Our claim segmentation algorithm splits this generation into 3 claims: the first contains tokens 0-3 ("No, Xining"), the second tokens 4-7 (" is the largest city"), and the third tokens 8-11 (" in Qinghai.").

```python
# Imports
from much_segmenter import much_segmentation, get_repr_string
from transformers import AutoTokenizer

# Defining the generation and the tokenizer
generation = "No, Xining is the largest city in Qinghai."
llm_tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")

# Segmentation
token_chunks = much_segmentation(generation, llm_tokenizer)
print(token_chunks) # Should be [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]

# Display of the result
print(get_repr_string(generation, token_chunks, tokenizer=llm_tokenizer))

# Output should be:
"""
<Segmentation>
# 0 : No, Xining
# 1 :  is the largest city
# 2 :  in Qinghai.
"""
```

### Pre-computed tokens

Modern tokenizers do not guarantee round-trip consistency. For example, an LLM generates a sequence of `output_tokens` that is decoded into `generation = tokenizer.decode(output_tokens)`; however, it is possible that `tokenizer.encode(generation) != output_tokens`. This can happen because the same text can be encoded in several ways, and the path chosen by the tokenizer may differ from the one followed during LLM generation.

This behavior can be problematic because `much_segmenter` returns token indices, so any mismatch between these indices and the tokens actually generated by the LLM can lead to computation errors. Consequently, `much_segmenter` includes an optional `precomputed_tokens` parameter, which should contain the output tokens as generated by the LLM.

**⚠️ This optional parameter should ALWAYS be used when the output tokens are known, to avoid any token mismatch during segmentation ⚠️**
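To make the mismatch concrete, here is a minimal toy illustration (not the real tokenizer): with a greedy longest-match vocabulary, two different token sequences can decode to the same text, so re-encoding the decoded text need not recover the original tokens.

```python
# Toy vocabulary illustrating why encode(decode(tokens)) can differ from
# tokens. This is NOT a real tokenizer, only a minimal sketch.
vocab = {0: "No", 1: ",", 2: ", ", 3: " Xining", 4: "Xining"}

def decode(tokens):
    return "".join(vocab[t] for t in tokens)

def encode(text):
    # Greedy longest-match encoding, similar in spirit to real tokenizers.
    ids = []
    while text:
        i = max((k for k, s in vocab.items() if text.startswith(s)),
                key=lambda k: len(vocab[k]))
        ids.append(i)
        text = text[len(vocab[i]):]
    return ids

output_tokens = [0, 1, 3]            # tokens as the "LLM" produced them
generation = decode(output_tokens)   # "No, Xining"
reencoded = encode(generation)       # [0, 2, 4]: same text, other tokens
print(reencoded != output_tokens)    # True
```

When the real output tokens are available, pass them to the segmenter, e.g. `much_segmentation(generation, llm_tokenizer, precomputed_tokens=output_tokens)` (the keyword-argument form is our reading of the API; check the package signature).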

## Pseudo-code and algorithmic details

Our segmentation algorithm is fully rule-based and does not require external models or internet access, making it suitable for offline or computation-limited use cases. It is designed for English, French, Spanish, and German. We retain only these four European languages because their stopword and punctuation systems are similar. We expect our segmenter to be easily adaptable to languages with similar punctuation and stopwords, although we have not tested it beyond the four languages mentioned.

Our algorithm proceeds in two main steps. First, we split the LLM generation into words using an external word tokenizer, and we use these words to identify the character indices of claim starts. Second, we map these character indices to the tokens of the LLM generation. For a detailed presentation of this algorithm and a discussion of its pseudo-code, please refer to our research paper available on arXiv: [https://arxiv.org/abs/2511.17081](https://arxiv.org/abs/2511.17081).
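To illustrate the second step, here is a hedged sketch (not the package's actual implementation) of mapping claim-start character indices to chunks of token indices, assuming each token's character span in the generation is known. The claim starts and token spans below are hypothetical values chosen to reproduce the example from the Usage section.

```python
def chars_to_token_chunks(claim_starts, token_spans):
    """Map claim-start character offsets to chunks of token indices.

    claim_starts: sorted character offsets where claims begin (first is 0).
    token_spans: one (start, end) character span per token, in order.
    """
    chunks = [[] for _ in claim_starts]
    for tok_idx, (start, _end) in enumerate(token_spans):
        # Assign each token to the last claim starting at or before it.
        claim = max(i for i, c in enumerate(claim_starts) if c <= start)
        chunks[claim].append(tok_idx)
    return chunks

# "No, Xining is the largest city in Qinghai." with claim starts at
# characters 0, 10, and 30, and 12 hypothetical token spans (the real
# tokenizer may split the text differently).
claim_starts = [0, 10, 30]
token_spans = [(0, 2), (2, 4), (4, 7), (7, 10),         # "No, Xining"
               (10, 13), (13, 17), (17, 25), (25, 30),  # " is the largest city"
               (30, 33), (33, 36), (36, 41), (41, 42)]  # " in Qinghai."
print(chars_to_token_chunks(claim_starts, token_spans))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
```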

## Runtime

This claim segmentation algorithm was designed to be extremely fast. Segmenting the entire MUCH dataset took 6 seconds. This dataset includes 4,873 samples containing a total of 392,022 characters, representing 101,917 output tokens that were segmented into 25,624 claims (20,751 claims after removing the final claims containing only the EOS token). For reference, the LLM generation runtime for these samples was 2,758 seconds, so segmentation represents only a ~0.2% overhead.

These runtimes are single-process and single-thread measurements; segmentation can be further accelerated with parallel computing.

## Related artifacts

This package is released alongside the MUCH benchmark. This benchmark includes the following resources, which you might explore to see applications of our claim segmentation algorithm:

- Our research paper introducing MUCH and describing its generation in detail: [arXiv:2511.17081](https://arxiv.org/abs/2511.17081)
- A GitHub repository implementing the generation and utilization of MUCH: [orailix/much](https://github.com/orailix/much)
- The dataset, available on HuggingFace:
  - Main dataset: [orailix/MUCH](https://huggingface.co/datasets/orailix/MUCH)
  - Generation configs: [orailix/MUCH-configs](https://huggingface.co/datasets/orailix/MUCH-configs)
  - Baseline evaluation data: [orailix/MUCH-signals](https://huggingface.co/datasets/orailix/MUCH-signals)

## Acknowledgement

This work received financial support from the research chair *Trustworthy and Responsible AI* at École Polytechnique.

This work was granted access to the HPC resources of IDRIS under the allocation **AD011014843R1**, made by GENCI.

## Copyright and License

Copyright 2025–present Laboratoire d’Informatique de l’École Polytechnique.

This repository is released under the Apache-2.0 license.

Please cite this work as follows:

```bibtex
@misc{dentan_much_2025,
  title = {MUCH: A Multilingual Claim Hallucination Benchmark},
  author = {Dentan, Jérémie and Canesse, Alexi and Buscaldi, Davide and Shabou, Aymen and Vanier, Sonia},
  year = {2025},
  url = {https://arxiv.org/abs/2511.17081},
}
```
