Metadata-Version: 2.4
Name: tokenpack-rag
Version: 0.1.0
Summary: Query-aware semantic chunk selection under LLM context-window budgets.
Author-email: Metehan Kizilcik <metekizilcik@gmail.com>
License: Business Source License 1.1
        
        Licensor: TokenPack Contributors
        Licensed Work: TokenPack
        Additional Use Grant: None
        Change Date: 2030-05-10
        Change License: Apache License, Version 2.0
        
        License text copyright © 2017 MariaDB Corporation Ab, All Rights Reserved.
        "Business Source License" is a trademark of MariaDB Corporation Ab.
        
        Terms
        
        The Licensor hereby grants you the right to copy, modify, create derivative
        works, redistribute, and make non-production use of the Licensed Work. The
        Licensor may make an Additional Use Grant, above, permitting limited production
        use.
        
        Effective on the Change Date, or the fourth anniversary of the first publicly
        available distribution of a specific version of the Licensed Work under this
        License, whichever comes first, the Licensor hereby grants you rights under the
        terms of the Change License, and the rights granted in the paragraph above
        terminate.
        
        If your use of the Licensed Work does not comply with the requirements currently
        in effect as described in this License, you must purchase a commercial license
        from the Licensor, its affiliated entities, or authorized resellers, or you must
        refrain from using the Licensed Work.
        
        All copies of the original and modified Licensed Work, and derivative works of
        the Licensed Work, are subject to this License. This License applies separately
        for each version of the Licensed Work and the Change Date may vary for each
        version of the Licensed Work released by Licensor.
        
        You must conspicuously display this License on each original or modified copy of
        the Licensed Work. If you receive the Licensed Work in original or modified form
        from a third party, the terms and conditions set forth in this License apply to
        your use of that work.
        
        Any use of the Licensed Work in violation of this License will automatically
        terminate your rights under this License for the current and all other versions
        of the Licensed Work.
        
        This License does not grant you any right in any trademark or logo of Licensor
        or its affiliates (provided that you may use a trademark or logo of Licensor as
        expressly required by this License).
        
        TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED WORK IS PROVIDED ON AN
        "AS IS" BASIS. LICENSOR HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS, EXPRESS
        OR IMPLIED, INCLUDING (WITHOUT LIMITATION) WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, AND TITLE.
        
        MariaDB hereby grants you permission to use this License's text to license your
        works, and to refer to it using the trademark "Business Source License", as long
        as you comply with the Covenants of Licensor below.
        
        Covenants of Licensor
        
        In consideration of the right to use this License's text and the "Business
        Source License" name and trademark, Licensor covenants to MariaDB, and to all
        other recipients of the licensed work to be provided by Licensor:
        
        1. To specify as the Change License the GPL Version 2.0 or any later version, or
           a license that is compatible with GPL Version 2.0 or a later version, where
           "compatible" means that software provided under the Change License can be
           included in a program with software provided under GPL Version 2.0 or a later
           version. Licensor may specify additional Change Licenses without limitation.
        
        2. To either: (a) specify an additional grant of rights to use that does not
           impose any additional restriction on the right granted in this License, as the
           Additional Use Grant; or (b) insert the text "None".
        
        3. To specify a Change Date.
        
        4. Not to modify this License in any other way.
        
Project-URL: Homepage, https://github.com/mo-tunn/TokenPack
Project-URL: Repository, https://github.com/mo-tunn/TokenPack
Project-URL: Paper, https://github.com/mo-tunn/TokenPack/blob/main/submission/TokenPack-paper.pdf
Keywords: rag,llm,context-compression,retrieval,knapsack
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Text Processing :: Indexing
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: sentence-transformers>=3.0.0
Provides-Extra: reranking
Requires-Dist: sentence-transformers>=3.0.0; extra == "reranking"
Provides-Extra: pdf
Requires-Dist: PyMuPDF>=1.24.0; extra == "pdf"
Requires-Dist: pypdf>=4.0.0; extra == "pdf"
Provides-Extra: tokens
Requires-Dist: tiktoken>=0.7.0; extra == "tokens"
Provides-Extra: compression
Requires-Dist: llmlingua>=0.2.2; extra == "compression"
Provides-Extra: modal
Requires-Dist: modal>=0.64.0; extra == "modal"
Requires-Dist: pandas>=2.0.0; extra == "modal"
Provides-Extra: mcp
Requires-Dist: mcp>=1.2.0; extra == "mcp"
Provides-Extra: office
Requires-Dist: python-docx>=1.1.0; extra == "office"
Requires-Dist: python-pptx>=0.6.23; extra == "office"
Requires-Dist: openpyxl>=3.1.0; extra == "office"
Provides-Extra: dev
Requires-Dist: pytest>=8.0.0; extra == "dev"
Requires-Dist: mcp>=1.2.0; extra == "dev"
Dynamic: license-file

# TokenPack-RAG

**TokenPack-RAG packs the most useful evidence chunks into a smaller LLM-ready context file.**

It turns long-context selection into a budgeted context-packing problem: chunks are items, token counts are weights, and query-conditioned evidence scores are values. The default pipeline is the strongest setting currently reported in the paper:

```text
structure-aware semantic chunks + evidence-hybrid scoring + hybrid-greedy budget fill
```

The practical goal is simple: give your LLM less context while keeping the evidence that matters.
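
To make the items/weights/values framing concrete, here is a minimal sketch of budgeted packing by value density. The `Chunk` shape and `greedy_density_fill` helper are illustrative only; TokenPack's hybrid-greedy selector is more involved than this.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    id: str
    tokens: int   # knapsack weight
    score: float  # query-conditioned evidence value

def greedy_density_fill(chunks: list[Chunk], budget: int) -> list[Chunk]:
    # Take chunks in order of score-per-token until the budget is full.
    selected, used = [], 0
    for c in sorted(chunks, key=lambda c: c.score / max(c.tokens, 1), reverse=True):
        if used + c.tokens <= budget:
            selected.append(c)
            used += c.tokens
    return selected
```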

## What You Get

- A one-command CLI: `tokenpack-rag pack SOURCE --query "..."`
- Automatic token-budget estimation when you do not know what budget to choose.
- Automatic Markdown output next to your source file, such as `paper-tp.md`.
- Budget-valid context selection for long documents, code, PDFs, or mixed folders.
- Advanced `ingest`, `select`, `export-context`, `answer`, and `benchmark` commands for experiments.
- Optional local MCP server for agent tools such as Claude Desktop, Cursor, or Codex.
- Optional second-stage prompt compression with LLMLingua / LongLLMLingua.
- Reproducible paper artifacts under [`submission/`](submission).

## Install

From PyPI, once published:

```bash
pip install tokenpack-rag
```

From GitHub today:

```bash
pip install "tokenpack-rag @ git+https://github.com/mo-tunn/TokenPack.git"
```

For PDF parsing, Office files, token counting, compression, and development tools:

```bash
pip install "tokenpack-rag[pdf,office,tokens,compression,dev] @ git+https://github.com/mo-tunn/TokenPack.git"
```

For local agent/MCP usage:

```bash
pip install "tokenpack-rag[mcp,pdf,office,tokens] @ git+https://github.com/mo-tunn/TokenPack.git"
```

For local editable development:

```bash
git clone https://github.com/mo-tunn/TokenPack.git
cd TokenPack
pip install -e ".[pdf,office,tokens,compression,dev]"
```

TokenPack-RAG uses `sentence-transformers/all-MiniLM-L6-v2` as the default embedding model. Use `--offline-models` only when the model is already cached locally.
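
To pre-cache that model for `--offline-models`, a one-time download through the standard sentence-transformers API is enough (this is plain sentence-transformers, not a TokenPack command):

```python
# Downloads and caches the default embedding model once;
# later runs with --offline-models can then stay offline.
from sentence_transformers import SentenceTransformer

SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
```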

## Quick Start

Pack one document into an LLM-ready Markdown context:

```bash
tokenpack-rag pack README.md --query "How does TokenPack reduce LLM context cost?"
```

This writes:

```text
README-tp.md
```

For a PDF:

```bash
tokenpack-rag pack paper.pdf --query "What are the main contributions?"
```

This writes:

```text
paper-tp.md
```

For a folder:

```bash
tokenpack-rag pack docs/ --query "Summarize the design decisions in this project."
```

This writes:

```text
docs-tp.md
```

The output is not a modified PDF. It is a packed Markdown context file that you can paste or upload into your own LLM.

## Supported Inputs

TokenPack-RAG accepts a single file or a folder. Folder inputs are scanned recursively and unsupported binary/media files are skipped.

| Category | Extensions |
|---|---|
| Text and docs | `.txt`, `.text`, `.md`, `.markdown`, `.rst`, `.adoc`, `.tex`, `.log` |
| PDF | `.pdf` with the `pdf` extra |
| Web | `.html`, `.htm` |
| Data/config | `.json`, `.jsonl`, `.csv`, `.tsv`, `.yaml`, `.yml`, `.toml` |
| Office | `.docx`, `.pptx`, `.xlsx` with the `office` extra |
| Code | `.py`, `.js`, `.jsx`, `.ts`, `.tsx`, `.java`, `.go`, `.rs`, `.c`, `.cpp`, `.cs`, `.php`, `.rb`, `.swift`, `.kt`, `.scala`, `.sh`, `.ps1`, `.sql`, `.css`, `.xml`, and related variants |

Office support is optional so the base install stays lighter:

```bash
pip install "tokenpack-rag[office]"
```

## Auto Budget

`--budget` is optional. When you omit it, TokenPack-RAG estimates a budget from the source size:

```text
source_tokens = sum(chunk.token_count for chunk in index.chunks)
raw_budget = ceil(source_tokens * 0.50)
budget = clamp(raw_budget, min_budget=1200, max_budget=64000)
reserve_output = min(4000, max(512, int(budget * 0.10)))
selection_budget = budget - reserve_output
```
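
As a runnable sanity check, the same rule in plain Python (this mirrors the documented formula above, not the library's internal code):

```python
import math

def estimate_budget(source_tokens: int, ratio: float = 0.50,
                    min_budget: int = 1200, max_budget: int = 64000):
    # Mirrors the documented auto-budget rule; defaults match the text above.
    budget = max(min_budget, min(math.ceil(source_tokens * ratio), max_budget))
    reserve_output = min(4000, max(512, int(budget * 0.10)))
    return budget, reserve_output, budget - reserve_output

print(estimate_budget(142_000))  # (64000, 4000, 60000), matching the summary below
```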

Example terminal summary:

```text
Source: paper.pdf
Output: paper-tp.md
Source tokens: 142,000
Auto budget: 64,000 tokens (ratio=50%, capped by max-budget)
Reserved for answer: 4,000
Selection budget: 60,000
Selected: 188 chunks / 59,240 tokens
```

You can still take control when you want a smaller or larger packed context:

```bash
tokenpack-rag pack paper.pdf \
  --query "What evidence supports the main claim?" \
  --budget 32000 \
  --overwrite
```

Other budget controls:

```bash
tokenpack-rag pack paper.pdf --query "..." --budget-ratio 0.35
tokenpack-rag pack paper.pdf --query "..." --max-budget 128000
tokenpack-rag pack paper.pdf --query "..." --reserve-output 2000
```

The default `64k` cap is intentional: TokenPack-RAG does local embedding and selection, so the packing step itself does not spend LLM API tokens. The cap is aimed at modern long-context models while still preventing unexpectedly huge output files.

## Output Files

By default, TokenPack-RAG writes the packed context next to the source:

| Source | Output |
|---|---|
| `paper.pdf` | `paper-tp.md` |
| `notes.txt` | `notes-tp.md` |
| `docs/` | `docs-tp.md` |
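
The naming rule is easy to reproduce; here is a hypothetical helper that reconstructs it (illustrative only, not the library's implementation):

```python
from pathlib import Path

def packed_output_path(source: str) -> Path:
    # Reconstructs the '-tp.md' naming rule shown above; illustrative only.
    p = Path(source)
    stem = p.name if p.is_dir() else p.stem  # 'docs' for folders, 'paper' for paper.pdf
    return p.with_name(f"{stem}-tp.md")

print(packed_output_path("paper.pdf"))  # paper-tp.md
```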

Existing output files are protected by default:

```bash
tokenpack-rag pack paper.pdf --query "..."
```

If `paper-tp.md` already exists, the command stops. Use `--overwrite` or choose an explicit path:

```bash
tokenpack-rag pack paper.pdf --query "..." --overwrite
tokenpack-rag pack paper.pdf --query "..." --out packed-context.md
```

Internal artifacts go under `.tokenpack/runs/<timestamp>/` unless you choose paths:

```bash
tokenpack-rag pack paper.pdf \
  --query "..." \
  --index-out .tokenpack/paper.index.json \
  --selection-out paper-tp.selection.json
```

## Optional Compression

TokenPack-RAG is selection-first by default. You can optionally compress the selected evidence with LLMLingua:

```bash
tokenpack-rag pack paper.pdf \
  --query "What evidence supports the main claim?" \
  --compress llmlingua \
  --compression-rate 0.85
```

For LongLLMLingua-style query-conditioned compression:

```bash
tokenpack-rag pack paper.pdf \
  --query "What evidence supports the main claim?" \
  --compress llmlingua \
  --longllmlingua \
  --compression-rate 0.85
```

By default, compression models are expected to be cached locally. Add `--allow-download` when you intentionally want Hugging Face downloads during compression.
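
For reference, this second stage corresponds to the upstream LLMLingua library. A minimal standalone sketch, with the caveat that the model choice and the `rate` mapping below are assumptions and TokenPack's internal wiring may differ:

```python
from pathlib import Path
from llmlingua import PromptCompressor

# Compress an already-packed context with LLMLingua-2 via the upstream API.
# Model name and rate mapping are assumptions, not TokenPack internals.
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)
result = compressor.compress_prompt(Path("paper-tp.md").read_text(), rate=0.85)
print(result["compressed_prompt"])
```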

## Use With Agents / MCP

TokenPack-RAG can also run as a local stdio MCP server. This lets an agent call TokenPack directly as a tool, produce a packed Markdown context, and then reason over that selected context.

Install with MCP support:

```bash
pipx install "tokenpack-rag[mcp,pdf,office,tokens]"
```

Add a local MCP server to your agent config:

```json
{
  "mcpServers": {
    "tokenpack-rag": {
      "command": "tokenpack-rag-mcp",
      "args": ["--workspace", "/path/to/project"]
    }
  }
}
```

Or run it through `uvx` without a permanent install:

```json
{
  "mcpServers": {
    "tokenpack-rag": {
      "command": "uvx",
      "args": [
        "--from",
        "tokenpack-rag[mcp,pdf,office,tokens]",
        "tokenpack-rag-mcp",
        "--workspace",
        "/path/to/project"
      ]
    }
  }
}
```

The MCP server exposes:

| Tool | Purpose |
|---|---|
| `pack_context` | Packs a file or folder into selected Markdown context and writes the `-tp.md` artifact. |
| `read_packed_context` | Reads a packed context file, optionally in slices for large contexts. |

By default the MCP server can only read and write inside `--workspace`. Use `--allow-any-path` only for trusted local setups.

## Advanced CLI

The one-command `pack` workflow is the main user-facing interface. The lower-level commands remain available for experiments and reproducible paper runs.

Build an index:

```bash
tokenpack-rag ingest README.md --index .tokenpack/readme-index.json
```

Select evidence under a manual budget:

```bash
tokenpack-rag select \
  --index .tokenpack/readme-index.json \
  --query "How does TokenPack reduce LLM context cost?" \
  --budget 3000 \
  --reserve-output 500 \
  --output .tokenpack/selection.json
```

Export the selected context:

```bash
tokenpack-rag export-context \
  --selection .tokenpack/selection.json \
  --output .tokenpack/context.txt
```

By default, these commands use:

```text
chunker: structure-aware semantic boundaries
chunk-size-preset: low-budget
scoring: evidence-hybrid
selector: budget-top-k (TokenPack hybrid-greedy)
```

Historical selectors such as `knapsack` and `knapsack-redundancy`, along with `semantic-threshold` chunking, remain available for ablation work, but the main pipeline is hybrid-greedy.

## Python API

```python
from tokenpack.embeddings import make_embedder
from tokenpack.pipeline import ingest_path
from tokenpack.scoring import score_chunks
from tokenpack.selectors import select_chunks

embedder = make_embedder()
index = ingest_path(
    "README.md",
    ".tokenpack/readme-index.json",
    embedder=embedder,
    chunker_name="structure-aware",
    target_tokens=250,
    min_tokens=40,
    max_tokens=320,
)

query = "How does TokenPack reduce LLM context cost?"
query_embedding = embedder.embed([query])[0]

scored = score_chunks(
    query_embedding,
    index.chunks,
    index.embeddings,
    scoring="evidence-hybrid",
    query_text=query,
    redundancy_penalty=0.35,
)

result = select_chunks(
    scored,
    strategy="budget-top-k",
    budget=3000,
    candidate_pool=250,
)

print(result.used_tokens, [item.chunk.id for item in result.selected])
```
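
The same `select_chunks` call supports the ablation selectors from the Advanced CLI section: pass `strategy="knapsack"` or `strategy="knapsack-redundancy"` with the rest of the pipeline unchanged.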

## Headline Results

These are the cleanest results from the current paper artifacts. The paper is intentionally conservative: TokenPack-RAG does not claim universal knapsack dominance, but it does show that selection-first context packing is a strong budget-control layer.

| Setting | Main Result |
|---|---|
| **QASPER, matched ~50% saving** | TokenPack alone preserves **0.934** evidence recall vs **0.713** for LLMLingua-2 alone. |
| **QASPER complete evidence** | TokenPack alone retains complete evidence on **0.870** of questions vs **0.120** for LLMLingua-2 alone. |
| **QASPER cascade frontier** | TokenPack + LLMLingua-2 at rate 0.85 reaches **58.4% saving** with **0.851 evidence recall**. |
| **LongBench v2 generation pilot** | TP hybrid-greedy-50 answers **37/83** cases correctly vs **32/83** with full context and **34/83** with production-RAG, a **+15.6% relative accuracy gain** over full context at **50.6% saving**. |
| **LongBench aggressive cascade** | TP hybrid-greedy-50 + LongLLMLingua-50 keeps the same **37/83** correctness while reaching **74.6% context saving** on the 83-case eligible pilot. |

The strongest claim is:

> Select evidence first, then optionally compress it. Retrieval-time budget selection and prompt compression are not interchangeable.

## Reproduce Paper Runs

Fast local tests:

```bash
python -m pytest -q
```

QASPER selector baseline:

```bash
python submission/experiments/qasper_selector_eval.py \
  --data-file .tokenpack/data/qasper-validation.parquet \
  --chunker structure-aware \
  --strategies production-rag,budget-top-k,greedy-density,knapsack,knapsack-redundancy \
  --budget-ratios 0.20,0.30,0.40,0.50 \
  --max-papers 500 \
  --max-questions 861 \
  --candidate-pool 300 \
  --chunk-size-preset low-budget \
  --output-dir submission/results/qasper_selector_eval_strong_rerun
```

LongBench v2 Modal pilot used in the current paper:

```bash
python -m modal run submission/longbench_eval/app.py::build_and_run \
  --output-dir submission/results/longbench_v2_modal_hybrid_greedy_83_latency \
  --limit 83 \
  --source-min-tokens 8000 \
  --source-max-tokens 24000 \
  --max-scanned 503 \
  --model-id Qwen/Qwen2.5-14B-Instruct \
  --batch-size 1 \
  --context-order score-then-source \
  --latency-mode
```

See [`submission/source_code_manifest.md`](submission/source_code_manifest.md) for the full artifact map.

## Repository Layout

```text
src/tokenpack/                     Python package and CLI implementation
tests/                             Unit and smoke tests
examples/                          Small local examples for the CLI
submission/paper/                  LaTeX paper source, tables, figures
submission/experiments/            QASPER, LongBench, compression, and ablation scripts
submission/results/                Paper result artifacts and readouts
submission/longbench_eval/         Modal LongBench v2 generation harness
submission/modal_generation_eval/  Modal QASPER generation/judge harness
```

## Notes

- The default workflow is output-first: create a packed context file and send that file to your own LLM.
- Ollama is not required for `pack`; MCP support is optional and local-first.
- QASPER metrics are evidence-retention and answer-token-retention proxies, not human-judged generated-answer quality.
- LongBench v2 accuracy numbers are pilot-scale and should be read descriptively, not as statistically significant wins.
- Evidence-hybrid scoring weights are engineering defaults. The paper calls out weight calibration as future work.
- BudgetMem is discussed as related work; the old `budgetmem-style` proxy is kept only in `tokenpack.scoring_experimental`, not in the production CLI.

## License

TokenPack-RAG is licensed under the Business Source License 1.1. See [`LICENSE`](LICENSE).

## Citation

If you use TokenPack-RAG in research, cite the paper PDF in [`submission/TokenPack-paper.pdf`](submission/TokenPack-paper.pdf). A BibTeX entry will be added when the public preprint is available.
