Metadata-Version: 2.4
Name: semblend
Version: 0.3.1
Summary: Semantic KV cache reuse for LLM inference engines (vLLM, SGLang, TRT-LLM)
Author: WorldFlow AI
License:                                  Apache License
                                   Version 2.0, January 2004
                                http://www.apache.org/licenses/
        
           TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
        
           1. Definitions.
        
              "License" shall mean the terms and conditions for use, reproduction,
              and distribution as defined by Sections 1 through 9 of this document.
        
              "Licensor" shall mean the copyright owner or entity authorized by
              the copyright owner that is granting the License.
        
              "Legal Entity" shall mean the union of the acting entity and all
              other entities that control, are controlled by, or are under common
              control with that entity. For the purposes of this definition,
              "control" means (i) the power, direct or indirect, to cause the
              direction or management of such entity, whether by contract or
              otherwise, or (ii) ownership of fifty percent (50%) or more of the
              outstanding shares, or (iii) beneficial ownership of such entity.
        
              "You" (or "Your") shall mean an individual or Legal Entity
              exercising permissions granted by this License.
        
              "Source" form shall mean the preferred form for making modifications,
              including but not limited to software source code, documentation
              source, and configuration files.
        
              "Object" form shall mean any form resulting from mechanical
              transformation or translation of a Source form, including but
              not limited to compiled object code, generated documentation,
              and conversions to other media types.
        
              "Work" shall mean the work of authorship made available under
              the License, as indicated by a copyright notice that is included in
              or attached to the work (an example is provided in the Appendix below).
        
              "Derivative Works" shall mean any work, whether in Source or Object
              form, that is based on (or derived from) the Work and for which the
              editorial revisions, annotations, elaborations, or other modifications
              represent, as a whole, an original work of authorship. For the purposes
              of this License, Derivative Works shall not include works that remain
              separable from, or merely link (or bind by name) to the interfaces of,
              the Work and Derivative Works thereof.
        
              "Contribution" shall mean, as submitted to the Licensor for inclusion
              in the Work by the copyright owner or by an individual or Legal Entity
              authorized to submit on behalf of the copyright owner. For the purposes
              of this definition, "submitted" means any form of electronic, verbal,
              or written communication sent to the Licensor or its representatives,
              including but not limited to communication on electronic mailing lists,
              source code control systems, and issue tracking systems that are managed
              by, or on behalf of, the Licensor for the purpose of discussing and
              improving the Work, but excluding communication that is conspicuously
              marked or designated in writing by the copyright owner as "Not a
              Contribution."
        
              "Contributor" shall mean Licensor and any Legal Entity on behalf of
              whom a Contribution has been received by the Licensor and included
              within the Work.
        
           2. Grant of Copyright License. Subject to the terms and conditions of
              this License, each Contributor hereby grants to You a perpetual,
              worldwide, non-exclusive, no-charge, royalty-free, irrevocable
              copyright license to reproduce, prepare Derivative Works of,
              publicly display, publicly perform, sublicense, and distribute the
              Work and such Derivative Works in Source or Object form.
        
           3. Grant of Patent License. Subject to the terms and conditions of
              this License, each Contributor hereby grants to You a perpetual,
              worldwide, non-exclusive, no-charge, royalty-free, irrevocable
              (except as stated in this section) patent license to make, have made,
              use, offer to sell, sell, import, and otherwise transfer the Work,
              where such license applies only to those patent claims licensable
              by such Contributor that are necessarily infringed by their
              Contribution(s) alone or by the combination of their Contribution(s)
              with the Work to which such Contribution(s) was submitted. If You
              institute patent litigation against any entity (including a cross-claim
              or counterclaim in a lawsuit) alleging that the Work or a Contribution
              incorporated within the Work constitutes direct or contributory patent
              infringement, then any patent licenses granted to You under this
              License for that Work shall terminate as of the date such litigation
              is filed.
        
           4. Redistribution. You may reproduce and distribute copies of the
              Work or Derivative Works thereof in any medium, with or without
              modifications, and in Source or Object form, provided that You
              meet the following conditions:
        
              (a) You must give any other recipients of the Work or Derivative
                  Works a copy of this License; and
        
              (b) You must cause any modified files to carry prominent notices
                  stating that You changed the files; and
        
              (c) You must retain, in the Source form of any Derivative Works
                  that You distribute, all copyright, patent, trademark, and
                  attribution notices from the Source form of the Work,
                  excluding those notices that do not pertain to any part of
                  the Derivative Works; and
        
              (d) If the Work includes a "NOTICE" text file as part of its
                  distribution, You must include a readable copy of the
                  attribution notices contained within such NOTICE file, in
                  at least one of the following places: within a NOTICE text
                  file distributed as part of the Derivative Works; within
                  the Source form or documentation, if provided along with the
                  Derivative Works; or, within a display generated by the
                  Derivative Works, if and wherever such third-party notices
                  normally appear. The contents of the NOTICE file are for
                  informational purposes only and do not modify the License.
                  You may add Your own attribution notices within Derivative
                  Works that You distribute, alongside or as an addendum to
                  the NOTICE text from the Work, provided that such additional
                  attribution notices cannot be construed as modifying the License.
        
               You may add Your own copyright statement to Your modifications and
               may provide additional or different license terms and conditions
               for use, reproduction, or distribution of Your modifications, or
               for any such Derivative Works as a whole, provided Your use,
               reproduction, and distribution of the Work otherwise complies with
               the conditions stated in this License.
        
           5. Submission of Contributions. Unless You explicitly state otherwise,
              any Contribution intentionally submitted for inclusion in the Work
              by You to the Licensor shall be under the terms and conditions of
              this License, without any additional terms or conditions.
              Notwithstanding the above, nothing herein shall supersede or modify
              the terms of any separate license agreement you may have executed
              with Licensor regarding such Contributions.
        
           6. Trademarks. This License does not grant permission to use the trade
              names, trademarks, service marks, or product names of the Licensor,
              except as required for reasonable and customary use in describing the
              origin of the Work and reproducing the content of the NOTICE file.
        
           7. Disclaimer of Warranty. Unless required by applicable law or
              agreed to in writing, Licensor provides the Work (and each
              Contributor provides its Contributions) on an "AS IS" BASIS,
              WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
              implied, including, without limitation, any warranties or conditions
              of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
              PARTICULAR PURPOSE. You are solely responsible for determining the
              appropriateness of using or reproducing the Work and assume any
              risks associated with Your exercise of permissions under this License.
        
           8. Limitation of Liability. In no event and under no legal theory,
              whether in tort (including negligence), contract, or otherwise,
              unless required by applicable law (such as deliberate and grossly
              negligent acts) or agreed to in writing, shall any Contributor be
              liable to You for damages, including any direct, indirect, special,
              incidental, or exemplary damages of any character arising as a
              result of this License or out of the use or inability to use the
              Work (including but not limited to damages for loss of goodwill,
              work stoppage, computer failure or malfunction, or all other
              commercial damages or losses), even if such Contributor has been
              advised of the possibility of such damages.
        
           9. Accepting Warranty or Liability. While redistributing the Work or
              Derivative Works thereof, You may choose to offer, and charge a fee
              for, acceptance of support, warranty, indemnity, or other liability
              obligations and/or rights consistent with this License. However, in
              accepting such obligations, You may act only on Your own behalf and
              on Your sole responsibility, not on behalf of any other Contributor,
              and only if You agree to indemnify, defend, and hold each Contributor
              harmless for any liability incurred by, or claims asserted against,
              such Contributor by reason of your accepting any such warranty or
              additional liability.
        
           END OF TERMS AND CONDITIONS
        
           Copyright 2026 WorldFlow AI, Inc.
        
           Licensed under the Apache License, Version 2.0 (the "License");
           you may not use this file except in compliance with the License.
           You may obtain a copy of the License at
        
               http://www.apache.org/licenses/LICENSE-2.0
        
           Unless required by applicable law or agreed to in writing, software
           distributed under the License is distributed on an "AS IS" BASIS,
           WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
           See the License for the specific language governing permissions and
           limitations under the License.
        
Project-URL: Homepage, https://worldflow.ai/semblend
Project-URL: Repository, https://github.com/worldflowai/semblend
Project-URL: Documentation, https://docs.worldflow.ai/semblend
Project-URL: Bug Tracker, https://github.com/worldflowai/semblend/issues
Keywords: llm,inference,kv-cache,vllm,sglang,trt-llm,semantic-cache
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.24
Requires-Dist: rapidfuzz>=3.0
Provides-Extra: embedder
Requires-Dist: sentence-transformers>=3.0; extra == "embedder"
Requires-Dist: onnxruntime>=1.15; extra == "embedder"
Provides-Extra: gpu
Requires-Dist: torch>=2.0; extra == "gpu"
Requires-Dist: triton>=2.0; extra == "gpu"
Provides-Extra: vllm
Requires-Dist: vllm>=0.8.0; extra == "vllm"
Requires-Dist: torch>=2.0; extra == "vllm"
Requires-Dist: sentence-transformers>=3.0; extra == "vllm"
Provides-Extra: sglang
Requires-Dist: sglang>=0.4.0; extra == "sglang"
Requires-Dist: torch>=2.0; extra == "sglang"
Requires-Dist: sentence-transformers>=3.0; extra == "sglang"
Provides-Extra: trtllm
Requires-Dist: tensorrt_llm>=0.14; extra == "trtllm"
Requires-Dist: torch>=2.0; extra == "trtllm"
Requires-Dist: sentence-transformers>=3.0; extra == "trtllm"
Provides-Extra: dynamo
Requires-Dist: sentence-transformers>=3.0; extra == "dynamo"
Requires-Dist: nats-py>=2.0; extra == "dynamo"
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23; extra == "dev"
Requires-Dist: ruff>=0.4; extra == "dev"
Provides-Extra: benchmarks
Requires-Dist: aiohttp>=3.9; extra == "benchmarks"
Requires-Dist: datasets>=2.16; extra == "benchmarks"
Requires-Dist: tqdm>=4.66; extra == "benchmarks"
Requires-Dist: rich>=13.7; extra == "benchmarks"
Requires-Dist: rouge-score>=0.1.2; extra == "benchmarks"
Requires-Dist: transformers>=4.40; extra == "benchmarks"
Dynamic: license-file

# SemBlend

<p align="center">
  <a href="https://pypi.org/project/semblend/"><img alt="PyPI" src="https://img.shields.io/pypi/v/semblend?color=blue&label=pypi"></a>
  <a href="https://pypi.org/project/semblend/"><img alt="Python" src="https://img.shields.io/badge/python-3.10%20%7C%203.11%20%7C%203.12%20%7C%203.13-blue"></a>
  <a href="https://github.com/worldflowai/semblend/actions/workflows/ci.yml"><img alt="CI" src="https://img.shields.io/github/actions/workflow/status/worldflowai/semblend/ci.yml?branch=main&label=CI"></a>
  <a href="LICENSE"><img alt="License" src="https://img.shields.io/badge/license-Apache%202.0-green"></a>
  <!-- Paper badge: uncomment when arXiv submission is accepted -->
  <!-- <a href="https://arxiv.org/abs/XXXX.XXXXX"><img alt="Paper" src="https://img.shields.io/badge/paper-arXiv-red"></a> -->
</p>

**Semantic KV cache reuse for LLM inference engines.**

SemBlend extends exact-prefix KV caching (vLLM, LMCache, SGLang) with *semantic* donor discovery. When a prompt is semantically similar to a cached one but lexically different — different instruction phrasing, sentence order, or template fields — SemBlend finds and reuses the cached KV tensors, replacing a multi-second prefill with sub-second KV retrieval.

```
vLLM + LMCache alone:        semantically similar prompt  →  0% hit       →  full prefill
vLLM + LMCache + SemBlend:   semantically similar prompt  →  83–100% hit  →  reuse donor KV
```
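
The zero hit rate on the first line is structural: exact-prefix caches key KV blocks by hashes of the token-exact prefix, so an early lexical difference invalidates every downstream key. A toy illustration (whitespace "tokens" and whole-prefix hashing stand in for the engines' real block hashing):

```python
# Toy model of exact-prefix cache keys: each block is keyed by a hash of the
# token-exact prefix up to that block, so moving the instruction relative to
# the shared document changes every key even though the content is identical.
import hashlib

doc = " ".join(f"sentence{i}." for i in range(64))
a = "Summarize this report. " + doc
b = doc + " Give me a short summary."

def block_keys(text, block=16):
    toks = text.split()  # stand-in for real tokenization
    return {hashlib.md5(" ".join(toks[:i + block]).encode()).hexdigest()
            for i in range(0, len(toks), block)}

print(len(block_keys(a) & block_keys(b)))  # 0 shared blocks → full prefill without SemBlend
```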

## Performance

Measured on an A10G GPU (0.85 utilization) running Qwen2.5-7B-AWQ on vLLM 0.14.1 + LMCache. All results are from live benchmarks on real Hugging Face datasets with fresh-pod isolation (n=15 per cell).

### TTFT speedup vs cold prefill

| Context | Cold TTFT | SemBlend TTFT | Speedup |
|---------|----------|---------------|---------|
| 4K | 2,102 ms | 433 ms | **4.9x** |
| 8K | 3,816 ms | 539 ms | **7.1x** |
| 12K | 5,655 ms | 648 ms | **8.7x** |
| 16K | 7,635 ms | 760 ms | **10.0x** |
| 24K | 11,977 ms | 972 ms | **12.3x** |

SemBlend TTFT stays under 1 second at every context length tested. Speedup grows with context length because cold prefill time scales with prompt length, while loading cached KV grows far more slowly (433 ms at 4K vs 972 ms at 24K).
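
The speedup column is simply cold TTFT divided by SemBlend TTFT; a quick check of the table:

```python
# Speedup = cold TTFT / SemBlend TTFT, using the values from the table (ms).
ttft_ms = {"4K": (2102, 433), "8K": (3816, 539), "12K": (5655, 648),
           "16K": (7635, 760), "24K": (11977, 972)}
for ctx, (cold, warm) in ttft_ms.items():
    print(f"{ctx}: {cold / warm:.1f}x")  # 4.9x, 7.1x, 8.7x, 10.0x, 12.3x
```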

### Multi-dataset validation

Speedups are nearly identical across content types, indicating SemBlend is content-agnostic:

| Dataset | Content Type | 8K Speedup | 16K Speedup | 24K Speedup |
|---------|-------------|------------|-------------|-------------|
| XSum | News summaries | 7.1x | 10.0x | 12.3x |
| CNN/DailyMail | Long-form journalism | 7.1x | 9.4x | 12.2x |
| MultiNews | Multi-document news | -- | 9.3x | -- |

### Quality

Quality was validated across 5 datasets at 4-5 context lengths each, using PPL ratio and LLM-as-judge faithfulness scoring (360 runs total):

| Dataset | PPL Ratio Range | Status | Judge (Cold) | Judge (SemBlend) | Faithful |
|---------|-----------|--------|--------------|------------------|----------|
| XSum | 1.018-1.054 | PASS | 0.84 | 0.84 | 100% |
| CNN/DailyMail | 1.011-1.049 | PASS | 0.87 | 0.86 | 97% |
| WikiHow | 0.987-1.037 | PASS | 0.82 | 0.84 | 97% |
| MultiNews | 0.958-1.064 | PASS | 0.79 | 0.78 | 100% |
| SAMSum | 1.140-1.198 | ELEVATED | 0.78 | 0.86 | 87% |

PPL ratio stays below 1.065 for 4 of 5 datasets at all lengths (24 dataset-length cells in total). SAMSum shows elevated PPL due to its short dialogue turns, but the LLM judge rates SemBlend output higher than cold (0.86 vs 0.78).

## Installation

```bash
pip install semblend            # CPU-only core (numpy + rapidfuzz)
pip install semblend[vllm]      # + vLLM/LMCache integration
pip install semblend[sglang]    # + SGLang integration
pip install semblend[embedder]  # + sentence-transformers (MiniLM GPU)
```

## Quick Start: vLLM + LMCache

Integrates via vLLM's `KVConnectorBase_V1` connector interface (the same hook LMCache plugs into); no patching required.

```bash
pip install semblend[vllm] vllm lmcache

vllm serve Qwen/Qwen2.5-7B-Instruct-AWQ \
  --kv-transfer-config '{
    "kv_connector": "SemBlendConnectorV1",
    "kv_connector_module_path": "semblend.integration.vllm.connector_v1",
    "kv_role": "kv_both"
  }'
```

> **CacheBlend support:** For selective layer recomputation (CacheBlend), vLLM must expose
> the loaded model to KV connectors via `initialize_worker_connector()`. This is available
> in vLLM builds that include [PR #37339](https://github.com/vllm-project/vllm/pull/37339).
> Without it, SemBlend's semantic matching and KV injection still work — only CacheBlend's
> per-layer recomputation is unavailable.
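
Once the server is up, clients are unchanged: vLLM still exposes its OpenAI-compatible API, and donor matching happens entirely server-side. A minimal sketch, assuming the server above is running on the default port; `report.txt` is a placeholder for any large shared context:

```python
# Two requests over the same document with different phrasing: the first pays
# full prefill, the second should hit a semantic donor and skip most of it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
document = open("report.txt").read()  # placeholder: any large shared context

for instruction in ("Summarize the key findings.", "Give me the main takeaways."):
    resp = client.chat.completions.create(
        model="Qwen/Qwen2.5-7B-Instruct-AWQ",
        messages=[{"role": "user", "content": f"{document}\n\n{instruction}"}],
    )
    print(resp.choices[0].message.content[:120])
```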

## Quick Start: SGLang

```bash
pip install semblend[sglang] sglang

# CLI launcher — applies the RadixCache patch automatically
semblend-sglang --model-path Qwen/Qwen2.5-7B-Instruct --host 0.0.0.0 --port 8000
```

Or programmatically — call before SGLang initializes:

```python
from semblend.integration.sglang.radix_patcher import patch_radix_cache
patch_radix_cache()
# ... start SGLang server ...
```

A first-class [`SemanticPrefixProvider`](https://github.com/sgl-project/sglang/pull/20806) interface (no patching) is in progress upstream.

## Configuration

| Variable | Default | Description |
|----------|---------|-------------|
| `SEMBLEND_ENABLED` | `1` | Enable semantic donor search |
| `SEMBLEND_MIN_SIMILARITY` | `0.60` | Minimum cosine similarity for a donor match |
| `SEMBLEND_EMBEDDER` | `minilm` | `minilm` (auto GPU) · `onnx_gpu` |
| `SEMBLEND_FUZZY_CHUNKS` | `0` | Fuzzy chunk matching for shifted prefixes |
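
All settings are plain environment variables: export them in the shell before `vllm serve` or `semblend-sglang`, or set them in-process before SemBlend initializes. A minimal sketch of the programmatic route (the values here are illustrative, not recommendations):

```python
# Illustrative: set SemBlend's environment variables before the engine (and
# SemBlend) start up; names match the table above, values are examples only.
import os

os.environ["SEMBLEND_ENABLED"] = "1"
os.environ["SEMBLEND_MIN_SIMILARITY"] = "0.75"  # stricter than the 0.60 default
os.environ["SEMBLEND_EMBEDDER"] = "minilm"
os.environ["SEMBLEND_FUZZY_CHUNKS"] = "1"       # tolerate shifted chunk boundaries
```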

## How It Works

```
Request → Embed (2–15ms) → Search (1ms) → Align (1ms) → Inject KV
              ↓                 ↓              ↓
         MiniLM-L6-v2    cosine search   MD5 chunk hash
         GPU (ONNX RT)   donor store     256-token boundary
         segmented pool
```

1. **Embed** — full-document segmented embedding on GPU via ONNX Runtime. Long prompts are split into overlapping 256-token windows, embedded in parallel, and mean-pooled into a single vector, giving 100% content coverage at any prompt length (~2 ms for short prompts, ~10 ms at 8K, ~15 ms at 32K). See the sketch after this list.
2. **Search** — brute-force cosine similarity against the donor store (<1 ms at 1K donors; CAGRA GPU ANN for larger pools).
3. **Align** — MD5 chunk hashing finds reusable 256-token KV chunks; optional fuzzy matching handles shifted boundaries.
4. **Inject** — donor token IDs are substituted into the request, LMCache/RadixCache retrieves the cached KV, and RoPE correction is applied in-place to the K tensors.
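
A simplified sketch of steps 1-3. This is illustrative only, not SemBlend's internals: it calls sentence-transformers directly instead of the ONNX path, splits on words rather than tokens, and the helper names are made up:

```python
# Illustrative sketch of Embed / Search / Align; not SemBlend's actual code.
import hashlib
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
WINDOW = 256  # SemBlend windows are 256 tokens; words are used here for simplicity

def embed(prompt: str) -> np.ndarray:
    """Split into overlapping windows, embed each, mean-pool into one unit vector."""
    words = prompt.split()
    windows = [" ".join(words[i:i + WINDOW])
               for i in range(0, max(len(words), 1), WINDOW // 2)]
    vecs = model.encode(windows, normalize_embeddings=True)
    pooled = np.asarray(vecs).mean(axis=0)
    return pooled / np.linalg.norm(pooled)

def best_donor(query_vec, donor_matrix, min_similarity=0.60):
    """Brute-force cosine search over the donor store (rows are unit vectors)."""
    sims = donor_matrix @ query_vec
    idx = int(np.argmax(sims))
    return (idx if sims[idx] >= min_similarity else None), float(sims[idx])

def chunk_hashes(token_ids, chunk=256):
    """MD5 over fixed 256-token chunks; matching hashes mark exactly reusable KV spans."""
    return [hashlib.md5(repr(token_ids[i:i + chunk]).encode()).hexdigest()
            for i in range(0, len(token_ids), chunk)]
```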

## When SemBlend Helps

Most effective when prompts share a large common context:

- **Document Q&A / RAG** — same retrieved passages, different questions
- **Summarization** — same article, different instruction phrasing
- **Multi-turn dialogue** — conversation history prefix reused across turns
- **Code completion** — shared repository context across requests

Dissimilar workloads (code generation from scratch, fully novel queries) see ~4% overhead with 0% hit — negligible in practice.

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md).

## License

[Apache License 2.0](LICENSE).

Built at [WorldFlow AI](https://worldflowai.com). For enterprise support contact [research@worldflowai.com](mailto:research@worldflowai.com).
