Metadata-Version: 2.4
Name: hush-providers
Version: 0.1.8
Summary: Hush workflow LLM, embedding, and reranking providers
Project-URL: Homepage, https://github.com/batman1m2001-cyber/Hush-ai
Project-URL: Documentation, https://github.com/batman1m2001-cyber/Hush-ai#readme
Project-URL: Repository, https://github.com/batman1m2001-cyber/Hush-ai
Author: Hush Team
License-Expression: Apache-2.0
License-File: LICENSE
Keywords: azure,embedding,gemini,llm,openai,reranking,workflow
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Requires-Dist: aiohttp>=3.8
Requires-Dist: hush-icore>=0.1.8
Requires-Dist: numpy>=2.2.6
Requires-Dist: openai>=1.0
Requires-Dist: pydantic>=2.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: all
Requires-Dist: boto3>=1.28; extra == 'all'
Requires-Dist: google-cloud-aiplatform>=1.38; extra == 'all'
Requires-Dist: onnxruntime<1.20,>=1.15; extra == 'all'
Requires-Dist: requests>=2.32; extra == 'all'
Requires-Dist: tokenizers>=0.13; extra == 'all'
Requires-Dist: torch>=2.0; extra == 'all'
Requires-Dist: transformers>=4.30; extra == 'all'
Provides-Extra: all-light
Requires-Dist: boto3>=1.28; extra == 'all-light'
Requires-Dist: google-cloud-aiplatform>=1.38; extra == 'all-light'
Requires-Dist: onnxruntime<1.20,>=1.15; extra == 'all-light'
Requires-Dist: requests>=2.32; extra == 'all-light'
Requires-Dist: tokenizers>=0.13; extra == 'all-light'
Provides-Extra: bedrock
Requires-Dist: boto3>=1.28; extra == 'bedrock'
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: python-dotenv>=1.0; extra == 'dev'
Requires-Dist: ruff>=0.1; extra == 'dev'
Provides-Extra: embeddings
Requires-Dist: onnxruntime<1.20,>=1.15; extra == 'embeddings'
Requires-Dist: tokenizers>=0.13; extra == 'embeddings'
Provides-Extra: gemini
Requires-Dist: google-cloud-aiplatform>=1.38; extra == 'gemini'
Requires-Dist: requests>=2.32; extra == 'gemini'
Provides-Extra: huggingface
Requires-Dist: torch>=2.0; extra == 'huggingface'
Requires-Dist: transformers>=4.30; extra == 'huggingface'
Provides-Extra: onnx
Requires-Dist: onnxruntime<1.20,>=1.15; extra == 'onnx'
Requires-Dist: tokenizers>=0.13; extra == 'onnx'
Provides-Extra: openai
Provides-Extra: rerankers
Requires-Dist: onnxruntime<1.20,>=1.15; extra == 'rerankers'
Requires-Dist: tokenizers>=0.13; extra == 'rerankers'
Description-Content-Type: text/markdown

# hush-providers

LLM, embedding, and reranking provider integrations for Hush workflows.

[![PyPI](https://img.shields.io/pypi/v/hush-providers)](https://pypi.org/project/hush-providers/)
[![Python](https://img.shields.io/badge/python-3.10%2B-blue)](https://python.org)

## Installation

```bash
pip install hush-providers
```

## Quick Start

### LLM (`chain` = prompt + LLM combined)

```python
import asyncio

from hush.core import Hush, GraphOp, START, END, PARENT
from hush.providers import chain

async def main():
    with GraphOp(name="chat") as graph:
        chat = chain(
            resource="gpt-4o",
            template={"system": "You are a helpful assistant.", "user": "{question}"},
            question=PARENT["question"],
        )
        START >> chat >> END

    result = await Hush(graph).run(inputs={"question": "What is Python?"})
    print(result["content"])

asyncio.run(main())
```

### Embedding

```python
from hush.core import PARENT
from hush.providers import EmbeddingOp

embed = EmbeddingOp.of(resource="bge-m3", texts=PARENT["documents"])
```

### Reranking

```python
from hush.core import PARENT
from hush.providers import RerankOp

rerank = RerankOp.of(resource="bge-reranker", query=PARENT["query"], documents=PARENT["docs"])
```

## Supported Providers

| Type | Providers |
|------|-----------|
| **LLM** | OpenAI, Azure OpenAI, Google Gemini, vLLM |
| **Embedding** | OpenAI/vLLM, TEI, HuggingFace, ONNX |
| **Reranking** | vLLM, Pinecone, Cohere, HuggingFace, ONNX |

## Configuration

Providers are configured via YAML resource files:

```yaml
# resources.yaml
llm:
  gpt-4o:
    type: openai
    model: gpt-4o
    api_key: ${OPENAI_API_KEY}

embeddings:
  bge-m3:
    type: onnx
    model_path: /models/bge-m3
```
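The `${OPENAI_API_KEY}` value suggests that placeholders are expanded from environment variables when the resource file is loaded. As a standalone illustration of that substitution pattern (a minimal sketch, not the package's actual loader), `${VAR}` references can be resolved like this:

```python
import os
import re

def expand_env(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables expand to an empty string in this sketch; a real
    loader might instead raise an error for missing keys.
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(expand_env("api_key: ${OPENAI_API_KEY}"))  # → api_key: sk-demo
```

Keeping secrets out of the YAML file itself means the same `resources.yaml` can be committed to version control and reused across environments.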

## Feature Flags

Install only the providers you need:

```bash
pip install "hush-providers[openai]"       # OpenAI + Azure
pip install "hush-providers[gemini]"       # Google Gemini
pip install "hush-providers[onnx]"         # ONNX Runtime
pip install "hush-providers[all-light]"    # All without PyTorch
pip install "hush-providers[all]"          # Everything
```

## Rust Backend

All providers also have native Rust implementations in the [`hush-providers` crate](https://crates.io/crates/hush-providers), which makes direct HTTP calls without Python overhead.

## Related Packages

| Package | Description |
|---------|-------------|
| [hush-icore](https://pypi.org/project/hush-icore/) | Core workflow engine (required) |
| [hush-telemetry](https://pypi.org/project/hush-telemetry/) | Tracing with token/cost tracking |
| [hush-serve](https://pypi.org/project/hush-serve/) | Serve workflows as HTTP APIs |

## License

Apache 2.0
