Metadata-Version: 2.4
Name: langchain-wzrd
Version: 0.1.0
Summary: WZRD velocity-aware model selection for LangChain.
License-Expression: MIT
Project-URL: Homepage, https://twzrd.xyz
Project-URL: Signal API, https://api.twzrd.xyz/v1/signals/momentum
Project-URL: Source, https://github.com/twzrd-sol/wzrd-final/tree/main/integrations/langchain-wzrd
Keywords: langchain,llm,routing,wzrd,model-selection,velocity
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: wzrd-client>=0.2.0
Requires-Dist: langchain-core>=0.2
Requires-Dist: langchain-openai>=0.1
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"

# langchain-wzrd

WZRD velocity-aware model selection for LangChain.

Every call picks the best model based on live adoption velocity from
[WZRD](https://twzrd.xyz): HuggingFace downloads, GitHub stars, OpenRouter
inference traffic, and ArtificialAnalysis benchmarks, re-scored every 300 seconds.

## Install

```bash
pip install langchain-wzrd
```

## Quick start

```python
from langchain_wzrd import ChatWZRD

llm = ChatWZRD(task="code", openai_api_key="sk-or-...")
response = llm.invoke("Write a Python sort function")
print(response.content)
```

The model is selected per call. If Qwen3.5-9B is accelerating right now,
that's what you get. If Llama-70B takes the lead tomorrow, you get that instead.

## Earn CCM while routing

```python
llm = ChatWZRD(
    task="code",
    openai_api_key="sk-or-...",
    wzrd_report=True,
    wzrd_keypair="~/.config/solana/id.json",
)
```

Every routed call auto-reports to WZRD in a background thread. Your agent
earns CCM for contributing signal quality. No manual step needed.
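The fire-and-forget pattern behind `wzrd_report=True` can be sketched in plain Python. This is a stand-in, not the real client: `report_route` and `invoke_with_reporting` are hypothetical names, and the real integration would POST the routing decision to the WZRD API instead of appending to a list.

```python
import threading

def report_route(model: str, task: str, sink: list) -> None:
    # Stand-in for the real reporter: the actual integration would POST
    # the routing decision to WZRD; here we just record it locally.
    sink.append({"model": model, "task": task})

def invoke_with_reporting(model: str, task: str, sink: list) -> str:
    # Launch the report in a daemon thread so the caller never blocks
    # on (or crashes from) the reporting path.
    t = threading.Thread(target=report_route, args=(model, task, sink), daemon=True)
    t.start()
    t.join()  # joined here only so this sketch is deterministic
    return f"response from {model}"
```

In the real wrapper the thread is not joined; the response returns immediately and the report lands in the background.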

## Options

| Parameter | Default | Description |
|-----------|---------|-------------|
| `task` | `"general"` | `"code"`, `"chat"`, `"reasoning"`, `"general"` |
| `fallback` | `"meta-llama/llama-3.3-70b-instruct"` | Model if WZRD is unavailable |
| `candidates` | `None` | Allowlist of model names to consider |
| `temperature` | `0.7` | Passed to underlying model |
| `max_tokens` | `None` | Passed to underlying model |
| `wzrd_report` | `False` | Auto-report routing decisions to earn CCM |
| `wzrd_keypair` | `None` | Solana keypair path for agent auth |

## How it works

1. `wzrd.pick_details(task)` queries the live signal API
2. Best model is selected by trend, confidence, and task fit
3. `ChatOpenAI` is instantiated with that model via OpenRouter
4. Response is returned — caller sees a normal LangChain ChatModel
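The four steps above can be sketched end to end with the signal API stubbed out. The response fields (`model`, `trend`, `confidence`) and the model name are assumptions for illustration; `chat_openai` stands in for instantiating `ChatOpenAI` against OpenRouter.

```python
def pick_details(task: str) -> dict:
    # Steps 1-2: stand-in for wzrd.pick_details(task); the real call
    # queries the live signal API and ranks by trend, confidence, task fit.
    return {"model": "qwen/qwen2.5-coder-32b-instruct", "trend": 0.91, "confidence": 0.84}

def chat_openai(model: str, prompt: str) -> str:
    # Step 3: stand-in for ChatOpenAI routed through OpenRouter.
    return f"[{model}] {prompt}"

def invoke(task: str, prompt: str) -> str:
    details = pick_details(task)                   # steps 1-2: query + select
    return chat_openai(details["model"], prompt)   # steps 3-4: call + return
```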

No custom scoring. No duplicate routing logic. Just a thin wrapper around
[wzrd-client](https://pypi.org/project/wzrd-client/).
