Metadata-Version: 2.4
Name: langchain-wzrd
Version: 0.2.0
Summary: WZRD velocity-aware model selection and inference tools for LangChain.
License-Expression: MIT
Project-URL: Homepage, https://twzrd.xyz
Project-URL: Signal API, https://api.twzrd.xyz/v1/signals/momentum
Project-URL: Source, https://github.com/twzrd-sol/wzrd-final/tree/main/integrations/langchain-wzrd
Keywords: langchain,llm,routing,wzrd,model-selection,velocity,tools
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: langchain-core>=0.2
Requires-Dist: httpx>=0.24
Provides-Extra: chat
Requires-Dist: wzrd-client>=0.5.5; extra == "chat"
Requires-Dist: langchain-openai>=0.1; extra == "chat"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"

# langchain-wzrd

LangChain tools for WZRD velocity signals. Real-time model selection across 100+ LLMs.

## Install

```bash
pip install langchain-wzrd
```

## Tools

### WzrdModelPicker (free, zero config)

Returns the best model for a task based on live adoption velocity.

```python
from langchain_wzrd import WzrdModelPicker
from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI

tools = [WzrdModelPicker()]
llm = ChatOpenAI(model="gpt-4o-mini")
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.invoke({"input": "What's the best model for code generation right now?"})
```

With LangGraph:

```python
from langchain_wzrd import WzrdModelPicker
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o-mini")
tools = [WzrdModelPicker()]
agent = create_react_agent(llm, tools)
result = agent.invoke({"messages": [("user", "Which model has the most momentum?")]})
```

### WzrdInference (paid, requires API key)

Runs inference through WZRD's oracle and auto-selects the top-velocity model if none is specified.

```python
from langchain_wzrd import WzrdInference

tools = [WzrdInference(api_key="your-key")]
# or set WZRD_API_KEY env var and use WzrdInference() with no args
```

### ChatWZRD (velocity-routed ChatModel)

Drop-in ChatModel that auto-routes every call to the top velocity model.

```python
from langchain_wzrd import ChatWZRD

llm = ChatWZRD(task="code", openai_api_key="sk-or-...")
response = llm.invoke("Write a Python sort function")
```

## How it works

Signals are cached for 60s. The free momentum endpoint requires no auth.
Models are ranked by a composite of trend direction, velocity score, and confidence.
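The ranking and caching described above can be sketched roughly as follows. This is a minimal illustration, not the package's actual implementation: the field names (`trend`, `velocity`, `confidence`) and the way the composite is combined are assumptions, since the real payload of the momentum endpoint is not documented here.

```python
import time

# Hypothetical signal fields -- the real payload from the momentum
# endpoint may use different names and weights.
def composite_score(signal: dict) -> float:
    """Rank a model by trend direction, velocity score, and confidence."""
    direction = 1.0 if signal["trend"] == "up" else -1.0
    return direction * signal["velocity"] * signal["confidence"]

class SignalCache:
    """Tiny TTL cache mirroring the documented 60-second signal cache."""
    def __init__(self, fetch, ttl: float = 60.0):
        self._fetch = fetch      # callable that returns fresh signals
        self._ttl = ttl
        self._cached = None
        self._stamp = 0.0

    def get(self):
        now = time.monotonic()
        if self._cached is None or now - self._stamp > self._ttl:
            self._cached = self._fetch()  # refresh only when stale
            self._stamp = now
        return self._cached

# Example: a model trending down scores negative even with high velocity.
signals = [
    {"model": "a", "trend": "up", "velocity": 0.9, "confidence": 0.5},
    {"model": "b", "trend": "up", "velocity": 0.6, "confidence": 0.9},
    {"model": "c", "trend": "down", "velocity": 0.8, "confidence": 0.8},
]
best = max(signals, key=composite_score)
```

Note how confidence can outweigh raw velocity: model `b` wins here despite a lower velocity score, because its higher confidence lifts the composite.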
