Metadata-Version: 2.4
Name: synapse-langchain
Version: 0.1.0
Summary: LangChain integration for the Synapse decentralized AI model marketplace. Wraps the Synapse orchestrator's /api/infer endpoint as a standard LangChain LLM.
Author: gitgonewild
License: MIT
Project-URL: Homepage, https://github.com/N1KH1LT0X1N/Synapse
Project-URL: Repository, https://github.com/N1KH1LT0X1N/Synapse
Project-URL: Issues, https://github.com/N1KH1LT0X1N/Synapse/issues
Project-URL: Documentation, https://github.com/N1KH1LT0X1N/Synapse/tree/main/integrations/langchain-synapse#readme
Keywords: langchain,synapse,decentralized-ai,ai-marketplace,ethereum,web3,llm,inference,huggingface
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: langchain-core>=0.3.0
Requires-Dist: requests>=2.31
Requires-Dist: pydantic>=2.5
Provides-Extra: dev
Requires-Dist: langchain>=0.3.0; extra == "dev"

# langchain-synapse

Dead-simple [LangChain](https://python.langchain.com) integration for the
**Synapse** decentralized inference network. Wraps the orchestrator's
`POST /api/infer` endpoint as a standard LangChain `LLM` so Synapse plugs into
any chain, agent, or RAG pipeline.

## Install

From PyPI:

```bash
pip install synapse-langchain
```

From source:

```bash
cd integrations/langchain-synapse
pip install -e .
```

> The distribution name on PyPI is `synapse-langchain`; the Python import name
> remains `langchain_synapse`.

## Use

```python
from langchain_synapse import SynapseLLM

llm = SynapseLLM(
    token_id="1",                                          # model NFT id
    user="0x0000000000000000000000000000000000000001",     # your wallet
    base_url="http://localhost:8000",                      # orchestrator
    max_tokens=128,
)

print(llm.invoke("Hello from LangChain!"))
```

Works with LCEL out of the box:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

chain = ChatPromptTemplate.from_template("Q: {q}\nA:") | llm | StrOutputParser()
chain.invoke({"q": "What is decentralized inference?"})
```

## Demos

```bash
# Make sure the orchestrator is running:
#   ROLE=orchestrator uvicorn backend.main:app --port 8000

python examples/demo.py
python examples/chain_demo.py
```

## Config

| Param         | Env var             | Default                   |
| ------------- | ------------------- | ------------------------- |
| `base_url`    | `SYNAPSE_API_URL`   | `http://localhost:8000`   |
| `token_id`    | —                   | *required*                |
| `user`        | —                   | *required*                |
| `session_id`  | —                   | random `bytes32` (demo)   |
| `max_tokens`  | —                   | `128`                     |
| `parameters`  | —                   | `None` (pipeline kwargs)  |

> `session_id` must come from `Marketplace.openSession` for real paid runs.
> Leave it `None` only when demoing against a dev orchestrator.

## How it works

`SynapseLLM._call` just POSTs to `/api/infer` with the Synapse
[`InferenceRequest`](../../backend/shared/schemas.py) shape and returns
`outputStr` from the response. That's the entire integration — 60-ish lines.
