Metadata-Version: 2.4
Name: visibe
Version: 0.1.3
Summary: AI Agent Observability Platform - Track CrewAI, LangChain, LangGraph, and more
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.31.0
Provides-Extra: crewai
Requires-Dist: openinference-instrumentation-crewai>=0.1.14; extra == "crewai"
Requires-Dist: openinference-instrumentation-openai>=0.1.18; extra == "crewai"
Requires-Dist: opentelemetry-api~=1.34.0; extra == "crewai"
Requires-Dist: opentelemetry-sdk~=1.34.0; extra == "crewai"
Provides-Extra: tiktoken
Requires-Dist: tiktoken>=0.5.0; extra == "tiktoken"
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.1.0; extra == "langchain"
Provides-Extra: langgraph
Requires-Dist: langgraph>=0.2.0; extra == "langgraph"
Requires-Dist: langchain-core>=0.1.0; extra == "langgraph"
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == "openai"
Provides-Extra: autogen
Requires-Dist: autogen-agentchat>=0.4.0; extra == "autogen"
Requires-Dist: autogen-ext>=0.4.0; extra == "autogen"
Provides-Extra: all
Requires-Dist: openinference-instrumentation-crewai>=0.1.14; extra == "all"
Requires-Dist: openinference-instrumentation-openai>=0.1.18; extra == "all"
Requires-Dist: tiktoken>=0.5.0; extra == "all"
Requires-Dist: langchain-core>=0.1.0; extra == "all"
Requires-Dist: langgraph>=0.2.0; extra == "all"
Requires-Dist: openai>=1.0.0; extra == "all"
Requires-Dist: autogen-agentchat>=0.4.0; extra == "all"
Requires-Dist: autogen-ext>=0.4.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.4; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23; extra == "dev"

<div align="center">

# Visibe SDK for Python

**Observability for AI agents.** Track costs, performance, and errors across your entire AI stack — whether you're using CrewAI, LangChain, LangGraph, AutoGen, or direct OpenAI calls.

[![PyPI version](https://img.shields.io/pypi/v/visibe.svg)](https://pypi.python.org/pypi/visibe)
![Python](https://img.shields.io/badge/python-3.10+-blue.svg)

</div>

---

## 📦 Getting Started

### Installation

```bash
pip install visibe
```

### Basic Configuration

Set your API key in a `.env` file:

```bash
VISIBE_API_KEY=sk_live_your_api_key_here
```

Then initialize the SDK — one line instruments everything:

```python
import visibe

visibe.init()
```

That's it. Every OpenAI, LangChain, LangGraph, CrewAI, and AutoGen client created after this call is automatically traced.

### Quick Usage Example

```python
import visibe
from openai import OpenAI

visibe.init()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
# This call is automatically traced — cost, tokens, duration, and content are captured.
```

---

## 🧩 Integrations

Visibe integrates with the most popular AI/agent frameworks in Python. Every integration supports three levels of control:

| Framework | `visibe.init()` | `obs.instrument()` | `obs.track()` / manual |
|-----------|:-:|:-:|:-:|
| **OpenAI** | ✅ | ✅ | ✅ |
| **LangChain** | ✅ | ✅ | ✅ |
| **LangGraph** | ✅ | ✅ | ✅ |
| **CrewAI** | ✅ | ✅ | ✅ |
| **AutoGen** | ✅ | ✅ | ✅ |
| **AWS Bedrock** | ✅ | ✅ | ✅ |

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
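For an OpenAI-compatible provider, only the client's `base_url` (and key) change — a minimal sketch using Groq, where the endpoint URL, key format, and model name are illustrative, not tested values:

```python
from visibe import Visibe
from openai import OpenAI

obs = Visibe()

# Any OpenAI-compatible endpoint works; only base_url and api_key differ.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # illustrative endpoint
    api_key="gsk_your_groq_key",                # illustrative key
)
obs.instrument(client)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
)
```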

### OpenAI

```python
from visibe import Visibe
from openai import OpenAI

obs = Visibe()
client = OpenAI()

obs.instrument(client)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Group multiple calls into one trace:

```python
with obs.track(client, name="my-conversation"):
    r1 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
    r2 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
# ^ Both calls sent as one grouped trace
```

Works with the Chat Completions and Responses APIs, streaming, tool calls, and both sync and async clients.
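The same `instrument()` call covers async and streaming usage — a sketch with `AsyncOpenAI` (model name illustrative):

```python
import asyncio

from visibe import Visibe
from openai import AsyncOpenAI

obs = Visibe()
client = AsyncOpenAI()
obs.instrument(client)

async def main():
    # Streaming responses are traced too; usage is captured
    # once the stream completes.
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,
    )
    async for chunk in stream:
        if chunk.choices:
            print(chunk.choices[0].delta.content or "", end="")

asyncio.run(main())
```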

### LangChain / LangGraph

```python
from visibe import Visibe
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

obs = Visibe()
llm = ChatOpenAI(model="gpt-4o-mini")
tools = []  # your LangChain tools
graph = create_react_agent(llm, tools)

obs.instrument(graph, name="my-agent")

result = graph.invoke({"messages": [("user", "Your prompt here")]})
```

Pipe chains built with LCEL (`prompt | llm | parser`) are also instrumented automatically when using `visibe.init()`. Nested sub-graphs are tracked with hierarchical agent names.
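A traced pipe chain might look like the following sketch (prompt text and model name are illustrative):

```python
import visibe
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

visibe.init()  # auto-instruments chains built after this call

prompt = ChatPromptTemplate.from_template("Summarize: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Each step of the chain (prompt -> llm -> parser) shows up in the trace.
chain = prompt | llm | parser
summary = chain.invoke({"text": "Visibe traces every step of this chain."})
```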

### CrewAI

```python
from visibe import Visibe
from crewai import Agent, Task, Crew

obs = Visibe()

architect = Agent(role="Plot Architect", goal="Design mystery plots", backstory="...")
designer = Agent(role="Character Designer", goal="Create characters", backstory="...")
narrator = Agent(role="Narrator", goal="Write the story", backstory="...")

task1 = Task(description="Create a plot outline", agent=architect, expected_output="...")
task2 = Task(description="Design characters", agent=designer, expected_output="...", context=[task1])
task3 = Task(description="Write the story", agent=narrator, expected_output="...", context=[task1, task2])

crew = Crew(agents=[architect, designer, narrator], tasks=[task1, task2, task3])

obs.instrument(crew, name="mystery-writer")
result = crew.kickoff()
# ^ Single trace with all agents, LLM calls, and per-task cost breakdown
```

With `visibe.init()`, trace names are auto-derived from agent roles (e.g. `"Plot Architect, Character Designer, Narrator"`). Training and testing runs (`crew.train()`, `crew.test()`) are traced too.

### AutoGen

```python
from visibe import Visibe
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent

obs = Visibe()
model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

obs.instrument(model_client, name="my-conversation")

assistant = AssistantAgent("assistant", model_client=model_client)
result = await assistant.run(task="Help me with this task")  # run inside an async function (e.g. via asyncio.run)
```

### AWS Bedrock

```python
from visibe import Visibe
import boto3

obs = Visibe()
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

obs.instrument(bedrock)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}]
)
```

Group multiple calls into one trace:

```python
with obs.track(bedrock, name="my-workflow"):
    r1 = bedrock.converse(modelId="anthropic.claude-3-haiku-20240307-v1:0", messages=[...])
    r2 = bedrock.converse(modelId="amazon.nova-lite-v1:0", messages=[...])
# ^ Both calls sent as one grouped trace
```

Supports all Bedrock API methods: `converse`, `converse_stream`, `invoke_model`, and `invoke_model_with_response_stream`. Works with all models available via Bedrock — Claude, Nova, Llama, Mistral, and more.
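Unlike `converse`, `invoke_model` takes each provider's native JSON request body — a sketch for a Claude model (the body fields follow Anthropic's Bedrock format; the model ID is illustrative):

```python
import json

import boto3
from visibe import Visibe

obs = Visibe()
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
obs.instrument(bedrock)

# invoke_model uses the model provider's own request schema.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello!"}],
})
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=body,
)
result = json.loads(response["body"].read())
```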

---

## ⚙️ Configuration

```python
from visibe import Visibe

# API key from environment (recommended)
obs = Visibe()

# Or pass directly
obs = Visibe(api_key="sk_live_abc123")

# Group traces by session
obs = Visibe(session_id="user-session-123")
```

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `VISIBE_API_KEY` | Your API key (required) | — |
| `VISIBE_API_URL` | Override API endpoint | `https://api.visibe.ai` |
| `VISIBE_AUTO_INSTRUMENT` | Comma-separated frameworks to auto-instrument | All detected |
| `VISIBE_CONTENT_LIMIT` | Max chars for LLM/tool content in spans | `1000` |
| `VISIBE_DEBUG` | Enable debug logging (`1` to enable) | `0` |
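
Putting the table together, a `.env` that restricts auto-instrumentation and raises the content limit might look like this (the framework identifiers for `VISIBE_AUTO_INSTRUMENT` are assumptions — check your dashboard docs for the accepted values):

```bash
VISIBE_API_KEY=sk_live_your_api_key_here
VISIBE_AUTO_INSTRUMENT=openai,langchain   # assumed identifiers
VISIBE_CONTENT_LIMIT=4000
VISIBE_DEBUG=1
```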

---

## 📊 What Gets Tracked

| Metric | Description |
|--------|-------------|
| **Cost** | Total spend + per-agent and per-task cost breakdown |
| **Tokens** | Input/output tokens per LLM call |
| **Duration** | Total time + time per step |
| **Tools** | Which tools were used, duration, success/failure |
| **Errors** | When and where things failed |
| **Spans** | Full execution timeline with LLM calls, tool calls, and agent events |

---

## 📚 Documentation

For advanced usage, detailed integration guides, and API reference, check out the full documentation:

- [OpenAI integration](docs/integrations/openai.md)
- [LangChain integration](docs/integrations/langchain.md)
- [CrewAI integration](docs/integrations/crewai.md)
- [AutoGen integration](docs/integrations/autogen.md)
- [AWS Bedrock integration](docs/integrations/bedrock.md)

---

## 🔗 Resources

- [PyPI Package](https://pypi.python.org/pypi/visibe) — Install the latest version
- [Visibe Dashboard](https://app.visibe.ai) — View your traces and analytics

---

## 📃 License

MIT — see [LICENSE](LICENSE) for details.
