Metadata-Version: 2.4
Name: visibe
Version: 0.1.10
Summary: AI Agent Observability Platform - Track CrewAI, LangChain, LangGraph, and more
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.31.0
Provides-Extra: crewai
Requires-Dist: openinference-instrumentation-crewai>=0.1.14; extra == "crewai"
Requires-Dist: openinference-instrumentation-openai>=0.1.18; extra == "crewai"
Requires-Dist: opentelemetry-api~=1.34.0; extra == "crewai"
Requires-Dist: opentelemetry-sdk~=1.34.0; extra == "crewai"
Provides-Extra: tiktoken
Requires-Dist: tiktoken>=0.5.0; extra == "tiktoken"
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.1.0; extra == "langchain"
Provides-Extra: langgraph
Requires-Dist: langgraph>=0.2.0; extra == "langgraph"
Requires-Dist: langchain-core>=0.1.0; extra == "langgraph"
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == "openai"
Provides-Extra: autogen
Requires-Dist: autogen-agentchat>=0.4.0; extra == "autogen"
Requires-Dist: autogen-ext>=0.4.0; extra == "autogen"
Provides-Extra: all
Requires-Dist: openinference-instrumentation-crewai>=0.1.14; extra == "all"
Requires-Dist: openinference-instrumentation-openai>=0.1.18; extra == "all"
Requires-Dist: tiktoken>=0.5.0; extra == "all"
Requires-Dist: langchain-core>=0.1.0; extra == "all"
Requires-Dist: langgraph>=0.2.0; extra == "all"
Requires-Dist: openai>=1.0.0; extra == "all"
Requires-Dist: autogen-agentchat>=0.4.0; extra == "all"
Requires-Dist: autogen-ext>=0.4.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.4; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23; extra == "dev"

<div align="center">

# Visibe SDK for Python

<img src="https://raw.githubusercontent.com/Project140/visibe-python/main/analytics.jpeg" alt="Visibe Analytics" width="100%" />

**Observability for AI agents.** Track costs, performance, and errors across your entire AI stack — whether you're using CrewAI, LangChain, LangGraph, AutoGen, or direct OpenAI calls.

[![PyPI version](https://img.shields.io/pypi/v/visibe.svg)](https://pypi.python.org/pypi/visibe)
![Python](https://img.shields.io/badge/python-3.10+-blue.svg)

</div>

---

## 📦 Getting Started

### 1. Create an account

Sign up at **[app.visibe.ai](https://app.visibe.ai)** and create a project.

### 2. Get an API key

In your project, go to **Settings → API Keys** and generate a new key. It will look like `sk_live_...`.

### 3. Install the SDK

```bash
pip install visibe
```

### 4. Set your API key

```bash
export VISIBE_API_KEY=sk_live_your_api_key_here
```

Or pass it directly in code:

```python
visibe.init(api_key="sk_live_your_api_key_here")
```

Or load it from a `.env` file using [python-dotenv](https://pypi.org/project/python-dotenv/):

```bash
pip install python-dotenv
```

```python
from dotenv import load_dotenv
load_dotenv()  # loads VISIBE_API_KEY from .env

import visibe
visibe.init()
```

### 5. Instrument your app

```python
import visibe

visibe.init()
```

That's it. Every OpenAI, LangChain, LangGraph, CrewAI, AutoGen, and Bedrock client created after this call is automatically traced — no other code changes needed.

---

## 🧩 Integrations

| Framework | Auto (`visibe.init()`) | Manual |
|-----------|:-:|:-:|
| **OpenAI** | ✅ | ✅ |
| **LangChain** | ✅ | ✅ |
| **LangGraph** | ✅ | ✅ |
| **CrewAI** | ✅ | ✅ |
| **AutoGen** | ✅ | ✅ |
| **AWS Bedrock** | ✅ | ✅ |

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.

### OpenAI

```python
import visibe
from openai import OpenAI

visibe.init()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
# Automatically traced — cost, tokens, duration, and content captured.
```

### LangChain / LangGraph

```python
import visibe
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

visibe.init()

@tool
def get_weather(city: str) -> str:
    """Return the weather for a city."""
    return f"It is sunny in {city}."

llm = ChatOpenAI(model="gpt-4o-mini")
graph = create_react_agent(llm, [get_weather])

result = graph.invoke({"messages": [("user", "Your prompt here")]})
# Automatically traced — agent steps, LLM calls, and tool calls captured.
```

Dynamic pipe chains (`prompt | llm | parser`) are also automatically traced. Nested sub-graphs are tracked with hierarchical agent names.

### CrewAI

```python
import visibe
from crewai import Agent, Task, Crew

visibe.init()

architect = Agent(role="Plot Architect", goal="Design mystery plots", backstory="...")
designer = Agent(role="Character Designer", goal="Create characters", backstory="...")
narrator = Agent(role="Narrator", goal="Write the story", backstory="...")

task1 = Task(description="Create a plot outline", agent=architect, expected_output="...")
task2 = Task(description="Design characters", agent=designer, expected_output="...", context=[task1])
task3 = Task(description="Write the story", agent=narrator, expected_output="...", context=[task1, task2])

crew = Crew(agents=[architect, designer, narrator], tasks=[task1, task2, task3])
result = crew.kickoff()
# Single trace with all agents, LLM calls, and per-task cost breakdown.
```

Training and testing runs (`crew.train()`, `crew.test()`) are traced too.

### AutoGen

```python
import asyncio

import visibe
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

visibe.init()

async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    assistant = AssistantAgent("assistant", model_client=model_client)
    result = await assistant.run(task="Help me with this task")
    # Automatically traced.

asyncio.run(main())
```

### AWS Bedrock

```python
import visibe
import boto3

visibe.init()

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}]
)
# Automatically traced.
```

Supports `converse`, `converse_stream`, `invoke_model`, and `invoke_model_with_response_stream`. Works with all models available via Bedrock — Claude, Nova, Llama, Mistral, and more.

---

## ⚙️ Configuration

```python
import visibe

visibe.init(
    api_key="sk_live_abc123",       # or set VISIBE_API_KEY env var
    frameworks=["openai", "langgraph"],  # limit to specific frameworks
    content_limit=500,              # max chars for LLM content in traces
    debug=True,                     # enable debug logging
)
```

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `VISIBE_API_KEY` | Your API key (required) | — |
| `VISIBE_API_URL` | Override API endpoint | `https://api.visibe.ai` |
| `VISIBE_AUTO_INSTRUMENT` | Comma-separated frameworks to auto-instrument | All detected |
| `VISIBE_CONTENT_LIMIT` | Max chars for LLM/tool content in spans | `1000` |
| `VISIBE_DEBUG` | Enable debug logging (`1` to enable) | `0` |
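
For example, the same configuration as the `visibe.init(...)` call above, expressed purely through environment variables (values are illustrative):

```bash
export VISIBE_API_KEY=sk_live_abc123
export VISIBE_AUTO_INSTRUMENT=openai,langgraph
export VISIBE_CONTENT_LIMIT=500
export VISIBE_DEBUG=1
```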

---

## 📊 What Gets Tracked

| Metric | Description |
|--------|-------------|
| **Cost** | Total spend + per-agent and per-task cost breakdown |
| **Tokens** | Input/output tokens per LLM call |
| **Duration** | Total time + time per step |
| **Tools** | Which tools were used, duration, success/failure |
| **Errors** | When and where things failed |
| **Spans** | Full execution timeline with LLM calls, tool calls, and agent events |

---

## 🔧 Manual Instrumentation

For cases where you need explicit control — instrumenting a specific client, grouping calls into a named trace, or using Visibe without `init()`.

### Instrument a specific client

```python
from visibe import Visibe

tracer = Visibe(api_key="sk_live_abc123")
tracer.instrument(graph, name="my-agent")

result = graph.invoke({"messages": [("user", "Hello")]})
```

### Group multiple calls into one trace

```python
from visibe import Visibe

tracer = Visibe()

with tracer.track(client, name="my-conversation"):
    r1 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
    r2 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
# Both calls sent as one grouped trace.
```

### Remove instrumentation

```python
tracer.uninstrument(client)

# Or use as a context manager for automatic cleanup:
with tracer.instrument(graph, name="my-agent"):
    graph.invoke(...)
# Instrumentation removed automatically on exit.
```

---

## 📚 Documentation

- [OpenAI integration](docs/integrations/openai.md)
- [LangChain integration](docs/integrations/langchain.md)
- [CrewAI integration](docs/integrations/crewai.md)
- [AutoGen integration](docs/integrations/autogen.md)
- [AWS Bedrock integration](docs/integrations/bedrock.md)

---

## 🔗 Resources

- [visibe.ai](https://visibe.ai) — Product website
- [app.visibe.ai](https://app.visibe.ai) — Dashboard (sign up, manage API keys, view traces)
- [PyPI Package](https://pypi.python.org/pypi/visibe) — Latest version on PyPI

---

## 📃 License

MIT — see [LICENSE](LICENSE) for details.
