Metadata-Version: 2.4
Name: visibe
Version: 0.1.11
Summary: AI Agent Observability Platform - Track CrewAI, LangChain, LangGraph, and more
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.31.0
Provides-Extra: crewai
Requires-Dist: openinference-instrumentation-crewai>=0.1.14; extra == "crewai"
Requires-Dist: openinference-instrumentation-openai>=0.1.18; extra == "crewai"
Requires-Dist: opentelemetry-api~=1.34.0; extra == "crewai"
Requires-Dist: opentelemetry-sdk~=1.34.0; extra == "crewai"
Provides-Extra: tiktoken
Requires-Dist: tiktoken>=0.5.0; extra == "tiktoken"
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.1.0; extra == "langchain"
Provides-Extra: langgraph
Requires-Dist: langgraph>=0.2.0; extra == "langgraph"
Requires-Dist: langchain-core>=0.1.0; extra == "langgraph"
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == "openai"
Provides-Extra: autogen
Requires-Dist: autogen-agentchat>=0.4.0; extra == "autogen"
Requires-Dist: autogen-ext>=0.4.0; extra == "autogen"
Provides-Extra: all
Requires-Dist: openinference-instrumentation-crewai>=0.1.14; extra == "all"
Requires-Dist: openinference-instrumentation-openai>=0.1.18; extra == "all"
Requires-Dist: tiktoken>=0.5.0; extra == "all"
Requires-Dist: langchain-core>=0.1.0; extra == "all"
Requires-Dist: langgraph>=0.2.0; extra == "all"
Requires-Dist: openai>=1.0.0; extra == "all"
Requires-Dist: autogen-agentchat>=0.4.0; extra == "all"
Requires-Dist: autogen-ext>=0.4.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.4; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23; extra == "dev"

<img src="https://raw.githubusercontent.com/Project140/visibe-python/main/analytics.jpeg" alt="Visibe Analytics" width="100%" />

<div align="center">

# Visibe SDK for Python

**Observability for AI agents.** Track costs, performance, and errors across your entire AI stack — whether you're using CrewAI, LangChain, LangGraph, AutoGen, Anthropic, or direct OpenAI calls.

[![PyPI version](https://img.shields.io/pypi/v/visibe.svg)](https://pypi.python.org/pypi/visibe)
![Python](https://img.shields.io/badge/python-3.10+-blue.svg)

</div>

---

## 🚀 Quick Start

```bash
pip install visibe
```

Get your API key at [app.visibe.ai](https://app.visibe.ai) → Settings → API Keys, then add one line to your app:

```python
import visibe

visibe.init(api_key="sk_live_your_key_here")
```

That's it. Every OpenAI, Anthropic, LangChain, LangGraph, CrewAI, AutoGen, and Bedrock call is automatically traced from this point on — no wrappers, no config changes.

The API key can also be set via the `VISIBE_API_KEY` environment variable so you don't need to pass it in code:

```bash
export VISIBE_API_KEY=sk_live_your_key_here
```

```python
import visibe
visibe.init()
```

---

## 🧩 Supported Frameworks

- [OpenAI](https://github.com/openai/openai-python)
- [Anthropic](https://github.com/anthropics/anthropic-sdk-python)
- [LangChain](https://github.com/langchain-ai/langchain)
- [LangGraph](https://github.com/langchain-ai/langgraph)
- [CrewAI](https://github.com/crewAIInc/crewAI)
- [AutoGen](https://github.com/microsoft/autogen)
- [AWS Bedrock](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime.html)

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
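These providers are traced through the standard OpenAI client by pointing `base_url` at their endpoint. A sketch using Groq (the endpoint URL, model name, and key placeholder are illustrative):

```python
import visibe
from openai import OpenAI

visibe.init()

# Point the standard OpenAI client at an OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="your_groq_key",
)
response = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "Hello!"}],
)
# Traced like any other OpenAI call.
```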

---

## ⚙️ Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `api_key` | `str` | — | Your Visibe API key. Falls back to `VISIBE_API_KEY` env var |
| `redact_content` | `bool` | `False` | Omit prompt and completion text from all traces. Only metadata is sent (tokens, cost, duration, model, errors). |
| `frameworks` | `list[str]` | All detected | Limit auto-instrumentation to specific frameworks (e.g. `["openai", "langgraph"]`) |
| `debug` | `bool` | `False` | Enable debug logging |
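
Putting the options together, a locked-down setup might look like this (values are illustrative):

```python
import visibe

visibe.init(
    api_key="sk_live_your_key_here",     # or set VISIBE_API_KEY
    redact_content=True,                 # metadata only; no prompt/completion text
    frameworks=["openai", "langgraph"],  # skip auto-detection of other frameworks
    debug=False,
)
```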

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `VISIBE_API_KEY` | Your API key (required) | — |
| `VISIBE_API_URL` | Override API endpoint | `https://api.visibe.ai` |
| `VISIBE_AUTO_INSTRUMENT` | Comma-separated frameworks to auto-instrument | All detected |
| `VISIBE_REDACT_CONTENT` | Omit prompt/completion text from traces (`1` or `true`) | `false` |
| `VISIBE_DEBUG` | Enable debug logging (`1` to enable) | `0` |
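
The same configuration expressed as environment variables (values illustrative; `VISIBE_API_URL` shown at its default):

```bash
export VISIBE_API_KEY=sk_live_your_key_here
export VISIBE_API_URL=https://api.visibe.ai
export VISIBE_AUTO_INSTRUMENT=openai,langgraph
export VISIBE_REDACT_CONTENT=true
export VISIBE_DEBUG=1
```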

---

## 📊 What Gets Tracked

| Metric | Sent when `redact_content=True` |
|--------|:---:|
| Cost, tokens, duration | ✅ |
| Model & provider | ✅ |
| Tool calls (name, duration, success/failure) | ✅ |
| Errors (type, message) | ✅ |
| Full execution timeline (spans) | ✅ |
| Per-agent and per-task cost breakdown | ✅ |
| Prompt text | ❌ omitted |
| Completion text | ❌ omitted |

When `redact_content=True`, prompt and completion text never leave your environment. You retain full observability over costs, performance, and errors.

---

## 📖 Integration Examples

### OpenAI

```python
import visibe
from openai import OpenAI

visibe.init()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
# Automatically traced — cost, tokens, duration, and content captured.
```

### Anthropic

```python
import visibe
import anthropic

visibe.init()

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
# Automatically traced. Streaming (stream=True and .stream()) also supported.
```

### LangChain / LangGraph

```python
import visibe
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

visibe.init()

llm = ChatOpenAI(model="gpt-4o-mini")
tools = []  # your LangChain tools (e.g. @tool-decorated functions)
graph = create_react_agent(llm, tools)

result = graph.invoke({"messages": [("user", "Your prompt here")]})
# Automatically traced — agent steps, LLM calls, and tool calls captured.
```

Dynamic pipe chains (`prompt | llm | parser`) are also automatically traced. Nested sub-graphs are tracked with hierarchical agent names.
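
A traced pipe chain might look like this (prompt text and model are illustrative):

```python
import visibe
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

visibe.init()

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

chain = prompt | llm | parser
result = chain.invoke({"text": "Observability matters."})
# Automatically traced.
```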

### CrewAI

```python
import visibe
from crewai import Agent, Task, Crew

visibe.init()

architect = Agent(role="Plot Architect", goal="Design mystery plots", backstory="...")
designer = Agent(role="Character Designer", goal="Create characters", backstory="...")
narrator = Agent(role="Narrator", goal="Write the story", backstory="...")

task1 = Task(description="Create a plot outline", agent=architect, expected_output="...")
task2 = Task(description="Design characters", agent=designer, expected_output="...", context=[task1])
task3 = Task(description="Write the story", agent=narrator, expected_output="...", context=[task1, task2])

crew = Crew(agents=[architect, designer, narrator], tasks=[task1, task2, task3])
result = crew.kickoff()
# Single trace with all agents, LLM calls, and per-task cost breakdown.
```

Training and testing runs (`crew.train()`, `crew.test()`) are traced too.

### AutoGen

```python
import asyncio

import visibe
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent

visibe.init()

async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    assistant = AssistantAgent("assistant", model_client=model_client)
    result = await assistant.run(task="Help me with this task")
    # Automatically traced.

asyncio.run(main())
```

### AWS Bedrock

```python
import visibe
import boto3

visibe.init()

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}]
)
# Automatically traced.
```

Supports `converse`, `converse_stream`, `invoke_model`, and `invoke_model_with_response_stream`. Works with all models available via Bedrock — Claude, Nova, Llama, Mistral, and more.
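
Unlike `converse`, `invoke_model` takes a provider-specific JSON body. A sketch for an Anthropic model, reusing the `bedrock` client from the example above (field values are illustrative):

```python
import json

# invoke_model expects a provider-specific JSON body rather than the
# unified message format used by converse. Anthropic's Bedrock schema:
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello!"}],
})

# With the bedrock client from the example above:
# response = bedrock.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     body=body,
# )
# Automatically traced.
```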

---

## 📖 API Reference

### `visibe.init()`

Call once at the top of your app, before creating any clients. Returns a `Visibe` instance.

```python
import visibe

visibe.init(api_key="sk_live_abc123")
```

### `track()`

Groups multiple LLM calls into a single named trace.

```python
from visibe import Visibe

tracer = Visibe()

with tracer.track(client, name="my-conversation"):
    r1 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
    r2 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
# Both calls appear as spans under one trace.
```

### `instrument()` / `uninstrument()`

Manually instrument a specific client instance instead of relying on auto-instrumentation.

```python
from visibe import Visibe

tracer = Visibe(api_key="sk_live_abc123")
tracer.instrument(graph, name="my-agent")

result = graph.invoke({"messages": [("user", "Hello")]})

tracer.uninstrument(graph)

# Or use as a context manager for automatic cleanup:
with tracer.instrument(graph, name="my-agent"):
    graph.invoke(...)
# Instrumentation removed automatically on exit.
```

---

## 🛡️ Safety Guarantees

- **No crashes** — every SDK operation is wrapped in try/except
- **No added latency** — all backend calls are fire-and-forget, off the hot path
- **No key, no problem** — SDK is silently a no-op when no API key is set

No data is sold or shared with third parties. Content is used solely to display traces in your dashboard.

---

## 🔗 Resources

- [visibe.ai](https://visibe.ai) — Product website
- [app.visibe.ai](https://app.visibe.ai) — Dashboard
- [PyPI Package](https://pypi.python.org/pypi/visibe)

---

## 📃 License

MIT — see [LICENSE](LICENSE) for details.
