# llms-full.txt
# amplitude-ai 1.2.1 — Detailed API Reference for LLM Agents
# Use this file for instrumentation guidance. See llms.txt for discovery.

## Core API

### AmplitudeAI(amplitude)
Initialize the SDK. Required entry point.
```python
from amplitude import Amplitude
from amplitude_ai import AmplitudeAI
amplitude = Amplitude("api-key")
ai = AmplitudeAI(amplitude=amplitude, content_mode="full")
```

### patch(amplitude)
Zero-code instrumentation. Monkey-patches all detected provider SDKs.
```python
from amplitude import Amplitude
from amplitude_ai import patch, unpatch

amplitude = Amplitude("api-key")
patch(amplitude=amplitude)  # instruments all detected provider SDKs in place
unpatch()                   # restores the original provider methods
```

### wrap(client, amplitude)
Convert an existing provider client into an instrumented wrapper.
```python
from amplitude_ai import wrap

# openai_client: any already-constructed provider client instance
instrumented = wrap(openai_client, amplitude=amplitude)
```

### ai.agent(agent_id, user_id)
Create a bound agent for user/session lineage.
```python
agent = ai.agent("my-agent", user_id="u1")
child = agent.child("sub-agent")
with agent.session(session_id="s1") as session:
    ...  # LLM calls tracked automatically
    with session.run_as(child) as cs:
        ...  # delegate to child agent
```

### @tool / @observe
Decorators for automatic function tracking.
```python
from amplitude_ai import tool, observe

@tool
def search_products(query: str) -> dict:
    return {'items': []}

@observe
async def handle_request(request):
    ...  # Emits [Agent] Span events with session lifecycle
```

### MockAmplitudeAI
Deterministic testing helper. Drop-in replacement for `AmplitudeAI` in tests.
```python
from amplitude_ai import MockAmplitudeAI
mock = MockAmplitudeAI()
mock.get_events("[Agent] AI Response")  # → list of events
mock.flush()  # → list of events
```
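
For example, a minimal test sketch; only `get_events()` and `flush()` are documented above, so the `agent()`/`session()` calls below assume the mock mirrors `AmplitudeAI`'s surface:

```python
from amplitude_ai import MockAmplitudeAI

def test_no_response_events_without_llm_calls():
    mock = MockAmplitudeAI()
    # Assumption: the mock exposes the same agent()/session() API as AmplitudeAI.
    agent = mock.agent("assistant", user_id="u1")
    with agent.session(session_id="s1"):
        pass  # exercise your instrumented code path here
    assert mock.get_events("[Agent] AI Response") == []
```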

## Provider Wrappers

| Provider    | Import                                 | Streaming | Tool Calls | TTFB | Cache Tokens |
|-------------|----------------------------------------|-----------|------------|------|--------------|
| OpenAI      | `from amplitude_ai import OpenAI`      | Yes       | Yes        | Yes  | Yes          |
| Anthropic   | `from amplitude_ai import Anthropic`   | Yes       | Yes        | Yes  | Yes          |
| Gemini      | `from amplitude_ai import Gemini`      | Yes       | No         | No   | No           |
| AzureOpenAI | `from amplitude_ai import AzureOpenAI` | Yes       | Yes        | Yes  | No           |
| Bedrock     | `from amplitude_ai import Bedrock`     | Yes       | Yes        | No   | No           |
| Mistral     | `from amplitude_ai import Mistral`     | Yes       | No         | No   | No           |
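
As a minimal sketch for Anthropic, assuming the wrapper takes the same `amplitude=` argument as the OpenAI wrapper in Pattern 3 below and otherwise passes through to the native `anthropic` client surface:

```python
from amplitude_ai import Anthropic

# Assumption: constructor mirrors the OpenAI wrapper shown in Pattern 3;
# messages.create is the native anthropic client call, tracked by the wrapper.
client = Anthropic(amplitude=ai, api_key="sk-ant-...")
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
```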

## Canonical Patterns

### 1. Zero-code (patch)
```python
from amplitude import Amplitude
from amplitude_ai import AmplitudeAI, patch
amplitude = Amplitude("api-key")
ai = AmplitudeAI(amplitude=amplitude)
patch(amplitude=amplitude)
```

### 2. Wrap existing client
```python
from amplitude_ai import wrap
instrumented = wrap(existing_client, amplitude=amplitude)
```

### 3. Provider wrapper
```python
from amplitude_ai import AmplitudeAI, OpenAI

# ai is the AmplitudeAI instance from Pattern 1
client = OpenAI(amplitude=ai, api_key="sk-...")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    amplitude_user_id="u1",  # per-call user attribution
)
```

### 4. Bound agent + session
```python
agent = ai.agent("assistant", user_id="u1")
with agent.session(session_id="s1") as session:
    # LLM calls made inside the session context are tracked automatically
    response = client.chat.completions.create(...)
```

### 5. FastAPI middleware
```python
from fastapi import FastAPI
from amplitude_ai.middleware import AmplitudeAIMiddleware

app = FastAPI()
app.add_middleware(AmplitudeAIMiddleware, ai=ai)
```

### 6. Multi-agent orchestration with session.run_as()
```python
orchestrator = ai.agent("orchestrator", user_id="u1")
researcher = orchestrator.child("researcher")
with orchestrator.session(session_id="s1") as s:
    # Provider calls inside run_as are automatically tagged with the child's agent_id
    with s.run_as(researcher) as cs:
        response = client.chat.completions.create(model="gpt-4o", messages=[...])
# run_as shares session_id, trace_id, turn counter; does NOT emit Session End
```

## MCP Tools

- `get_event_schema(event_type?)` — Return event schema and property definitions
- `get_integration_pattern(pattern_id?)` — Return canonical instrumentation patterns
- `validate_setup()` — Check required environment variables
- `suggest_instrumentation(framework?, provider?, content_tier?)` — Value-first setup guidance with content-tier and privacy defaults
- `validate_file(source, language?)` — Detect uninstrumented LLM call sites
- `search_docs(query, max_results?)` — Search README and API reference by keyword
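
These tools are served by the `amplitude-ai-mcp` command (see CLI below), so any MCP client can call them. A sketch using the official `mcp` Python SDK over stdio; the client-side code here is generic MCP usage, not part of amplitude-ai:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the amplitude-ai MCP server as a stdio subprocess
    params = StdioServerParameters(command="amplitude-ai-mcp")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call one of the tools listed above
            result = await session.call_tool("validate_setup", arguments={})
            print(result)

asyncio.run(main())
```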

## CLI

- `amplitude-ai-mcp` — Start the MCP server for AI coding agents
- `amplitude-ai-doctor [--json] [--no-mock-check]` — Validate environment and event pipeline
- `amplitude-ai-status [--json]` — Show SDK version, installed providers, and env config

## Common Errors

- "No events captured" → Ensure session context wraps your LLM calls
- "patch() drops events silently" → patch() requires active SessionContext
- "flush() timeout" → Call ai.flush() before process exit in serverless
- Import errors → Install optional deps: pip install 'amplitude-ai[openai]'
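
A minimal sketch for the serverless flush case above; the Lambda-style handler and `run_agent` are hypothetical, and only `ai.flush()` comes from this SDK:

```python
def handler(event, context):
    # Hypothetical AWS-Lambda-style entry point; run_agent is your own code.
    try:
        return run_agent(event)
    finally:
        ai.flush()  # drain pending events before the runtime freezes or exits
```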
