Metadata-Version: 2.4
Name: logwick
Version: 1.0.0
Summary: Audit logging for AI agents — official Python SDK
Home-page: https://logwick.io
Author: Logwick
Author-email: Logwick <hello@logwick.io>
License: MIT
Project-URL: Homepage, https://logwick.io
Project-URL: Documentation, https://logwick.io/docs
Project-URL: Repository, https://github.com/logwickio/logwick-python
Keywords: logwick,ai,logging,audit,llm,agents,observability
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Logging
Classifier: Intended Audience :: Developers
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Dynamic: author
Dynamic: home-page
Dynamic: requires-python

# Logwick Python SDK

The official Python SDK for [Logwick](https://logwick.io) — audit logging for AI agents.

## Installation

```bash
pip install logwick
```

## Quick start

```python
import logwick

logwick.init(api_key="sk-lw-your-key")

# Fire and forget — never blocks your code
logwick.fire({
    "agent": "gpt-4o",
    "action": "email_draft",
    "status": "success",
    "input": user_prompt,
    "output": result,
    "tokens": 312,
    "user": user_email,
})
```
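"Fire and forget" here means `fire` returns immediately and ships the event in the background. As a rough illustration of that pattern (a standalone sketch in plain Python, not Logwick's actual internals), a non-blocking logger can queue events and drain them on a daemon thread:

```python
import queue
import threading


class FireAndForgetLogger:
    """Illustrative sketch: enqueue events, ship them on a background thread."""

    def __init__(self):
        self.shipped = []  # stands in for delivery to an ingest endpoint
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def fire(self, event: dict) -> None:
        # Enqueue and return immediately -- the caller never blocks on I/O.
        self._queue.put(event)

    def _drain(self) -> None:
        while True:
            event = self._queue.get()
            self._send(event)
            self._queue.task_done()

    def _send(self, event: dict) -> None:
        # A real client would POST the event over HTTP; a list stands in here.
        self.shipped.append(event)
```

Because the worker is a daemon thread, a crash or slow network in `_send` can never hang your program on exit.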

## OpenAI wrapper

Automatically logs input, output, tokens, and latency:

```python
from logwick import LogwickClient

lw = LogwickClient(api_key="sk-lw-your-key")

result = lw.openai(
    lambda: client.chat.completions.create(model="gpt-4o", messages=messages),
    {"action": "email_draft", "user": user_email}
)
# result is the normal OpenAI response — nothing changes in your code
```
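Conceptually, a wrapper like this takes a zero-argument callable, times it, and records the metadata alongside the outcome. A framework-free sketch of the idea (illustrative names, not the actual Logwick implementation):

```python
import time


def logged_call(fn, metadata, log_fn):
    """Run fn(), time it, and hand metadata plus latency to log_fn.

    A logging failure must never break the wrapped call, so exceptions
    raised by log_fn are swallowed. All names here are illustrative.
    """
    start = time.perf_counter()
    try:
        result = fn()
        status = "success"
    except Exception:
        status = "error"
        raise
    finally:
        latency_ms = (time.perf_counter() - start) * 1000
        try:
            log_fn({**metadata, "status": status, "latency_ms": latency_ms})
        except Exception:
            pass  # never let logging break the caller
    return result
```

The caller still gets the wrapped function's return value unchanged, which is why the provider examples above can drop in without touching surrounding code.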

## Anthropic / Claude wrapper

```python
result = lw.anthropic(
    lambda: client.messages.create(   # client = anthropic.Anthropic()
        model="claude-3-5-sonnet-20241022",
        messages=messages,
        max_tokens=1024
    ),
    {"action": "document_review", "user": user_email}
)
```

## Google Gemini wrapper

```python
result = lw.gemini(
    lambda: model.generate_content(prompt),
    {"action": "data_analysis", "user": user_email}
)
```

## LangChain integration

One handler logs every LLM call in your chain automatically:

```python
handler = lw.langchain_handler(user="ops@acme.com")
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
# Every call in the chain is now logged automatically
```
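The handler works by implementing LangChain's callback hooks and firing a log each time an LLM call finishes. A framework-free sketch of that callback shape (hypothetical names, not the real handler):

```python
class LoggingHandler:
    """Mimics the shape of a LangChain callback handler, without the framework.

    on_llm_start remembers the prompts; on_llm_end pairs them with the
    response and emits one log event per LLM call.
    """

    def __init__(self, log_fn, user):
        self.log_fn = log_fn
        self.user = user
        self._prompts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        self._prompts = prompts  # keep inputs for the matching end event

    def on_llm_end(self, response, **kwargs):
        self.log_fn({
            "user": self.user,
            "input": self._prompts,
            "output": response,
            "status": "success",
        })
```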

## Client options

```python
from logwick import LogwickClient

lw = LogwickClient(
    api_key="sk-lw-your-key",   # required
    silent=False,                # set False to print warnings (default: True, stay silent)
    tags=["production"],         # default tags added to every log
)
```

## Using the global client

```python
import logwick

logwick.init(api_key="sk-lw-your-key", tags=["production"])

# Use anywhere in your codebase without passing the client around
logwick.fire({"agent": "gpt-4o", "action": "summarize", "status": "success", ...})
```

## Get your API key

Sign up free at [logwick.io](https://logwick.io) — 5,000 logs/month free, no credit card required.
