Metadata-Version: 2.4
Name: divami-labs-experiential-ui-agent
Version: 0.1.1
Summary: AI agent that generates DashboardSpec JSON from any context data, powered by pydantic-ai
Author-email: Divami Design Labs <hello@divami.com>
License: MIT
Keywords: ai,dashboard,dynamic-ui,llm,pydantic-ai
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.11
Requires-Dist: pydantic-ai-slim>=1.0.0
Provides-Extra: all
Requires-Dist: google-genai>=1.0.0; extra == 'all'
Requires-Dist: logfire>=3.0.0; extra == 'all'
Requires-Dist: openai>=1.0.0; extra == 'all'
Provides-Extra: google
Requires-Dist: google-genai>=1.0.0; extra == 'google'
Provides-Extra: logfire
Requires-Dist: logfire>=3.0.0; extra == 'logfire'
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == 'openai'
Description-Content-Type: text/markdown

# divami-labs-experiential-ui-agent

AI-powered dashboard generation package. Pass any user context dict and a role string; get back a fully structured `DashboardSpec` JSON ready for the React frontend to render — no transformation needed.

Built on [pydantic-ai](https://github.com/pydantic/pydantic-ai). Bring your own model — Google, OpenAI, or any pydantic-ai compatible provider.

---

## Structure

```
package/
├── src/
│   └── dynamic_ui/
│       ├── __init__.py     # Public API surface
│       ├── agent.py        # Agent setup + generate_dashboard() entry point
│       ├── deps.py         # AgentDeps runtime dependency container
│       ├── models.py       # DashboardSpec Pydantic schema (mirrors frontend types.ts)
│       ├── prompt.py       # System prompt for the final assembly step
│       ├── telemetry.py    # Optional Logfire integration
│       └── tools.py        # Six reasoning tools (pipeline steps 1–6)
└── pyproject.toml
```

---

## Installation

Pick only the provider extras you need:

```bash
# Google Gemini
pip install "divami-labs-experiential-ui-agent[google]"

# OpenAI
pip install "divami-labs-experiential-ui-agent[openai]"

# Both + Logfire observability
pip install "divami-labs-experiential-ui-agent[all]"
```

| Provider | Extra | Env var required |
|----------|-------|-----------------|
| Google Gemini | `[google]` | `GOOGLE_API_KEY` |
| OpenAI | `[openai]` | `OPENAI_API_KEY` |
| Logfire observability | `[logfire]` | `LOGFIRE_TOKEN` |
| Everything | `[all]` | — |
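
Each provider reads its API key from the environment at call time. A minimal shell setup looks like this (the values are placeholders; substitute your real keys):

```shell
# Placeholder values — replace with your actual provider keys.
export GOOGLE_API_KEY="your-google-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```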

---

## Quick start

```python
from dynamic_ui import generate_dashboard, DashboardSpec

# generate_dashboard is a coroutine; await it inside an async function
spec: DashboardSpec = await generate_dashboard(
    model="google-gla:gemini-2.0-flash",   # or "openai:gpt-4o", etc.
    context={
        "sales": [...],
        "emails": [...],
    },
    user_role="VP of Sales",
    user_persona=(
        "Prefers high-level KPI cards before detailed charts. "
        "Focuses on pipeline health and quota attainment. "
        "Needs risk flags surfaced at the top of the dashboard."
    ),
    user_prompt="What are my top priorities today?",  # optional
)

# Pass directly to the frontend <DashboardRenderer> — no transformation needed.
payload = spec.model_dump()
```

### With Logfire observability (optional)

Call once at application startup before any agent run:

```python
from dynamic_ui import configure_logfire

configure_logfire(service_name="my-app")
```

The token is read from the `LOGFIRE_TOKEN` environment variable automatically. The call is safe even when `logfire` is not installed: it logs a warning and continues.

---

## Usage in a FastAPI / async server

```python
import asyncio
from dynamic_ui import generate_dashboard, DashboardSpec

# FastAPI example
from fastapi import FastAPI
app = FastAPI()

@app.post("/dashboard")
async def create_dashboard(role: str, context: dict, persona: str | None = None) -> dict:
    spec: DashboardSpec = await generate_dashboard(
        model="openai:gpt-4o",
        context=context,
        user_role=role,
        user_persona=persona,
    )
    return spec.model_dump()

# Plain asyncio script
if __name__ == "__main__":
    async def main():
        spec = await generate_dashboard(
            model="google-gla:gemini-2.0-flash",
            context={"orders": [{"id": 1, "amount": 250}]},
            user_role="Operations Manager",
            user_persona="Needs cost and fulfilment KPIs front and centre.",
        )
        import json
        print(json.dumps(spec.model_dump(), indent=2))

    asyncio.run(main())
```

### Accessing individual schema types

All Pydantic models and chart-type constants are re-exported from the top-level package:

```python
from dynamic_ui import (
    DashboardSpec,
    DashboardRow,
    CardWidget,
    ChartWidget,
    TableWidget,
    ListWidget,
    Metric,
    SeriesConfig,
    ListItem,
    DataRecord,
    CHART_BAR,
    CHART_LINE,
    CHART_PIE,
    # ... see __init__.py for full list
)
```

### Advanced: supply your own `AgentDeps`

`AgentDeps` is the runtime dependency container injected into every tool call. All fields available on `AgentDeps` are also first-class parameters of `generate_dashboard()`:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | `str \| Model` | ✅ | pydantic-ai model string, e.g. `"openai:gpt-4o"` |
| `context` | `dict` | ✅ | The user's raw data (sales records, emails, metrics, …) |
| `user_role` | `str` | — | Job title / role, e.g. `"VP of Sales"` |
| `user_persona` | `str \| None` | — | Richer behavioural description — communication style, preferred detail level, key KPIs, pain points |
| `user_prompt` | `str \| None` | — | Explicit question the dashboard should answer |
| `extra_context` | `dict \| None` | — | Tenant info, feature flags, locale, theme palette, etc. |

```python
from dynamic_ui import AgentDeps, generate_dashboard

# Using generate_dashboard directly (recommended)
spec = await generate_dashboard(
    model="openai:gpt-4o",
    context={...},
    user_role="CFO",
    user_persona=(
        "Needs a single-screen P&L summary. "
        "Prefers bar charts over tables. Risk items must appear first."
    ),
    user_prompt="How are we tracking against Q2 targets?",
)

# Or build AgentDeps manually for advanced use
deps = AgentDeps(
    context={...},
    user_role="CFO",
    user_persona="Needs a single-screen P&L summary.",
)
```

---

## Agent pipeline

Each `generate_dashboard()` call runs six reasoning steps in order via pydantic-ai tools:

| Step | Tool | Purpose |
|------|------|---------|
| 1 | `analyse_persona` | Profile the user: who they are and what they need today |
| 2 | `analyse_data` | Scan `CONTEXT_JSON` for signals and anomalies |
| 3 | `plan_interactions` | Assign channel keys, decide controller/follower pairs |
| 4 | `plan_layout` | Design the section/grid skeleton |
| 5 | `plan_navigation` | Decide which widgets open drilldown views |
| 6 | `generate_dashboard_json` | Assemble and validate the final `DashboardSpec` |
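
Conceptually, the orchestration reduces to a sequential fold over the six tools, each step reading the accumulated state and appending its output. The sketch below is illustrative only: the real steps are LLM tool calls made by the agent, and `run_pipeline` with its plain state dict is invented for this example.

```python
# The six pipeline steps, in the order the agent invokes them.
PIPELINE = [
    "analyse_persona",
    "analyse_data",
    "plan_interactions",
    "plan_layout",
    "plan_navigation",
    "generate_dashboard_json",
]

def run_pipeline(context: dict) -> dict:
    """Toy stand-in for the agent loop: each step sees all prior outputs."""
    state: dict = {"context": context}
    for step in PIPELINE:
        # In the real agent this is an LLM tool call; here we just record it.
        state[step] = f"<output of {step}>"
    return state
```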

---

## `DashboardSpec` schema

The output mirrors `frontend/src/types/index.ts` exactly. Key types:

```
DashboardSpec
└── sections: DashboardRow[]
    └── widgets: Widget[]           # CardWidget | ChartWidget | TableWidget | ListWidget
```

Widget placement uses CSS grid — set `colSpan` / `rowSpan`; declare `cols` (and `rows` when using `rowSpan`) on the section.

Interactive widgets communicate via channel keys:
- Controller chart sets `broadcastOn: "ch_<noun>"`
- Follower widgets set `listenTo: "ch_<noun>"` and `reactionMode: "highlight" | "filter"`
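
For illustration, a hand-written payload fragment wiring a controller chart to a follower table might look like the sketch below. These are plain dicts; the `"type"` values and the exact field set are assumptions based on the widget types above, not output produced by the package.

```python
# Controller: broadcasts the selected region on a shared channel key.
controller = {
    "type": "chart",          # assumed discriminator value
    "colSpan": 2,
    "broadcastOn": "ch_region",
}

# Follower: listens on the same channel and filters its rows to match.
follower = {
    "type": "table",          # assumed discriminator value
    "colSpan": 2,
    "listenTo": "ch_region",
    "reactionMode": "filter",  # or "highlight"
}

# Section declares the grid width the widgets' colSpan values fill.
section = {"cols": 4, "widgets": [controller, follower]}
```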

---

## Development setup

```bash
cd backend/package
uv sync
```

---

## Publishing to PyPI

### Prerequisites

1. Create an account on [PyPI](https://pypi.org) (and optionally [TestPyPI](https://test.pypi.org) for staging).
2. Generate an **API token** at `Account Settings → API tokens` on PyPI.
3. Install `uv` if not already available:
   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

### Step 1 — bump the version

Edit `pyproject.toml` and increment the `version` field (follow [Semantic Versioning](https://semver.org/)):

```toml
[project]
version = "0.2.0"   # was 0.1.1
```

### Step 2 — build the distribution artefacts

```bash
cd backend/package
uv build
```

This produces two files inside `dist/`:
- `divami_labs_experiential_ui_agent-<version>-py3-none-any.whl` — wheel (fast install)
- `divami_labs_experiential_ui_agent-<version>.tar.gz` — source distribution

### Step 3 — (optional) smoke-test on TestPyPI first

```bash
# Upload to the test index
uv publish --publish-url https://test.pypi.org/legacy/ --token pypi-<YOUR_TEST_TOKEN>

# Install from TestPyPI to verify
pip install --index-url https://test.pypi.org/simple/ \
            --extra-index-url https://pypi.org/simple/ \
            "divami-labs-experiential-ui-agent[google]"
```

### Step 4 — publish to PyPI

```bash
uv publish --token pypi-<YOUR_PYPI_TOKEN>
```

Or set the `UV_PUBLISH_TOKEN` environment variable once so you don't have to pass the token on every command:

```bash
export UV_PUBLISH_TOKEN="pypi-<YOUR_PYPI_TOKEN>"
uv publish
```

> **Tip:** store the token in your CI/CD secret store (GitHub Actions `PYPI_API_TOKEN` secret, etc.) and never commit it to the repository.

### Step 5 — verify the release

```bash
pip install "divami-labs-experiential-ui-agent[google]" --upgrade
python -c "from dynamic_ui import generate_dashboard; print('OK')"
```

### CI/CD (GitHub Actions example)

Create `.github/workflows/publish.yml`:

```yaml
name: Publish to PyPI

on:
  push:
    tags:
      - "v*"          # trigger on version tags, e.g. v0.2.0

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v5

      - name: Build
        run: uv build
        working-directory: backend/package

      - name: Publish
        run: uv publish
        working-directory: backend/package
        env:
          UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
```

Add `PYPI_API_TOKEN` as a repository secret in **Settings → Secrets and variables → Actions**.
