Metadata-Version: 2.4
Name: grid-cortex-client
Version: 0.2.117
Summary: Python client for GRID Cortex
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: >=3.8
Requires-Dist: httpx>=0.28.1
Requires-Dist: msgpack-numpy>=0.4.0
Requires-Dist: msgpack>=1.0.0
Requires-Dist: numpy<2
Requires-Dist: pillow>=10.0.0
Requires-Dist: requests>=2.20.0
Requires-Dist: rerun-sdk==0.22.1
Requires-Dist: websockets>=12.0
Description-Content-Type: text/markdown

# Grid Cortex Client

[![PyPI version](https://img.shields.io/pypi/v/grid-cortex-client.svg)](https://pypi.org/project/grid-cortex-client/)
[![Python](https://img.shields.io/pypi/pyversions/grid-cortex-client.svg)](https://pypi.org/project/grid-cortex-client/)

Python client for [GRID Cortex](https://cortex.generalrobotics.dev).

## Installation

```bash
pip install grid-cortex-client
```

## Quick Start

```python
from grid_cortex_client import CortexClient, ModelType

client = CortexClient(api_key="your-api-key")

# Monocular depth estimation
depth_map = client.run(ModelType.ZOEDEPTH, image_input="path/to/image.jpg")
```
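The return type of each model is documented per model; assuming `ZOEDEPTH` returns a floating-point NumPy depth map (one value per pixel), a common next step is to normalize it for visualization. The snippet below uses a random array as a stand-in for a real result:

```python
import numpy as np
from PIL import Image

def depth_to_image(depth_map: np.ndarray) -> Image.Image:
    """Normalize a float depth map to an 8-bit grayscale PIL image."""
    d = depth_map.astype(np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-8)  # scale to [0, 1]
    return Image.fromarray((d * 255).astype(np.uint8), mode="L")

# Stand-in for a real result from client.run(ModelType.ZOEDEPTH, ...)
fake_depth = np.random.rand(480, 640).astype(np.float32)
img = depth_to_image(fake_depth)
img.save("depth.png")
```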

## Configuration

Pass your API key and base URL directly, or set them as environment variables:

```bash
export GRID_CORTEX_API_KEY="your-api-key"
export GRID_CORTEX_BASE_URL="https://cortex-prod.generalrobotics.dev/cortex"
```

```python
# Explicit configuration
client = CortexClient(api_key="your-key", base_url="https://...")

# Or rely on environment variables
client = CortexClient()
```

## Input Formats

All image-based models accept multiple input types:

- **File path:** `"path/to/image.jpg"`
- **URL:** `"https://example.com/image.jpg"`
- **PIL Image:** `Image.open("image.jpg")`
- **NumPy array:** `np.ndarray` with shape `(H, W, 3)`
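The four forms above refer to the same underlying picture; the sketch below builds each representation locally with only Pillow and NumPy (no Cortex call is made), showing how the array and PIL forms interconvert:

```python
import numpy as np
from PIL import Image

# One sample image, expressed in each accepted representation.
array_input = np.zeros((480, 640, 3), dtype=np.uint8)  # NumPy array, shape (H, W, 3)
pil_input = Image.fromarray(array_input)               # PIL Image
path_input = "path/to/image.jpg"                       # file path (string)
url_input = "https://example.com/image.jpg"            # URL (string)

# Round-trip check: PIL -> NumPy preserves the (H, W, 3) layout.
round_trip = np.asarray(pil_input)
```

Note that PIL reports size as `(W, H)` while NumPy arrays are `(H, W, 3)`.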

## Async & Concurrent Inference

The async client lets you call multiple models concurrently, so total latency roughly equals that of the **slowest** model rather than the sum of all of them.

### Concurrent multi-model example

```python
import asyncio
import numpy as np
from PIL import Image
from grid_cortex_client import AsyncCortexClient, ModelType

async def run_perception_pipeline(image: np.ndarray):
    """Run depth, detection, and segmentation concurrently on the same frame."""
    async with AsyncCortexClient() as client:
        depth, detections, mask = await asyncio.gather(
            client.run(ModelType.ZOEDEPTH, image_input=image),
            client.run(ModelType.OWLV2, image_input=image, prompt="bottle"),
            client.run(ModelType.GSAM2, image_input=image, prompt="bottle"),
        )
    return depth, detections, mask

depth, detections, mask = asyncio.run(
    run_perception_pipeline(np.array(Image.open("scene.jpg")))
)
```
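The latency claim can be verified without the service itself: `asyncio.gather` runs awaitables concurrently, so wall-clock time tracks the longest one. The stand-in coroutines below simulate model calls with `asyncio.sleep`:

```python
import asyncio
import time

async def fake_model(name: str, latency: float) -> str:
    """Stand-in for a model call that takes `latency` seconds."""
    await asyncio.sleep(latency)
    return name

async def timed_gather() -> float:
    start = time.monotonic()
    # Three "models" with different simulated latencies run concurrently.
    results = await asyncio.gather(
        fake_model("depth", 0.3),
        fake_model("detection", 0.1),
        fake_model("segmentation", 0.2),
    )
    assert results == ["depth", "detection", "segmentation"]
    return time.monotonic() - start

elapsed = asyncio.run(timed_gather())
# Elapsed time tracks the slowest call (~0.3 s), not the 0.6 s sum.
```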

### High-throughput streaming with pub/sub

For continuous streams (e.g. camera feeds), the `CortexHubClient` uses WebSockets to overlap sending and receiving. While frame N's result is being returned, frame N+1 is already being processed server-side:

```python
from __future__ import annotations  # lets list[np.ndarray] work on Python 3.8

import asyncio
import numpy as np
from grid_cortex_client import CortexHubClient, ModelType

async def publisher(hub: CortexHubClient, frames: list[np.ndarray]):
    """Send frames as fast as possible."""
    for i, frame in enumerate(frames):
        await hub.publish(ModelType.ZOEDEPTH, request_id=f"frame_{i}", image_input=frame)

async def subscriber(hub: CortexHubClient, num_frames: int):
    """Receive results as they arrive."""
    count = 0
    async for result in hub.subscribe():
        if result.ok:
            print(f"{result.request_id}: shape={result.data.shape}")
        count += 1
        if count >= num_frames:
            break

async def main():
    frames = [np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)] * 100

    async with CortexHubClient() as hub:
        await asyncio.gather(
            publisher(hub, frames),
            subscriber(hub, len(frames)),
        )

asyncio.run(main())
```

## Documentation

For model-specific usage examples, parameter references, and detailed guides, see the full documentation:

**[docs.generalrobotics.dev/models/cortex](https://docs.generalrobotics.dev/models/cortex)**

## Requirements

- Python >= 3.8
