Metadata-Version: 2.4
Name: grid-cortex-client
Version: 0.2.83
Summary: Python client for the GRID Cortex ML inference API
Requires-Python: >=3.8
Requires-Dist: httpx>=0.28.1
Requires-Dist: msgpack-numpy>=0.4.0
Requires-Dist: msgpack>=1.0.0
Requires-Dist: numpy<2
Requires-Dist: pillow>=10.0.0
Requires-Dist: requests>=2.20.0
Requires-Dist: rerun-sdk==0.22.1
Requires-Dist: websockets>=12.0
Description-Content-Type: text/markdown

# Grid Cortex Client

Python client for GRID Cortex ML inference API.

## Quick Start

```python
from grid_cortex_client import CortexClient, ModelType

client = CortexClient()
depth_map = client.run(ModelType.ZOEDEPTH, image_input="path/to/image.jpg")
```

## Installation

```bash
cd grid-cortex-client
BUILD_VERSION=0.1 uv pip install -e .
```

## Configuration

Set environment variables (recommended):

```bash
export GRID_CORTEX_API_KEY="your-api-key"
export GRID_CORTEX_BASE_URL="https://cortex-stage.generalrobotics.dev/cortex"  # optional: defaults to the prod URL; set only to target a stage or local server
```

Or pass directly:

```python
client = CortexClient(api_key="your-key", base_url="https://...")
```

## Contributing

See [`CONTRIBUTING.md`](CONTRIBUTING.md) to add client support for a new model.

> **Note:** Deploy your model server-side first via [`ray-serve/models/EXAMPLE.md`](../ray-serve/models/EXAMPLE.md).


## Usage Examples

### Basic Inference

```python
from grid_cortex_client import CortexClient, ModelType

client = CortexClient()

# Depth estimation
depth = client.run(ModelType.ZOEDEPTH, image_input="image.jpg")

# Object detection
detections = client.run(ModelType.OWLV2, image_input="image.jpg", prompt="person")

# Segmentation
mask = client.run(ModelType.GSAM2, image_input="image.jpg", prompt="cat")
```

### Async Usage

```python
import asyncio
from grid_cortex_client import AsyncCortexClient, ModelType

async def main():
    async with AsyncCortexClient() as client:
        result = await client.run(ModelType.ZOEDEPTH, image_input="image.jpg")
    return result

depth = asyncio.run(main())
```
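Because the async client awaits each call, several independent inference requests can run concurrently with `asyncio.gather`. Below is a minimal sketch; `run_many` is a hypothetical helper, not part of the client API, and assumes only that `client` exposes an async `run(model, **kwargs)` method (as `AsyncCortexClient` does):

```python
import asyncio

async def run_many(client, jobs):
    """Run several (model, kwargs) inference jobs concurrently.

    `client` is anything with an async `run(model, **kwargs)` method,
    e.g. an AsyncCortexClient. Results come back in job order.
    """
    tasks = [client.run(model, **kwargs) for model, kwargs in jobs]
    return await asyncio.gather(*tasks)
```

For example, inside `async with AsyncCortexClient() as client:` you could issue a depth and a detection request for the same image in one round trip: `await run_many(client, [(ModelType.ZOEDEPTH, {"image_input": "image.jpg"}), (ModelType.OWLV2, {"image_input": "image.jpg", "prompt": "person"})])`.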

### Input Formats

All image-based models accept:
- File path: `"path/to/image.jpg"`
- URL: `"https://example.com/image.jpg"`
- PIL Image: `Image.open("image.jpg")`
- NumPy array: `np.ndarray` (H, W, 3)
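The conversion the client performs can be sketched as a single normalization function. This is an illustrative sketch, not the client's actual implementation; `to_rgb_array` is a hypothetical name:

```python
import io

import numpy as np
from PIL import Image

def to_rgb_array(image_input):
    """Normalize any accepted image format to an (H, W, 3) uint8 array.

    Hypothetical helper mirroring the formats listed above; the client
    performs an equivalent conversion internally.
    """
    if isinstance(image_input, np.ndarray):
        return image_input  # assumed to already be (H, W, 3)
    if isinstance(image_input, Image.Image):
        return np.asarray(image_input.convert("RGB"))
    if isinstance(image_input, str):
        if image_input.startswith(("http://", "https://")):
            import requests
            resp = requests.get(image_input, timeout=30)
            resp.raise_for_status()
            return np.asarray(Image.open(io.BytesIO(resp.content)).convert("RGB"))
        return np.asarray(Image.open(image_input).convert("RGB"))
    raise TypeError(f"Unsupported image input: {type(image_input)!r}")
```

Note that PIL images are converted to RGB first, so grayscale or RGBA inputs end up as three-channel arrays.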

### Standard `run()` Argument Names

`CortexClient.run(...)` forwards keyword arguments to the selected model wrapper. To keep the client API consistent across models, use these names when adding new models:

| Argument | Meaning | Notes |
|----------|---------|-------|
| `image_input` | Single RGB image | Prefer this over `image`, `rgb`, `img`, etc. |
| `left_image`, `right_image` | Stereo RGB pair | Use explicit left/right for stereo inputs. |
| `depth_image` | Depth map input | Use when the input is a depth image (not RGB). |
| `seg_image` | Segmentation mask input | Use when the input is a mask/segmentation image. |
| `point_cloud` | Point cloud input | Use for 3D point cloud inputs. |
| `camera_intrinsics` | Camera intrinsics | Prefer this over `K`. |
| `prompt` | Text prompt | Use when there is a single text prompt input. |
| `text`, `points`, `boxes`, `labels` | Prompt-specific inputs | Prefer specific names when multiple prompt types exist. |
| `box_threshold`, `text_threshold`, `nms_threshold` | Thresholds | Use these names for detection/segmentation thresholds. |
| `timeout` | Per-call timeout (seconds) | Optional; overrides the client default timeout. |
| `aux_args` | Optional extra parameters | Prefer a dict for additional model-specific knobs. |

If the server expects different field names, keep the public `run()` signature consistent and translate inside `preprocess()`.
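That translation can be as simple as a field-name map applied in `preprocess()`. The sketch below is hypothetical (the field names `rgb` and `K` are made-up server-side names for illustration):

```python
# Hypothetical mapping: public run() argument name -> server field name.
FIELD_MAP = {
    "image_input": "rgb",
    "camera_intrinsics": "K",
}

def preprocess(**kwargs):
    """Rename public run() arguments to the field names the server expects.

    Arguments not listed in FIELD_MAP pass through unchanged.
    """
    return {FIELD_MAP.get(name, name): value for name, value in kwargs.items()}
```

This keeps the public `run()` signature stable even if a server-side schema uses different names.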

## Logging

```python
import logging

# Reduce verbosity
logging.getLogger("grid_cortex_client").setLevel(logging.WARNING)
logging.getLogger("httpx").setLevel(logging.WARNING)
```
