Metadata-Version: 2.4
Name: master-log-client
Version: 0.1.0
Summary: Dependency-free Python client for Master Log
Author: Master Log
License: MIT
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: System :: Logging
Requires-Python: >=3.9
Description-Content-Type: text/markdown

# Master Log Python Client

Small dependency-free Python client for sending logs to Master Log.

## Environment

```bash
export MASTER_LOG_API_KEY=dev-key
export MASTER_LOG_ENDPOINT=http://127.0.0.1:8000
```

`MASTER_LOG_ENDPOINT` may be the backend root, `/api/v1`, or `/api/v1/logs`.

Optional controls:

```bash
export MASTER_LOG_BATCH_SIZE=100
export MASTER_LOG_FLUSH_INTERVAL_SECONDS=1.0
export MASTER_LOG_MAX_QUEUE_SIZE=10000
export MASTER_LOG_DROP_WHEN_FULL=true
export MASTER_LOG_QUEUE_TIMEOUT_SECONDS=0.25
export MASTER_LOG_BACKPRESSURE=true
export MASTER_LOG_INITIAL_SEND_SECONDS_PER_LOG=0.005
export MASTER_LOG_MAX_ENQUEUE_SLEEP_SECONDS=0.25
export MASTER_LOG_MIN_REQUEST_INTERVAL_SECONDS=0.25
```

## Install For Local Use

From this directory:

```bash
python -m pip install -e .
```

## Simple Usage

```python
from master_log_client import flush, mlog

mlog("Telescope array online")
mlog("Dome slit wind threshold approaching", severity="warn", tags=["dome", "weather"])
flush()
```

## Configure In Code

Use `configure()` when you do not want to rely on environment variables. Values passed to
`configure()` take precedence over `MASTER_LOG_API_KEY` and `MASTER_LOG_ENDPOINT`.

```python
from master_log_client import configure, flush, mlog

configure(
    api_key="dev-key",
    endpoint="http://127.0.0.1:8000",
    batch_size=100,
    flush_interval_seconds=1.0,
)

mlog("Telescope array online", tags=["observatory", "telescope"])
flush()
```

## Background Batching

By default, the client starts a small multiprocessing worker on platforms with `fork` support when no custom transport is configured. Calls to `mlog()` enqueue events locally and wake the worker. The worker drains the queue, batches events, and sends them to `/api/v1/logs/batch`.

Use `min_request_interval_seconds` or `MASTER_LOG_MIN_REQUEST_INTERVAL_SECONDS` to enforce a minimum delay between actual HTTP request starts. This is separate from `flush_interval_seconds`: a full batch can wake the worker immediately, but the worker still waits until the minimum request interval has elapsed before sending the next request. The default is `0.0`, which disables this hard request-rate limit.

Asynchronous enqueue is intentionally not free by default. After each queued log, the calling thread sleeps for the worker's moving-average send time per accepted log. The first burst uses `initial_send_seconds_per_log`; successful batch sends then tune the value. This keeps accidental hot loops close to the service's observed ingest rate instead of letting them fill the local queue immediately. Set `backpressure=False` or `MASTER_LOG_BACKPRESSURE=false` only when the caller already applies its own rate limit.
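If the calling process already rate-limits its own log volume, the pacing sleep can be turned off. This is a sketch, assuming `configure()` accepts the same `backpressure`, `max_queue_size`, and `drop_when_full` options exposed via the environment variables above:

```python
from master_log_client import configure, mlog

# Disable the per-enqueue pacing sleep; the caller is now responsible
# for not outrunning the service's ingest rate. Bound the local queue
# so a runaway loop drops events instead of growing memory.
configure(
    api_key="dev-key",
    endpoint="http://127.0.0.1:8000",
    backpressure=False,
    max_queue_size=10000,
    drop_when_full=True,
)

for reading in (12.1, 12.4, 12.9):  # illustrative sensor values
    mlog("Sensor reading", reading, tags=["telemetry"])
```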

Short scripts should call `flush()` before exiting when they need confirmation that queued logs were sent. The default client also flushes during interpreter shutdown on a best-effort basis.

```python
from master_log_client import configure, flush, mlog, shutdown

configure(
    api_key="dev-key",
    endpoint="http://127.0.0.1:8000",
    min_request_interval_seconds=0.25,
)

mlog("Mirror cover opened", tags=["startup", "telescope"])
mlog("Mount tracking enabled", tags=["mount"])

result = flush(timeout_seconds=5)
if not result.ok:
    print("Master Log flush failed:", result.error)

shutdown()
```

Use `async_mode=False` when a script must block on every send, when testing with a custom transport, or when multiprocessing is not appropriate for the host process.

On platforms without `fork` support, such as Windows, the client defaults to synchronous sends. You can force `async_mode=True` or `MASTER_LOG_ASYNC=true` from a multiprocessing-safe application entrypoint.

```python
from master_log_client import MasterLogClient

client = MasterLogClient(
    api_key="dev-key",
    endpoint="http://127.0.0.1:8000",
    async_mode=False,
)

result = client.info("Synchronous log send", tags=["debugging"])
```
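On spawn-only platforms, forcing async mode belongs under a multiprocessing-safe entrypoint guard. A minimal sketch, assuming `configure()` accepts the same `async_mode` flag as `MasterLogClient`:

```python
from master_log_client import configure, flush, mlog


def main() -> None:
    # Force background batching even where fork is unavailable.
    # This is only safe when configuration happens inside a guarded
    # entrypoint, so child processes do not re-run module-level code.
    configure(
        api_key="dev-key",
        endpoint="http://127.0.0.1:8000",
        async_mode=True,
    )
    mlog("Guider loop started", tags=["guider"])
    flush()


if __name__ == "__main__":
    main()
```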

## Severity Helpers

```python
from master_log_client import debug, error, fatal, info, warn

info("Flat-field calibration completed", tags=["calibration", "ccd"])
warn("Seeing degraded", tags=["seeing", "atmosphere"], metadata={"fwhm_arcseconds": 3.4})
error("CCD cooling loop failed to settle", tags=["ccd", "camera"])
fatal("Pier collision guard triggered", tags=["mount", "safety"])
```

## Print-Like Behavior

`mlog` accepts multiple values like `print` and joins them with `sep`.

```python
from master_log_client import mlog

frame_id = 42
target = "M31"

mlog("Captured frame", frame_id, "for", target, tags=["imaging"])
```

## Optional Fields

```python
from master_log_client import mlog

mlog(
    "Short-lived satellite glint crossed the exposure path.",
    title="Temporary satellite glint alert",
    severity="info",
    tags=["transient", "satellite-pass"],
    ttl_seconds=120,
    metadata={"target": "M13", "duration_seconds": 18},
)
```

## Automatic Metadata

Every event includes a `python_client` metadata block with:

- Hostname
- Fully qualified domain name when available
- Process ID
- Process name
- Current working directory
- Python version
- Library version

Send failures are non-fatal by default. Each logging call returns a `MasterLogResult` with `ok`, `status_code`, `event_id`, `queued`, `accepted`, and `error` fields.
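A failure-tolerant caller can inspect those fields directly. A sketch using the synchronous client shown earlier; the field semantics are assumed from their names:

```python
from master_log_client import MasterLogClient

client = MasterLogClient(
    api_key="dev-key",
    endpoint="http://127.0.0.1:8000",
    async_mode=False,
)

result = client.info("Guide star reacquired", tags=["guider"])
if result.ok:
    print("stored as event", result.event_id)
else:
    # Failures are non-fatal: record the details locally and move on.
    print("send failed:", result.status_code, result.error)
```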
