Metadata-Version: 2.4
Name: caducus
Version: 0.1.0
Summary: CLI for collecting ops events and running reinforcement-memory analysis.
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: PyYAML>=6.0
Requires-Dist: virtuus>=0.4.0
Provides-Extra: dev
Requires-Dist: behave>=1.2.6; extra == "dev"
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: ruff>=0.4.0; extra == "dev"
Requires-Dist: black>=24.0; extra == "dev"
Requires-Dist: build>=1.2.2; extra == "dev"
Requires-Dist: python-semantic-release>=10.5.3; extra == "dev"
Provides-Extra: reinforcement-memory
Requires-Dist: biblicus[reinforcement-memory]>=1.8.0; extra == "reinforcement-memory"

# Caducus

Caducus helps operations teams understand what is going wrong right now across logs, alerts, dead-letter queues, and other operational event streams.

It is a CLI-first system for collecting timestamped operational events, normalizing them into a canonical schema, storing them as plain JSON, and using semantic reinforcement memory to surface recurring patterns, fresh anomalies, and just-in-time context during incidents.

## Why Caducus Exists

Operational signals are scattered across many systems:

- CloudWatch logs
- alerting systems
- dead-letter queues
- notifications and incident messages

Each source captures part of the truth, but not the whole picture. Caducus is intended to bring those signals together into one stream of timestamped event records that can be analyzed as a living memory of operational behavior.

The goal is not just to search historical data. The goal is to create a radar for what looks unusual, active, or important now.

## How It Works

Caducus is designed around a simple flow:

1. Collect operational events from source systems.
2. Normalize them into canonical event records with text, timestamps, source identity, and generalized metadata.
3. Persist them as JSON files in a Virtuus-backed folder structure.
4. Analyze event groups using Biblicus reinforcement memory.
5. Surface patterns, anomalies, and context for operators.

This keeps the system inspectable and composable. The underlying data lives in plain folders, not inside a black-box database.
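
As a purely illustrative sketch of step 2, one stored event might look like the following Python literal. Every field name here is an assumption for illustration, not the final Caducus schema:

```python
# Hypothetical canonical event record; field names are illustrative only.
event = {
    "text": "Receiving block blk_-123 src: /10.0.0.5",    # normalized event text
    "timestamp": "2024-06-01T12:34:56Z",                   # when the event occurred
    "source": "cloudwatch:/aws/lambda/ingest",             # source identity
    "metadata": {"level": "INFO", "region": "us-east-1"},  # generalized metadata
}
```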

## CLI-First MVP

The initial product is a CLI utility.

The MVP is focused on a coherent end-to-end flow:

- collect events from operational sources
- store them in a canonical schema
- run analysis over selected event groups
- inspect recent events and analysis outputs from the command line

Initial source areas for the MVP are:

- CloudWatch Logs
- SQS dead-letter queues
- one alert source

Configuration is intended to be layered: YAML files supply defaults, environment variables override them, and CLI flags override both. Caducus will own collection and orchestration while allowing Biblicus-related analysis settings to flow through the Caducus configuration tree without duplicating Biblicus's schema.
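
A minimal sketch of that layering is below. The `CADUCUS_` prefix, the flat key handling, and the helper itself are assumptions for illustration, not the real configuration surface:

```python
# Sketch of layered configuration: YAML file < environment variables < CLI flags.
# All names here are illustrative assumptions, not Caducus's actual config API.
import os

import yaml  # PyYAML, a declared dependency


def load_config(path: str, cli_overrides: dict) -> dict:
    """Merge settings so that file values lose to env vars, which lose to CLI flags."""
    with open(path) as fh:
        config = yaml.safe_load(fh) or {}
    # Environment variables override file values (hypothetical CADUCUS_ prefix).
    for key, value in os.environ.items():
        if key.startswith("CADUCUS_"):
            config[key[len("CADUCUS_"):].lower()] = value
    # CLI flags override everything else.
    config.update(cli_overrides)
    return config


# Example: load_config("caducus.yaml", {"data_dir": "./caducus-data"})
```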

## Architecture At A Glance

Caducus is intentionally thin:

- **Caducus** handles collection, normalization, orchestration, and CLI workflows.
- **Virtuus** provides file-backed JSON storage and retrieval.
- **Biblicus** provides semantic reinforcement-memory analysis.

```mermaid
flowchart LR
    sources[OpsSources] --> caducus[Caducus]
    caducus --> events[CanonicalEvents]
    events --> virtuus[VirtuusStorage]
    caducus --> biblicus[BiblicusAnalysis]
    biblicus --> radar[OpsRadar]
```

## Running The Demo

Real HDFS data uses **component-derived group IDs**: each log row’s `component` becomes `hdfs-demo:<component>` (e.g. `hdfs-demo:dfs.DataNode$DataXceiver`). You must use a group ID that exists in your ingested data.
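
A minimal sketch of that mapping (the helper name is ours, not a Caducus API):

```python
# Sketch of the component-to-group-ID mapping described above.
def group_id_for(component: str) -> str:
    return f"hdfs-demo:{component}"


print(group_id_for("dfs.DataNode$DataXceiver"))  # hdfs-demo:dfs.DataNode$DataXceiver
```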

### Quick demo (small fixture, no download)

```bash
pip install -e ".[reinforcement-memory]"
caducus demo run --input tests/fixtures/demo_hdfs_sample.csv --group-id "hdfs-demo:DataNode" --data-dir /tmp/caducus-demo
```

The fixture has components `DataNode` and `NameNode`, so valid group IDs are `hdfs-demo:DataNode` and `hdfs-demo:NameNode`.

### Full demo on real HDFS data

1. Install the optional dependencies (Biblicus reinforcement-memory support, plus the `datasets` library used by the download script):

   ```bash
   pip install -e ".[reinforcement-memory]"
   pip install datasets
   ```

2. Download a subset of the [HDFS_v1](https://huggingface.co/datasets/logfit-project/HDFS_v1) dataset:

   ```bash
   python scripts/download_hdfs_demo.py --output demo_data/hdfs_sample.csv --max-rows 10000
   ```

3. Ingest and list available groups (group IDs come from the CSV `component` column):

   ```bash
   caducus demo ingest --input demo_data/hdfs_sample.csv --data-dir ./caducus-data
   caducus groups --data-dir ./caducus-data
   ```

4. Run analysis for one of the listed group IDs. Single-quote any group ID that contains `$` so the shell does not expand it as a variable:

   ```bash
   caducus analyze --group-id 'hdfs-demo:dfs.DataNode$DataXceiver' --data-dir ./caducus-data
   ```

   Or do ingest and analyze in one step (use a group ID that exists in the CSV):

   ```bash
   caducus demo run --input demo_data/hdfs_sample.csv --group-id 'hdfs-demo:dfs.DataNode$DataXceiver' --data-dir ./caducus-data
   ```

## Releases

Caducus uses `python-semantic-release` with Conventional Commits.

Use commit messages like:

- `feat: add CloudWatch collector checkpointing`
- `fix: quote group IDs containing dollar signs in docs`
- `feat!: change canonical event schema`

Release behavior:

- `feat:` triggers a minor release
- `fix:` triggers a patch release
- `feat!:` or a `BREAKING CHANGE:` footer triggers a major release

For example, starting from 0.1.0, a `fix:` commit releases 0.1.1 and a `feat:` commit releases 0.2.0.

The release workflow lives in `.github/workflows/release.yml` and runs on pushes to `main`. It will:

1. Determine the next version from commit messages.
2. Update the version in `pyproject.toml` (`project.version`) and in `src/caducus/__init__.py`.
3. Generate `CHANGELOG.md`, create a tag, and create a GitHub Release.
4. Publish the built distributions to PyPI.

PyPI publishing is configured for GitHub Actions trusted publishing. Before the first live release, configure the `caducus` project on PyPI to trust this repository's `release.yml` workflow.
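
A hedged sketch of the `[tool.semantic_release]` section of `pyproject.toml` that this workflow implies (the repository's actual settings may differ):

```toml
# Illustrative only; the repository's actual configuration may differ.
[tool.semantic_release]
version_toml = ["pyproject.toml:project.version"]
version_variables = ["src/caducus/__init__.py:__version__"]
```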

## Roadmap

Caducus is intended to grow beyond the initial CLI foundation over time.

Planned directions include:

- broader source integrations across operational systems
- deeper analysis of concepts and entities derived from operational activity
- richer incident context and root-cause workflows
- a future web UI and embeddable components for other applications

## Repository Direction

This repository is being built outside-in. Product definition and behavior specifications come first, followed by the minimum implementation needed to satisfy them.
