Metadata-Version: 2.4
Name: engram-causal
Version: 0.1.5
Summary: Causal reasoning layer for AI systems. Answers why instead of what.
Author: Karan Kartikeya
Project-URL: Homepage, https://engram.viberank.co.in
Project-URL: Repository, https://github.com/karankartikeya/Engram
Project-URL: Documentation, https://engram.viberank.co.in/docs
Project-URL: Issues, https://github.com/karankartikeya/Engram/issues
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: fastapi
Requires-Dist: uvicorn[standard]
Requires-Dist: pydantic>=2.0
Requires-Dist: anthropic
Requires-Dist: kuzu
Requires-Dist: python-dotenv
Requires-Dist: httpx
Requires-Dist: datasets
Provides-Extra: hosted
Requires-Dist: boto3; extra == "hosted"
Requires-Dist: redis; extra == "hosted"
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-asyncio; extra == "dev"
Requires-Dist: pre-commit; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Requires-Dist: mypy; extra == "dev"

# engram-causal

> **Causal reasoning layer for AI systems. Answers "why" instead of just "what".**

Extract cause-effect relationships from your AI agent sessions, store them in a causal graph, and inject root causes into debugging workflows **without burning tokens on re-investigation**.

---

## The Problem in 30 Seconds

Your AI agent debugs a payment failure (6+ files read, 5+ token-expensive turns). Next week: same failure, same re-investigation, same tokens wasted.

Why? Because memory systems don't store *causality* — they store facts.

**Engram solves this:**

```bash
# Structured event ingestion (zero LLM cost)
engram ingest ./logs/session.json

# Query: "Why did payment fail?"
engram why "payment_failed"
# Returns: "Token validation bug in auth_middleware.py:47 (conf: 0.94)"
# Tokens used: 0 (pure graph traversal)
```

Next time your agent sees that setup, it knows the root cause instantly.

---

## Key Features

✅ **Three-Track Extraction** — Rules (free) → SHA-256 cache (free) → Claude once, then cached (~$0.0003)

✅ **Persistent Causal Graph** — Store relationships, survive sessions

✅ **Why-Query Engine** — Backward DFS traversal for root causes (no LLM calls)

✅ **EU AI Act Ready** — Full audit trail + confidence scores

✅ **Structured Events** — Parse OpenTelemetry, typed JSON logs with deterministic rules

✅ **Low Cost** — Most queries run on cached triples or graph math, not LLM
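
The three-track flow above can be sketched as a simple fallback chain in plain Python. Everything here is an illustrative stand-in, not Engram's internal API: the `because` rule pattern, the cache dict, and the `llm_extract` callable are all hypothetical.

```python
import hashlib
import re
from typing import Callable

# Track 1: a deterministic rule (free). This single "X because Y" pattern
# is a toy example of the rule track, not Engram's real rule set.
CAUSAL_PATTERN = re.compile(r"(?P<effect>.+?)\s+because\s+(?P<cause>.+)", re.IGNORECASE)

def extract_triple(
    text: str,
    cache: dict[str, tuple[str, str]],
    llm_extract: Callable[[str], tuple[str, str]],
) -> tuple[str, str]:
    """Return a (cause, effect) pair, trying the cheapest track first."""
    # Track 1: rule-based match, zero cost.
    m = CAUSAL_PATTERN.match(text)
    if m:
        return (m.group("cause").strip(), m.group("effect").strip())
    # Track 2: SHA-256 content cache, zero cost on a hit.
    key = hashlib.sha256(text.encode()).hexdigest()
    if key in cache:
        return cache[key]
    # Track 3: call the LLM once, then cache the result for next time.
    triple = llm_extract(text)
    cache[key] = triple
    return triple
```

Because identical text hashes to the same key, the LLM is only ever paid for once per unique input; every repeat is a cache hit.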

---

## Install

```bash
pip install engram-causal

# Set your Anthropic API key (optional; only needed for free-text extraction)
export ANTHROPIC_API_KEY=sk-ant-...
```

---

## Quick Start

### Extract causal triples from text

```bash
engram extract "The database timed out because the connection pool was exhausted"
```

**Output:**
```
1 triple(s) extracted:
  'connection pool exhausted' → 'database timed out' (conf=0.89, mechanism=external-event)
```
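
A triple like the one above can be modeled as a small immutable record. The field names below are illustrative, chosen to mirror the CLI output, and are not Engram's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalTriple:
    cause: str         # upstream event, e.g. "connection pool exhausted"
    effect: str        # what it produced, e.g. "database timed out"
    confidence: float  # extraction confidence in [0.0, 1.0]
    mechanism: str     # e.g. "external-event", as shown in the output above

triple = CausalTriple(
    cause="connection pool exhausted",
    effect="database timed out",
    confidence=0.89,
    mechanism="external-event",
)
```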

### Ingest structured events (zero LLM cost)

```bash
engram ingest ./logs/events.json
```
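
The real ingest format is defined by Engram; the sketch below only illustrates the idea of deterministic, rule-based ingestion. Both the event shape (`id`, `type`, `caused_by`) and the helper are hypothetical:

```python
# Hypothetical structured-event shape; Engram's actual schema may differ.
events = [
    {"id": "e1", "type": "pool_exhausted", "ts": 1},
    {"id": "e2", "type": "database_timeout", "ts": 2, "caused_by": "e1"},
]

def events_to_edges(events: list[dict]) -> list[tuple[str, str]]:
    """Deterministic rule: an explicit caused_by link becomes a
    cause -> effect edge. No LLM involved."""
    by_id = {e["id"]: e for e in events}
    edges = []
    for e in events:
        parent = e.get("caused_by")
        if parent in by_id:
            edges.append((by_id[parent]["type"], e["type"]))
    return edges
```

Because the link is already explicit in the log, extraction is pure bookkeeping, which is why this path costs zero tokens.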

### Query root causes

```bash
engram why "database_timeout"
```
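
A why-query is a pure graph walk, which is why it costs no tokens. Here is a minimal sketch of backward DFS over cause/effect edges; the adjacency-dict representation is an assumption for illustration, not how Engram stores its graph:

```python
def why(graph: dict[str, list[str]], effect: str) -> list[str]:
    """Walk backwards from an effect to its root causes.

    `graph` maps each effect to the list of its direct causes.
    A root cause is a reachable node with no causes of its own.
    """
    roots: list[str] = []
    seen: set[str] = set()
    stack = [effect]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        causes = graph.get(node, [])
        if not causes and node != effect:
            roots.append(node)  # nothing further upstream: a root cause
        stack.extend(causes)
    return roots
```

For a chain like `config_bug -> pool_exhausted -> database_timeout`, `why(graph, "database_timeout")` walks back through `pool_exhausted` and surfaces `config_bug`.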

### Start the API server

```bash
engram serve
# API: http://localhost:8000
# Graph viewer: http://localhost:8000/graph
```

---

## Use Cases

- **AI Agent Debugging** — Capture why decisions failed, prevent repeated mistakes
- **Explainable Systems** — EU AI Act Article 13 compliance, auditable reasoning chains
- **Root Cause Analysis** — Trace failures back to origin across sessions
- **Incident Response** — Structured post-mortems with causal chains

---

## What's Different

| | **Vector DB** | **Chat Memory** | **Engram** |
|---|---|---|---|
| Stores | Embeddings | Facts & history | Cause-effect chains |
| Query style | "Find similar" | "What happened?" | **"Why did this happen?"** |
| Explainability | ❌ Black box | ❌ Fact-based only | ✅ Causal + confidence |
| LLM cost | High (reprocess) | Medium (caching) | **Ultra-low** (graph math) |

---

## Documentation

**Full docs:** https://engram.viberank.co.in

**GitHub:** https://github.com/karankartikeya/Engram

**Report issues:** https://github.com/karankartikeya/Engram/issues

---

## License

MIT
