Metadata-Version: 2.4
Name: anura-graph
Version: 0.3.0
Summary: Python client for the Anura Memory API — GraphRag + FilesRag
Project-URL: Homepage, https://anuramemory.com
Project-URL: Documentation, https://anuramemory.com/docs
License-Expression: MIT
Keywords: ai,graphmem,knowledge-graph,memory,rag
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Typing :: Typed
Requires-Python: >=3.9
Requires-Dist: httpx>=0.24.0
Description-Content-Type: text/markdown

# anura-graph

Python client for the [Anura Memory](https://anuramemory.com) API.

Anura Memory provides two memory products for AI agents:

- **GraphRag** — Knowledge graph with automatic triple extraction, deduplication, and hybrid retrieval
- **FilesRag** — Markdown file storage with heading-based chunking and semantic search

## Installation

```bash
pip install anura-graph
```

## Quick Start

```python
from graphmem import GraphMem

mem = GraphMem(api_key="gm_your_key_here")

# --- GraphRag ---
mem.remember("Alice is VP of Engineering at Acme Corp")
ctx = mem.get_context("What does Alice do?")

# --- FilesRag ---
mem.write_file("/notes/standup.md", "# Standup\n## 2026-02-21\n- Shipped auth module")
results = mem.search_files("auth module")
```

## Configuration

```python
from graphmem import GraphMem, RetryConfig

mem = GraphMem(
    api_key="gm_your_key_here",
    base_url="https://anuramemory.com",  # default
    retry=RetryConfig(
        max_retries=3,       # default
        base_delay=0.5,      # seconds, default
        max_delay=10.0,      # seconds, default
        retry_on=[429, 500, 502, 503, 504],  # default
    ),
    timeout=30.0,  # seconds, default
)
```
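
`RetryConfig`'s `base_delay` and `max_delay` suggest a capped exponential backoff. As an illustrative sketch only (the client's actual schedule, including any jitter, is an implementation detail):

```python
def backoff_delay(attempt: int, base_delay: float = 0.5, max_delay: float = 10.0) -> float:
    """Capped exponential backoff: 0.5s, 1.0s, 2.0s, ... up to max_delay."""
    return min(max_delay, base_delay * (2 ** attempt))

# With the defaults and max_retries=3, the waits between attempts would be roughly:
delays = [backoff_delay(n) for n in range(3)]  # [0.5, 1.0, 2.0]
```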

The client can be used as a context manager:

```python
with GraphMem(api_key="gm_your_key_here") as mem:
    mem.remember("Alice works at Acme")
```

## API Reference

### GraphRag

#### `remember(text) -> RememberResult`

Extract knowledge from text and store it as triples in the graph.

```python
result = mem.remember("Einstein was born in Ulm, Germany")
print(result.extracted_count)  # 1
print(result.merged_count)     # 1
```

#### `get_context(query, options?) -> ContextResult`

Retrieve context from the knowledge graph.

```python
from graphmem import ContextOptions

# JSON format (default)
ctx = mem.get_context("Einstein")
print(ctx.entities)  # [{ "name": "Albert Einstein" }, ...]

# Markdown format (ideal for LLM system prompts)
ctx = mem.get_context("Einstein", ContextOptions(format="markdown"))
print(ctx.content)  # "- Albert Einstein -> BORN_IN -> Ulm..."

# Hybrid mode (graph + vector + communities)
ctx = mem.get_context("Einstein", ContextOptions(mode="hybrid"))
```

#### `search(entity) -> SearchResult`

Find an entity and its direct (1-hop) connections.

```python
result = mem.search("Alice")
print(result.edges)
```

#### `ingest_triples(triples) -> IngestResult`

Ingest pre-formatted triples directly (no LLM extraction).

```python
from graphmem import Triple

result = mem.ingest_triples([
    Triple(subject="TypeScript", predicate="CREATED_BY", object="Microsoft"),
])
```

#### `get_graph() -> GraphData`

Get the full graph (nodes, edges, communities).
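
A quick way to inspect graph size, using only the documented `nodes` and `edges` containers:

```python
graph = mem.get_graph()
print(f"{len(graph.nodes)} nodes, {len(graph.edges)} edges")
```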

#### `delete_edge(id, blacklist=False)`

Delete an edge. Optionally blacklist to prevent re-creation.
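
For example (`"edge_id"` is a placeholder; real ids come from `search()` or `get_graph()`):

```python
# Remove the edge and prevent extraction from re-creating it
mem.delete_edge("edge_id", blacklist=True)
```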

#### `update_edge_weight(id, weight=None, increment=None)`

Set or increment an edge's weight.
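
Pass exactly one of the two keyword arguments (`"edge_id"` is a placeholder):

```python
mem.update_edge_weight("edge_id", weight=2.0)   # set an absolute weight
mem.update_edge_weight("edge_id", increment=1)  # or bump the current weight
```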

#### `delete_node(id)`

Delete a node and all its connected edges.

#### `export_graph() -> ExportData`

Export the graph as portable JSON.

#### `import_graph(data) -> ImportResult`

Import a graph export (merges, does not delete existing data).
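
A backup/restore round trip; `ExportData` is treated as opaque here and passed back to `import_graph` unchanged:

```python
backup = mem.export_graph()
# ...persist `backup` somewhere durable...

# Later, on the same or another project:
result = mem.import_graph(backup)  # merges into the existing graph
```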

#### `list_communities() -> list[Community]`

List all detected communities.

#### `detect_communities() -> DetectCommunitiesResult`

Run Louvain community detection, followed by LLM summarization of each detected community.
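
The two methods pair naturally (the fields on `Community` are not listed here, so only the count is shown):

```python
mem.detect_communities()              # (re)build and summarize communities
communities = mem.list_communities()
print(f"{len(communities)} communities")
```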

### FilesRag

#### `write_file(path, content, name=None) -> WriteFileResult`

Create or update a markdown memory file. Files are automatically chunked by `##` headings and indexed for semantic search.

```python
result = mem.write_file(
    "/docs/architecture.md",
    "# Architecture\n\n## Backend\nNode.js with Prisma...\n\n## Frontend\nNext.js...",
)
print(result.file.id)       # "clxx..."
print(result.chunk_count)   # 3
print(result.created)       # True
```

If a file already exists at the given path, its content is replaced and re-indexed.
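
To make the chunking rule concrete, here is a minimal sketch of `##`-based splitting — an illustration of the rule described above, not the service's actual implementation:

```python
def chunk_by_h2(markdown: str) -> list[str]:
    """Split markdown into chunks, starting a new chunk at each '## ' heading."""
    chunks: list[str] = []
    current: list[str] = []
    for line in markdown.splitlines():
        if line.startswith("## ") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "# Architecture\n\n## Backend\nNode.js...\n\n## Frontend\nNext.js..."
print(len(chunk_by_h2(doc)))  # 3 — matching chunk_count in the example above
```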

#### `list_files() -> list[MemoryFile]`

List all files in the current project.

```python
files = mem.list_files()
for f in files:
    print(f"{f.path} ({f.size} bytes)")
```

#### `read_file(id) -> FileWithContent`

Get a file with its full content.

```python
file = mem.read_file("file_id")
print(file.content)  # full markdown
```

#### `update_file(id, content, name=None) -> WriteFileResult`

Update a file's content (re-chunks and re-indexes).

```python
result = mem.update_file("file_id", "# Updated content\n...")
```

#### `delete_file(id)`

Delete a file and all its indexed chunks.

#### `search_files(query, limit=None, file_id=None) -> list[FileSearchResult]`

Semantic search across file chunks. Pass `file_id` to scope the search to a single file.

```python
results = mem.search_files("authentication flow", limit=5)
# Search within a single file:
# results = mem.search_files("auth", file_id="file_abc123")
for r in results:
    print(r.file["path"])
    for chunk in r.chunks:
        print(f"  {chunk.heading_title} ({chunk.score:.2f})")
        print(f"  {chunk.excerpt}")
```

### Projects

| Method | Description |
|--------|-------------|
| `list_projects()` | List all projects |
| `create_project(name)` | Create a new project |
| `delete_project(id)` | Delete a project |
| `select_project(id)` | Switch active project |
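
A typical workflow — note that the shape of the object returned by `create_project` is an assumption here:

```python
project = mem.create_project("research")
mem.select_project(project.id)  # assuming the result exposes an `id`
mem.remember("...")             # subsequent calls target the new project
```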

### Traces

| Method | Description |
|--------|-------------|
| `list_traces(limit?, cursor?)` | List query traces with pagination |
| `get_trace(id)` | Get details for a specific trace |
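
For example — the name of the pagination field on the result is an assumption:

```python
page = mem.list_traces(limit=20)
# Fetch the next page, assuming the result carries a `cursor`:
# next_page = mem.list_traces(limit=20, cursor=page.cursor)
```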

### Blacklist

| Method | Description |
|--------|-------------|
| `list_blacklist(limit?, cursor?)` | List blacklisted triples |
| `add_to_blacklist(subject, predicate, object)` | Add a triple to the blacklist |
| `remove_from_blacklist(id)` | Remove from blacklist |
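
Blacklisted triples are excluded from re-creation by extraction. For example (`"entry_id"` is a placeholder taken from `list_blacklist()`):

```python
mem.add_to_blacklist("Alice", "WORKS_AT", "Initech")
# Later, undo it:
mem.remove_from_blacklist("entry_id")
```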

### Pending Facts

| Method | Description |
|--------|-------------|
| `list_pending(limit?, cursor?)` | List pending facts |
| `approve_fact(id)` | Approve a pending fact |
| `reject_fact(id, blacklist?)` | Reject a pending fact |
| `approve_all()` | Approve all pending facts |
| `reject_all()` | Reject all pending facts |
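
A review pass might look like this (`"fact_id"` values are placeholders taken from `list_pending()`):

```python
pending = mem.list_pending(limit=50)
mem.approve_fact("fact_id")                 # accept a single fact
mem.reject_fact("fact_id", blacklist=True)  # reject another and blacklist it
mem.approve_all()                           # or accept everything at once
```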

### Usage

#### `get_usage() -> UsageInfo`

```python
usage = mem.get_usage()
print(usage.tier)                  # "FREE"
print(usage.current_facts)         # 42
print(usage.current_file_count)    # 3
print(usage.current_file_storage)  # 1024
```

### Health

#### `health() -> HealthResult`
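
A minimal liveness check (the fields on `HealthResult` are not documented here):

```python
status = mem.health()
```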

## Error Handling

```python
from graphmem import GraphMem, GraphMemError

try:
    mem.read_file("nonexistent")
except GraphMemError as e:
    print(f"API error {e.status}: {e}")
    print(e.body)  # raw response body
```

## Rate Limiting

After each request, rate limit info is available:

```python
mem.remember("some fact")
print(mem.rate_limit.remaining)  # requests remaining
print(mem.rate_limit.limit)      # total allowed per window
print(mem.rate_limit.reset)      # unix timestamp when window resets
```

## Types

All types are exported from the top-level package:

```python
from graphmem import (
    # GraphRag
    RememberResult, ContextResult, SearchResult, Triple,
    GraphNode, GraphEdge, GraphData, Community,
    # FilesRag
    MemoryFile, FileWithContent, FileSearchResult, WriteFileResult,
    # Config
    RetryConfig, RateLimitInfo, UsageInfo,
)
```

## License

MIT
