Metadata-Version: 2.4
Name: mnemosynth
Version: 1.0.1
Summary: Universal AI Memory Plugin — persistent, verified, hallucination-resistant memory for any LLM, agent, or coding workflow
Project-URL: Homepage, https://github.com/vasudevjaiswal/mnemosynth
Project-URL: Repository, https://github.com/vasudevjaiswal/mnemosynth
Project-URL: Documentation, https://github.com/vasudevjaiswal/mnemosynth#readme
Project-URL: Issues, https://github.com/vasudevjaiswal/mnemosynth/issues
Project-URL: Changelog, https://github.com/vasudevjaiswal/mnemosynth/blob/main/CHANGELOG.md
Author-email: Vasudev Jaiswal <vasudevjaiswal@protonmail.com>
Maintainer-email: Vasudev Jaiswal <vasudevjaiswal@protonmail.com>
License-Expression: Apache-2.0
License-File: LICENSE
Keywords: agent,ai,belief-revision,cognitive,contradiction-detection,hallucination,knowledge-graph,llm,mcp,memory,rag,second-brain,vector-db
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: click>=8.0
Requires-Dist: lancedb>=0.15
Requires-Dist: mcp>=1.0
Requires-Dist: networkx>=3.0
Requires-Dist: numpy>=1.24
Requires-Dist: pyarrow>=14.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: rich>=13.0
Provides-Extra: adapters
Requires-Dist: crewai-tools>=0.12; extra == 'adapters'
Requires-Dist: crewai>=0.80; extra == 'adapters'
Requires-Dist: langchain-core>=0.3; extra == 'adapters'
Provides-Extra: autogen
Requires-Dist: autogen-agentchat>=0.4; extra == 'autogen'
Requires-Dist: autogen-ext>=0.4; extra == 'autogen'
Provides-Extra: crewai
Requires-Dist: crewai-tools>=0.12; extra == 'crewai'
Requires-Dist: crewai>=0.80; extra == 'crewai'
Provides-Extra: dev
Requires-Dist: mypy>=1.10; extra == 'dev'
Requires-Dist: pre-commit>=3.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-cov>=5.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: ruff>=0.4; extra == 'dev'
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.3; extra == 'langchain'
Provides-Extra: ml
Requires-Dist: scikit-learn>=1.4; extra == 'ml'
Requires-Dist: sentence-transformers>=3.0; extra == 'ml'
Requires-Dist: transformers>=4.40; extra == 'ml'
Provides-Extra: openai-agents
Requires-Dist: openai-agents>=0.1; extra == 'openai-agents'
Provides-Extra: production
Requires-Dist: falkordb>=1.0; extra == 'production'
Requires-Dist: opentelemetry-api>=1.20; extra == 'production'
Requires-Dist: opentelemetry-sdk>=1.20; extra == 'production'
Requires-Dist: psycopg[pool]>=3.1; extra == 'production'
Requires-Dist: qdrant-client>=1.9; extra == 'production'
Requires-Dist: redis>=5.0; extra == 'production'
Provides-Extra: pydantic-ai
Requires-Dist: pydantic-ai>=0.1; extra == 'pydantic-ai'
Description-Content-Type: text/markdown

<div align="center">
  <h1>🧠 Mnemosynth</h1>
  <h3>Cognitive Memory OS for AI</h3>
  <p>
    <a href="https://pypi.org/project/mnemosynth/"><img src="https://img.shields.io/pypi/v/mnemosynth?color=%2334d399&label=PyPI&logo=pypi&logoColor=white" alt="PyPI"></a>
    <a href="https://pypi.org/project/mnemosynth/"><img src="https://img.shields.io/pypi/dm/mnemosynth?color=%236366f1&label=Downloads" alt="Downloads"></a>
    <img src="https://img.shields.io/badge/Python-≥3.10-blue?logo=python&logoColor=white" alt="Python">
    <img src="https://img.shields.io/badge/license-Apache%202.0-orange" alt="License">
    <img src="https://img.shields.io/badge/tests-216%20passed-brightgreen" alt="Tests">
  </p>
  <p><em>Persistent, verified, hallucination-resistant memory for any LLM, agent, or AI workflow.</em></p>
</div>

---

## The Problem

LLMs forget everything when a session ends. RAG helps — but raw retrieval doesn't verify, decay, or reason about what it stores. You get confident hallucinations, stale facts, and no audit trail.

**Mnemosynth is a cognitive memory OS, not a vector store.**

---

## Install

```bash
pip install mnemosynth       # Lightweight (keyword engines)
pip install "mnemosynth[ml]" # Advanced (PyTorch + Transformers)
```

> **Zero-config. No Docker. No external databases. No API keys.**
> Everything runs locally under `~/.mnemosynth/`.

---

## Three-Tier Memory Model

Loosely inspired by how the brain organizes memory:

| Tier | Brain Region | Backend | What It Stores |
|---|---|---|---|
| **Episodic** 🔵 | Hippocampus | LanceDB (vector) | Events, conversations, timestamped history |
| **Semantic** 🟢 | Neocortex | NetworkX (graph) | Verified facts, entity relationships |
| **Procedural** 🟠 | Cerebellum | JSON registry | Tools, schemas, workflows |

Memories are **auto-classified** on write. No manual tagging required.
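
The routing idea can be sketched with a toy keyword heuristic. Everything below (the `classify` function and its keyword lists) is illustrative only, not Mnemosynth's actual router:

```python
import re

def classify(text: str) -> str:
    """Toy sketch of tier routing (not Mnemosynth's real classifier)."""
    # Imperative how-to content (commands, workflows) -> procedural
    if re.search(r"\b(deploy|run|install|compose|command)\b", text, re.I):
        return "procedural"
    # Time-anchored events -> episodic
    if re.search(r"\b(yesterday|today|last week)\b", text, re.I):
        return "episodic"
    # Everything else is treated as a stable fact -> semantic
    return "semantic"

print(classify("Deploy with: docker compose up -d"))       # procedural
print(classify("Yesterday we finalized the auth module"))  # episodic
print(classify("User prefers Python and dark mode"))       # semantic
```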

---

## Quickstart

### Python API

```python
from mnemosynth import Mnemosynth

brain = Mnemosynth()

# Store — auto-classified into episodic/semantic/procedural
brain.remember("User prefers Python and dark mode")
brain.remember("Yesterday we finalized the auth module")
brain.remember("Deploy with: docker compose up -d")

# Retrieve with semantic search
for r in brain.recall("What languages does the user prefer?"):
    print(f"[{r.memory_type.value}] {r.content} ({r.confidence:.0%})")

# Compressed context block for LLM injection
digest = brain.digest("Starting a new project")

# Dream: cluster episodic → promote to verified semantic facts
brain.dream()
```

### MCP Server (Claude Desktop / Cursor / Windsurf)

```json
{
  "mcpServers": {
    "mnemosynth": {
      "command": "mnemosynth",
      "args": ["serve"]
    }
  }
}
```

---

## Anti-Hallucination Engine

| Feature | Description |
|---|---|
| **🧊 Ebbinghaus Decay** | Stale memories lose confidence over time (configurable half-life) |
| **⚔️ Contradiction Detection** | Flags conflicting facts via DeBERTa NLI or keyword antonyms |
| **💜 Sentiment Scoring** | DistilBERT or keyword-based emotional valence scoring |
| **🛡️ Immune System** | Blocks prompt injections, rate limits, quarantines threats |
| **📊 Corroboration** | Repeated observations boost confidence |
| **🔗 Belief Revision** | Deprecated facts link forward to their replacements |
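
The decay rule is simple enough to sketch. Assuming exponential Ebbinghaus-style decay with a configurable half-life and a confidence floor (an illustrative sketch, not the engine's exact implementation):

```python
def decayed_confidence(
    initial: float,
    age_days: float,
    half_life_days: float = 30.0,
    min_confidence: float = 0.1,
) -> float:
    """Exponential decay: confidence halves every half-life, down to a floor."""
    decayed = initial * 0.5 ** (age_days / half_life_days)
    return max(decayed, min_confidence)  # floor instead of forgetting outright

print(decayed_confidence(0.9, age_days=30))   # one half-life elapsed: 0.45
print(decayed_confidence(0.9, age_days=365))  # clamped at the 0.1 floor
```

With the default 30-day half-life from the Configuration section, an uncorroborated memory loses half its confidence every month until it reaches the floor.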

---

## Agentic Framework Adapters

First-class integrations for the major agentic frameworks:

```python
# CrewAI
from mnemosynth.adapters.crewai import get_crewai_tools
tools = get_crewai_tools(brain)

# LangChain / LangGraph
from mnemosynth.adapters.langchain import get_mnemosynth_tools, MnemosynthMemory

# AutoGen
from mnemosynth.adapters.autogen import get_autogen_tools

# Cross-Agent Memory Bus
from mnemosynth.adapters.memory_bus import MemoryBus
bus = MemoryBus()
bus.remember("shared fact", namespace="researcher")
```

Install adapter dependencies with the matching extra, e.g. `pip install "mnemosynth[crewai]"` or `pip install "mnemosynth[langchain]"`.

---

## Causal Memory Chains

Track **Decision → Reason → Outcome** DAGs:

```python
from mnemosynth.engine.causal import CausalChainEngine

engine = CausalChainEngine(brain)
engine.record_chain(
    decision="Switched from PostgreSQL to SQLite",
    reasons=["Need zero-config deployment", "Single-user workload"],
    outcome="Reduced setup time from 30min to 0",
)

# Query chains by topic
chains = engine.search("database decisions")

# Get XML digest for LLM prompts
digest = engine.get_digest("why did we switch databases?")
```

---

## Agent-to-Agent (A2A) Protocol

Cross-agent memory sharing with visibility controls:

```python
from mnemosynth.a2a import A2AProtocol, MemoryRequest

proto = A2AProtocol(brain)
proto.register_agent("researcher", capabilities=["search", "analyze"])

# Agent requests memories
response = proto.handle_request(MemoryRequest(agent_id="coder", query="user preferences"))

# Agent shares memories
proto.share_memory("User prefers dark mode", source_agent="researcher", visibility="shared")
```

---

## OpenTelemetry Tracing

Full observability with one-line instrumentation:

```python
from mnemosynth.telemetry import instrument_brain, get_metrics

brain = Mnemosynth()
instrument_brain(brain)  # All operations now emit traces + metrics

# Or use the pre-instrumented wrapper:
from mnemosynth.telemetry import TracedMnemosynth
brain = TracedMnemosynth()

# Check metrics
print(get_metrics())  # {remember_total: 42, recall_duration_ms_avg: 3.2, ...}
```

---

## Production Database Backends

Scale beyond the built-in stores:

```python
# Qdrant (vector search at scale)
from mnemosynth.stores.qdrant_store import QdrantEpisodicStore
store = QdrantEpisodicStore(url="http://localhost:6333")

# FalkorDB (graph at scale)
from mnemosynth.stores.falkordb_store import FalkorDBSemanticStore
store = FalkorDBSemanticStore(host="localhost", port=6379)

# PostgreSQL (ACID persistence)
from mnemosynth.stores.postgres_store import PostgresStore
store = PostgresStore(dsn="postgresql://user:pass@localhost/mnemosynth")
```

Install: `pip install "mnemosynth[production]"`

---

## MCP Tools

| Tool | Description |
|---|---|
| `add_memory` | Store with auto-classification |
| `search_memory` | Semantic search across all tiers |
| `get_digest` | Compressed XML context block |
| `get_contradictions` | Surface conflicting facts |
| `run_dream` | Trigger consolidation |
| `forget` | Delete by ID |
| `get_stats` | Memory statistics |
| `get_provenance` | Full audit trail |
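
Over the wire, MCP tool invocations are standard JSON-RPC 2.0 `tools/call` requests. A call to the first tool might look like this (the `content` argument name is an assumption; consult the server's published tool schema for the real parameters):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "add_memory",
    "arguments": { "content": "User prefers dark mode" }
  }
}
```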

---

## CLI

```bash
mnemosynth serve              # Start MCP server
mnemosynth stats              # Memory dashboard
mnemosynth search "query"     # Semantic search
mnemosynth inspect            # Browse memory tree
mnemosynth dream              # Run consolidation
mnemosynth health             # System diagnostics
mnemosynth export -o out.json # Export to JSON
mnemosynth forget <ID>        # Delete a memory
mnemosynth reset --confirm    # Wipe everything
```

---

## Configuration

Override defaults in `~/.mnemosynth/config.yaml`:

```yaml
embedding_model: all-MiniLM-L6-v2
max_episodic_memories: 10000
max_semantic_nodes: 5000

decay:
  half_life_days: 30.0
  min_confidence: 0.1

dream:
  interval_hours: 24
  min_cluster_size: 3

digest:
  max_tokens: 150
  top_k: 5
```
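
For programmatic access to the effective settings, user overrides can be overlaid onto defaults with a small deep merge. The `DEFAULTS` dict and helper below are an illustrative sketch, not Mnemosynth's actual config loader:

```python
from pathlib import Path

DEFAULTS = {
    "embedding_model": "all-MiniLM-L6-v2",
    "decay": {"half_life_days": 30.0, "min_confidence": 0.1},
    "digest": {"max_tokens": 150, "top_k": 5},
}

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` onto `base` without mutating either."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def load_config(path: Path = Path.home() / ".mnemosynth" / "config.yaml") -> dict:
    import yaml  # PyYAML is already a core Mnemosynth dependency
    override = yaml.safe_load(path.read_text()) if path.exists() else {}
    return deep_merge(DEFAULTS, override or {})
```

Missing keys fall back to the defaults, so a config file only needs the values being changed.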

---

## Architecture

```
Clients   Claude Desktop / Cursor / Windsurf / Python API / CrewAI / LangChain
               |  MCP (stdio) / Python import / A2A Protocol
Engines   Router · Digest · Decay · Dream · Causal Chains
          Sentiment · Contradiction · Immune · Telemetry
               |
Stores    Episodic (LanceDB/Qdrant) · Semantic (NetworkX/FalkorDB) · Procedural (JSON)
               |
Disk      ~/.mnemosynth/  (or PostgreSQL)
```

---

## Optional Dependencies

| Extra | What It Adds |
|---|---|
| `mnemosynth[ml]` | PyTorch, Transformers (DeBERTa, DistilBERT) |
| `mnemosynth[production]` | Qdrant, FalkorDB, PostgreSQL, OpenTelemetry |
| `mnemosynth[crewai]` | CrewAI adapter |
| `mnemosynth[langchain]` | LangChain adapter |
| `mnemosynth[autogen]` | AutoGen adapter |
| `mnemosynth[adapters]` | All framework adapters |

---

<div align="center">
  <strong>🧠 MNEMOSYNTH</strong> — Because AI shouldn't have amnesia.<br>
  <sub>Built by <a href="https://github.com/vasudevjaiswal">Vasudev Jaiswal</a> · Apache 2.0</sub>
</div>