Metadata-Version: 2.4
Name: mem0-open-mcp
Version: 0.1.1
Summary: Open-source MCP server for mem0 - local LLMs, self-hosted, Docker-free
Author: Alex
License-Expression: Apache-2.0
Keywords: ai,llm,local,mcp,mem0,memory,ollama,server
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Requires-Dist: fastapi>=0.115.0
Requires-Dist: httpx>=0.27.0
Requires-Dist: mcp>=1.0.0
Requires-Dist: mem0ai>=1.0.0
Requires-Dist: pydantic-settings>=2.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: pyyaml>=6.0.0
Requires-Dist: rich>=13.0.0
Requires-Dist: typer[all]>=0.12.0
Requires-Dist: uvicorn[standard]>=0.32.0
Provides-Extra: chroma
Requires-Dist: chromadb>=0.5.0; extra == 'chroma'
Provides-Extra: dev
Requires-Dist: mypy>=1.13.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.8.0; extra == 'dev'
Provides-Extra: ollama
Requires-Dist: ollama>=0.4.0; extra == 'ollama'
Provides-Extra: pinecone
Requires-Dist: pinecone>=5.0.0; extra == 'pinecone'
Provides-Extra: qdrant
Requires-Dist: qdrant-client>=1.12.0; extra == 'qdrant'
Description-Content-Type: text/markdown

# mem0-open-mcp

Open-source MCP server for [mem0](https://mem0.ai) — **local LLMs, self-hosted, Docker-free**.

Created because the official `mem0-mcp` configuration didn't work reliably with my setup.

## Features

- **Local LLMs**: Ollama (recommended), LMStudio*, or any OpenAI-compatible API
- **Self-hosted**: Your data stays on your infrastructure
- **Docker-free**: Simple `pip install` + CLI
- **Flexible**: YAML config with environment variable support
- **Multiple Vector Stores**: Qdrant, Chroma, Pinecone, and more

> *LMStudio requires a model that supports JSON mode

## Quick Start

### Installation

Install from source:

```bash
git clone https://github.com/yourname/mem0-open-mcp.git
cd mem0-open-mcp
pip install -e .

# Optional extras: ollama, qdrant, chroma, pinecone
pip install -e ".[ollama,qdrant]"
```

### Usage

```bash
# Create default config
mem0-open-mcp init

# Interactive configuration wizard
mem0-open-mcp configure

# Start the server
mem0-open-mcp serve

# With options
mem0-open-mcp serve --port 8765 --user-id alice
```

## Configuration

Create `mem0-open-mcp.yaml`:

```yaml
server:
  host: "0.0.0.0"
  port: 8765
  user_id: "default"

llm:
  provider: "ollama"
  config:
    model: "llama3.2"
    base_url: "http://localhost:11434"

embedder:
  provider: "ollama"
  config:
    model: "nomic-embed-text"
    base_url: "http://localhost:11434"
    embedding_dims: 768

vector_store:
  provider: "qdrant"
  config:
    collection_name: "mem0_memories"
    host: "localhost"
    port: 6333
    embedding_model_dims: 768
```
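The feature list above mentions environment variable support in the YAML config. The exact substitution syntax isn't documented here, so the sketch below assumes a common `${VAR}` convention, expanded with Python's stdlib before the text reaches the YAML parser (the `QDRANT_HOST` variable name is hypothetical):

```python
import os
from string import Template

# Hypothetical sketch: expand ${VAR} placeholders in the raw config text
# before parsing. Both the ${QDRANT_HOST} name and the substitution syntax
# are assumptions, not confirmed by mem0-open-mcp's docs.
raw = """
vector_store:
  provider: "qdrant"
  config:
    host: "${QDRANT_HOST}"
    port: 6333
"""

os.environ.setdefault("QDRANT_HOST", "localhost")
expanded = Template(raw).substitute(os.environ)
print(expanded)
```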

### With LMStudio

> **⚠️ Note**: LMStudio requires a model that supports `response_format: json_object`. 
> mem0 uses structured JSON output for memory extraction. If you get `response_format` errors,
> use Ollama instead or select a model with JSON mode support in LMStudio.

```yaml
llm:
  provider: "openai"
  config:
    model: "your-model-name"
    base_url: "http://localhost:1234/v1"

embedder:
  provider: "openai"
  config:
    model: "your-embedding-model"
    base_url: "http://localhost:1234/v1"
```

## MCP Integration

Connect your MCP client to:

```
http://localhost:8765/mcp/<client-name>/sse/<user-id>
```

### Claude Desktop

```json
{
  "mcpServers": {
    "mem0": {
      "url": "http://localhost:8765/mcp/claude/sse/default"
    }
  }
}
```

## Available MCP Tools

| Tool | Description |
|------|-------------|
| `add_memories` | Store new memories from text |
| `search_memory` | Search memories by query |
| `list_memories` | List all user memories |
| `get_memory` | Get a specific memory by ID |
| `delete_memories` | Delete memories by IDs |
| `delete_all_memories` | Delete all user memories |
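Under the MCP specification, tools like these are invoked with a JSON-RPC 2.0 `tools/call` request. A sketch of the message an MCP client would send for `search_memory` — the `query` argument name is an illustrative guess based on the tool's description, not confirmed by this README:

```python
import json

# JSON-RPC 2.0 envelope used by MCP for tool invocation ("tools/call" is
# the method name defined by the MCP spec). The argument names below are
# assumptions about this server's search_memory tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_memory",
        "arguments": {"query": "favorite programming language"},
    },
}

wire = json.dumps(request)
print(wire)
```

In practice your MCP client library builds this envelope for you; it's shown here only to make the tool-call shape concrete.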

## API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/health` | GET | Health check |
| `/api/v1/status` | GET | Server status |
| `/api/v1/config` | GET/PUT | Configuration |
| `/api/v1/memories` | GET/POST/DELETE | Memory operations |
| `/api/v1/memories/search` | POST | Search memories |
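For the REST API, a memory search is a `POST` to `/api/v1/memories/search`. The request body schema isn't documented above, so this sketch assumes a simple `{"query": ..., "user_id": ...}` payload; it builds the request with the stdlib without sending it (sending would require the server running on port 8765):

```python
import json
from urllib.request import Request

# Build (but don't send) a search request against the documented endpoint.
# The {"query": ..., "user_id": ...} body is an assumed schema.
url = "http://localhost:8765/api/v1/memories/search"
body = json.dumps({"query": "what editor do I use?", "user_id": "default"}).encode()
req = Request(url, data=body,
              headers={"Content-Type": "application/json"}, method="POST")

print(req.get_method(), req.full_url)
# To send: urllib.request.urlopen(req) once the server is up.
```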

## Requirements

- Python 3.10+
- Vector store (Qdrant recommended)
- LLM server (Ollama, LMStudio, etc.)

## License

Apache 2.0
