Metadata-Version: 2.4
Name: mcp-server-rapid-rag
Version: 0.1.0
Summary: MCP Server for rapid-rag - Local RAG with semantic search and LLM queries
Author-email: Humotica <info@humotica.com>
License: MIT
Keywords: chromadb,mcp,ollama,rag,retrieval,semantic-search
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Requires-Python: >=3.10
Requires-Dist: mcp>=1.0.0
Requires-Dist: rapid-rag>=0.2.0
Description-Content-Type: text/markdown

# mcp-server-rapid-rag

MCP Server for **rapid-rag** - Local RAG with semantic search and LLM queries.

Search your documents with AI, no cloud needed! Works with Ollama for local LLM inference.

## Installation

```bash
pip install mcp-server-rapid-rag
```

## Configuration

Add to your Claude Desktop config (`~/.config/claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "rapid-rag": {
      "command": "mcp-server-rapid-rag"
    }
  }
}
```

Or with uvx:

```json
{
  "mcpServers": {
    "rapid-rag": {
      "command": "uvx",
      "args": ["mcp-server-rapid-rag"]
    }
  }
}
```

## Tools

### `rag_add`
Add files or directories to the RAG collection. Supports `.txt`, `.md`, and `.pdf` files.

```
"Add my docs folder to RAG: ~/Documents/notes"
```

### `rag_add_text`
Add raw text directly to the collection.

```
"Store this meeting notes in RAG: [text content]"
```

### `rag_search`
Semantic search - find the most relevant documents.

```
"Search my documents for: Python async patterns"
```

### `rag_query`
Full RAG pipeline - search documents and get an AI-generated answer.

```
"Based on my documents, how do I configure logging?"
```

### `rag_info`
Get collection statistics.

```
"Show me the RAG collection info"
```

### `rag_list`
List all available collections.

```
"List my RAG collections"
```

### `rag_clear`
Clear a collection (requires confirmation).

```
"Clear the 'old_project' RAG collection"
```

## Example Usage

Ask Claude:

> "Add all the markdown files from ~/projects/docs to my RAG"

Claude will:
1. Index all .md files in the directory
2. Split them into chunks with embeddings
3. Store them in ChromaDB locally
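Step 2 above (splitting documents into chunks) can be sketched with a simple overlapping fixed-size chunker. This is a hypothetical illustration of the general technique, not rapid-rag's actual implementation, which also computes embeddings for each chunk:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, a common RAG pre-processing step.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from both neighboring chunks.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Toy document: ~1500 characters produce a handful of overlapping chunks.
doc = "word " * 300
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))
```

Each chunk is then embedded and stored alongside its source metadata so search results can cite where the text came from.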

Then ask:

> "Based on my docs, how do I set up authentication?"

Claude will:
1. Search the indexed documents
2. Pass relevant chunks to Ollama
3. Generate an answer with source citations
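The retrieve-then-generate flow above can be illustrated with a toy in-memory index. The real server uses ChromaDB embeddings for step 1 and sends the assembled prompt to Ollama for step 3, so treat the vectors and chunk texts below as stand-ins for illustration only:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "index": document chunks paired with pretend embedding vectors.
index = [
    ("Use OAuth2 for authentication.", [0.9, 0.1, 0.0]),
    ("Logging is configured in settings.py.", [0.1, 0.9, 0.1]),
    ("Deploy with Docker Compose.", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Step 1: rank chunks by similarity to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

context = retrieve([0.85, 0.15, 0.05])

# Step 2: build a prompt with the retrieved chunks as context.
prompt = (
    "Answer using only this context:\n"
    + "\n".join(context)
    + "\n\nQ: How do I set up authentication?"
)
# Step 3: the server would now send `prompt` to a local Ollama model.
print(context[0])
```

Because the answer is grounded in the retrieved chunks, the model can cite which documents the context came from.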

## Requirements

- **rapid-rag**: Core RAG library with ChromaDB
- **Ollama** (optional): For `rag_query` - local LLM inference

### Install Ollama

```bash
# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull qwen2.5:7b
```

## Collections

Documents are organized into collections. Each collection has:
- A separate vector database
- Persistent storage in `./rapid_rag_data/{collection}/`
- Its own embedding cache

The default collection is `default`, but you can create more:

```
"Add ~/work/project-a to the 'project-a' collection"
"Search 'project-a' for: API endpoints"
```

## Links

- [rapid-rag on PyPI](https://pypi.org/project/rapid-rag/)
- [Humotica](https://humotica.com)
- [ChromaDB](https://www.trychroma.com/)
- [Ollama](https://ollama.ai/)

## License

MIT
