Metadata-Version: 2.4
Name: ragway
Version: 0.1.0
Summary: The way to build RAG — modular, configurable, plugin-based
Project-URL: Homepage, https://github.com/yourusername/ragway
Project-URL: Documentation, https://ragway.dev
Project-URL: Issues, https://github.com/yourusername/ragway/issues
License: MIT
Keywords: ai,embeddings,llm,rag,ragway,retrieval
Classifier: Development Status :: 3 - Alpha
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.11
Requires-Dist: aiohttp>=3.9
Requires-Dist: click>=8.1
Requires-Dist: numpy>=1.24
Requires-Dist: pydantic>=2.0
Requires-Dist: python-dotenv>=1.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: all
Requires-Dist: aiohttp>=3.9; extra == 'all'
Requires-Dist: anthropic>=0.25; extra == 'all'
Requires-Dist: asyncpg>=0.30; extra == 'all'
Requires-Dist: boto3>=1.34; extra == 'all'
Requires-Dist: chromadb>=0.4; extra == 'all'
Requires-Dist: cohere>=5.0; extra == 'all'
Requires-Dist: faiss-cpu>=1.7; extra == 'all'
Requires-Dist: google-cloud-aiplatform>=1.70; extra == 'all'
Requires-Dist: llama-cpp-python>=0.2; extra == 'all'
Requires-Dist: openai>=1.0; extra == 'all'
Requires-Dist: openpyxl>=3.1; extra == 'all'
Requires-Dist: pgvector>=0.3; extra == 'all'
Requires-Dist: pinecone-client>=3.0; extra == 'all'
Requires-Dist: python-docx>=1.1; extra == 'all'
Requires-Dist: qdrant-client>=1.9; extra == 'all'
Requires-Dist: sentence-transformers>=2.2; extra == 'all'
Requires-Dist: weaviate-client>=4.0; extra == 'all'
Requires-Dist: youtube-transcript-api>=0.6; extra == 'all'
Provides-Extra: all-cloud
Requires-Dist: anthropic>=0.25; extra == 'all-cloud'
Requires-Dist: cohere>=5.0; extra == 'all-cloud'
Requires-Dist: openai>=1.0; extra == 'all-cloud'
Requires-Dist: pinecone-client>=3.0; extra == 'all-cloud'
Provides-Extra: all-local
Requires-Dist: faiss-cpu>=1.7; extra == 'all-local'
Requires-Dist: llama-cpp-python>=0.2; extra == 'all-local'
Requires-Dist: sentence-transformers>=2.2; extra == 'all-local'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.25; extra == 'anthropic'
Provides-Extra: azure-openai
Requires-Dist: openai>=1.0; extra == 'azure-openai'
Provides-Extra: bedrock
Requires-Dist: boto3>=1.34; extra == 'bedrock'
Provides-Extra: bge
Requires-Dist: sentence-transformers>=2.2; extra == 'bge'
Provides-Extra: chroma
Requires-Dist: chromadb>=0.4; extra == 'chroma'
Provides-Extra: cohere
Requires-Dist: cohere>=5.0; extra == 'cohere'
Provides-Extra: dev
Requires-Dist: aiohttp>=3.9; extra == 'dev'
Requires-Dist: anthropic>=0.25; extra == 'dev'
Requires-Dist: asyncpg>=0.30; extra == 'dev'
Requires-Dist: boto3>=1.34; extra == 'dev'
Requires-Dist: chromadb>=0.4; extra == 'dev'
Requires-Dist: cohere>=5.0; extra == 'dev'
Requires-Dist: faiss-cpu>=1.7; extra == 'dev'
Requires-Dist: google-cloud-aiplatform>=1.70; extra == 'dev'
Requires-Dist: llama-cpp-python>=0.2; extra == 'dev'
Requires-Dist: mypy; extra == 'dev'
Requires-Dist: openai>=1.0; extra == 'dev'
Requires-Dist: openpyxl>=3.1; extra == 'dev'
Requires-Dist: pgvector>=0.3; extra == 'dev'
Requires-Dist: pinecone-client>=3.0; extra == 'dev'
Requires-Dist: pytest; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: python-docx>=1.1; extra == 'dev'
Requires-Dist: qdrant-client>=1.9; extra == 'dev'
Requires-Dist: ruff; extra == 'dev'
Requires-Dist: sentence-transformers>=2.2; extra == 'dev'
Requires-Dist: weaviate-client>=4.0; extra == 'dev'
Requires-Dist: youtube-transcript-api>=0.6; extra == 'dev'
Provides-Extra: docx
Requires-Dist: python-docx>=1.1; extra == 'docx'
Provides-Extra: excel
Requires-Dist: openpyxl>=3.1; extra == 'excel'
Provides-Extra: faiss
Requires-Dist: faiss-cpu>=1.7; extra == 'faiss'
Provides-Extra: groq
Requires-Dist: openai>=1.0; extra == 'groq'
Provides-Extra: llama
Requires-Dist: llama-cpp-python>=0.2; extra == 'llama'
Provides-Extra: mistral
Requires-Dist: openai>=1.0; extra == 'mistral'
Provides-Extra: notion
Requires-Dist: aiohttp>=3.9; extra == 'notion'
Provides-Extra: openai
Requires-Dist: openai>=1.0; extra == 'openai'
Provides-Extra: pgvector
Requires-Dist: asyncpg>=0.30; extra == 'pgvector'
Requires-Dist: pgvector>=0.3; extra == 'pgvector'
Provides-Extra: pinecone
Requires-Dist: pinecone-client>=3.0; extra == 'pinecone'
Provides-Extra: qdrant
Requires-Dist: qdrant-client>=1.9; extra == 'qdrant'
Provides-Extra: vertex-ai
Requires-Dist: google-cloud-aiplatform>=1.70; extra == 'vertex-ai'
Provides-Extra: weaviate
Requires-Dist: weaviate-client>=4.0; extra == 'weaviate'
Provides-Extra: youtube
Requires-Dist: youtube-transcript-api>=0.6; extra == 'youtube'
Description-Content-Type: text/markdown

# ragway

The way to build RAG.

## Install

```bash
pip install ragway
```

## Quickstart

```python
import asyncio
import pathlib

from ragway import RAG

# Create a small sample document to ingest
pathlib.Path("example_docs").mkdir(exist_ok=True)
pathlib.Path("example_docs/intro.md").write_text(
    "# RAG\nRetrieval-Augmented Generation combines retrieval with generation.",
    encoding="utf-8",
)

async def main():
    rag = RAG(llm="openai", api_key="YOUR_OPENAI_KEY")
    await rag.ingest("example_docs")
    print(await rag.query("What is RAG?"))

asyncio.run(main())
```

## Why ragway

- Compared to LangChain: smaller public surface, fewer framework abstractions, explicit config and component wiring.
- Compared to LlamaIndex: direct control over retrieval/rerank/vectorstore choices without committing to one indexing model.
- For production: start simple with `RAG(...)`, then move to YAML config and provider-specific tuning without rewriting application code.

## What You Can Swap

| Component | Options |
| --- | --- |
| LLM | anthropic, openai, mistral, groq, llama, local |
| Vectorstore | faiss, chroma, pinecone, weaviate |
| Retrieval | vector, bm25, hybrid, multi_query, parent_document |
| Reranker | cohere, bge, cross_encoder (or None) |
| Chunking | fixed, recursive, semantic, sliding_window, hierarchical |
| Pipeline | naive, hybrid, self, long_context, agentic |
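
The `hybrid` retrieval strategy typically fuses a vector ranking with a keyword (BM25) ranking. One common fusion method is Reciprocal Rank Fusion, which is what the `rrf_k` setting in the config example below controls. The sketch here is a library-agnostic illustration of the idea, not ragway's internal implementation:

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: each list in `rankings` is a list of
    document ids, best first. A document scores 1/(k + rank) per list."""
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["a", "b", "c"]   # e.g. from the vectorstore
bm25_hits = ["a", "d", "b"]     # e.g. from a keyword index
print(rrf([vector_hits, bm25_hits]))  # ['a', 'b', 'd', 'c']
```

Larger `k` values flatten the contribution of rank position, so documents that appear in both lists dominate documents ranked highly in only one.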

## Install Options

```bash
# Base package
pip install ragway

# Provider extras (quoted so the brackets survive shells like zsh)
pip install "ragway[anthropic]"
pip install "ragway[openai]"
pip install "ragway[mistral]"
pip install "ragway[groq]"
pip install "ragway[cohere]"
pip install "ragway[pinecone]"
pip install "ragway[weaviate]"
pip install "ragway[faiss]"
pip install "ragway[chroma]"
pip install "ragway[llama]"
pip install "ragway[bge]"

# Bundles
pip install "ragway[all-cloud]"
pip install "ragway[all-local]"
pip install "ragway[all]"
pip install "ragway[dev]"
```

## Config File Example

```yaml
version: "1.0"
pipeline: hybrid

plugins:
  llm:
    provider: groq
    model: llama-3.1-8b-instant
    api_key: ${GROQ_API_KEY}
    temperature: 0.2
    max_tokens: 512

  embedding:
    provider: openai
    model: text-embedding-3-small
    api_key: ${OPENAI_API_KEY}
    batch_size: 32

  vectorstore:
    provider: faiss
    index_name: rag-lab
    top_k: 5

  retrieval:
    strategy: hybrid
    top_k: 5
    rrf_k: 60

  reranker:
    enabled: true
    provider: cohere
    api_key: ${COHERE_API_KEY}
    top_k: 3

  chunking:
    strategy: recursive
    chunk_size: 512
    overlap: 50
```
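
The `${VAR}` placeholders above are resolved from environment variables (keeping secrets out of the config file). A minimal sketch of that kind of substitution — an illustrative helper, not ragway's actual loader:

```python
import os
import re

def expand_env(value: str) -> str:
    # Replace each ${NAME} with the environment value, or "" if unset
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["GROQ_API_KEY"] = "demo-key"
print(expand_env("${GROQ_API_KEY}"))  # demo-key
```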

```python
from ragway import RAG
import asyncio

rag = RAG.from_config("rag.yaml")
print(asyncio.run(rag.query("Summarize the key idea.")))
```
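
The `chunk_size` and `overlap` settings in the config describe how documents are split before embedding. The simplest variant is the `fixed` strategy: a fixed-size window slid across the text with some overlap between neighbors. This sketch illustrates the idea only; ragway's `recursive` strategy additionally respects structural boundaries:

```python
def chunk(text, chunk_size=512, overlap=50):
    # Step by chunk_size - overlap so consecutive chunks share `overlap` chars
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

pieces = chunk("x" * 1000, chunk_size=512, overlap=50)
print(len(pieces), [len(p) for p in pieces])  # 3 [512, 512, 76]
```

The overlap trades a little index size for continuity: a sentence straddling a chunk boundary still appears whole in at least one chunk.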

## Links

- Docs: https://ragway.dev
- Issues: https://github.com/yourusername/ragway/issues
- Changelog: https://github.com/yourusername/ragway/blob/main/CHANGELOG.md
