Metadata-Version: 2.4
Name: docklee
Version: 1.0.1
Summary: Docklee AI context infrastructure SDK
Project-URL: Homepage, https://docklee.com
Project-URL: Documentation, https://docs.docklee.com
Project-URL: Repository, https://github.com/docklee-ai/docklee-python
License: MIT
Keywords: ai,context,knowledge,llm,memory,rag
Requires-Python: >=3.9
Requires-Dist: httpx>=0.27.0
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.2.0; extra == 'langchain'
Requires-Dist: langchain>=0.2.0; extra == 'langchain'
Provides-Extra: langgraph
Requires-Dist: langgraph>=0.1.0; extra == 'langgraph'
Provides-Extra: voice
Requires-Dist: pipecat-ai>=0.0.30; extra == 'voice'
Description-Content-Type: text/markdown

# docklee

AI context infrastructure SDK for Python: company knowledge and persistent memory for any AI agent.

## Install

```bash
pip install docklee
```
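
The integrations below ship as optional extras (declared in the package metadata). Quoting the extras keeps your shell from expanding the brackets:

```bash
pip install "docklee[langchain]"   # LangChain retriever and memory
pip install "docklee[langgraph]"   # LangGraph nodes
pip install "docklee[voice]"       # Pipecat voice agent processor
```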

## Quick Start

```python
import asyncio
from docklee import Docklee

async def main():
    async with Docklee(api_key="dk_live_xxxx") as client:

        # Query a knowledge engine
        answer = await client.knowledge.query("eng_xxxx", "What is our refund policy?")
        print(answer.answer)
        print(answer.confidence)

        # Retrieve chunks for your own LLM
        chunks = await client.knowledge.retrieve("eng_xxxx", "pricing tiers")
        for chunk in chunks:
            print(chunk.content)

        # Write to memory
        await client.memory.write("space_xxxx", "User prefers dark mode")

        # Search memory
        results = await client.memory.search("space_xxxx", "user preferences")
        for r in results:
            print(r.content)

        # Unified context: knowledge engine + memory in one call
        context = await client.context.assemble(
            "eng_xxxx",
            "What is the pricing for 50 seats?",
            memory_space_id="space_xxxx",
        )
        print(context.answer)

asyncio.run(main())
```
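
Since `retrieve` returns raw chunks, a common pattern is to stitch them into a prompt for your own model. A minimal sketch using the OpenAI SDK; the model name and prompt wording are illustrative, not part of docklee:

```python
from docklee import Docklee
from openai import AsyncOpenAI

openai_client = AsyncOpenAI()  # assumes OPENAI_API_KEY is set in the environment

async def answer_with_own_llm(client: Docklee, question: str) -> str:
    # Pull relevant chunks from the knowledge engine, then prompt your own model.
    chunks = await client.knowledge.retrieve("eng_xxxx", question)
    context = "\n\n".join(chunk.content for chunk in chunks)
    response = await openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```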

## OpenAI Integration

```python
from openai import AsyncOpenAI
from docklee.providers import withDocklee

client = withDocklee(
    AsyncOpenAI(api_key="sk-xxxx"),
    docklee_key="dk_live_xxxx",
    engine_id="eng_xxxx",          # company knowledge engine
    memory_space_id="space_xxxx",  # user memory space
    mode="precise",                # "precise" | "guide" | "explore"
    write_to_memory=True,          # auto-save conversations to memory
)

# Use the client as normal; knowledge and memory are injected automatically.
# (Run inside an async function, as in the Quick Start.)
response = await client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is our refund policy?"},
    ],
)
```
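
Because the wrapped client is used like a normal `AsyncOpenAI` client, the response is read the standard way; with `write_to_memory=True`, the exchange is also saved to the memory space.

```python
print(response.choices[0].message.content)
```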

## Universal Tool Support

```python
from docklee.providers import DockleeTools

tools = DockleeTools(
    docklee_key="dk_live_xxxx",
    engine_id="eng_xxxx",          # knowledge engine to search
    memory_space_id="space_xxxx",  # memory space to recall from
)

# Returns tools in the format each LLM expects
tools.for_openai()     # OpenAI function calling format
tools.for_anthropic()  # Anthropic tool use format
tools.for_gemini()     # Gemini function calling format
tools.for_any()        # generic format

# Handle tool calls from the LLM
result = await tools.handle_tool_call(
    "docklee_search_knowledge",
    {"query": "refund policy"}
)
```
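
As a concrete example, here is one way to wire these tools into an OpenAI function-calling loop. Only `for_openai()` and `handle_tool_call()` (from the snippet above) come from docklee; the rest is the standard OpenAI SDK flow, and converting the tool result with `str()` is an assumption about its type:

```python
import json
from openai import AsyncOpenAI

openai_client = AsyncOpenAI()

async def run_with_tools(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    response = await openai_client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools.for_openai(),
    )
    message = response.choices[0].message

    if message.tool_calls:
        # Execute each docklee tool call and feed the results back to the model.
        messages.append(message)
        for tool_call in message.tool_calls:
            result = await tools.handle_tool_call(
                tool_call.function.name,
                json.loads(tool_call.function.arguments),
            )
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": str(result),
            })
        response = await openai_client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
        )

    return response.choices[0].message.content
```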

## LangChain

```bash
pip install "docklee[langchain]"
```

```python
from docklee.integrations.langchain import DockleeRetriever, DockleeMemory

retriever = DockleeRetriever(
    api_key="dk_live_xxxx",
    engine_id="eng_xxxx",  # replaces your existing vector store retriever
)

memory = DockleeMemory(
    api_key="dk_live_xxxx",
    space_id="space_xxxx",  # replaces your existing conversation memory
)

docs = await retriever.ainvoke("What is our pricing?")
history = await memory.aload_memory_variables({"input": "pricing"})
```
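
Assuming `DockleeRetriever` returns standard LangChain `Document` objects (it is described above as a drop-in replacement for a vector store retriever), a rough sketch of feeding it into a prompt; `ChatPromptTemplate` comes from langchain-core, and the chat model at the end is whatever you already use:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only this context:\n\n{context}"),
    ("human", "{question}"),
])

question = "What is our pricing?"
docs = await retriever.ainvoke(question)              # standard LangChain Documents
context = "\n\n".join(doc.page_content for doc in docs)
messages = prompt.format_messages(context=context, question=question)
# answer = await your_chat_model.ainvoke(messages)    # any LangChain chat model
```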

## LangGraph

```bash
pip install "docklee[langgraph]"
```

```python
from docklee.integrations.langgraph import docklee_knowledge_node, docklee_memory_node

# Add as nodes in your graph (a langgraph StateGraph; see the wiring sketch below)
graph.add_node("knowledge", docklee_knowledge_node(
    api_key="dk_live_xxxx",
    engine_id="eng_xxxx",     # searches the knowledge engine at this node
    output_key="knowledge",   # key written to graph state
))

graph.add_node("memory", docklee_memory_node(
    api_key="dk_live_xxxx",
    space_id="space_xxxx",    # reads from and writes to memory at this node
    write_response=True,      # save agent response to memory after each turn
))
```
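
For context, a minimal graph the two nodes could plug into. The state keys here are illustrative assumptions (the docklee nodes presumably read the user input and write under `output_key`), not the SDK's documented schema:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict, total=False):
    input: str      # the user question (assumed key)
    knowledge: str  # filled in by the knowledge node via output_key
    memory: str     # filled in by the memory node (assumed key)

graph = StateGraph(AgentState)

# ... the two graph.add_node(...) calls from above go here ...

graph.add_edge(START, "knowledge")
graph.add_edge("knowledge", "memory")
graph.add_edge("memory", END)

app = graph.compile()
result = await app.ainvoke({"input": "What is the pricing for 50 seats?"})
```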

## Voice Agents

```bash
pip install "docklee[voice]"
```

```python
from docklee.integrations.pipecat import DockleeContextProcessor

processor = DockleeContextProcessor(
    api_key="dk_live_xxxx",
    engine_id="eng_xxxx",          # fetch knowledge before each turn
    memory_space_id="space_xxxx",  # inject user memory context
    mode="precise",                # answer mode
)

# Call before passing transcript to your LLM
context = await processor.get_context(transcript)
```
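
A sketch of where that call might sit in a turn handler, assuming `get_context` returns a block of text that can be prepended to the prompt; the message shape is illustrative, not part of the pipecat integration:

```python
async def build_turn_messages(transcript: str) -> list[dict]:
    # Fetch knowledge + memory for this user turn, then shape the LLM prompt.
    context = await processor.get_context(transcript)
    return [
        {"role": "system", "content": f"Use this context when answering:\n\n{context}"},
        {"role": "user", "content": transcript},
    ]
```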

## Links

- Website: https://docklee.com
- Docs: https://docs.docklee.com
- API Reference: https://docs.docklee.com/api
