Metadata-Version: 2.4
Name: kondoo
Version: 0.1.6
Summary: A flexible, self-hosted RAG chatbot framework for containerized deployments.
Author-email: Luis Pereida <luis.pereida@sysadminctl.services>
License: MIT License
        
        Copyright (c) 2025 sysadminctl.services
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
Project-URL: Homepage, https://github.com/sysadminctl-services/kondoo
Project-URL: Repository, https://github.com/sysadminctl-services/kondoo
Project-URL: Bug Tracker, https://github.com/sysadminctl-services/kondoo/issues
Keywords: chatbot,rag,llm,ollama,gemini,self-hosted,docker,podman,container,framework,python,flask,llama-index
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Communications :: Chat
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Operating System :: OS Independent
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: flask
Requires-Dist: python-dotenv
Requires-Dist: google-generativeai
Requires-Dist: llama-index
Requires-Dist: llama-index-llms-gemini
Requires-Dist: llama-index-llms-openai
Requires-Dist: llama-index-llms-openai-like
Requires-Dist: llama-index-embeddings-ollama
Requires-Dist: llama-index-embeddings-huggingface
Requires-Dist: llama-index-embeddings-openai
Dynamic: license-file

# Kondoo 🦙

**Kondoo** is not just a chatbot; it is a framework for building **autonomous digital minds**. Its name is inspired by the word “condominium,” a system of independent dwellings that share the same structure. Similarly, Kondoo allows multiple bots to operate independently, each with its own personality and knowledge base, but sharing the same robust, containerized framework.

This project was born with a **“self-hosted first”** philosophy, giving you complete control over your data and the models you use, from a local `tinyllama` to cloud APIs such as Gemini.

> **Kondoo: Your knowledge, your rules, your assistants.**

---

## 🚀 Key Features

* **Framework Agnostic:** Not tied to a specific provider. Use an `ANSWER_LLM_PROVIDER` to choose your answer engine (Gemini, OpenAI, Ollama) and a `KNOWLEDGE_PROVIDER` for your embeddings (Ollama, local, OpenAI).
* **Containerized by Design:** Built on **Podman** and `compose`, ensuring maximum portability and clean, repeatable deployment.
* **Self-Hosted First:** Designed to run 100% locally, using Ollama for both embeddings and response generation, giving you full control and privacy.
* **Flexible:** Easily configure each bot's personality through a simple `personality.txt` file.
* **Extensible:** The `src/` structure makes it an installable Python package, ready to be imported into larger projects.

## 🏛️ Project Structure

Kondoo is structured as a Python framework, separating reusable code from implementation examples:

* `src/kondoo/`: The source code for the `kondoo` framework (installable via `pip`).
* `example/example_bot/`: A complete and functional example bot that shows how to use the framework. This is your starting point.
* `pyproject.toml`: Defines the project and all its dependencies.
* `.env.example`: A universal template with all available environment variables.

## ⚡ Quickstart Guide

Try Kondoo in 5 minutes using the sample bot.

### 1. Prerequisites

* [Podman](https://podman.io/) and `podman-compose`.
* [Python 3.9+](https://www.python.org/)
* Your own Ollama service (local or remote) or an API Key (e.g., Google Gemini).
* [SynapsIA](https://github.com/sysadminctl-services/synapsia) to create the knowledge base.

### 2. Clone the Repository

```bash
git clone https://github.com/sysadminctl-services/kondoo.git
cd kondoo
```

### 3. Set Up the Example Bot
Navigate to the example directory:

```bash
cd example/example_bot
```

Create your personal configuration file from the root template:

```bash
cp ../../.env.example .env
```

Edit the `.env` file and fill in the variables. For a 100% local test with Ollama:

```ini
# example/example_bot/.env
ANSWER_LLM_PROVIDER=ollama_compatible
KNOWLEDGE_PROVIDER=ollama

LLM_MODEL_NAME="tinyllama"
LLM_BASE_URL="http://host.containers.internal:11434/v1"
LLM_API_KEY="ollama"

EMBEDDING_MODEL_NAME="mxbai-embed-large"
OLLAMA_BASE_URL="http://host.containers.internal:11434"
```
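For comparison, a cloud setup might use Gemini for answers while keeping embeddings self-hosted. The following is a sketch assuming you have a Gemini API key; every variable is documented in the Configuration section below, and the model name is only an example:

```ini
# example/example_bot/.env (cloud answers, self-hosted knowledge)
ANSWER_LLM_PROVIDER=gemini
KNOWLEDGE_PROVIDER=ollama

LLM_MODEL_NAME="models/gemini-1.5-flash"
LLM_API_KEY="your-gemini-api-key"

EMBEDDING_MODEL_NAME="mxbai-embed-large"
OLLAMA_BASE_URL="http://host.containers.internal:11434"
```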

### 4. Create the Knowledge Base

Create the directories for the documents and the knowledge base:

```bash
mkdir docs knowledge
echo "Kondoo is a RAG chatbot framework." > docs/info.txt
```

Use [SynapsIA](https://github.com/sysadminctl-services/synapsia) to process your documents. From your SynapsIA checkout, adjusting the paths to match where you cloned Kondoo:

```bash
python synapsia.py --docs ../kondoo/example/example_bot/docs/ --knowledge ../kondoo/example/example_bot/knowledge/
```

### 5. Launch the Container

Return to the bot directory and run `podman-compose`:

```bash
# While in example/example_bot/
podman-compose up --build
```

### 6. Test the Bot
Open a new terminal and send a query using `curl`:

```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "What is Kondoo?"}' \
  http://localhost:5000/query
```

You should receive a JSON response generated by your local `tinyllama`.
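As an alternative to `curl`, the same request can be made from Python using only the standard library. The endpoint and payload match the example above; the shape of the JSON reply depends on your bot, so the helper simply returns the parsed document:

```python
import json
import urllib.request

def ask_bot(query: str, url: str = "http://localhost:5000/query") -> dict:
    """POST a query to a running Kondoo bot and return its parsed JSON reply."""
    payload = json.dumps({"query": query}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (with the container running):
#   print(ask_bot("What is Kondoo?"))
```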

## ⚙️ Configuration (.env)

All configuration variables are documented in the `.env.example` file. Variables are loaded from `.env` in your bot's directory (e.g., `example/example_bot/.env`).

### 1. Provider Selection

These variables act as "switches" to choose which services to use.

* `ANSWER_LLM_PROVIDER`: Choose your response (LLM) engine.
    * `gemini`: (Cloud) Google Gemini (requires `LLM_API_KEY`).
    * `openai`: (Cloud) OpenAI (requires `LLM_API_KEY`).
    * `ollama_compatible`: (Self-Hosted) Any OpenAI-compatible API, like Ollama (requires `LLM_BASE_URL` and `LLM_MODEL_NAME`).
* `KNOWLEDGE_PROVIDER`: Choose your embeddings (knowledge) engine.
    * `ollama`: (Self-Hosted) Use an Ollama service (requires `OLLAMA_BASE_URL` and `EMBEDDING_MODEL_NAME`).
    * `local`: (Local) Use a HuggingFace model on the CPU/GPU (requires `EMBEDDING_MODEL_NAME`).
    * `openai`: (Cloud) Use OpenAI's embeddings API (requires `LLM_API_KEY`).

---

### 2. Provider-Specific Settings

These are the "control knobs" required by the providers you selected above.

#### Answer Engine (LLM) Settings

* `LLM_API_KEY`:
    * **Required by:** `gemini`, `openai`.
    * **Description:** Your secret API key for the chosen cloud service.
* `LLM_MODEL_NAME`:
    * **Required by:** `gemini`, `openai`, `ollama_compatible`.
    * **Description:** The specific model name to use for generating answers.
    * **Examples:** `models/gemini-1.5-flash`, `gpt-4o`, `tinyllama`.
* `LLM_BASE_URL`:
    * **Required by:** `ollama_compatible`.
    * **Description:** The full base URL of your self-hosted LLM's OpenAI-compatible API.
    * **Example (Ollama):** `http://host.containers.internal:11434/v1`

#### Knowledge (Embedding) Settings

* `EMBEDDING_MODEL_NAME`:
    * **Required by:** `ollama`, `local`, `openai`.
    * **Description:** The specific model name to use for embeddings.
    * **Examples:** `mxbai-embed-large`, `nomic-embed-text`.
* `OLLAMA_BASE_URL`:
    * **Required by:** `ollama` (provider).
    * **Description:** The base URL of your Ollama service (the non-`/v1` endpoint).
    * **Example:** `http://host.containers.internal:11434`

---

### 3. Bot Configuration

These variables control the bot's identity and data paths.

* `BOT_PERSONALITY_FILE`:
    * **Description:** The path *inside the container* to the text file that defines the bot's personality.
    * **Default:** `/app/personality.txt` (as set by the `Containerfile`).
* `KNOWLEDGE_DIR`:
    * **Description:** The path *inside the container* where the bot will load its knowledge base from.
    * **Default:** `/app/knowledge` (as set by the `compose.yaml` volume).
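The defaults above imply a container layout along these lines. This is an illustrative sketch only (the service name and build context are assumptions); the real file ships with `example/example_bot/`:

```yaml
# Illustrative compose.yaml sketch, not the project's actual file.
services:
  kondoo-bot:
    build: .
    env_file: .env
    ports:
      - "5000:5000"                      # the /query endpoint used in the Quickstart
    volumes:
      - ./knowledge:/app/knowledge:ro    # matches the KNOWLEDGE_DIR default
```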

## ⚖️ License
This project is licensed under the MIT License. See the `LICENSE` file for details.
