Metadata-Version: 2.4
Name: kollabor
Version: 0.4.22
Summary: An advanced, highly customizable terminal-based chat application for interacting with LLMs
Author-email: Kollabor Contributors <contributors@example.com>
License: MIT
Project-URL: Homepage, https://github.com/kollaborai/kollabor-cli
Project-URL: Repository, https://github.com/kollaborai/kollabor-cli
Project-URL: Documentation, https://github.com/kollaborai/kollabor-cli/blob/main/docs/
Project-URL: Bug Tracker, https://github.com/kollaborai/kollabor-cli/issues
Project-URL: Homebrew Tap, https://github.com/kollaborai/homebrew-tap
Keywords: llm,cli,chat,terminal,ai,chatbot,assistant,kollabor,plugin-system,event-driven
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Communications :: Chat
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Topic :: Terminals
Classifier: Environment :: Console
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: Unix
Requires-Python: >=3.12
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: aiohttp>=3.10.11
Requires-Dist: httpx>=0.27.0
Requires-Dist: kollabor-agent>=0.4.22
Requires-Dist: kollabor-ai>=0.4.22
Requires-Dist: kollabor-config>=0.4.22
Requires-Dist: kollabor-events>=0.4.22
Requires-Dist: kollabor-plugins>=0.4.22
Requires-Dist: kollabor-tui>=0.4.22
Requires-Dist: psutil>=5.9.0
Requires-Dist: packaging>=23.0
Requires-Dist: pydantic>=2.0.0
Provides-Extra: dev
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Requires-Dist: flake8>=6.0.0; extra == "dev"
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Dynamic: license-file

# Kollabor

[![Python Version](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

An advanced, highly customizable terminal-based chat application for interacting with Large Language Models (LLMs). Built with a powerful plugin system and comprehensive hook architecture for complete customization.

**macOS:** `brew install kollaborai/tap/kollabor`
**Other:** `curl -sS https://raw.githubusercontent.com/kollaborai/kollabor-cli/main/install.sh | bash`
**Run:** `kollab`

## Features

- **Event-Driven Architecture**: Every action fires hooks that plugins can attach to and customize
- **Advanced Plugin System**: Dynamic plugin discovery and loading with comprehensive SDK
- **Rich Terminal UI**: Beautiful terminal rendering with status areas, visual effects, and modal overlays
- **Conversation Management**: Persistent conversation history with full logging support
- **Model Context Protocol (MCP)**: Built-in support for MCP integration
- **Tool Execution**: Function calling and tool execution capabilities
- **Pipe Mode**: Non-interactive mode for scripting and automation
- **Environment Variable Support**: Complete configuration via environment variables (API settings, system prompts, etc.)
- **Extensible Configuration**: Flexible configuration system with plugin integration
- **Async/Await Throughout**: Modern Python async patterns for responsive performance

## Installation

### macOS (Recommended)

Standard Homebrew installation from the project's tap:

```bash
brew install kollaborai/tap/kollabor
```

To upgrade:

```bash
brew upgrade kollabor
```

### One-Line Install (Cross-Platform)

Auto-detects the best method (uvx > pipx > pip):

```bash
curl -sS https://raw.githubusercontent.com/kollaborai/kollabor-cli/main/install.sh | bash
```

### Using uvx (Fastest, Isolated)

uvx runs the app in an isolated environment without installation:

```bash
uvx --from kollabor kollab
```

Or install to uv tool cache for instant startup:

```bash
uv tool install kollabor
kollab
```

### Using pipx (Isolated, Clean)

Recommended for user-space installation without system conflicts:

```bash
pipx install kollabor
```

### Using pip

Standard Python package installation:

```bash
pip install kollabor
```

### From Source

```bash
git clone https://github.com/kollaborai/kollabor-cli.git
cd kollabor-cli
pip install -e .
```

### Development Installation

```bash
pip install -e ".[dev]"
```

## Quick Start

### Interactive Mode

Simply run the CLI to start an interactive chat session:

```bash
kollab
```

### Pipe Mode

Process a single query and exit:

```bash
# Direct query
kollab "What is the capital of France?"

# From stdin
echo "Explain quantum computing" | kollab -p

# From file
cat document.txt | kollab -p

# With custom timeout
kollab --timeout 5min "Complex analysis task"
```

## Configuration

On first run, Kollabor creates a `.kollabor-cli` directory in your current working directory:

```
.kollabor-cli/
├── config.json           # User configuration
├── system_prompt/        # System prompt templates
├── logs/                 # Application logs
└── state.db              # Persistent state
```

### Configuration Options

The configuration system uses dot notation:

- `kollabor.llm.*` - LLM service settings
- `terminal.*` - Terminal rendering options
- `application.*` - Application metadata
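
To make dot-notation lookup concrete, here is a minimal, self-contained sketch of how a dotted key can resolve against a nested config dictionary. This is an illustration only, not Kollabor's actual implementation; the `get` helper and the sample values are assumptions for demonstration.

```python
# Illustrative sketch: resolving dot-notation keys against a nested
# config dict (not Kollabor's real resolver).
config = {
    "kollabor": {"llm": {"temperature": 0.7}},
    "terminal": {"theme": "dark"},
}

def get(cfg, dotted_key, default=None):
    """Walk the nested dict one dotted segment at a time."""
    node = cfg
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

print(get(config, "kollabor.llm.temperature"))  # 0.7
print(get(config, "terminal.theme"))            # dark
```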

### Environment Variables

All configuration can be controlled via environment variables, which take precedence over config files:

#### API Configuration

Kollabor uses a **profile-based** configuration system. Each profile defines a provider, model, and connection settings. Environment variables follow the pattern `KOLLABOR_{PROFILE_NAME}_{FIELD}`.

**Supported fields per profile:** `MODEL`, `PROVIDER`, `BASE_URL`, `API_KEY`, `MAX_TOKENS`, `TEMPERATURE`, `TIMEOUT`, `TOP_P`, `STREAMING`, `SUPPORTS_TOOLS`, `DESCRIPTION`, `EXTRA_HEADERS`

##### Local LLM (Ollama, LM Studio, vLLM)

```bash
KOLLABOR_LOCAL_PROVIDER=custom
KOLLABOR_LOCAL_BASE_URL=http://localhost:11434/v1   # Ollama
KOLLABOR_LOCAL_MODEL=llama3.1
KOLLABOR_LOCAL_MAX_TOKENS=32768
KOLLABOR_LOCAL_TEMPERATURE=0.7
```

```bash
KOLLABOR_LMSTUDIO_PROVIDER=custom
KOLLABOR_LMSTUDIO_BASE_URL=http://localhost:1234/v1
KOLLABOR_LMSTUDIO_MODEL=qwen3-0.6b
```

##### OpenAI

```bash
# GPT-5.2: max output 128K tokens, 400K context
KOLLABOR_OPENAI_PROVIDER=openai
KOLLABOR_OPENAI_API_KEY=sk-proj-...
KOLLABOR_OPENAI_MODEL=gpt-5.2
KOLLABOR_OPENAI_MAX_TOKENS=128000
KOLLABOR_OPENAI_TEMPERATURE=0.7
```

```bash
# GPT-5 mini: max output 128K tokens (faster, cost-efficient)
KOLLABOR_OPENAI_PROVIDER=openai
KOLLABOR_OPENAI_API_KEY=sk-proj-...
KOLLABOR_OPENAI_MODEL=gpt-5-mini
KOLLABOR_OPENAI_MAX_TOKENS=128000
```

```bash
# o4-mini: max output 100K tokens (reasoning model)
KOLLABOR_OPENAI_PROVIDER=openai
KOLLABOR_OPENAI_API_KEY=sk-proj-...
KOLLABOR_OPENAI_MODEL=o4-mini
KOLLABOR_OPENAI_MAX_TOKENS=100000
```

##### Anthropic Claude

```bash
# Claude Sonnet 4.6: max output 64K tokens
KOLLABOR_CLAUDE_PROVIDER=anthropic
KOLLABOR_CLAUDE_API_KEY=sk-ant-...
KOLLABOR_CLAUDE_MODEL=claude-sonnet-4-6
KOLLABOR_CLAUDE_MAX_TOKENS=64000
```

```bash
# Claude Opus 4.6: max output 128K tokens
KOLLABOR_OPUS_PROVIDER=anthropic
KOLLABOR_OPUS_API_KEY=sk-ant-...
KOLLABOR_OPUS_MODEL=claude-opus-4-6
KOLLABOR_OPUS_MAX_TOKENS=128000
```

##### Azure OpenAI

```bash
KOLLABOR_AZURE_PROVIDER=azure_openai
KOLLABOR_AZURE_BASE_URL=https://myresource.openai.azure.com
KOLLABOR_AZURE_API_KEY=your-azure-key
KOLLABOR_AZURE_MODEL=gpt-5-mini
KOLLABOR_AZURE_MAX_TOKENS=128000
```

##### Google Gemini

```bash
# Gemini 3.1 Pro (preview): max output 64K tokens, 1M context
KOLLABOR_GEMINI_PROVIDER=gemini
KOLLABOR_GEMINI_API_KEY=your-gemini-key
KOLLABOR_GEMINI_MODEL=gemini-3.1-pro-preview
KOLLABOR_GEMINI_MAX_TOKENS=64000
```

```bash
# Gemini 3 Flash (preview): max output 65K tokens, 1M context
KOLLABOR_GEMINI_PROVIDER=gemini
KOLLABOR_GEMINI_API_KEY=your-gemini-key
KOLLABOR_GEMINI_MODEL=gemini-3-flash-preview
KOLLABOR_GEMINI_MAX_TOKENS=65536
```

##### xAI Grok

```bash
# Grok 4.1 Fast: 2M context, ~30K max output
KOLLABOR_GROK_PROVIDER=custom
KOLLABOR_GROK_BASE_URL=https://api.x.ai/v1
KOLLABOR_GROK_API_KEY=your-xai-key
KOLLABOR_GROK_MODEL=grok-4-1-fast-reasoning
KOLLABOR_GROK_MAX_TOKENS=30000
```

```bash
# Grok Code: 256K context, 10K max output, coding-optimized
KOLLABOR_GROKCODE_PROVIDER=custom
KOLLABOR_GROKCODE_BASE_URL=https://api.x.ai/v1
KOLLABOR_GROKCODE_API_KEY=your-xai-key
KOLLABOR_GROKCODE_MODEL=grok-code-fast-1
KOLLABOR_GROKCODE_MAX_TOKENS=10000
```

##### Z.AI (Zhipu / GLM)

```bash
# GLM-5: max output 131K tokens, 205K context
KOLLABOR_GLM_PROVIDER=custom
KOLLABOR_GLM_BASE_URL=https://api.z.ai/api/paas/v4
KOLLABOR_GLM_API_KEY=your-zai-key
KOLLABOR_GLM_MODEL=glm-5
KOLLABOR_GLM_MAX_TOKENS=131072
```

```bash
# GLM-4.7 Flash: max output 131K tokens, 203K context (fast & cheap)
KOLLABOR_GLMFAST_PROVIDER=custom
KOLLABOR_GLMFAST_BASE_URL=https://api.z.ai/api/paas/v4
KOLLABOR_GLMFAST_API_KEY=your-zai-key
KOLLABOR_GLMFAST_MODEL=glm-4.7-flash
KOLLABOR_GLMFAST_MAX_TOKENS=131072
```

```bash
# Z.AI Coding Plan ($3/mo): use with coding tools
KOLLABOR_GLMCODE_PROVIDER=custom
KOLLABOR_GLMCODE_BASE_URL=https://api.z.ai/api/coding/paas/v4
KOLLABOR_GLMCODE_API_KEY=your-zai-key
KOLLABOR_GLMCODE_MODEL=glm-5
KOLLABOR_GLMCODE_MAX_TOKENS=131072
```

##### Kimi (Moonshot AI)

```bash
# Kimi K2.5: max output 65K tokens, 262K context
KOLLABOR_KIMI_PROVIDER=custom
KOLLABOR_KIMI_BASE_URL=https://api.moonshot.ai/v1
KOLLABOR_KIMI_API_KEY=your-moonshot-key
KOLLABOR_KIMI_MODEL=kimi-k2.5
KOLLABOR_KIMI_MAX_TOKENS=65535
```

```bash
# Kimi Coding Plan: 256K context, coding-optimized
KOLLABOR_KIMICODE_PROVIDER=custom
KOLLABOR_KIMICODE_BASE_URL=https://api.kimi.com/coding/v1
KOLLABOR_KIMICODE_API_KEY=your-kimi-key
KOLLABOR_KIMICODE_MODEL=kimi-for-coding
KOLLABOR_KIMICODE_MAX_TOKENS=32768
```

##### OpenRouter (Multi-Provider Gateway)

OpenRouter provides access to 300+ models from many providers through a single API key. Model IDs use the `provider/model` format.

```bash
# Any model via OpenRouter
KOLLABOR_OPENROUTER_PROVIDER=openrouter
KOLLABOR_OPENROUTER_API_KEY=sk-or-...
KOLLABOR_OPENROUTER_MODEL=anthropic/claude-opus-4.6
KOLLABOR_OPENROUTER_MAX_TOKENS=128000
```

```bash
# More OpenRouter model ID examples:
# openai/gpt-5.2              google/gemini-3-flash
# x-ai/grok-4.1-fast          z-ai/glm-5
# moonshotai/kimi-k2.5        deepseek/deepseek-v3.2
```

##### Switching Profiles

```bash
kollab --profile claude          # Use a specific profile
kollab --profile local --save    # Auto-create profile from env vars and save
```

Or use the `/profile` command interactively to list, switch, and create profiles.

#### System Prompt Configuration

```bash
# Direct string (highest priority)
KOLLABOR_SYSTEM_PROMPT="You are a helpful coding assistant."

# Custom file path
KOLLABOR_SYSTEM_PROMPT_FILE="./my_custom_prompt.md"
```

#### Using .env Files

Create a `.env` file in your project root:

```bash
# Local LLM profile
KOLLABOR_LOCAL_PROVIDER=custom
KOLLABOR_LOCAL_BASE_URL=http://localhost:1234/v1
KOLLABOR_LOCAL_MODEL=qwen3-0.6b

# Cloud profile
KOLLABOR_CLAUDE_PROVIDER=anthropic
KOLLABOR_CLAUDE_API_KEY=sk-ant-your-key-here
KOLLABOR_CLAUDE_MODEL=claude-sonnet-4-20250514

# Custom system prompt
KOLLABOR_SYSTEM_PROMPT_FILE="./prompts/specialized.md"
```

Load and run:

```bash
set -a; source .env; set +a      # safer than `export $(cat .env | xargs)`, which chokes on comment lines
kollab --profile local           # Use local LLM
kollab --profile claude          # Use Claude
```

See [ENV_VARS.md](ENV_VARS.md) for complete documentation and examples.

## Architecture

Kollabor follows a modular, event-driven architecture:

### Core Components

- **Application Core** (`kollabor/application.py`): Main orchestrator
- **Event System** (`kollabor/events/`): Central event bus with hook system
- **LLM Services** (`kollabor/llm/`): API communication, conversation management, tool execution
- **I/O System** (`kollabor/io/`): Terminal rendering, input handling, visual effects
- **Plugin System** (`kollabor/plugins/`): Dynamic plugin discovery and loading
- **Configuration** (`kollabor/config/`): Flexible configuration management
- **Storage** (`kollabor/storage/`): State management and persistence

### Plugin Development

Create custom plugins by inheriting from base plugin classes:

```python
from kollabor.plugins import BasePlugin
from kollabor.events import EventType, HookPriority  # HookPriority assumed to live alongside EventType

class MyPlugin(BasePlugin):
    def register_hooks(self):
        """Register plugin hooks."""
        self.event_bus.register_hook(
            EventType.PRE_USER_INPUT,
            self.on_user_input,
            priority=HookPriority.NORMAL
        )

    async def on_user_input(self, context):
        """Process user input before it's sent to the LLM."""
        # Your custom logic here
        return context

    def get_status_line(self):
        """Provide status information for the status bar."""
        return "MyPlugin: Active"
```

## Hook System

The comprehensive hook system allows plugins to intercept and modify behavior at every stage:

- `pre_user_input` - Before processing user input
- `pre_api_request` - Before API calls to LLM
- `post_api_response` - After receiving LLM responses
- `pre_message_display` - Before displaying messages
- `post_message_display` - After displaying messages
- And many more...
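
To illustrate how priority-ordered hooks compose, here is a toy, self-contained registry. Everything in it (`MiniEventBus`, the lambda hooks, the context dict shape) is an illustrative assumption, not Kollabor's actual API; the real event bus lives in `kollabor/events/`.

```python
# Toy sketch of a priority-ordered hook registry (illustrative only).
from collections import defaultdict

class MiniEventBus:
    def __init__(self):
        self._hooks = defaultdict(list)

    def register_hook(self, event, fn, priority=50):
        # Lower priority values run first; key= keeps the sort stable
        # even when two hooks share a priority.
        self._hooks[event].append((priority, fn))
        self._hooks[event].sort(key=lambda pair: pair[0])

    def emit(self, event, context):
        # Each hook receives the context and returns a (possibly
        # modified) context for the next hook in the chain.
        for _, fn in self._hooks[event]:
            context = fn(context)
        return context

bus = MiniEventBus()
bus.register_hook("pre_user_input",
                  lambda ctx: {**ctx, "text": ctx["text"].strip()},
                  priority=10)
bus.register_hook("pre_user_input",
                  lambda ctx: {**ctx, "seen": True},
                  priority=20)
result = bus.emit("pre_user_input", {"text": "  hello  "})
```

The same chaining idea is why a plugin's hook handler returns its `context`: the next hook in priority order receives whatever the previous one produced.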

## Project Structure

```
kollabor/
├── kollabor/              # Core application modules
│   ├── application.py # Main orchestrator
│   ├── config/        # Configuration management
│   ├── events/        # Event bus and hooks
│   ├── io/            # Terminal I/O
│   ├── llm/           # LLM services
│   ├── plugins/       # Plugin system
│   └── storage/       # State management
├── plugins/           # Plugin implementations
├── docs/              # Documentation
├── tests/             # Test suite
└── main.py            # Application entry point
```

## Development

### Running Tests

```bash
# All tests
python tests/run_tests.py

# Specific test file
python -m unittest tests.test_llm_plugin

# Individual test case
python -m unittest tests.test_llm_plugin.TestLLMPlugin.test_thinking_tags_removal
```

### Code Quality

```bash
# Format code
python -m black kollabor/ plugins/ tests/ main.py

# Type checking
python -m mypy kollabor/ plugins/

# Linting
python -m flake8 kollabor/ plugins/ tests/ main.py --max-line-length=88

# Clean up cache files and build artifacts
python scripts/clean.py
```

## Requirements

- Python 3.12 or higher
- aiohttp 3.10.11 or higher

## License

MIT License - see LICENSE file for details

## Contributing

Contributions are welcome! Please see the documentation for development guidelines.

## Links

- [Documentation](https://github.com/kollaborai/kollabor-cli/blob/main/docs/)
- [Bug Tracker](https://github.com/kollaborai/kollabor-cli/issues)
- [Repository](https://github.com/kollaborai/kollabor-cli)

## Acknowledgments

Built with modern Python async/await patterns and designed for extensibility and customization.
