Metadata-Version: 2.4
Name: invincat-cli
Version: 0.1.1
Summary: Terminal interface for Deep Agents - interactive AI agent with file operations, shell access, and sub-agent capabilities.
Project-URL: Homepage, https://github.com/dog-qiuqiu/invincat
Project-URL: Documentation, https://github.com/dog-qiuqiu/invincat#readme
Project-URL: Repository, https://github.com/dog-qiuqiu/invincat
Project-URL: Issues, https://github.com/dog-qiuqiu/invincat/issues
Project-URL: Changelog, https://github.com/dog-qiuqiu/invincat/releases
Author-email: dog-qiuqiu <dogqiuqiu@outlook.com>
Maintainer-email: dog-qiuqiu <dogqiuqiu@outlook.com>
License: MIT
Keywords: agents,ai,cli,deep-agent,langchain,langgraph,llm,terminal
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Terminals
Classifier: Topic :: Text Processing :: Linguistic
Classifier: Typing :: Typed
Requires-Python: <4.0,>=3.11
Requires-Dist: aiosqlite<1.0.0,>=0.19.0
Requires-Dist: deepagents-acp>=0.0.4
Requires-Dist: deepagents==0.4.11
Requires-Dist: httpx<1.0.0,>=0.28.1
Requires-Dist: langchain-anthropic<2.0.0,>=1.4.0
Requires-Dist: langchain-google-genai<5.0.0,>=4.2.1
Requires-Dist: langchain-mcp-adapters<1.0.0,>=0.2.0
Requires-Dist: langchain-openai<2.0.0,>=1.1.8
Requires-Dist: langchain<2.0.0,>=1.2.13
Requires-Dist: langgraph-checkpoint-sqlite<4.0.0,>=3.0.0
Requires-Dist: langgraph-cli[inmem]<1.0.0,>=0.4.15
Requires-Dist: langgraph-sdk<1.0.0,>=0.3.11
Requires-Dist: langgraph<2.0.0,>=1.1.2
Requires-Dist: pillow<13.0.0,>=10.0.0
Requires-Dist: prompt-toolkit<4.0.0,>=3.0.52
Requires-Dist: pyperclip<2.0.0,>=1.11.0
Requires-Dist: python-dotenv<2.0.0,>=1.0.0
Requires-Dist: pyyaml>=6.0.0
Requires-Dist: requests<3.0.0,>=2.0.0
Requires-Dist: rich<15.0.0,>=14.0.0
Requires-Dist: tavily-python<1.0.0,>=0.7.21
Requires-Dist: textual-autocomplete<5.0.0,>=3.0.0
Requires-Dist: textual<9.0.0,>=8.0.0
Requires-Dist: tomli-w<2.0.0,>=1.0.0
Provides-Extra: dev
Requires-Dist: black<25.0.0,>=24.0.0; extra == 'dev'
Requires-Dist: mypy<2.0.0,>=1.10.0; extra == 'dev'
Requires-Dist: pre-commit<4.0.0,>=3.7.0; extra == 'dev'
Requires-Dist: pytest-asyncio<1.0.0,>=0.23.0; extra == 'dev'
Requires-Dist: pytest-cov<6.0.0,>=5.0.0; extra == 'dev'
Requires-Dist: pytest-mock<4.0.0,>=3.14.0; extra == 'dev'
Requires-Dist: pytest<9.0.0,>=8.0.0; extra == 'dev'
Requires-Dist: ruff<1.0.0,>=0.9.0; extra == 'dev'
Provides-Extra: test
Requires-Dist: pytest-asyncio<1.0.0,>=0.23.0; extra == 'test'
Requires-Dist: pytest-cov<6.0.0,>=5.0.0; extra == 'test'
Requires-Dist: pytest-mock<4.0.0,>=3.14.0; extra == 'test'
Requires-Dist: pytest<9.0.0,>=8.0.0; extra == 'test'
Requires-Dist: ruff<1.0.0,>=0.9.0; extra == 'test'
Description-Content-Type: text/markdown

# Invincat CLI

[中文文档](README_CN.md)

A Python-based terminal AI programming assistant — collaborate with AI directly in your project directory: read/write files, execute commands, browse the web, and maintain memory across sessions.

![](data/cli_en.png)

---

## Installation

**Requirements**: Python 3.11+

```bash
# Install from PyPI
pip install invincat-cli
```

Or install from source:

```bash
git clone https://github.com/dog-qiuqiu/invincat.git
cd invincat
pip install -e .
```

---

## Quick Start

```bash
# Start in your project directory
cd ~/my-project
invincat-cli
```

After the first launch, run `/model` to configure a model and API key; then you can start chatting right away.

---

## Model Configuration

### Configure via Interface

Run `/model` command to open the model management interface:

![](data/model_en.png)

1. Press `Ctrl+N` to register a new model
2. Fill in the provider, model name, and API Key
3. Select from the list and press `Enter` to activate

### Supported Providers

| Provider | Example Models |
|----------|----------------|
| `anthropic` | `claude-sonnet-4-6`, `claude-opus-4-7` |
| `openai` | `gpt-4o`, `o3` |
| `google_genai` | `gemini-2.0-flash`, `gemini-2.5-pro` |
| `openrouter` | Supports all models on OpenRouter |

For OpenAI-compatible interfaces (DeepSeek, Zhipu, local Ollama, etc.), simply set the `base_url` to connect.
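
As an illustrative sketch only, a custom OpenAI-compatible endpoint might be described with fields like the ones below. The exact config keys are assumptions and may differ; the `/model` interface is the supported way to register models, and it will show the actual field names:

```toml
# Hypothetical example — field names are illustrative, not the real schema.
# Use the /model interface to register models interactively.
[[models]]
provider = "openai"
model = "deepseek-chat"
api_key = "sk-..."                      # your provider key
base_url = "https://api.deepseek.com"   # OpenAI-compatible endpoint
```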

### Environment Variables

| Variable | Description |
|----------|-------------|
| `ANTHROPIC_API_KEY` | Anthropic API Key |
| `OPENAI_API_KEY` | OpenAI API Key |
| `GOOGLE_API_KEY` | Google API Key |
| `OPENROUTER_API_KEY` | OpenRouter API Key |
| `TAVILY_API_KEY` | Tavily web search Key (optional) |
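
These can be exported in your shell (or shell profile) before launching; the values below are placeholders:

```shell
# Set provider keys before launching invincat-cli (placeholder values)
export ANTHROPIC_API_KEY="sk-ant-your-key"
export TAVILY_API_KEY="tvly-your-key"   # optional, enables web search
```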

---

## Basic Usage

Type your question or task directly in the input box and press `Enter` to send. The AI automatically selects the appropriate tools to complete the task:

```
Search for the latest usage of LangGraph interrupt
```

---

### Command Mode (`/` prefix)

```
/clear
/threads
/model
...
```

Press `Tab` to autocomplete available commands. See [Slash Commands](#slash-commands) for the complete list.

---

## File References

Use `@` in your message to reference files; the AI will read and understand their content:

```
@src/main.py Are there any potential performance issues in this file?
```

---

## Tool Approval

Before the AI performs operations such as writing files, running shell commands, or making network requests, it pauses by default and asks for your confirmation.

**Auto-approve Mode**: Press `Shift+Tab` to toggle. When enabled, all tool calls are approved automatically, which suits trusted task scenarios. The status bar displays an `AUTO` indicator.

> ⚠️ Enable auto-approve only for tasks whose content you are already familiar with.

## Input Line Breaks

Press `Ctrl+J` in the input box to insert a line break, suitable for entering longer code or paragraphs.

---

## Context Management

### Micro Compression

A lightweight compression pass that runs automatically before each model call, with **no LLM involved**, taking under 1 ms.

**How it works**: Groups conversation messages by "tool call groups", keeps the last 3 groups complete, and replaces large tool outputs in older groups with brief placeholders (keeping the first line summary and line count).

**Compressible Tool Outputs**:
| Tool | Compression Effect |
|------|-------------------|
| `read_file` | File content → `[cleared — read_file, 500 lines: def main():…]` |
| `edit_file` | diff output → placeholder |
| `execute` | shell output → placeholder |
| `grep`/`glob`/`ls` | search/list results → placeholder |
| `web_search`/`fetch_url` | web content → placeholder |

**Not Compressed**: agent/subagent results, `ask_user` responses, MCP tool outputs, `compact_conversation` results.

> 💡 Micro compression only affects the context sent to the model; it does not modify persisted state, and the complete history is still saved in checkpoints.
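
The grouping-and-placeholder logic above can be sketched roughly as follows. This is a simplified illustration, not the actual implementation: the message shape, function names, and thresholds are all assumptions.

```python
# Minimal sketch of the micro-compression idea: keep the last few
# tool-call groups intact, and replace large tool outputs in older
# groups with a one-line placeholder. Each "group" here is a list of
# (tool_name, output) tuples — a simplified stand-in for real messages.

KEEP_LAST_GROUPS = 3       # recent groups are kept complete
LARGE_OUTPUT_CHARS = 200   # assumed threshold for a "large" output


def micro_compress(groups):
    """Return a copy of groups with old, large outputs summarized."""
    compressed = []
    for i, group in enumerate(groups):
        recent = i >= len(groups) - KEEP_LAST_GROUPS
        new_group = []
        for tool, text in group:
            if recent or len(text) <= LARGE_OUTPUT_CHARS:
                new_group.append((tool, text))  # keep as-is
            else:
                lines = text.splitlines()
                # Keep the first line as a summary, plus the line count.
                placeholder = f"[cleared — {tool}, {len(lines)} lines: {lines[0]}…]"
                new_group.append((tool, placeholder))
        compressed.append(new_group)
    return compressed
```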

### Auto Compression

When context window usage exceeds **80%**, the system automatically compresses older messages into summaries to free up space; no manual action is required. The status bar's token count turns orange above 70% and red above 90% as a warning.

### Manual Compression

```
/offload
```

Or equivalently `/compact`. After execution, it shows how many messages were compressed and how many tokens were freed.

## Memory System

AI can remember your preferences, project conventions, and important information across sessions.

### Memory Files

| Type | Path | Scope |
|------|------|-------|
| Global Memory | `~/.invincat/agent/AGENTS.md` | Universal for all projects (coding style, personal preferences) |
| Project Memory | `{project root}/.invincat/AGENTS.md` | Only for current Git repository (architecture conventions, tech stack) |
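
Memory files are plain Markdown that the AI reads at startup. A project memory file might contain entries like the following (illustrative content only):

```markdown
# Project conventions

- Use 4-space indentation; no semicolons in JS files.
- All API handlers live under `src/api/`; add a test beside each handler.
- Prefer `httpx` over `requests` for new HTTP code.
```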

### Manual Memory Update

```
/remember
```

Prompts the AI to review the conversation for content worth saving and write it to the memory files.

### Auto Memory Update

The system automatically checks for new content worth saving every few conversation rounds (the interval is configurable), or triggers early when it detects keywords such as "standards", "conventions", or "preferences" in the conversation.

**Configure Auto Memory**: Run `/auto-memory` to open the configuration interface, or manually set in `~/.invincat/config.toml`:

```toml
[auto_memory]
enabled = true   # Enable auto memory (default: true)
interval = 10    # Number of rounds between checks (default: 10)
on_exit = true   # Write marker on exit, trigger early next startup (default: true)
```

---

## Skill System

Skills are predefined workflow templates for reusing complex task steps.

### Using Skills

```
/skill:web-research Search for LangGraph best practices
/skill:code-review Check code quality in src/ directory
```

### Skill Locations

| Location | Path | Description |
|----------|------|-------------|
| Built-in Skills | Installed with package | `remember`, `skill-creator` |
| Global Custom | `~/.invincat/agent/skills/` | Available across projects |
| Project-level | `.invincat/skills/` | Only available in current project |

### Creating Custom Skills

```
/skill-creator
```

Starts an interactive wizard that guides you through creating and saving new skills.

---

## Session Management

### View and Switch Sessions

```
/threads
```

Opens the session browser, displaying all historical conversations (time, message count, branch, etc.).

### Start New Conversation

```
/clear
```

Clears the current conversation and starts a new session (old sessions are still saved and can be retrieved via `/threads`).

---

## Slash Commands

Type `/` in the input box and press `Tab` to view and autocomplete all commands.

### Session

| Command | Description |
|---------|-------------|
| `/clear` | Clear current conversation, start new session |
| `/threads` | Browse and restore historical sessions |
| `/quit` / `/q` | Exit program |

### Model & Interface

| Command | Description |
|---------|-------------|
| `/model` | Switch or manage model configurations |
| `/theme` | Switch color theme |
| `/language` | Switch interface language (Chinese / English) |
| `/tokens` | View token usage details |

### Context & Memory

| Command | Description |
|---------|-------------|
| `/offload` / `/compact` | Manually compress context, free tokens |
| `/remember` | Manually trigger memory update |
| `/auto-memory` | Configure auto memory behavior |

### Tools & Extensions

| Command | Description |
|---------|-------------|
| `/mcp` | View connected MCP servers and tools |
| `/editor` | Edit current input in external editor |
| `/skill-creator` | Interactive wizard for creating new skills |

### Others

| Command | Description |
|---------|-------------|
| `/help` | Display help information |
| `/version` | Display version number |
| `/reload` | Reload configuration files |
| `/trace` | Open current conversation in LangSmith (requires configuration) |

---

## FAQ

**Q: No response on first launch?**
You need to configure the model first. Run `/model` → Press `Ctrl+N` to register a model → Fill in the API Key.

**Q: How to interrupt a running task?**
Press `Esc` to interrupt the current AI response; if AI is waiting for tool approval, `Esc` acts as a rejection.

**Q: Context too long causing slow response?**
Run `/offload` to manually compress history, or wait for automatic compression (triggers when usage exceeds 80%).

**Q: How to make AI remember my coding preferences?**
Just tell the AI directly, for example "Remember: my project uses 4-space indentation, no semicolons", and it will automatically save this to the memory files at an appropriate time. You can also run `/remember` to trigger saving manually.

**Q: How to share skills across different projects?**
Place skill files in the `~/.invincat/agent/skills/` directory for global availability; place in `.invincat/skills/` for current project only.
