Metadata-Version: 2.4
Name: bog-agents-cli
Version: 0.7.6
Summary: A coding agent in your terminal. 50+ commands, any LLM provider, persistent memory, git workflow, code review, plan mode, remote sandboxes, and CI/CD automation. One install, no code required.
Project-URL: Homepage, https://github.com/bogware/bog-agents
Project-URL: Repository, https://github.com/bogware/bog-agents
Project-URL: Issues, https://github.com/bogware/bog-agents/issues
Project-URL: Changelog, https://github.com/bogware/bog-agents/blob/main/libs/cli/CHANGELOG.md
Author-email: bogware <support@bogware.com>
Maintainer-email: bogware <support@bogware.com>
License: MIT
Keywords: agents,ai,bog-agents,cli,langchain,langgraph,llm,multi-provider,ollama,terminal
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Terminals
Requires-Python: <4.0,>=3.11
Requires-Dist: aiosqlite<1.0.0,>=0.19.0
Requires-Dist: bog-agents<0.8.0,>=0.7.5
Requires-Dist: croniter<4.0.0,>=3.0.0
Requires-Dist: cryptography>=46.0.7
Requires-Dist: httpx<1.0.0,>=0.28.1
Requires-Dist: keyring<27.0.0,>=25.0.0
Requires-Dist: langchain-mcp-adapters<1.0.0,>=0.2.0
Requires-Dist: langchain-openai<2.0.0,>=1.1.14
Requires-Dist: langchain<2.0.0,>=1.2.10
Requires-Dist: langgraph-checkpoint-sqlite<4.0.0,>=3.0.0
Requires-Dist: langgraph-cli[inmem]<1.0.0,>=0.4.15
Requires-Dist: langgraph-sdk<1.0.0,>=0.3.11
Requires-Dist: langgraph<2.0.0,>=1.1.2
Requires-Dist: langsmith<1.0.0,>=0.7.31
Requires-Dist: markdownify<2.0.0,>=0.13.0
Requires-Dist: pillow<13.0.0,>=12.2.0
Requires-Dist: prompt-toolkit<4.0.0,>=3.0.52
Requires-Dist: pyasn1>=0.6.3
Requires-Dist: pygments>=2.20.0
Requires-Dist: pyjwt>=2.12.0
Requires-Dist: pyperclip<2.0.0,>=1.11.0
Requires-Dist: python-dotenv<2.0.0,>=1.2.2
Requires-Dist: python-multipart>=0.0.26
Requires-Dist: pyyaml<7.0.0,>=6.0.0
Requires-Dist: requests<3.0.0,>=2.33.0
Requires-Dist: rich<15.0.0,>=14.0.0
Requires-Dist: textual-autocomplete<5.0.0,>=3.0.0
Requires-Dist: textual<9.0.0,>=8.0.0
Requires-Dist: tomli-w<2.0.0,>=1.0.0
Requires-Dist: uuid-utils<1.0.0,>=0.10.0
Provides-Extra: acp
Requires-Dist: bog-agents-acp>=0.0.4; extra == 'acp'
Provides-Extra: all
Requires-Dist: bog-agents-acp>=0.0.4; extra == 'all'
Requires-Dist: daytona<1.0.0,>=0.113.0; extra == 'all'
Requires-Dist: langchain-anthropic<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-aws<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-baseten<1.0.0,>=0.1.9; extra == 'all'
Requires-Dist: langchain-cohere<1.0.0,>=0.5.0; extra == 'all'
Requires-Dist: langchain-deepseek<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-fireworks<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-google-genai<5.0.0,>=4.0.0; extra == 'all'
Requires-Dist: langchain-google-vertexai<4.0.0,>=3.0.0; extra == 'all'
Requires-Dist: langchain-groq<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-huggingface<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-ibm<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-litellm<2.0.0,>=0.6.1; extra == 'all'
Requires-Dist: langchain-mistralai<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-nvidia-ai-endpoints<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-ollama<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-openai<2.0.0,>=1.1.8; extra == 'all'
Requires-Dist: langchain-openrouter<2.0.0,>=0.1.0; extra == 'all'
Requires-Dist: langchain-perplexity<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langchain-xai<2.0.0,>=1.0.0; extra == 'all'
Requires-Dist: langsmith[sandbox]>=0.7.7; extra == 'all'
Requires-Dist: modal<2.0.0,>=0.65.0; extra == 'all'
Requires-Dist: runloop-api-client>=0.69.0; extra == 'all'
Requires-Dist: tavily-python<1.0.0,>=0.7.21; extra == 'all'
Provides-Extra: all-providers
Requires-Dist: langchain-anthropic<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-aws<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-baseten<1.0.0,>=0.1.9; extra == 'all-providers'
Requires-Dist: langchain-cohere<1.0.0,>=0.5.0; extra == 'all-providers'
Requires-Dist: langchain-deepseek<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-fireworks<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-google-genai<5.0.0,>=4.0.0; extra == 'all-providers'
Requires-Dist: langchain-google-vertexai<4.0.0,>=3.0.0; extra == 'all-providers'
Requires-Dist: langchain-groq<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-huggingface<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-ibm<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-litellm<2.0.0,>=0.6.1; extra == 'all-providers'
Requires-Dist: langchain-mistralai<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-nvidia-ai-endpoints<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-ollama<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-openai<2.0.0,>=1.1.8; extra == 'all-providers'
Requires-Dist: langchain-openrouter<2.0.0,>=0.1.0; extra == 'all-providers'
Requires-Dist: langchain-perplexity<2.0.0,>=1.0.0; extra == 'all-providers'
Requires-Dist: langchain-xai<2.0.0,>=1.0.0; extra == 'all-providers'
Provides-Extra: anthropic
Requires-Dist: langchain-anthropic<2.0.0,>=1.0.0; extra == 'anthropic'
Provides-Extra: baseten
Requires-Dist: langchain-baseten<1.0.0,>=0.1.9; extra == 'baseten'
Provides-Extra: bedrock
Requires-Dist: langchain-aws<2.0.0,>=1.0.0; extra == 'bedrock'
Provides-Extra: cohere
Requires-Dist: langchain-cohere<1.0.0,>=0.5.0; extra == 'cohere'
Provides-Extra: daytona-sandbox
Requires-Dist: daytona<1.0.0,>=0.113.0; extra == 'daytona-sandbox'
Provides-Extra: deepseek
Requires-Dist: langchain-deepseek<2.0.0,>=1.0.0; extra == 'deepseek'
Provides-Extra: fireworks
Requires-Dist: langchain-fireworks<2.0.0,>=1.0.0; extra == 'fireworks'
Provides-Extra: google-genai
Requires-Dist: langchain-google-genai<5.0.0,>=4.0.0; extra == 'google-genai'
Provides-Extra: groq
Requires-Dist: langchain-groq<2.0.0,>=1.0.0; extra == 'groq'
Provides-Extra: huggingface
Requires-Dist: langchain-huggingface<2.0.0,>=1.0.0; extra == 'huggingface'
Provides-Extra: ibm
Requires-Dist: langchain-ibm<2.0.0,>=1.0.0; extra == 'ibm'
Provides-Extra: langsmith-sandbox
Requires-Dist: langsmith[sandbox]>=0.7.7; extra == 'langsmith-sandbox'
Provides-Extra: litellm
Requires-Dist: langchain-litellm<2.0.0,>=0.6.1; extra == 'litellm'
Provides-Extra: mistralai
Requires-Dist: langchain-mistralai<2.0.0,>=1.0.0; extra == 'mistralai'
Provides-Extra: modal-sandbox
Requires-Dist: modal<2.0.0,>=0.65.0; extra == 'modal-sandbox'
Provides-Extra: nvidia
Requires-Dist: langchain-nvidia-ai-endpoints<2.0.0,>=1.0.0; extra == 'nvidia'
Provides-Extra: ollama
Requires-Dist: langchain-ollama<2.0.0,>=1.0.0; extra == 'ollama'
Provides-Extra: openai
Requires-Dist: langchain-openai<2.0.0,>=1.1.8; extra == 'openai'
Provides-Extra: openrouter
Requires-Dist: langchain-openrouter<2.0.0,>=0.1.0; extra == 'openrouter'
Provides-Extra: perplexity
Requires-Dist: langchain-perplexity<2.0.0,>=1.0.0; extra == 'perplexity'
Provides-Extra: runloop-sandbox
Requires-Dist: runloop-api-client>=0.69.0; extra == 'runloop-sandbox'
Provides-Extra: sandbox
Requires-Dist: daytona<1.0.0,>=0.113.0; extra == 'sandbox'
Requires-Dist: langsmith[sandbox]>=0.7.7; extra == 'sandbox'
Requires-Dist: modal<2.0.0,>=0.65.0; extra == 'sandbox'
Requires-Dist: runloop-api-client>=0.69.0; extra == 'sandbox'
Provides-Extra: vertexai
Requires-Dist: langchain-google-vertexai<4.0.0,>=3.0.0; extra == 'vertexai'
Provides-Extra: web-search
Requires-Dist: tavily-python<1.0.0,>=0.7.21; extra == 'web-search'
Provides-Extra: xai
Requires-Dist: langchain-xai<2.0.0,>=1.0.0; extra == 'xai'
Description-Content-Type: text/markdown

# Bog Agents CLI

A coding agent that lives in your terminal. Point it at the work, step back, let it run.

No scaffolding. No config ceremony. One install and you've got file access, a shell,
git, code review, planning, sub-agents — the whole outfit. Works with any LLM that does
tool calls: Anthropic, OpenAI, Bedrock, Google, Ollama, and a dozen others.

Built on the [Bog Agents SDK](https://github.com/bogware/bog-agents) and
[LangGraph](https://github.com/langchain-ai/langgraph). MIT.

[![PyPI](https://img.shields.io/pypi/v/bog-agents-cli)](https://pypi.org/project/bog-agents-cli/)
[![License](https://img.shields.io/pypi/l/bog-agents-cli)](https://opensource.org/licenses/MIT)
[![Downloads](https://img.shields.io/pepy/dt/bog-agents-cli)](https://pypistats.org/packages/bog-agents-cli)

---

## Install

```bash
pip install bog-agents-cli

pip install 'bog-agents-cli[anthropic]'      # Claude
pip install 'bog-agents-cli[bedrock]'        # AWS Bedrock
pip install 'bog-agents-cli[ollama]'         # Local models, no key
pip install 'bog-agents-cli[all-providers]'  # Everything
```

Or with `uv`:

```bash
uv tool install 'bog-agents-cli[anthropic]'
```

## First run

```bash
bog-agents
```

If there's a key in your environment — `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, AWS creds,
anything — the CLI finds it and gets moving. No key, no problem: the setup wizard handles
the introductions in about thirty seconds.

```bash
bog-agents -M claude-sonnet-4-6
bog-agents -M openai:gpt-5.4
bog-agents -M ollama:gpt-oss:20b                          # local, free, tool-capable
bog-agents -M bedrock_converse:us.anthropic.claude-sonnet-4-6
```

Something feeling off? Ask it.

```bash
bog-agents --doctor
```

---

## What it does

**Runs in a real TUI.** Streaming tokens, syntax highlighting, inline diffs, approve
tools one-by-one or not at all. Terminal only — no browser, no Electron, no nonsense.

**Keeps state between runs.** Every session is a thread you can come back to. Memory,
summaries, labels, and per-project context persist in `~/.bog-agents/`.

**Scripts cleanly.** `-n`, `-p`, `--json`, `--no-stream`, and proper exit codes make it
a tool you can pipe, cron, and drop into CI without regret.

**Separates concerns.** Named agents each get their own prompt, memory, skills, and
thread history. A `researcher`, a `reviewer`, a `debugger` — all on the same install.

**Scales out.** Remote sandboxes for isolated work. MCP for external tools. An HTTP
server mode when something else needs to drive.

---

## Slash commands

Hit `/` in an interactive session and autocomplete shows you everything. The commands
that carry the most weight:

| Command | What it does |
|---------|-------------|
| `/model` | Switch LLM mid-session |
| `/plan` | Read-only mode. Agent scouts the territory without touching anything |
| `/effort` | Reasoning depth: `low`, `medium`, `high`, `max` |
| `/review` | Review staged changes, a commit, or specific files |
| `/diff` | Show pending file changes as unified diffs |
| `/compact` | Trim conversation context when it gets heavy |
| `/cost` | Token usage, cost estimate, budget |
| `/context` | Context-window usage with a breakdown |
| `/remember` | Persist an insight to agent memory across sessions |
| `/agent` | Spawn and manage parallel agent threads |
| `/background` | Queue local work and watch it from the side |
| `/dashboard` | Live multi-agent snapshot |
| `/worktree` | Isolated git worktrees for parallel streams |
| `/resume` | Resume latest, specific, or tagged threads |
| `/threads` | Browse and manage past conversations |
| `/recommend` | Persona-based code review |
| `/onboard` | Walk you through a new codebase |
| `/mcp` | Show active MCP servers and tools |
| `/plugin` | Install, list, enable, disable extensions |
| `/remote` | Submit, track, and stop remote tasks |
| `/doctor` | Health check: Python, packages, keys, tools, sandboxes |
| `/profile` | Switch configuration presets |
| `/session` | Label, tag, summarize, and export a thread |
| `/keybindings` | Show bindings or the config path |
| `/clear` | Start a fresh thread |
| `/quit` | Hang up your hat |

---

## Non-interactive mode

Where the automation lives. One command, one task, exit code tells the story.

```bash
# Basic task — no shell, safe by default
bog-agents -n 'Summarize the README'

# Grant shell access with a curated allow-list
bog-agents -n 'Run the test suite' --shell-allow-list recommended

# Specific commands only
bog-agents -n 'Search logs for errors' --shell-allow-list cat,grep,find

# Unrestricted shell — trusted environments only
bog-agents -n 'Fix the failing tests and commit' --shell-allow-list all

# Clean output for piping
bog-agents -p 'Explain this code' < my_file.py
bog-agents -p 'Write a code review' < pr_diff.patch | tee review.md

# Machine-readable
bog-agents -n 'List all TODO comments' --json

# Fix an issue and open a PR in one shot
bog-agents -n 'Fix issue #42' --pr --shell-allow-list all

# Draft PR against a specific branch
bog-agents -n 'Add dark mode' --pr --pr-base develop --pr-draft --shell-allow-list all
```

**Exit codes:** `0` success, `1` error, `130` interrupted.

**Shell access** is off by default. Three ways to turn it on:
- `--shell-allow-list recommended` — curated safe commands (`ls`, `cat`, `grep`, `find`, `wc`, more)
- `--shell-allow-list ls,cat,grep` — roll your own
- `--shell-allow-list all` — no guardrails

---

## Threads and memory

Come back to what you were working on.

```bash
bog-agents -r              # Latest thread
bog-agents -r abc123       # Specific thread
bog-agents threads list    # See 'em all
bog-agents threads delete abc123
```

Persistent memory lives in `~/.bog-agents/<agent>/AGENTS.md`. Use `/remember` to add
a note the agent should carry forward. Use `/session` to attach labels, tags, project
names, and summaries to the current thread so you can find it later.

Project-level memory lives in `.bog-agents/AGENTS.md` at your repo root — check it in,
and every teammate on this codebase gets the same context when they fire up the CLI.
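
What goes in that file is up to you — it's plain markdown the agent reads at startup.
A hypothetical `.bog-agents/AGENTS.md`, just to give the shape:

```text
# Project context
- Monorepo: API lives in `services/api`, web client in `apps/web`.
- Run tests with `make test`; never commit directly to `main`.
- Prefer small, focused changes with conventional commit messages.
```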

---

## Skills and extensions

Teach the agent something once, reuse it forever. A skill is a `SKILL.md` manifest plus
whatever scripts and prompts it needs. Extensions bundle skills and slash commands together.

```bash
bog-agents skills list
bog-agents skills create              # Scaffold a new skill
bog-agents skills info my-skill
bog-agents skills delete my-skill
```

In the TUI:

```text
/plugin install <path-or-url>
/plugin info <name>
/plugin enable <name>
/plugin disable <name>
```

---

## Named agents

Run separate agents with separate memory, prompts, and history. Same install, different
hats.

```bash
bog-agents -a researcher
bog-agents -a reviewer
bog-agents list                           # All agents
bog-agents reset --agent researcher       # Back to default prompt
```

---

## Remote sandboxes

When the work's too rough for the local machine, or you want it to run somewhere else
while you get on with yours.

```bash
bog-agents --sandbox modal                # Modal serverless
bog-agents --sandbox daytona              # Daytona cloud
bog-agents --sandbox runloop              # Runloop
bog-agents --sandbox-id existing-id       # Hop back on an existing sandbox
```

Inside the TUI, `/remote` queues tracked tasks:

```text
/remote config
/remote submit --label scout --branch-prefix fix "investigate the failing tests"
/remote status <id>
/remote stop <id>
```

---

## MCP (Model Context Protocol)

External tools, loaded on demand. The CLI auto-finds `.mcp.json` in your project, or
you can point at one.

```bash
bog-agents --mcp-config ./my-mcp-servers.json
bog-agents --no-mcp                       # Off
bog-agents --trust-project-mcp            # Skip the approval prompt
```

---

## Server modes

Put the agent behind an HTTP API when another tool wants to drive.

```bash
bog-agents --serve                                    # localhost:8420
bog-agents --serve --serve-host 0.0.0.0 --serve-port 9000
```

Or run as an Agent Client Protocol server, for Zed:

```bash
bog-agents --acp
```

---

## Model configuration

### Detection order

No `-M` flag? The CLI looks for credentials in this order and picks the first it finds:

1. `[models].default` in `~/.bog-agents/config.toml`
2. `[models].recent` (last `/model` switch)
3. `ANTHROPIC_API_KEY`
4. `OPENAI_API_KEY`
5. AWS Bedrock (`~/.aws/credentials`, `AWS_ACCESS_KEY_ID`, `AWS_PROFILE`)
6. `GOOGLE_API_KEY`
7. `GOOGLE_CLOUD_PROJECT` (Vertex AI)
8. `NVIDIA_API_KEY`
9. Ollama (if the `ollama` binary is on PATH)
10. Setup wizard (if nothing found)

### Setting a default

```bash
bog-agents --default-model anthropic:claude-sonnet-4-6
bog-agents --default-model                    # Show current
bog-agents --clear-default-model              # Remove
```

### Config file

Advanced knobs live in `~/.bog-agents/config.toml`:

```toml
[models]
default = "anthropic:claude-sonnet-4-6"

[providers.anthropic]
temperature = 0.7
max_tokens = 8192

[providers.openai]
api_base = "https://my-proxy.example.com/v1"
```

### Runtime overrides

```bash
bog-agents -M gpt-4o --model-params '{"temperature": 0.2, "max_tokens": 4096}'
bog-agents -M claude-sonnet-4-6 --profile-override '{"max_input_tokens": 100000}'
```

---

## Providers

Use `provider:model` format. Any LangChain-compatible chat model works.

| Provider | Extra | Example |
|----------|--------------|---------|
| Anthropic | `anthropic` | `anthropic:claude-sonnet-4-6` |
| OpenAI | *(included)* | `openai:gpt-5.4` |
| AWS Bedrock | `bedrock` | `bedrock_converse:us.anthropic.claude-sonnet-4-6` |
| Google AI | `google-genai` | `google_genai:gemini-2.5-pro` |
| Vertex AI | `vertexai` | `google_vertexai:gemini-2.5-pro` |
| Ollama | `ollama` | `ollama:gpt-oss:20b` |
| Groq | `groq` | `groq:llama-3.3-70b` |
| DeepSeek | `deepseek` | `deepseek:deepseek-chat` |
| Fireworks | `fireworks` | `fireworks:llama-v3p3-70b` |
| Mistral | `mistralai` | `mistralai:mistral-large-3-2411` |
| NVIDIA | `nvidia` | `nvidia:nemotron-70b` |
| OpenRouter | `openrouter` | `openrouter:meta-llama/llama-3` |
| Perplexity | `perplexity` | `perplexity:sonar-pro` |
| xAI | `xai` | `xai:grok-2` |
| Cohere | `cohere` | `cohere:command-r-plus` |
| Together | *(via litellm)* | `litellm:together/llama-3-70b` |
| HuggingFace | `huggingface` | `huggingface:meta-llama/Llama-3` |
| Azure OpenAI | *(via openai)* | `azure_openai:gpt-4o` |

### AWS Bedrock: pick how you authenticate

boto3's credential chain stops at the first config it sees. If `~/.aws/config`
declares an SSO session that's expired but `~/.aws/credentials` has fresh static
keys, the SSO leg short-circuits and the static keys never get a turn. The CLI
handles this in `auto` mode (default) by retrying with a credentials-file-only
session when the SSO probe fails.

Force a specific path when you need to. Either set
`BOG_AGENTS_BEDROCK_AUTH_MODE` in the env, or write to `~/.bog-agents/config.toml`:

```toml
[models.providers.bedrock]
auth_mode = "static"            # auto | sso | static | profile | iam
aws_profile = "dev"             # only when auth_mode = "profile"
```

`bog-agents --doctor` shows you which mode resolved and whether the credentials
came back valid. New in 0.7.4.

### Local Ollama: which model to use

Ollama's chat API mimics OpenAI's tools-API JSON schema. Models trained
against that exact schema engage tools cleanly; models trained against
other formats (Mistral's `[TOOL_CALLS]{}`, Hermes' `<tool_call>{}</tool_call>`,
Qwen's chat-template tool call) emit calls in the message text and Ollama's
adapter doesn't translate them. The CLI ships a parser middleware that
recovers most text-shaped tool calls automatically when you select an
`ollama:` model, but recovery is best-effort.

- **Recommended**: `ollama:gpt-oss:20b` — OpenAI tools-API native, works
  end-to-end with no recovery needed. Fits in 16GB of VRAM.
- **Recovers via parser**: `ollama:mistral-nemo:12b`, `ollama:hermes3:8b`,
  some `ollama:qwen2.5-coder` runs.
- **Doesn't work**: `ollama:deepseek-coder-v2:16b` (Ollama's manifest
  doesn't expose the `tools` capability — see
  [ollama/ollama#3303](https://github.com/ollama/ollama/issues/3303) if you want
  to nudge upstream), `ollama:starcoder2`, `ollama:codellama`.

Run `bog-agents --doctor` to see whether your configured default Ollama
model is on the known-good list.

---

## Recipes for CI and scripting

```bash
# Code review in CI
git diff main...HEAD | bog-agents -p 'Review this diff for bugs and style issues'

# Commit message from staged changes
bog-agents -p 'Write a conventional commit message for the staged changes' \
  --shell-allow-list git

# Automated refactor
bog-agents -n 'Rename getUserData to fetch_user_data across the codebase' \
  --shell-allow-list recommended

# Docstring pass
bog-agents -n 'Generate docstrings for all public functions in src/' \
  --shell-allow-list recommended

# Security audit, JSON out
bog-agents -n 'Audit this repo for security vulnerabilities' \
  --shell-allow-list recommended --json

# Issue bot: fix and open a PR
bog-agents -n 'Fix issue #123' --pr --shell-allow-list all
```
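
The review recipe drops into a workflow file without much ceremony. A sketch of a
GitHub Actions job, assuming an `ANTHROPIC_API_KEY` repository secret — the job name,
action versions, and base branch are illustrative:

```yaml
name: agent-review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # full history so the three-dot diff resolves
      - run: pip install 'bog-agents-cli[anthropic]'
      - env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          git diff origin/main...HEAD | bog-agents -p 'Review this diff for bugs and style issues' | tee review.md
```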

---

## Environment variables

| Variable | Purpose |
|----------|---------|
| `ANTHROPIC_API_KEY` | Anthropic |
| `OPENAI_API_KEY` | OpenAI |
| `AWS_ACCESS_KEY_ID` / `AWS_PROFILE` | AWS Bedrock |
| `GOOGLE_API_KEY` | Google AI |
| `GOOGLE_CLOUD_PROJECT` | Vertex AI |
| `NVIDIA_API_KEY` | NVIDIA |
| `TAVILY_API_KEY` | Tavily web search |
| `BOG_AGENTS_SHELL_ALLOW_LIST` | Default shell allow-list |
| `BOG_AGENTS_LANGSMITH_PROJECT` | LangSmith tracing project |

Keys can also sit in a project-level `.env` or a user-level `~/.bog-agents/.env`.
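
A hypothetical `~/.bog-agents/.env` combining a key with a default allow-list — the
key value is a placeholder:

```text
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
BOG_AGENTS_SHELL_ALLOW_LIST=recommended
```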

---

## CLI reference

```
bog-agents [OPTIONS] [COMMAND]

Commands:
  list                          List agents
  reset                         Reset an agent's prompt
  skills                        Manage skills (list/create/info/delete)
  threads                       Manage threads (list/delete)
  daemon                        Manage the ambient daemon (start/stop/jobs/...)
  verify                        Run typecheck + lint + tests; write verification_summary.md
  call MESSAGE                  Talk to a running --serve instance (thin HTTP client)

Core:
  -M, --model MODEL             Model to use
  -a, --agent NAME              Agent name (default: agent)
  -r, --resume [ID]             Resume a thread
  -m, --message TEXT            Auto-submit prompt on start
  --auto-approve                Auto-approve tool calls
  --doctor                      Run diagnostics
  -v, --version                 Show versions
  -h, --help                    Show help

Non-Interactive:
  -n, --non-interactive MSG     Run task and exit
  -p, --print TEXT              Clean output mode (-n + -q)
  -q, --quiet                   Suppress UI chrome
  --no-stream                   Buffer response
  --json                        JSON output
  --shell-allow-list CMDS       Shell access control
  --pr                          Create PR from output
  --pr-base BRANCH              PR base branch
  --pr-draft                    Draft PR

Model:
  --model-params JSON           Extra model kwargs
  --profile-override JSON       Override profile fields
  --default-model [MODEL]       Set/show default model
  --clear-default-model         Clear default

Sandbox:
  --sandbox TYPE                Sandbox provider
  --sandbox-id ID               Reuse existing sandbox
  --sandbox-setup PATH          Setup script

MCP:
  --mcp-config PATH             MCP config file
  --no-mcp                      Disable MCP
  --trust-project-mcp           Trust project MCP

Server:
  --serve                       HTTP API mode
  --serve-host HOST             API host
  --serve-port PORT             API port
  --acp                         ACP server mode
```

---

## Requirements

- Python 3.11+
- At least one LLM provider (key or local model)

## Contributing

See [CONTRIBUTING.md](https://github.com/bogware/bog-agents/blob/main/CONTRIBUTING.md).

## License

MIT.

---

*The trail's marked. Saddle up.*
