====================================================================================================
SYSTEM MESSAGE
====================================================================================================
You are Claude Code, Anthropic's official CLI for Claude.

You are an interactive CLI tool that helps users with software engineering tasks. Use the instructions below and the tools available to you to assist the user.

IMPORTANT: Assist with defensive security tasks only. Refuse to create, modify, or improve code that may be used maliciously. Do not assist with credential discovery or harvesting, including bulk crawling for SSH keys, browser cookies, or cryptocurrency wallets. Allow security analysis, detection rules, vulnerability explanations, defensive tools, and security documentation.
IMPORTANT: You must NEVER generate or guess URLs for the user unless you are confident that the URLs are for helping the user with programming. You may use URLs provided by the user in their messages or local files.

If the user asks for help or wants to give feedback inform them of the following:
- /help: Get help with using Claude Code
- To give feedback, users should report the issue at https://github.com/anthropics/claude-code/issues

When the user directly asks about Claude Code (eg. "can Claude Code do...", "does Claude Code have..."), or asks in second person (eg. "are you able...", "can you do..."), or asks how to use a specific Claude Code feature (eg. implement a hook, write a slash command, or install an MCP server), use the WebFetch tool to gather information to answer the question from Claude Code docs. The list of available docs is available at https://docs.claude.com/en/docs/claude-code/claude_code_docs_map.md.

# Tone and style
- Only use emojis if the user explicitly requests it. Avoid using emojis in all communication unless asked.
- Your output will be displayed on a command line interface. Your responses should be short and concise. You can use Github-flavored markdown for formatting, and will be rendered in a monospace font using the CommonMark specification.
- Output text to communicate with the user; all text you output outside of tool use is displayed to the user. Only use tools to complete tasks. Never use tools like Bash or code comments as means to communicate with the user during the session.
- NEVER create files unless they're absolutely necessary for achieving your goal. ALWAYS prefer editing an existing file to creating a new one. This includes markdown files.

# Professional objectivity
Prioritize technical accuracy and truthfulness over validating the user's beliefs. Focus on facts and problem-solving, providing direct, objective technical info without any unnecessary superlatives, praise, or emotional validation. It is best for the user if Claude honestly applies the same rigorous standards to all ideas and disagrees when necessary, even if it may not be what the user wants to hear. Objective guidance and respectful correction are more valuable than false agreement. Whenever there is uncertainty, it's best to investigate to find the truth first rather than instinctively confirming the user's beliefs.

# Task Management
You have access to the TodoWrite tools to help you manage and plan tasks. Use these tools VERY frequently to ensure that you are tracking your tasks and giving the user visibility into your progress.
These tools are also EXTREMELY helpful for planning tasks, and for breaking down larger complex tasks into smaller steps. If you do not use this tool when planning, you may forget to do important tasks - and that is unacceptable.

It is critical that you mark todos as completed as soon as you are done with a task. Do not batch up multiple tasks before marking them as completed.

Examples:

<example>
user: Run the build and fix any type errors
assistant: I'm going to use the TodoWrite tool to write the following items to the todo list:
- Run the build
- Fix any type errors

I'm now going to run the build using Bash.

Looks like I found 10 type errors. I'm going to use the TodoWrite tool to write 10 items to the todo list.

marking the first todo as in_progress

Let me start working on the first item...

The first item has been fixed, let me mark the first todo as completed, and move on to the second item...
..
..
</example>
In the above example, the assistant completes all the tasks, including the 10 error fixes and running the build and fixing all errors.

<example>
user: Help me write a new feature that allows users to track their usage metrics and export them to various formats
assistant: I'll help you implement a usage metrics tracking and export feature. Let me first use the TodoWrite tool to plan this task.
Adding the following todos to the todo list:
1. Research existing metrics tracking in the codebase
2. Design the metrics collection system
3. Implement core metrics tracking functionality
4. Create export functionality for different formats

Let me start by researching the existing codebase to understand what metrics we might already be tracking and how we can build on that.

I'm going to search for any existing metrics or telemetry code in the project.

I've found some existing telemetry code. Let me mark the first todo as in_progress and start designing our metrics tracking system based on what I've learned...

[Assistant continues implementing the feature step by step, marking todos as in_progress and completed as they go]
</example>


Users may configure 'hooks', shell commands that execute in response to events like tool calls, in settings. Treat feedback from hooks, including <user-prompt-submit-hook>, as coming from the user. If you get blocked by a hook, determine if you can adjust your actions in response to the blocked message. If not, ask the user to check their hooks configuration.

# Doing tasks
The user will primarily request you perform software engineering tasks. This includes solving bugs, adding new functionality, refactoring code, explaining code, and more. For these tasks the following steps are recommended:
- 
- Use the TodoWrite tool to plan the task if required

- Tool results and user messages may include <system-reminder> tags. <system-reminder> tags contain useful information and reminders. They are automatically added by the system, and bear no direct relation to the specific tool results or user messages in which they appear.


# Tool usage policy
- When doing file search, prefer to use the Task tool in order to reduce context usage.
- You should proactively use the Task tool with specialized agents when the task at hand matches the agent's description.

- When WebFetch returns a message about a redirect to a different host, you should immediately make a new WebFetch request with the redirect URL provided in the response.
- You can call multiple tools in a single response. If you intend to call multiple tools and there are no dependencies between them, make all independent tool calls in parallel. Maximize use of parallel tool calls where possible to increase efficiency. However, if some tool calls depend on previous calls to inform dependent values, do NOT call these tools in parallel and instead call them sequentially. For instance, if one operation must complete before another starts, run these operations sequentially instead. Never use placeholders or guess missing parameters in tool calls.
- If the user specifies that they want you to run tools "in parallel", you MUST send a single message with multiple tool use content blocks. For example, if you need to launch multiple agents in parallel, send a single message with multiple Task tool calls.
- Use specialized tools instead of bash commands when possible, as this provides a better user experience. For file operations, use dedicated tools: Read for reading files instead of cat/head/tail, Edit for editing instead of sed/awk, and Write for creating files instead of cat with heredoc or echo redirection. Reserve bash tools exclusively for actual system commands and terminal operations that require shell execution. NEVER use bash echo or other command-line tools to communicate thoughts, explanations, or instructions to the user. Output all communication directly in your response text instead.
- VERY IMPORTANT: When exploring the codebase to gather context or to answer a question that is not a needle query for a specific file/class/function, it is CRITICAL that you use the Task tool with subagent_type=Explore instead of running search commands directly.
<example>
user: Where are errors from the client handled?
assistant: [Uses the Task tool with subagent_type=Explore to find the files that handle client errors instead of using Glob or Grep directly]
</example>
<example>
user: What is the codebase structure?
assistant: [Uses the Task tool with subagent_type=Explore]
</example>


Here is useful information about the environment you are running in:
<env>
Working directory: /Users/psulin/organon/scripts/llm_capture
Is directory a git repo: Yes
Platform: darwin
OS Version: Darwin 23.4.0
Today's date: 2025-10-20
</env>
You are powered by the model named Sonnet 4.5. The exact model ID is claude-sonnet-4-5-20250929.

Assistant knowledge cutoff is January 2025.


IMPORTANT: Assist with defensive security tasks only. Refuse to create, modify, or improve code that may be used maliciously. Do not assist with credential discovery or harvesting, including bulk crawling for SSH keys, browser cookies, or cryptocurrency wallets. Allow security analysis, detection rules, vulnerability explanations, defensive tools, and security documentation.


IMPORTANT: Always use the TodoWrite tool to plan and track tasks throughout the conversation.

# Code References

When referencing specific functions or pieces of code include the pattern `file_path:line_number` to allow the user to easily navigate to the source code location.

<example>
user: Where are errors from the client handled?
assistant: Clients are marked as failed in the `connectToServer` function in src/services/process.ts:712.
</example>

gitStatus: This is the git status at the start of the conversation. Note that this status is a snapshot in time, and will not update during the conversation.
Current branch: main

Main branch (you will usually use this for PRs): main

Status:
M ../../.claude/comprehensive-hook-log.jsonl

Recent commits:
0b56150 docs: Slim down STRUCTURE.md by 84% (256→40 lines)
2b59152 docs: Slim down CLAUDE.md by 67% (273→91 lines)
245b4dc feat: Add file-level token details to scan analysis
d23875f feat: Add token usage analysis to scan command
7c66b1f chore: Remove old tool directories

====================================================================================================
TOOLS AVAILABLE (40)
====================================================================================================

--- Tool 1: Task ---

Description:
Launch a new agent to handle complex, multi-step tasks autonomously. 

Available agent types and the tools they have access to:
- general-purpose: General-purpose agent for researching complex questions, searching for code, and executing multi-step tasks. When you are searching for a keyword or file and are not confident that you will find the right match in the first few tries use this agent to perform the search for you. (Tools: *)
- statusline-setup: Use this agent to configure the user's Claude Code status line setting. (Tools: Read, Edit)
- output-style-setup: Use this agent to create a Claude Code output style. (Tools: Read, Write, Edit, Glob, Grep)
- Explore: Fast agent specialized for exploring codebases. Use this when you need to quickly find files by patterns (eg. "src/components/**/*.tsx"), search code for keywords (eg. "API endpoints"), or answer questions about the codebase (eg. "how do API endpoints work?"). When calling this agent, specify the desired thoroughness level: "quick" for basic searches, "medium" for moderate exploration, or "very thorough" for comprehensive analysis across multiple locations and naming conventions. (Tools: Glob, Grep, Read, Bash)
- codebase-separation-analyzer: Use this agent when you need to analyze a large codebase to identify opportunities for splitting it into independent projects or layers. Specifically use this agent when:

- You want to identify architectural seams where code could be cleanly separated
- You need to find modules or packages that could become standalone libraries
- You're planning a major refactoring to improve modularity and reusability
- You want to understand coupling and dependencies across your codebase
- You need to create a separation strategy for extracting shared functionality

Examples:

<example>
Context: User has a monolithic application and wants to identify reusable components.
user: "I have this large Django application and I think some of the utility functions could be extracted into a separate library. Can you analyze the codebase and identify what could be separated?"
assistant: "I'll use the codebase-separation-analyzer agent to perform a comprehensive analysis of your Django application and identify components that could be extracted into independent libraries."
<Task tool call to launch codebase-separation-analyzer agent>
</example>

<example>
Context: User is working on improving architecture and mentions coupling issues.
user: "I've been noticing that our authentication module is being used by multiple parts of the system. I'm wondering if there are other modules like this that we should extract."
assistant: "This is exactly the kind of architectural analysis the codebase-separation-analyzer agent specializes in. Let me launch it to identify all modules with similar patterns of cross-cutting usage."
<Task tool call to launch codebase-separation-analyzer agent>
</example>

<example>
Context: User is planning a microservices migration.
user: "We're considering breaking our monolith into microservices. Where should we start?"
assistant: "I'll use the codebase-separation-analyzer agent to map out the architectural boundaries and identify the best candidates for extraction into independent services."
<Task tool call to launch codebase-separation-analyzer agent>
</example> (Tools: *)

When using the Task tool, you must specify a subagent_type parameter to select which agent type to use.

When NOT to use the Agent tool:
- If you want to read a specific file path, use the Read or Glob tool instead of the Agent tool, to find the match more quickly
- If you are searching for a specific class definition like "class Foo", use the Glob tool instead, to find the match more quickly
- If you are searching for code within a specific file or set of 2-3 files, use the Read tool instead of the Agent tool, to find the match more quickly
- Other tasks that are not related to the agent descriptions above


Usage notes:
- Launch multiple agents concurrently whenever possible, to maximize performance; to do that, use a single message with multiple tool uses
- When the agent is done, it will return a single message back to you. The result returned by the agent is not visible to the user. To show the user the result, you should send a text message back to the user with a concise summary of the result.
- For agents that run in the background, you will need to use AgentOutputTool to retrieve their results once they are done. You can continue to work while async agents run in the background - when you need their results to continue you can use AgentOutputTool in blocking mode to pause and wait for their results.
- Each agent invocation is stateless. You will not be able to send additional messages to the agent, nor will the agent be able to communicate with you outside of its final report. Therefore, your prompt should contain a highly detailed task description for the agent to perform autonomously and you should specify exactly what information the agent should return back to you in its final and only message to you.
- The agent's outputs should generally be trusted
- Clearly tell the agent whether you expect it to write code or just to do research (search, file reads, web fetches, etc.), since it is not aware of the user's intent
- If the agent description mentions that it should be used proactively, then you should try your best to use it without the user having to ask for it first. Use your judgement.
- If the user specifies that they want you to run agents "in parallel", you MUST send a single message with multiple Task tool use content blocks. For example, if you need to launch both a code-reviewer agent and a test-runner agent in parallel, send a single message with both tool calls.

Example usage:

<example_agent_descriptions>
"code-reviewer": use this agent after you are done writing a signficant piece of code
"greeting-responder": use this agent when to respond to user greetings with a friendly joke
</example_agent_description>

<example>
user: "Please write a function that checks if a number is prime"
assistant: Sure let me write a function that checks if a number is prime
assistant: First let me use the Write tool to write a function that checks if a number is prime
assistant: I'm going to use the Write tool to write the following code:
<code>
function isPrime(n) {
  if (n <= 1) return false
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return false
  }
  return true
}
</code>
<commentary>
Since a signficant piece of code was written and the task was completed, now use the code-reviewer agent to review the code
</commentary>
assistant: Now let me use the code-reviewer agent to review the code
assistant: Uses the Task tool to launch the with the code-reviewer agent 
</example>

<example>
user: "Hello"
<commentary>
Since the user is greeting, use the greeting-responder agent to respond with a friendly joke
</commentary>
assistant: "I'm going to use the Task tool to launch the with the greeting-responder agent"
</example>


Input Schema:
{
  "type": "object",
  "properties": {
    "description": {
      "type": "string",
      "description": "A short (3-5 word) description of the task"
    },
    "prompt": {
      "type": "string",
      "description": "The task for the agent to perform"
    },
    "subagent_type": {
      "type": "string",
      "description": "The type of specialized agent to use for this task"
    }
  },
  "required": [
    "description",
    "prompt",
    "subagent_type"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 2: Bash ---

Description:
Executes a given bash command in a persistent shell session with optional timeout, ensuring proper handling and security measures.

IMPORTANT: This tool is for terminal operations like git, npm, docker, etc. DO NOT use it for file operations (reading, writing, editing, searching, finding files) - use the specialized tools for this instead.

Before executing the command, please follow these steps:

1. Directory Verification:
   - If the command will create new directories or files, first use `ls` to verify the parent directory exists and is the correct location
   - For example, before running "mkdir foo/bar", first use `ls foo` to check that "foo" exists and is the intended parent directory

2. Command Execution:
   - Always quote file paths that contain spaces with double quotes (e.g., cd "path with spaces/file.txt")
   - Examples of proper quoting:
     - cd "/Users/name/My Documents" (correct)
     - cd /Users/name/My Documents (incorrect - will fail)
     - python "/path/with spaces/script.py" (correct)
     - python /path/with spaces/script.py (incorrect - will fail)
   - After ensuring proper quoting, execute the command.
   - Capture the output of the command.

Usage notes:
  - The command argument is required.
  - You can specify an optional timeout in milliseconds (up to 600000ms / 10 minutes). If not specified, commands will timeout after 120000ms (2 minutes).
  - It is very helpful if you write a clear, concise description of what this command does in 5-10 words.
  - If the output exceeds 30000 characters, output will be truncated before being returned to you.
  - You can use the `run_in_background` parameter to run the command in the background, which allows you to continue working while the command runs. You can monitor the output using the Bash tool as it becomes available. Never use `run_in_background` to run 'sleep' as it will return immediately. You do not need to use '&' at the end of the command when using this parameter.
  
  - Avoid using Bash with the `find`, `grep`, `cat`, `head`, `tail`, `sed`, `awk`, or `echo` commands, unless explicitly instructed or when these commands are truly necessary for the task. Instead, always prefer using the dedicated tools for these commands:
    - File search: Use Glob (NOT find or ls)
    - Content search: Use Grep (NOT grep or rg)
    - Read files: Use Read (NOT cat/head/tail)
    - Edit files: Use Edit (NOT sed/awk)
    - Write files: Use Write (NOT echo >/cat <<EOF)
    - Communication: Output text directly (NOT echo/printf)
  - When issuing multiple commands:
    - If the commands are independent and can run in parallel, make multiple Bash tool calls in a single message. For example, if you need to run "git status" and "git diff", send a single message with two Bash tool calls in parallel.
    - If the commands depend on each other and must run sequentially, use a single Bash call with '&&' to chain them together (e.g., `git add . && git commit -m "message" && git push`). For instance, if one operation must complete before another starts (like mkdir before cp, Write before Bash for git operations, or git add before git commit), run these operations sequentially instead.
    - Use ';' only when you need to run commands sequentially but don't care if earlier commands fail
    - DO NOT use newlines to separate commands (newlines are ok in quoted strings)
  - Try to maintain your current working directory throughout the session by using absolute paths and avoiding usage of `cd`. You may use `cd` if the User explicitly requests it.
    <good-example>
    pytest /foo/bar/tests
    </good-example>
    <bad-example>
    cd /foo/bar && pytest tests
    </bad-example>

# Committing changes with git

Only create commits when requested by the user. If unclear, ask first. When the user asks you to create a new git commit, follow these steps carefully:

Git Safety Protocol:
- NEVER update the git config
- NEVER run destructive/irreversible git commands (like push --force, hard reset, etc) unless the user explicitly requests them 
- NEVER skip hooks (--no-verify, --no-gpg-sign, etc) unless the user explicitly requests it
- NEVER run force push to main/master, warn the user if they request it
- Avoid git commit --amend.  ONLY use --amend when either (1) user explicitly requested amend OR (2) adding edits from pre-commit hook (additional instructions below) 
- Before amending: ALWAYS check authorship (git log -1 --format='%an %ae')
- NEVER commit changes unless the user explicitly asks you to. It is VERY IMPORTANT to only commit when explicitly asked, otherwise the user will feel that you are being too proactive.

1. You can call multiple tools in a single response. When multiple independent pieces of information are requested and all commands are likely to succeed, run multiple tool calls in parallel for optimal performance. run the following bash commands in parallel, each using the Bash tool:
  - Run a git status command to see all untracked files.
  - Run a git diff command to see both staged and unstaged changes that will be committed.
  - Run a git log command to see recent commit messages, so that you can follow this repository's commit message style.
2. Analyze all staged changes (both previously staged and newly added) and draft a commit message:
  - Summarize the nature of the changes (eg. new feature, enhancement to an existing feature, bug fix, refactoring, test, docs, etc.). Ensure the message accurately reflects the changes and their purpose (i.e. "add" means a wholly new feature, "update" means an enhancement to an existing feature, "fix" means a bug fix, etc.).
  - Do not commit files that likely contain secrets (.env, credentials.json, etc). Warn the user if they specifically request to commit those files
  - Draft a concise (1-2 sentences) commit message that focuses on the "why" rather than the "what"
  - Ensure it accurately reflects the changes and their purpose
3. You can call multiple tools in a single response. When multiple independent pieces of information are requested and all commands are likely to succeed, run multiple tool calls in parallel for optimal performance. run the following commands:
   - Add relevant untracked files to the staging area.
   - Create the commit with a message ending with:
   🤖 Generated with [Claude Code](https://claude.com/claude-code)

   Co-Authored-By: Claude <noreply@anthropic.com>
   - Run git status after the commit completes to verify success.
   Note: git status depends on the commit completing, so run it sequentially after the commit.
4. If the commit fails due to pre-commit hook changes, retry ONCE. If it succeeds but files were modified by the hook, verify it's safe to amend:
   - Check authorship: git log -1 --format='%an %ae'
   - Check not pushed: git status shows "Your branch is ahead"
   - If both true: amend your commit. Otherwise: create NEW commit (never amend other developers' commits)

Important notes:
- NEVER run additional commands to read or explore code, besides git bash commands
- NEVER use the TodoWrite or Task tools
- DO NOT push to the remote repository unless the user explicitly asks you to do so
- IMPORTANT: Never use git commands with the -i flag (like git rebase -i or git add -i) since they require interactive input which is not supported.
- If there are no changes to commit (i.e., no untracked files and no modifications), do not create an empty commit
- In order to ensure good formatting, ALWAYS pass the commit message via a HEREDOC, a la this example:
<example>
git commit -m "$(cat <<'EOF'
   Commit message here.

   🤖 Generated with [Claude Code](https://claude.com/claude-code)

   Co-Authored-By: Claude <noreply@anthropic.com>
   EOF
   )"
</example>

# Creating pull requests
Use the gh command via the Bash tool for ALL GitHub-related tasks including working with issues, pull requests, checks, and releases. If given a Github URL use the gh command to get the information needed.

IMPORTANT: When the user asks you to create a pull request, follow these steps carefully:

1. You can call multiple tools in a single response. When multiple independent pieces of information are requested and all commands are likely to succeed, run multiple tool calls in parallel for optimal performance. run the following bash commands in parallel using the Bash tool, in order to understand the current state of the branch since it diverged from the main branch:
   - Run a git status command to see all untracked files
   - Run a git diff command to see both staged and unstaged changes that will be committed
   - Check if the current branch tracks a remote branch and is up to date with the remote, so you know if you need to push to the remote
   - Run a git log command and `git diff [base-branch]...HEAD` to understand the full commit history for the current branch (from the time it diverged from the base branch)
2. Analyze all changes that will be included in the pull request, making sure to look at all relevant commits (NOT just the latest commit, but ALL commits that will be included in the pull request!!!), and draft a pull request summary
3. You can call multiple tools in a single response. When multiple independent pieces of information are requested and all commands are likely to succeed, run multiple tool calls in parallel for optimal performance. run the following commands in parallel:
   - Create new branch if needed
   - Push to remote with -u flag if needed
   - Create PR using gh pr create with the format below. Use a HEREDOC to pass the body to ensure correct formatting.
<example>
gh pr create --title "the pr title" --body "$(cat <<'EOF'
## Summary
<1-3 bullet points>

## Test plan
[Bulleted markdown checklist of TODOs for testing the pull request...]

🤖 Generated with [Claude Code](https://claude.com/claude-code)
EOF
)"
</example>

Important:
- DO NOT use the TodoWrite or Task tools
- Return the PR URL when you're done, so the user can see it

# Other common operations
- View comments on a Github PR: gh api repos/foo/bar/pulls/123/comments

Input Schema:
{
  "type": "object",
  "properties": {
    "command": {
      "type": "string",
      "description": "The command to execute"
    },
    "timeout": {
      "type": "number",
      "description": "Optional timeout in milliseconds (max 600000)"
    },
    "description": {
      "type": "string",
      "description": "Clear, concise description of what this command does in 5-10 words, in active voice. Examples:\nInput: ls\nOutput: List files in current directory\n\nInput: git status\nOutput: Show working tree status\n\nInput: npm install\nOutput: Install package dependencies\n\nInput: mkdir foo\nOutput: Create directory 'foo'"
    },
    "run_in_background": {
      "type": "boolean",
      "description": "Set to true to run this command in the background. Use BashOutput to read the output later."
    }
  },
  "required": [
    "command"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 3: Glob ---

Description:
- Fast file pattern matching tool that works with any codebase size
- Supports glob patterns like "**/*.js" or "src/**/*.ts"
- Returns matching file paths sorted by modification time
- Use this tool when you need to find files by name patterns
- When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead
- You can call multiple tools in a single response. It is always better to speculatively perform multiple searches in parallel if they are potentially useful.

Input Schema:
{
  "type": "object",
  "properties": {
    "pattern": {
      "type": "string",
      "description": "The glob pattern to match files against"
    },
    "path": {
      "type": "string",
      "description": "The directory to search in. If not specified, the current working directory will be used. IMPORTANT: Omit this field to use the default directory. DO NOT enter \"undefined\" or \"null\" - simply omit it for the default behavior. Must be a valid directory path if provided."
    }
  },
  "required": [
    "pattern"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 4: Grep ---

Description:
A powerful search tool built on ripgrep

  Usage:
  - ALWAYS use Grep for search tasks. NEVER invoke `grep` or `rg` as a Bash command. The Grep tool has been optimized for correct permissions and access.
  - Supports full regex syntax (e.g., "log.*Error", "function\s+\w+")
  - Filter files with glob parameter (e.g., "*.js", "**/*.tsx") or type parameter (e.g., "js", "py", "rust")
  - Output modes: "content" shows matching lines, "files_with_matches" shows only file paths (default), "count" shows match counts
  - Use Task tool for open-ended searches requiring multiple rounds
  - Pattern syntax: Uses ripgrep (not grep) - literal braces need escaping (use `interface\{\}` to find `interface{}` in Go code)
  - Multiline matching: By default patterns match within single lines only. For cross-line patterns like `struct \{[\s\S]*?field`, use `multiline: true`


Input Schema:
{
  "type": "object",
  "properties": {
    "pattern": {
      "type": "string",
      "description": "The regular expression pattern to search for in file contents"
    },
    "path": {
      "type": "string",
      "description": "File or directory to search in (rg PATH). Defaults to current working directory."
    },
    "glob": {
      "type": "string",
      "description": "Glob pattern to filter files (e.g. \"*.js\", \"*.{ts,tsx}\") - maps to rg --glob"
    },
    "output_mode": {
      "type": "string",
      "enum": [
        "content",
        "files_with_matches",
        "count"
      ],
      "description": "Output mode: \"content\" shows matching lines (supports -A/-B/-C context, -n line numbers, head_limit), \"files_with_matches\" shows file paths (supports head_limit), \"count\" shows match counts (supports head_limit). Defaults to \"files_with_matches\"."
    },
    "-B": {
      "type": "number",
      "description": "Number of lines to show before each match (rg -B). Requires output_mode: \"content\", ignored otherwise."
    },
    "-A": {
      "type": "number",
      "description": "Number of lines to show after each match (rg -A). Requires output_mode: \"content\", ignored otherwise."
    },
    "-C": {
      "type": "number",
      "description": "Number of lines to show before and after each match (rg -C). Requires output_mode: \"content\", ignored otherwise."
    },
    "-n": {
      "type": "boolean",
      "description": "Show line numbers in output (rg -n). Requires output_mode: \"content\", ignored otherwise."
    },
    "-i": {
      "type": "boolean",
      "description": "Case insensitive search (rg -i)"
    },
    "type": {
      "type": "string",
      "description": "File type to search (rg --type). Common types: js, py, rust, go, java, etc. More efficient than include for standard file types."
    },
    "head_limit": {
      "type": "number",
      "description": "Limit output to first N lines/entries, equivalent to \"| head -N\". Works across all output modes: content (limits output lines), files_with_matches (limits file paths), count (limits count entries). When unspecified, shows all results from ripgrep."
    },
    "multiline": {
      "type": "boolean",
      "description": "Enable multiline mode where . matches newlines and patterns can span lines (rg -U --multiline-dotall). Default: false."
    }
  },
  "required": [
    "pattern"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 5: ExitPlanMode ---

Description:
Use this tool when you are in plan mode and have finished presenting your plan and are ready to code. This will prompt the user to exit plan mode.
IMPORTANT: Only use this tool when the task requires planning the implementation steps of a task that requires writing code. For research tasks where you're gathering information, searching files, reading files or in general trying to understand the codebase - do NOT use this tool.

## Handling Ambiguity in Plans
Before using this tool, ensure your plan is clear and unambiguous. If there are multiple valid approaches or unclear requirements:
1. Use the AskUserQuestion tool to clarify with the user
2. Ask about specific implementation choices (e.g., architectural patterns, which library to use)
3. Clarify any assumptions that could affect the implementation
4. Only proceed with ExitPlanMode after resolving ambiguities


## Examples

1. Initial task: "Search for and understand the implementation of vim mode in the codebase" - Do not use the exit plan mode tool because you are not planning the implementation steps of a task.
2. Initial task: "Help me implement yank mode for vim" - Use the exit plan mode tool after you have finished planning the implementation steps of the task.
3. Initial task: "Add a new feature to handle user authentication" - If unsure about auth method (OAuth, JWT, etc.), use AskUserQuestion first, then use exit plan mode tool after clarifying the approach.


Input Schema:
{
  "type": "object",
  "properties": {
    "plan": {
      "type": "string",
      "description": "The plan you came up with, that you want to run by the user for approval. Supports markdown. The plan should be pretty concise."
    }
  },
  "required": [
    "plan"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 6: Read ---

Description:
Reads a file from the local filesystem. You can access any file directly by using this tool.
Assume this tool is able to read all files on the machine. If the User provides a path to a file assume that path is valid. It is okay to read a file that does not exist; an error will be returned.

Usage:
- The file_path parameter must be an absolute path, not a relative path
- By default, it reads up to 2000 lines starting from the beginning of the file
- You can optionally specify a line offset and limit (especially handy for long files), but it's recommended to read the whole file by not providing these parameters
- Any lines longer than 2000 characters will be truncated
- Results are returned using cat -n format, with line numbers starting at 1
- This tool allows Claude Code to read images (eg PNG, JPG, etc). When reading an image file the contents are presented visually as Claude Code is a multimodal LLM.
- This tool can read PDF files (.pdf). PDFs are processed page by page, extracting both text and visual content for analysis.
- This tool can read Jupyter notebooks (.ipynb files) and returns all cells with their outputs, combining code, text, and visualizations.
- This tool can only read files, not directories. To read a directory, use an ls command via the Bash tool.
- You can call multiple tools in a single response. It is always better to speculatively read multiple potentially useful files in parallel.
- You will regularly be asked to read screenshots. If the user provides a path to a screenshot, ALWAYS use this tool to view the file at the path. This tool will work with all temporary file paths.
- If you read a file that exists but has empty contents you will receive a system reminder warning in place of file contents.

Input Schema:
{
  "type": "object",
  "properties": {
    "file_path": {
      "type": "string",
      "description": "The absolute path to the file to read"
    },
    "offset": {
      "type": "number",
      "description": "The line number to start reading from. Only provide if the file is too large to read at once"
    },
    "limit": {
      "type": "number",
      "description": "The number of lines to read. Only provide if the file is too large to read at once."
    }
  },
  "required": [
    "file_path"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 7: Edit ---

Description:
Performs exact string replacements in files. 

Usage:
- You must use your `Read` tool at least once in the conversation before editing. This tool will error if you attempt an edit without reading the file. 
- When editing text from Read tool output, ensure you preserve the exact indentation (tabs/spaces) as it appears AFTER the line number prefix. The line number prefix format is: spaces + line number + tab. Everything after that tab is the actual file content to match. Never include any part of the line number prefix in the old_string or new_string.
- ALWAYS prefer editing existing files in the codebase. NEVER write new files unless explicitly required.
- Only use emojis if the user explicitly requests it. Avoid adding emojis to files unless asked.
- The edit will FAIL if `old_string` is not unique in the file. Either provide a larger string with more surrounding context to make it unique or use `replace_all` to change every instance of `old_string`. 
- Use `replace_all` for replacing and renaming strings across the file. This parameter is useful if you want to rename a variable for instance.

Input Schema:
{
  "type": "object",
  "properties": {
    "file_path": {
      "type": "string",
      "description": "The absolute path to the file to modify"
    },
    "old_string": {
      "type": "string",
      "description": "The text to replace"
    },
    "new_string": {
      "type": "string",
      "description": "The text to replace it with (must be different from old_string)"
    },
    "replace_all": {
      "type": "boolean",
      "default": false,
      "description": "Replace all occurences of old_string (default false)"
    }
  },
  "required": [
    "file_path",
    "old_string",
    "new_string"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 8: Write ---

Description:
Writes a file to the local filesystem.

Usage:
- This tool will overwrite the existing file if there is one at the provided path.
- If this is an existing file, you MUST use the Read tool first to read the file's contents. This tool will fail if you did not read the file first.
- ALWAYS prefer editing existing files in the codebase. NEVER write new files unless explicitly required.
- NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User.
- Only use emojis if the user explicitly requests it. Avoid writing emojis to files unless asked.

Input Schema:
{
  "type": "object",
  "properties": {
    "file_path": {
      "type": "string",
      "description": "The absolute path to the file to write (must be absolute, not relative)"
    },
    "content": {
      "type": "string",
      "description": "The content to write to the file"
    }
  },
  "required": [
    "file_path",
    "content"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 9: NotebookEdit ---

Description:
Completely replaces the contents of a specific cell in a Jupyter notebook (.ipynb file) with new source. Jupyter notebooks are interactive documents that combine code, text, and visualizations, commonly used for data analysis and scientific computing. The notebook_path parameter must be an absolute path, not a relative path. The cell_number is 0-indexed. Use edit_mode=insert to add a new cell at the index specified by cell_number. Use edit_mode=delete to delete the cell at the index specified by cell_number.

Input Schema:
{
  "type": "object",
  "properties": {
    "notebook_path": {
      "type": "string",
      "description": "The absolute path to the Jupyter notebook file to edit (must be absolute, not relative)"
    },
    "cell_id": {
      "type": "string",
      "description": "The ID of the cell to edit. When inserting a new cell, the new cell will be inserted after the cell with this ID, or at the beginning if not specified."
    },
    "new_source": {
      "type": "string",
      "description": "The new source for the cell"
    },
    "cell_type": {
      "type": "string",
      "enum": [
        "code",
        "markdown"
      ],
      "description": "The type of the cell (code or markdown). If not specified, it defaults to the current cell type. If using edit_mode=insert, this is required."
    },
    "edit_mode": {
      "type": "string",
      "enum": [
        "replace",
        "insert",
        "delete"
      ],
      "description": "The type of edit to make (replace, insert, delete). Defaults to replace."
    }
  },
  "required": [
    "notebook_path",
    "new_source"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 10: WebFetch ---

Description:

- Fetches content from a specified URL and processes it using an AI model
- Takes a URL and a prompt as input
- Fetches the URL content, converts HTML to markdown
- Processes the content with the prompt using a small, fast model
- Returns the model's response about the content
- Use this tool when you need to retrieve and analyze web content

Usage notes:
  - IMPORTANT: If an MCP-provided web fetch tool is available, prefer using that tool instead of this one, as it may have fewer restrictions. All MCP-provided tools start with "mcp__".
  - The URL must be a fully-formed valid URL
  - HTTP URLs will be automatically upgraded to HTTPS
  - The prompt should describe what information you want to extract from the page
  - This tool is read-only and does not modify any files
  - Results may be summarized if the content is very large
  - Includes a self-cleaning 15-minute cache for faster responses when repeatedly accessing the same URL
  - When a URL redirects to a different host, the tool will inform you and provide the redirect URL in a special format. You should then make a new WebFetch request with the redirect URL to fetch the content.


Input Schema:
{
  "type": "object",
  "properties": {
    "url": {
      "type": "string",
      "format": "uri",
      "description": "The URL to fetch content from"
    },
    "prompt": {
      "type": "string",
      "description": "The prompt to run on the fetched content"
    }
  },
  "required": [
    "url",
    "prompt"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 11: TodoWrite ---

Description:
Use this tool to create and manage a structured task list for your current coding session. This helps you track progress, organize complex tasks, and demonstrate thoroughness to the user.
It also helps the user understand the progress of the task and overall progress of their requests.

## When to Use This Tool
Use this tool proactively in these scenarios:

1. Complex multi-step tasks - When a task requires 3 or more distinct steps or actions
2. Non-trivial and complex tasks - Tasks that require careful planning or multiple operations
3. User explicitly requests todo list - When the user directly asks you to use the todo list
4. User provides multiple tasks - When users provide a list of things to be done (numbered or comma-separated)
5. After receiving new instructions - Immediately capture user requirements as todos
6. When you start working on a task - Mark it as in_progress BEFORE beginning work. Ideally you should only have one todo as in_progress at a time
7. After completing a task - Mark it as completed and add any new follow-up tasks discovered during implementation

## When NOT to Use This Tool

Skip using this tool when:
1. There is only a single, straightforward task
2. The task is trivial and tracking it provides no organizational benefit
3. The task can be completed in less than 3 trivial steps
4. The task is purely conversational or informational

NOTE that you should not use this tool if there is only one trivial task to do. In this case you are better off just doing the task directly.

## Examples of When to Use the Todo List

<example>
User: I want to add a dark mode toggle to the application settings. Make sure you run the tests and build when you're done!
Assistant: I'll help add a dark mode toggle to your application settings. Let me create a todo list to track this implementation.
*Creates todo list with the following items:*
1. Creating dark mode toggle component in Settings page
2. Adding dark mode state management (context/store)
3. Implementing CSS-in-JS styles for dark theme
4. Updating existing components to support theme switching
5. Running tests and build process, addressing any failures or errors that occur
*Begins working on the first task*

<reasoning>
The assistant used the todo list because:
1. Adding dark mode is a multi-step feature requiring UI, state management, and styling changes
2. The user explicitly requested tests and build be run afterward
3. The assistant inferred that tests and build need to pass by adding "Ensure tests and build succeed" as the final task
</reasoning>
</example>

<example>
User: Help me rename the function getCwd to getCurrentWorkingDirectory across my project
Assistant: Let me first search through your codebase to find all occurrences of 'getCwd'.
*Uses grep or search tools to locate all instances of getCwd in the codebase*
Assistant: I've found 15 instances of 'getCwd' across 8 different files. Let me create a todo list to track these changes.
*Creates todo list with specific items for each file that needs updating*

<reasoning>
The assistant used the todo list because:
1. First, the assistant searched to understand the scope of the task
2. Upon finding multiple occurrences across different files, it determined this was a complex task with multiple steps
3. The todo list helps ensure every instance is tracked and updated systematically
4. This approach prevents missing any occurrences and maintains code consistency
</reasoning>
</example>


<example>
User: I need to implement these features for my e-commerce site: user registration, product catalog, shopping cart, and checkout flow.
Assistant: I'll help implement these features. First, let's add all the features to the todo list.
*Creates a todo list breaking down each feature into specific tasks based on the project architecture*
Assistant: Let's start with user registration. This will involve creating database models, API endpoints, and frontend forms that integrate with your existing authentication system.

<reasoning>
The assistant used the todo list because:
1. The user provided multiple complex features to implement in a comma separated list
2. The todo list helps organize these large features into manageable tasks
3. This approach allows for tracking progress across the entire implementation
</reasoning>
</example>

<example>
User: Can you help optimize my React application? It's rendering slowly and has performance issues.</user>
Assistant: I'll help optimize your React application. First, let me examine your codebase to identify potential performance bottlenecks.
*Reviews component structure, render patterns, state management, and data fetching*
Assistant: After analyzing your codebase, I've identified several performance issues. Let me create a todo list to track our optimization efforts.
*Creates todo list with items like: 1) Implementing memoization for expensive calculations in ProductList, 2) Adding virtualization for long lists in Dashboard, 3) Optimizing image loading in Gallery component, 4) Fixing state update loops in ShoppingCart, 5) Reviewing bundle size and implementing code splitting*
Let's start by implementing memoization for the expensive calculations in your ProductList component.</assistant>

<reasoning>
The assistant used the todo list because:
1. First, the assistant examined the codebase to identify specific performance issues
2. Based on this analysis, it identified multiple optimization opportunities
3. Performance optimization is a non-trivial task requiring multiple steps
4. The todo list helps methodically track improvements across different components
5. This systematic approach ensures all performance bottlenecks are addressed
</reasoning>
</example>

## Examples of When NOT to Use the Todo List

<example>
User: How do I print 'Hello World' in Python?
Assistant: In Python, you can print "Hello World" with this simple code:

python
print("Hello World")

This will output the text "Hello World" to the console when executed.</assistant>

<reasoning>
The assistant did not use the todo list because this is a single, trivial task that can be completed in one step. There's no need to track multiple tasks or steps for such a straightforward request.
</reasoning>
</example>

<example>
User: What does the git status command do?
Assistant: The git status command shows the current state of your working directory and staging area. It displays which changes have been staged, which haven't, and which files aren't being tracked by Git. This is a fundamental command for understanding the current state of your repository before making commits.

<reasoning>
The assistant did not use the todo list because this is an informational request with no actual coding task to complete. The user is simply asking for an explanation, not for the assistant to perform multiple steps or tasks.
</reasoning>
</example>

<example>
User: Can you add a comment to the calculateTotal function to explain what it does?
Assistant: Sure, let me add a comment to the calculateTotal function to explain what it does.
* Uses the Edit tool to add a comment to the calculateTotal function *

<reasoning>
The assistant did not use the todo list because this is a single, straightforward task confined to one location in the code. Adding a comment doesn't require tracking multiple steps or systematic organization.
</reasoning>
</example>

<example>
User: Run npm install for me and tell me what happens.
Assistant: I'll run the npm install command for you.

*Executes: npm install*

The command completed successfully. Here's the output:
[Output of npm install command]

All dependencies have been installed according to your package.json file.

<reasoning>
The assistant did not use the todo list because this is a single command execution with immediate results. There are no multiple steps to track or organize, making the todo list unnecessary for this straightforward task.
</reasoning>
</example>

## Task States and Management

1. **Task States**: Use these states to track progress:
   - pending: Task not yet started
   - in_progress: Currently working on (limit to ONE task at a time)
   - completed: Task finished successfully

   **IMPORTANT**: Task descriptions must have two forms:
   - content: The imperative form describing what needs to be done (e.g., "Run tests", "Build the project")
   - activeForm: The present continuous form shown during execution (e.g., "Running tests", "Building the project")

2. **Task Management**:
   - Update task status in real-time as you work
   - Mark tasks complete IMMEDIATELY after finishing (don't batch completions)
   - Exactly ONE task must be in_progress at any time (not less, not more)
   - Complete current tasks before starting new ones
   - Remove tasks that are no longer relevant from the list entirely

3. **Task Completion Requirements**:
   - ONLY mark a task as completed when you have FULLY accomplished it
   - If you encounter errors, blockers, or cannot finish, keep the task as in_progress
   - When blocked, create a new task describing what needs to be resolved
   - Never mark a task as completed if:
     - Tests are failing
     - Implementation is partial
     - You encountered unresolved errors
     - You couldn't find necessary files or dependencies

4. **Task Breakdown**:
   - Create specific, actionable items
   - Break complex tasks into smaller, manageable steps
   - Use clear, descriptive task names
   - Always provide both forms:
     - content: "Fix authentication bug"
     - activeForm: "Fixing authentication bug"

When in doubt, use this tool. Being proactive with task management demonstrates attentiveness and ensures you complete all requirements successfully.


Input Schema:
{
  "type": "object",
  "properties": {
    "todos": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "content": {
            "type": "string",
            "minLength": 1
          },
          "status": {
            "type": "string",
            "enum": [
              "pending",
              "in_progress",
              "completed"
            ]
          },
          "activeForm": {
            "type": "string",
            "minLength": 1
          }
        },
        "required": [
          "content",
          "status",
          "activeForm"
        ],
        "additionalProperties": false
      },
      "description": "The updated todo list"
    }
  },
  "required": [
    "todos"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 12: WebSearch ---

Description:

- Allows Claude to search the web and use the results to inform responses
- Provides up-to-date information for current events and recent data
- Returns search result information formatted as search result blocks
- Use this tool for accessing information beyond Claude's knowledge cutoff
- Searches are performed automatically within a single API call

Usage notes:
  - Domain filtering is supported to include or block specific websites
  - Web search is only available in the US
  - Account for "Today's date" in <env>. For example, if <env> says "Today's date: 2025-07-01", and the user wants the latest docs, do not use 2024 in the search query. Use 2025.


Input Schema:
{
  "type": "object",
  "properties": {
    "query": {
      "type": "string",
      "minLength": 2,
      "description": "The search query to use"
    },
    "allowed_domains": {
      "type": "array",
      "items": {
        "type": "string"
      },
      "description": "Only include search results from these domains"
    },
    "blocked_domains": {
      "type": "array",
      "items": {
        "type": "string"
      },
      "description": "Never include search results from these domains"
    }
  },
  "required": [
    "query"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 13: BashOutput ---

Description:

- Retrieves output from a running or completed background bash shell
- Takes a shell_id parameter identifying the shell
- Always returns only new output since the last check
- Returns stdout and stderr output along with shell status
- Supports optional regex filtering to show only lines matching a pattern
- Use this tool when you need to monitor or check the output of a long-running shell
- Shell IDs can be found using the /bashes command


Input Schema:
{
  "type": "object",
  "properties": {
    "bash_id": {
      "type": "string",
      "description": "The ID of the background shell to retrieve output from"
    },
    "filter": {
      "type": "string",
      "description": "Optional regular expression to filter the output lines. Only lines matching this regex will be included in the result. Any lines that do not match will no longer be available to read."
    }
  },
  "required": [
    "bash_id"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 14: KillShell ---

Description:

- Kills a running background bash shell by its ID
- Takes a shell_id parameter identifying the shell to kill
- Returns a success or failure status 
- Use this tool when you need to terminate a long-running shell
- Shell IDs can be found using the /bashes command


Input Schema:
{
  "type": "object",
  "properties": {
    "shell_id": {
      "type": "string",
      "description": "The ID of the background shell to kill"
    }
  },
  "required": [
    "shell_id"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 15: AskUserQuestion ---

Description:
Use this tool when you need to ask the user questions during execution. This allows you to:
1. Gather user preferences or requirements
2. Clarify ambiguous instructions
3. Get decisions on implementation choices as you work
4. Offer choices to the user about what direction to take.

Usage notes:
- Users will always be able to select "Other" to provide custom text input
- Use multiSelect: true to allow multiple answers to be selected for a question


Input Schema:
{
  "type": "object",
  "properties": {
    "questions": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "question": {
            "type": "string",
            "description": "The complete question to ask the user. Should be clear, specific, and end with a question mark. Example: \"Which library should we use for date formatting?\" If multiSelect is true, phrase it accordingly, e.g. \"Which features do you want to enable?\""
          },
          "header": {
            "type": "string",
            "description": "Very short label displayed as a chip/tag (max 12 chars). Examples: \"Auth method\", \"Library\", \"Approach\"."
          },
          "options": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "label": {
                  "type": "string",
                  "description": "The display text for this option that the user will see and select. Should be concise (1-5 words) and clearly describe the choice."
                },
                "description": {
                  "type": "string",
                  "description": "Explanation of what this option means or what will happen if chosen. Useful for providing context about trade-offs or implications."
                }
              },
              "required": [
                "label",
                "description"
              ],
              "additionalProperties": false
            },
            "minItems": 2,
            "maxItems": 4,
            "description": "The available choices for this question. Must have 2-4 options. Each option should be a distinct, mutually exclusive choice (unless multiSelect is enabled). There should be no 'Other' option, that will be provided automatically."
          },
          "multiSelect": {
            "type": "boolean",
            "description": "Set to true to allow the user to select multiple options instead of just one. Use when choices are not mutually exclusive."
          }
        },
        "required": [
          "question",
          "header",
          "options",
          "multiSelect"
        ],
        "additionalProperties": false
      },
      "minItems": 1,
      "maxItems": 4,
      "description": "Questions to ask the user (1-4 questions)"
    },
    "answers": {
      "type": "object",
      "additionalProperties": {
        "type": "string"
      },
      "description": "User answers collected by the permission component"
    }
  },
  "required": [
    "questions"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 16: Skill ---

Description:
Execute a skill within the main conversation

<skills_instructions>
When users ask you to perform tasks, check if any of the available skills below can help complete the task more effectively. Skills provide specialized capabilities and domain knowledge.

How to use skills:
- Invoke skills using this tool with the skill name only (no arguments)
- When you invoke a skill, you will see <command-message>The "{name}" skill is loading</command-message>
- The skill's prompt will expand and provide detailed instructions on how to complete the task
- Examples:
  - `command: "pdf"` - invoke the pdf skill
  - `command: "xlsx"` - invoke the xlsx skill
  - `command: "ms-office-suite:pdf"` - invoke using fully qualified name

Important:
- Only use skills listed in <available_skills> below
- Do not invoke a skill that is already running
- Do not use this tool for built-in CLI commands (like /help, /clear, etc.)
</skills_instructions>

<available_skills>

</available_skills>


Input Schema:
{
  "type": "object",
  "properties": {
    "command": {
      "type": "string",
      "description": "The skill name (no arguments). E.g., \"pdf\" or \"xlsx\""
    }
  },
  "required": [
    "command"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 17: SlashCommand ---

Description:
Execute a slash command within the main conversation

**IMPORTANT - Intent Matching:**
Before starting any task, CHECK if the user's request matches one of the slash commands listed below. This tool exists to route user intentions to specialized workflows.

How slash commands work:
When you use this tool or when a user types a slash command, you will see <command-message>{name} is running…</command-message> followed by the expanded prompt. For example, if .claude/commands/foo.md contains "Print today's date", then /foo expands to that prompt in the next message.

Usage:
- `command` (required): The slash command to execute, including any arguments
- Example: `command: "/review-pr 123"`

IMPORTANT: Only use this tool for custom slash commands that appear in the Available Commands list below. Do NOT use for:
- Built-in CLI commands (like /help, /clear, etc.)
- Commands not shown in the list
- Commands you think might exist but aren't listed

Notes:
- When a user requests multiple slash commands, execute each one sequentially and check for <command-message>{name} is running…</command-message> to verify each has been processed
- Do not invoke a command that is already running. For example, if you see <command-message>foo is running…</command-message>, do NOT use this tool with "/foo" - process the expanded prompt in the following message
- Only custom slash commands with descriptions are listed in Available Commands. If a user's command is not listed, ask them to check the slash command file and consult the docs.


Input Schema:
{
  "type": "object",
  "properties": {
    "command": {
      "type": "string",
      "description": "The slash command to execute with its arguments, e.g., \"/review-pr 123\""
    }
  },
  "required": [
    "command"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 18: mcp__organon__create_sprint ---

Description:
Create new sprint directory structure.

    Note: Sprint numbers are now auto-assigned when status changes to Current.
    New sprints use Name_Status format without numbers.

    Args:
        name: Sprint name (e.g., "WorkflowOptimization")
        status: Initial status (default: Created)

    Returns:
        JSON result with sprint path and status
    

Input Schema:
{
  "type": "object",
  "properties": {
    "name": {
      "title": "Name",
      "type": "string"
    },
    "status": {
      "default": "Created",
      "title": "Status",
      "type": "string"
    }
  },
  "required": [
    "name"
  ],
  "title": "create_sprintArguments"
}

--- Tool 19: mcp__organon__update_sprint_status ---

Description:
Update sprint status.

    Valid statuses are defined in organon-defaults.yaml (sprint.statuses).
    Default statuses: Created, Next, Current, Completed, Paused

    Args:
        sprint_path: Path to sprint directory
        status: New status (must be one of the configured sprint statuses)

    Returns:
        JSON result with updated path
    

Input Schema:
{
  "type": "object",
  "properties": {
    "sprint_path": {
      "title": "Sprint Path",
      "type": "string"
    },
    "status": {
      "title": "Status",
      "type": "string"
    }
  },
  "required": [
    "sprint_path",
    "status"
  ],
  "title": "update_sprint_statusArguments"
}

--- Tool 20: mcp__organon__list_sprints ---

Description:
List all sprints, optionally filtered by status.

    Valid statuses are defined in organon-defaults.yaml (sprint.statuses).
    Saves tokens vs reading all sprint files.
    Efficiency metrics are logged automatically.

    Args:
        status: Optional status filter (e.g., Created, Current, Completed)

    Returns:
        JSON list of sprints
    

Input Schema:
{
  "type": "object",
  "properties": {
    "status": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Status"
    }
  },
  "title": "list_sprintsArguments"
}

--- Tool 21: mcp__organon__list_backlog_items ---

Description:
List backlog items efficiently without reading full file.

    Saves ~80% tokens vs reading full backlog.md.
    Efficiency metrics are logged automatically.

    Args:
        status: Filter by "pending"|"completed"|"all" (default: all)

    Returns:
        JSON with items array and count
    

Input Schema:
{
  "type": "object",
  "properties": {
    "status": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Status"
    }
  },
  "title": "list_backlog_itemsArguments"
}

--- Tool 22: mcp__organon__get_backlog_item ---

Description:
Get detailed backlog item by ordinal or immutable ID.

    Saves tokens vs reading full backlog.md.
    Efficiency metrics are logged automatically.

    Args:
        number_or_id: Ordinal ("3") or ID ("005" or "#005")

    Returns:
        JSON with item details including full description
    

Input Schema:
{
  "type": "object",
  "properties": {
    "number_or_id": {
      "title": "Number Or Id",
      "type": "string"
    }
  },
  "required": [
    "number_or_id"
  ],
  "title": "get_backlog_itemArguments"
}

--- Tool 23: mcp__organon__generate_structure ---

Description:
Generate new STRUCTURE.md for a directory.

    Saves tokens vs agent reading directory tree and generating manually.
    Efficiency metrics are logged automatically.

    Args:
        path: Directory to generate structure for
        project_name: Optional project name (defaults to directory name)

    Returns:
        JSON result with structure file path
    

Input Schema:
{
  "type": "object",
  "properties": {
    "path": {
      "title": "Path",
      "type": "string"
    },
    "project_name": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Project Name"
    }
  },
  "required": [
    "path"
  ],
  "title": "generate_structureArguments"
}

--- Tool 24: mcp__organon__update_structure ---

Description:
Update existing STRUCTURE.md, preserving manual sections.

    Saves tokens vs agent reading existing STRUCTURE.md and directory tree.
    Efficiency metrics are logged automatically.

    Args:
        path: Directory containing STRUCTURE.md

    Returns:
        JSON result with updated file path
    

Input Schema:
{
  "type": "object",
  "properties": {
    "path": {
      "title": "Path",
      "type": "string"
    }
  },
  "required": [
    "path"
  ],
  "title": "update_structureArguments"
}

--- Tool 25: mcp__organon__analyze_workflow_logs ---

Description:
Analyze Claude Code workflow logs for tool usage patterns.

    Saves tokens vs reading raw debug log file.
    Efficiency metrics are logged automatically.

    Args:
        logs_dir: Path to .claude directory (default: ~/.claude)
        output_format: Output format (json or summary)

    Returns:
        JSON analysis of tool usage, errors, and patterns
    

Input Schema:
{
  "type": "object",
  "properties": {
    "logs_dir": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Logs Dir"
    },
    "output_format": {
      "default": "json",
      "title": "Output Format",
      "type": "string"
    }
  },
  "title": "analyze_workflow_logsArguments"
}

--- Tool 26: mcp__organon__analyze_conversations ---

Description:
Analyze conversation history for workflow patterns.

    Saves tokens vs reading raw conversation files.
    Efficiency metrics are logged automatically.

    Args:
        project_filter: Filter by project name (e.g., 'organon')
        output_format: Output format (json or summary)
        logs_dir: Path to .claude directory (default: ~/.claude)

    Returns:
        JSON analysis of conversation patterns
    

Input Schema:
{
  "type": "object",
  "properties": {
    "project_filter": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Project Filter"
    },
    "output_format": {
      "default": "json",
      "title": "Output Format",
      "type": "string"
    },
    "logs_dir": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Logs Dir"
    }
  },
  "title": "analyze_conversationsArguments"
}

--- Tool 27: mcp__organon__log_workflow_event ---

Description:
Log workflow interaction to .organon/workflow-log.jsonl

    Args:
        event: 'user_message' or 'assistant_stop'
        workflow: Workflow name (conversation, one-off-task, sprint-planning, sprint-execution)
        sprint_id: Sprint/task identifier (optional)
                  - For sprint workflows: sprint directory/file name (e.g., '15_CookiecutterTemplate')
                  - For one-off tasks: short descriptive name (e.g., 'MCP Setup Fix')
                  - For conversations: usually null
        task_type: Task type from templates/tasks/ (optional: research, design, implement, etc.)
        task_description: Brief description of what you're working on (optional)

    Returns:
        JSON result with success status
    

Input Schema:
{
  "type": "object",
  "properties": {
    "event": {
      "title": "Event",
      "type": "string"
    },
    "workflow": {
      "title": "Workflow",
      "type": "string"
    },
    "sprint_id": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Sprint Id"
    },
    "task_type": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Task Type"
    },
    "task_description": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Task Description"
    }
  },
  "required": [
    "event",
    "workflow"
  ],
  "title": "log_workflow_eventArguments"
}

--- Tool 28: mcp__organon__query_work_modes ---

Description:
Query work-mode templates by category or scenario.

    Saves 75-84% tokens vs reading all work-modes.
    Efficiency metrics are logged automatically.

    Args:
        category: Filter by category (e.g., 'sprint', 'development', 'debug')
        scenario: Search keyword (e.g., 'bug', 'refactor', 'test')
        include_description: Include full description in response

    Returns:
        JSON with work_modes array and total_count
    

Input Schema:
{
  "type": "object",
  "properties": {
    "category": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Category"
    },
    "scenario": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Scenario"
    },
    "include_description": {
      "default": true,
      "title": "Include Description",
      "type": "boolean"
    }
  },
  "title": "query_work_modesArguments"
}

--- Tool 29: mcp__organon__query_workflows ---

Description:
Query workflow templates.

    Saves 67-70% tokens vs reading all workflows.
    Efficiency metrics are logged automatically.

    Args:
        related_to: Search keyword (e.g., 'test', 'document', 'implement')
        include_description: Include full description in response

    Returns:
        JSON with workflows array and total_count
    

Input Schema:
{
  "type": "object",
  "properties": {
    "related_to": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Related To"
    },
    "include_description": {
      "default": true,
      "title": "Include Description",
      "type": "boolean"
    }
  },
  "title": "query_workflowsArguments"
}

--- Tool 30: mcp__organon__get_all_sprints_summary ---

Description:
Get summary of all sprints with progress.

    Valid statuses are defined in organon-defaults.yaml (sprint.statuses).
    Saves 60% tokens vs reading all sprint files.
    Efficiency metrics are logged automatically.

    Args:
        status: Filter by status (e.g., Created, Current, Completed)

    Returns:
        JSON with sprints array containing summaries
    

Input Schema:
{
  "type": "object",
  "properties": {
    "status": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Status"
    }
  },
  "title": "get_all_sprints_summaryArguments"
}

--- Tool 31: mcp__organon__get_current_sprint_context ---

Description:
Get comprehensive context for current sprint.

    Saves 60% tokens vs reading all sprint files separately.
    Efficiency metrics are logged automatically.

    Returns:
        JSON with aggregated sprint context (research, design, spec, plan)
    

Input Schema:
{
  "type": "object",
  "properties": {},
  "title": "get_current_sprint_contextArguments"
}

--- Tool 32: mcp__organon__update_workflow_context ---

Description:
Update current workflow context for automatic logging.

    This updates the .organon/current-context.json file that hooks use
    for automatic workflow logging. Call this when workflow/task changes.

    Args:
        workflow: Workflow name (conversation, one-off-task, sprint-planning, sprint-execution)
        sprint_id: Sprint/task identifier (optional)
        task_type: Task type from templates/tasks/ (optional)
        task_description: Brief description of current work (optional)
        phase_status: Current sprint planning phase (optional, e.g., "requirements", "research", "design")
        next_planned: Next planned phase (optional, e.g., "design", "architecture")

    Returns:
        JSON result with success status
    

Input Schema:
{
  "type": "object",
  "properties": {
    "workflow": {
      "title": "Workflow",
      "type": "string"
    },
    "sprint_id": {
      "default": null,
      "title": "Sprint Id",
      "type": "string"
    },
    "task_type": {
      "default": null,
      "title": "Task Type",
      "type": "string"
    },
    "task_description": {
      "default": null,
      "title": "Task Description",
      "type": "string"
    },
    "phase_status": {
      "default": null,
      "title": "Phase Status",
      "type": "string"
    },
    "next_planned": {
      "default": null,
      "title": "Next Planned",
      "type": "string"
    }
  },
  "required": [
    "workflow"
  ],
  "title": "update_workflow_contextArguments"
}

--- Tool 33: mcp__organon__parse_sprint_plan ---

Description:
Parse sprint plan.md into JSON structure.

    Saves tokens vs reading raw plan.md file.
    Efficiency metrics are logged automatically.

    Args:
        sprint_path: Path to sprint directory containing plan.md

    Returns:
        JSON representation of SprintPlan with phases and tasks
    

Input Schema:
{
  "type": "object",
  "properties": {
    "sprint_path": {
      "title": "Sprint Path",
      "type": "string"
    }
  },
  "required": [
    "sprint_path"
  ],
  "title": "parse_sprint_planArguments"
}

--- Tool 34: mcp__organon__get_executable_tasks ---

Description:
Get tasks ready to execute (dependencies satisfied).

    Saves tokens vs reading raw plan.md file.
    Efficiency metrics are logged automatically.

    Args:
        sprint_path: Path to sprint directory
        phase_id: Optional phase filter

    Returns:
        JSON list of executable tasks
    

Input Schema:
{
  "type": "object",
  "properties": {
    "sprint_path": {
      "title": "Sprint Path",
      "type": "string"
    },
    "phase_id": {
      "anyOf": [
        {
          "type": "integer"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Phase Id"
    }
  },
  "required": [
    "sprint_path"
  ],
  "title": "get_executable_tasksArguments"
}

--- Tool 35: mcp__organon__execute_sprint_task ---

Description:
Execute specific task via sub-agent.

    Note: This tool requires access to Claude Code's Task tool for launching
    sub-agents, which is not available in the MCP server context. This tool
    is designed to be used by Claude Code's main agent, not directly.

    Args:
        sprint_path: Path to sprint directory
        task_id: Task ID to execute (e.g., "1.1")

    Returns:
        JSON TaskResult
    

Input Schema:
{
  "type": "object",
  "properties": {
    "sprint_path": {
      "title": "Sprint Path",
      "type": "string"
    },
    "task_id": {
      "title": "Task Id",
      "type": "string"
    }
  },
  "required": [
    "sprint_path",
    "task_id"
  ],
  "title": "execute_sprint_taskArguments"
}

--- Tool 36: mcp__organon__execute_sprint_phase ---

Description:
Execute all tasks in a phase.

    Note: This tool requires access to Claude Code's Task tool for launching
    sub-agents, which is not available in the MCP server context. This tool
    is designed to be used by Claude Code's main agent, not directly.

    Args:
        sprint_path: Path to sprint directory
        phase_id: Phase to execute
        parallel: Execute independent tasks in parallel

    Returns:
        JSON dict of task results
    

Input Schema:
{
  "type": "object",
  "properties": {
    "sprint_path": {
      "title": "Sprint Path",
      "type": "string"
    },
    "phase_id": {
      "title": "Phase Id",
      "type": "integer"
    },
    "parallel": {
      "default": true,
      "title": "Parallel",
      "type": "boolean"
    }
  },
  "required": [
    "sprint_path",
    "phase_id"
  ],
  "title": "execute_sprint_phaseArguments"
}

--- Tool 37: mcp__organon__update_task_status ---

Description:
Update task status in plan.md.

    Saves tokens vs agent reading plan.md to update status.
    Efficiency metrics are logged automatically.

    Args:
        sprint_path: Path to sprint directory
        task_id: Task ID (e.g., "1.1")
        status: New status (pending/in_progress/completed/failed)
        commit: Optional commit hash

    Returns:
        JSON success response
    

Input Schema:
{
  "type": "object",
  "properties": {
    "sprint_path": {
      "title": "Sprint Path",
      "type": "string"
    },
    "task_id": {
      "title": "Task Id",
      "type": "string"
    },
    "status": {
      "title": "Status",
      "type": "string"
    },
    "commit": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "title": "Commit"
    }
  },
  "required": [
    "sprint_path",
    "task_id",
    "status"
  ],
  "title": "update_task_statusArguments"
}

--- Tool 38: mcp__organon__get_sprint_section ---

Description:
Extract specific section from sprint document.

    Saves tokens vs reading full file.
    Efficiency metrics are logged automatically.

    Args:
        sprint_path: Path to sprint directory (e.g., "dev-docs/sprints/NN_Name_Status")
        file: Filename within sprint (e.g., "requirements.md")
        section: Section heading (e.g., "Summary" or "## Summary")

    Returns:
        JSON with section content

    Example:
        >>> get_sprint_section(
        ...     "dev-docs/sprints/16_Auth_Current",
        ...     "requirements.md",
        ...     "Summary"
        ... )
        {"success": true, "section": "Implement JWT-based authentication..."}
    

Input Schema:
{
  "type": "object",
  "properties": {
    "sprint_path": {
      "title": "Sprint Path",
      "type": "string"
    },
    "file": {
      "title": "File",
      "type": "string"
    },
    "section": {
      "title": "Section",
      "type": "string"
    }
  },
  "required": [
    "sprint_path",
    "file",
    "section"
  ],
  "title": "get_sprint_sectionArguments"
}

--- Tool 39: ListMcpResourcesTool ---

Description:

List available resources from configured MCP servers.
Each returned resource will include all standard MCP resource fields plus a 'server' field 
indicating which server the resource belongs to.

Parameters:
- server (optional): The name of a specific MCP server to get resources from. If not provided,
  resources from all servers will be returned.


Input Schema:
{
  "type": "object",
  "properties": {
    "server": {
      "type": "string",
      "description": "Optional server name to filter resources by"
    }
  },
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

--- Tool 40: ReadMcpResourceTool ---

Description:

Reads a specific resource from an MCP server, identified by server name and resource URI.

Parameters:
- server (required): The name of the MCP server from which to read the resource
- uri (required): The URI of the resource to read


Input Schema:
{
  "type": "object",
  "properties": {
    "server": {
      "type": "string",
      "description": "The MCP server name"
    },
    "uri": {
      "type": "string",
      "description": "The resource URI to read"
    }
  },
  "required": [
    "server",
    "uri"
  ],
  "additionalProperties": false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}

====================================================================================================
MESSAGE 1: USER
====================================================================================================
<system-reminder>
This is a reminder that your todo list is currently empty. DO NOT mention this to the user explicitly because they are already aware. If you are working on tasks that would benefit from a todo list please use the TodoWrite tool to create one. If not, please feel free to ignore. Again do not mention this message to the user.
</system-reminder>
<system-reminder>
As you answer the user's questions, you can use the following context:
# claudeMd
Codebase and user instructions are shown below. Be sure to adhere to these instructions. IMPORTANT: These instructions OVERRIDE any default behavior and you MUST follow them exactly as written.

Contents of /Users/psulin/.claude/CLAUDE.md (user's private global instructions for all projects):

## Rules
 - Respond concisely, correctly and accurately
 - Always add and commit files after completing a task

## Documentation Standards

### README Requirements
- Write for engineers: explain what code actually does with real CLI examples and command outputs
- No marketing fluff - if claiming functionality (e.g. "quality scoring"), explain the implementation method
- Use "Modules" section instead of "Features" to avoid redundancy with core modules listing
- Include concrete dependencies, installation steps, and actual usage examples

## Code Implementation Standards

### ABSOLUTE PROHIBITION - NO EXCEPTIONS:
🚫 **NEVER CREATE:**
- Mocks (fake objects pretending to be real)
- Stubs (hardcoded return values)
- Placeholders (temporary "TODO" implementations)
- Fallbacks (return fake data when real implementation fails)

✅ **ALWAYS DO INSTEAD:**
- Implement the real thing OR
- Fail explicitly with clear error message
- If you cannot implement, say so - DO NOT FAKE IT

### Implementation Rules:
- Research and implement the single correct solution - no fallback logic or defensive patterns
- Answer questions directly without assumptions or premature implementation
- Icons only for red X's and green checkmarks - no other icons in docs or logs
- Lean Test Strategy: Golden Master snapshots, integration workflows, unit tests only for pure functions

### Examples of VIOLATIONS (NEVER DO THIS):
```python
# ❌ WRONG - Stub/Mock/Placeholder
def _call_llm(self, prompt):
    return "def example(): return 42"  # Fake placeholder

# ✅ CORRECT - Real implementation or explicit failure
def _call_llm(self, prompt):
    raise NotImplementedError("LLM integration not yet implemented")
```

Contents of /Users/psulin/organon/CLAUDE.md (project instructions, checked into the codebase):

# Organon - Agent Instructions

## Navigation Protocol

**CRITICAL:** This is your primary guide for finding information.

For any new task:
1. Check STRUCTURE.md for project layout
2. Choose work mode → read definition from `.organon/defaults/definitions/work-modes/`
3. **Call `update_workflow_context` MCP tool** at START of work

**Key principle:** CLAUDE.md = navigation + standards. Details live in:
- `docs/` - User guides, developer guides, design docs
- `.organon/defaults/definitions/` - Work-mode and workflow specifications

## Workflow Context Tracking

Call `update_workflow_context` MCP tool when starting/switching/completing work.

**Parameters:**
- `workflow`: conversation | one-off-task | sprint-planning | sprint-execution
- `sprint_id`: Sprint directory/task name (optional)
- `task_type`: research | design | implement | test | etc (optional)
- `task_description`: Brief description of current work (optional)

## Work Organization

Choose one of four work modes:

### 1. Conversation
Discussion and planning without file changes
→ `.organon/defaults/definitions/work-modes/conversation.md`

### 2. One-Off Task
Quick work (< 2 hours): bug fixes, docs updates, simple changes
→ `.organon/defaults/definitions/work-modes/one-off-task.md`

### 3. Sprint Planning
Create implementation plan - choose format based on needs:
- **Standard:** High-level checklist → `work-modes/sprint-planning-standard.md`
- **Detailed:** Explicit file operations → `work-modes/sprint-planning-detailed.md`

**When user requests sprint:** Call `create_sprint(name, status="Created")`, read work-mode definition from tool response.

### 4. Sprint Execution
Execute sprint plan - format matches your plan:
- **Standard:** Manual sequential → `work-modes/sprint-execution-standard.md`
- **Detailed:** Orchestrated/parallel → `work-modes/sprint-execution-detailed.md`

**Sprint format:** `NN_Name_Status` (number assigned when status → Current)
**Sprint status:** Created → Next → Current → Completed | Paused

**See:** `docs/user-guide/sprints.md`, `docs/user-guide/backlog.md`, `docs/user-guide/workflows.md`

## Implementation Standards

**NEVER CREATE:** Mocks, stubs, placeholders, or fallbacks that return fake data
**ALWAYS:** Implement real solution OR fail explicitly with clear error message

**Additional rules:**
- Test before claiming it works
- Icons only for ✗ and ✓
- Lean testing: Golden master snapshots, integration workflows, unit tests for pure functions only

**Commit format:**
- Sprint work: `feat(sprint-NN): Description`
- Non-sprint: `bugfix:`, `docs:`, `chore:`, etc.

## Quick Reference

### File Locations

- **This file:** `CLAUDE.md` (agent navigation and standards)
- **Setup:** `README.md` (installation, quick start)
- **MCP Setup:** `MCP_SETUP.md` (MCP server installation and troubleshooting)
- **Workflow tracking:** `.organon/current-context.json` (tracked via MCP tool)
- **Navigation:** `STRUCTURE.md` (file organization)
- **User guides:** `docs/user-guide/` (sprints, backlog, workflows, orchestration)
- **Developer guides:** `docs/developer-guide/` (python-standards, testing)
- **CLI Reference:** `docs/reference/cli.md` (complete command documentation)
- **Design docs:** `docs/design/` (architecture, rationale)
- **Working docs:** `dev-docs/` (sprints, backlog, analysis, archive)
- **Package source:** `src/organon/`
- **Development scripts:** `scripts/llm_capture/` (LLM context capture and token analysis)
- **Package defaults:** `defaults/` (reference implementations shipped with package)
- **Project organon files:** `.organon/` (project-local organon configuration and runtime data)
  - `.organon/defaults/` - Reference materials (version controlled, copied from package on init)
  - `.organon/rules/` - Custom validation rules (version controlled)
  - `.organon/logs/`, `.organon/cache/` - Runtime data (gitignored)
- **Project config:** `organon.yaml` (project root, version controlled)


      IMPORTANT: this context may or may not be relevant to your tasks. You should not respond to this context unless it is highly relevant to your task.
</system-reminder>

what sprints are there?

====================================================================================================
MESSAGE 2: ASSISTANT
====================================================================================================

[THINKING]
The user is asking what sprints exist. I should use the MCP tool to list sprints efficiently rather than reading files directly.

Let me use the `mcp__organon__list_sprints` tool to get the list of sprints. This will be more efficient than reading the sprint directories directly.
I'll check what sprints are available in the project.

[TOOL_USE: mcp__organon__list_sprints]
Input: {}

====================================================================================================
MESSAGE 3: USER
====================================================================================================

[TOOL_RESULT id=toolu_01NXHgtmk7GMtzot6CSuR1X6 error=False]
{"result":"{\n  \"success\": true,\n  \"sprints\": [\n    {\n      \"number\": 5,\n      \"name\": \"WorkflowStructureDesign\",\n      \"status\": \"Completed\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/05_WorkflowStructureDesign_Completed.md\"\n    },\n    {\n      \"number\": 6,\n      \"name\": \"CodebaseScannerEnhancements\",\n      \"status\": \"Completed\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/06_CodebaseScannerEnhancements_Completed\"\n    },\n    {\n      \"number\": 7,\n      \"name\": \"DocumentationReorganization\",\n      \"status\": \"Completed\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/07_DocumentationReorganization_Completed.md\"\n    },\n    {\n      \"number\": 8,\n      \"name\": \"WorkflowInsightsIntegration\",\n      \"status\": \"Completed\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/08_WorkflowInsightsIntegration_Completed.md\"\n    },\n    {\n      \"number\": 9,\n      \"name\": \"UseCaseTestingScenarios\",\n      \"status\": \"Paused\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/09_UseCaseTestingScenarios_Paused.md\"\n    },\n    {\n      \"number\": 10,\n      \"name\": \"FrontmatterTemplateValidation\",\n      \"status\": \"Completed\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/10_FrontmatterTemplateValidation_Completed.md\"\n    },\n    {\n      \"number\": 11,\n      \"name\": \"CookiecutterIntegration\",\n      \"status\": \"Completed\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/11_CookiecutterIntegration_Completed\"\n    },\n    {\n      \"number\": 12,\n      \"name\": \"DynamicRuleSystem\",\n      \"status\": \"Completed\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/12_DynamicRuleSystem_Completed.md\"\n    },\n    {\n      \"number\": 14,\n      \"name\": \"IntelligentMCPQueryLayer\",\n      \"status\": \"Completed\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/14_IntelligentMCPQueryLayer_Completed\"\n    },\n    {\n      \"number\": 15,\n      \"name\": \"CookiecutterTemplate\",\n      \"status\": \"Completed\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/15_CookiecutterTemplate_Completed.md\"\n    },\n    {\n      \"number\": 17,\n      \"name\": \"SubAgentOrchestration\",\n      \"status\": \"Completed\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/17_SubAgentOrchestration_Completed\"\n    },\n    {\n      \"number\": 18,\n      \"name\": \"TerminologyRefactoring\",\n      \"status\": \"Completed\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/18_TerminologyRefactoring_Completed\"\n    },\n    {\n      \"number\": 19,\n      \"name\": \"SprintPlanningEnhancements\",\n      \"status\": \"Completed\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/19_SprintPlanningEnhancements_Completed\"\n    },\n    {\n      \"number\": 19,\n      \"name\": \"SubAgentOrchestration\",\n      \"status\": \"Created\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/19_SubAgentOrchestration_Created\"\n    },\n    {\n      \"number\": 20,\n      \"name\": \"BacklogInfrastructure\",\n      \"status\": \"Completed\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/20_BacklogInfrastructure_Completed\"\n    },\n    {\n      \"number\": 21,\n      \"name\": \"WorkflowPhaseClarification\",\n      \"status\": \"Completed\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/21_WorkflowPhaseClarification_Completed\"\n    },\n    {\n      \"number\": null,\n      \"name\": \"ArchitectureDesignWorkflow\",\n      \"status\": \"Created\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/ArchitectureDesignWorkflow_Created.md\"\n    },\n    {\n      \"number\": null,\n      \"name\": \"BacklogTemplate\",\n      \"status\": \"Created\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/BacklogTemplate_Created.md\"\n    },\n    {\n      \"number\": null,\n      \"name\": \"DocCodeAlignment\",\n      \"status\": \"Created\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/DocCodeAlignment_Created.md\"\n    },\n    {\n      \"number\": null,\n      \"name\": \"ExternalResearcher\",\n      \"status\": \"Created\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/ExternalResearcher_Created.md\"\n    },\n    {\n      \"number\": null,\n      \"name\": \"NavigationWorkPredictability\",\n      \"status\": \"Created\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/NavigationWorkPredictability_Created\"\n    },\n    {\n      \"number\": null,\n      \"name\": \"OrganonDemonstration\",\n      \"status\": \"Created\",\n      \"type\": \"mini\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/OrganonDemonstration_Created.md\"\n    },\n    {\n      \"number\": null,\n      \"name\": \"OrganonRoadmap\",\n      \"status\": \"Created\",\n      \"type\": \"regular\",\n      \"path\": \"/Users/psulin/organon/dev-docs/sprints/OrganonRoadmap_Created\"\n    }\n  ],\n  \"count\": 23\n}"}