Metadata-Version: 2.4
Name: keprompt
Version: 2.13.0
Summary: A prompt engineering tool for large language models
Author-email: Jerry Westrick <jerry@westrick.com>
License: MIT License
        
Project-URL: Homepage, https://github.com/JerryWestrick/keprompt
Project-URL: Documentation, https://github.com/JerryWestrick/keprompt/tree/main/ks
Project-URL: Repository, https://github.com/JerryWestrick/keprompt
Project-URL: Issues, https://github.com/JerryWestrick/keprompt/issues
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: rich>=13.0.0
Requires-Dist: rich_argparse>=1.0.0
Requires-Dist: requests>=2.31.0
Requires-Dist: textual>=0.41.0
Requires-Dist: toml>=0.10.2
Requires-Dist: peewee>=3.17.0
Requires-Dist: fastapi>=0.104.0
Requires-Dist: uvicorn>=0.24.0
Requires-Dist: markdown-it-py>=3.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: build>=0.10.0; extra == "dev"
Requires-Dist: twine>=4.0.0; extra == "dev"
Dynamic: license-file

# KePrompt

**A powerful command-line tool for prompt engineering and AI interaction**

KePrompt lets you work with multiple AI providers (OpenAI, Anthropic, Google, and more) using simple prompt files and a unified command-line interface. No Python programming required.

## Why KePrompt?

- **One tool, many AIs**: Switch between GPT-4, Claude, Gemini, and others with a single command
- **Simple prompt language**: Write prompts using an easy-to-learn syntax
- **Comprehensive cost tracking**: Automatic SQLite-based tracking of all API usage with detailed reporting
- **Conversation management**: Save and resume multi-turn conversations
- **Function calling**: Extend prompts with file operations, web requests, and custom functions
- **Web GUI**: Modern browser-based interface for interactive prompt development
- **Production ready**: Built-in logging, error handling, and debugging tools

## Prompt Engineering for Production

KePrompt is more than a prompting tool — it's a prompt engineering platform built around a key architectural principle: **the separation of application code and AI prompting logic**.

### Separation of Application and AI Prompting

Your application calls KePrompt; all prompting logic — model selection, system prompts, parameters, function declarations — lives in `.prompt` files. When a newer or cheaper model becomes available, you rewrite the prompt file. The application code doesn't change.

This separation turns prompt changes from code deployments into configuration updates, letting prompt engineers and developers work independently.
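
As a sketch of what the application side can look like, here is a minimal, hypothetical Python wrapper. It assumes `keprompt` is on your PATH and relies on the `--json` output fields shown later in this README (`meta.variables.last_response`); none of the helper names are part of KePrompt itself:

```python
import json
import subprocess

def run_prompt(prompt_name: str, **params) -> dict:
    """Invoke keprompt as a subprocess and return its parsed JSON output.

    Hypothetical integration sketch: assumes `keprompt` is on PATH and
    that --json emits a single JSON document on stdout.
    """
    cmd = ["keprompt", "chats", "create", "--prompt", prompt_name, "--json"]
    for key, value in params.items():
        cmd += ["--set", key, str(value)]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def last_response(output: dict) -> str:
    """Extract the model's final text from keprompt's JSON output."""
    return output["meta"]["variables"]["last_response"]
```

The application only ever sees `run_prompt("analyze", filename="README.md")`; which model answers, and with what system prompt, is decided entirely inside `prompts/analyze.prompt`.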

### Production Observability and Regression Testing

Every API call is logged in SQLite with full detail: request, response, token counts, costs, and execution timing. This production data becomes the foundation for rigorous prompt engineering:

1. **Extract** real production interactions from the database
2. **Build** regression test suites from actual usage patterns
3. **Rewrite** the prompt for a new model or optimization goal
4. **Test** the new prompt against production examples
5. **Validate** not just correctness, but also speed and cost — ensuring the new prompt meets performance and budget targets before it goes live

This closed loop — production data in, validated prompt out — is what elevates KePrompt from a "cute way to do AI prompting" to a proper prompt and knowledge engineering platform.
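
Steps 4 and 5 can be sketched as a small harness that replays extracted cases against a candidate prompt and checks both the answer and a latency budget. Everything here except the `keprompt chats create --json` invocation is a hypothetical convention of this sketch (the case format, the field names, the budget):

```python
import json
import subprocess
import time

def run_case(prompt_name: str, case: dict, max_seconds: float = 10.0) -> bool:
    """Run one recorded production case against a candidate prompt.

    Hypothetical harness: the case shape
    {"params": {...}, "expected_substring": "..."} is an assumption,
    not part of KePrompt itself.
    """
    cmd = ["keprompt", "chats", "create", "--prompt", prompt_name, "--json"]
    for key, value in case["params"].items():
        cmd += ["--set", key, str(value)]
    start = time.monotonic()
    output = json.loads(
        subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    )
    elapsed = time.monotonic() - start
    answer = output["meta"]["variables"]["last_response"]
    return passes(answer, case["expected_substring"], elapsed, max_seconds)

def passes(answer: str, expected_substring: str,
           elapsed: float, max_seconds: float) -> bool:
    """A case passes when the answer is correct and within the latency budget."""
    return expected_substring in answer and elapsed <= max_seconds
```

Extending `passes` with a cost threshold closes the loop: a rewritten prompt ships only when it is correct, fast enough, and cheap enough on real production cases.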

## Quick Start

### 0. Prepare Your Working Directory
```bash
# Create a new project directory with isolated Python environment
mkdir myproject
cd myproject
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

### 1. Install KePrompt
```bash
pip install keprompt
```

To install in development (editable) mode:
```bash
pip install -e ~/keprompt/
```

### 2. Initialize your workspace
```bash
keprompt init
```
This creates the `prompts/` directory, copies a default `hello.prompt`, installs default functions, initializes the database, and downloads the model registry — all in one command.

Use `keprompt init --force` to overwrite existing files.

### 3. Set up your API key
```bash
# Add to your .env file or export directly
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
# ... or add to ~/.env
```

### 4. Run the included hello prompt
```bash
keprompt chats create --prompt hello
```

🎉 **You should see the AI's response!** The system automatically tracks costs and saves the conversation.

Try it with different models and questions:
```bash
keprompt chats create --prompt hello --model anthropic/claude-sonnet-4-20250514
keprompt chats create --prompt hello --set question 'tell me a joke'
keprompt chats create --prompt hello --model deepseek/deepseek-chat --set question 'what is 2+2?'
```

## Your First Real Prompt

Let's create something more useful, a file analyzer:

```bash
cat > prompts/analyze.prompt << 'EOF'
.prompt "name":"File Analyzer", "version":"1.0.0", "params":{"model":"gpt-4o", "filename":"file_to_analyze"}
.# Analyze any text file
.llm {"model": "<<model>>"}
.system You are an expert text analyst. Provide clear, actionable insights.
.user Please analyze this file:

.include <<filename>>

Provide a summary, key points, and any recommendations.
.exec
EOF
```

Run it with a parameter:
```bash
keprompt chats create --prompt analyze --set filename "README.md"
```

Run it with two parameters:
```bash
keprompt chats create --prompt analyze --set filename "README.md" --set model "openrouter/openai/gpt-oss-20b"
```

## Modern CLI Interface

KePrompt uses an intuitive object-verb command structure:

```bash
keprompt <object> <verb> [options]
```

### Core Objects
- **init** - Initialize workspace (one-stop setup)
- **prompts** - List and manage prompt files
- **chats** - Create and manage conversations
- **models** - Browse available AI models
- **providers** - View AI providers
- **functions** - List available functions
- **server** - Start/stop web interface
- **database** - Manage conversation database

### Common Commands
```bash
# List available prompts
keprompt prompts get

# Create a new conversation
keprompt chats create --prompt hello --set name "Alice"

# Continue a conversation
keprompt chats reply <chat-id> "Tell me more"

# List conversations with costs
keprompt chats get

# Browse available models
keprompt models get --company OpenAI

# Start web interface
keprompt server start --web-gui
```

See the [Knowledge Engineer's Guide](ks/02-knowledge-engineers-guide.md) for the full reference.

## Web GUI Interface

KePrompt includes a modern web-based interface for interactive development:

```bash
# Start server with web GUI
keprompt server start --web-gui

# Specify port (optional)
keprompt server start --web-gui --port 8080

# Development mode with auto-reload
keprompt server start --web-gui --reload
```

Then open your browser to `http://localhost:8080`.

**Features:**
- Interactive chat interface
- Real-time cost tracking
- Prompt editor with syntax highlighting
- Model selection and comparison
- Function testing and debugging
- Conversation history browser

## Core Concepts

### Prompt Files
- Stored in `prompts/` directory with `.prompt` extension
- Use simple line-based syntax starting with `.` for commands
- Support variables, functions, and multi-turn conversations

### The Prompt Language
| Command | Purpose | Example |
|---------|---------|---------|
| `.prompt` | **REQUIRED** - Define prompt metadata | `.prompt "name":"My Prompt", "version":"1.0.0"` |
| `.llm` | Configure AI model | `.llm {"model": "gpt-4o"}` |
| `.functions` | **Declare allowed functions** (no `.functions` = no functions) | `.functions readfile, writefile` |
| `.system` | Set system message | `.system You are a helpful assistant` |
| `.user` | Add user message | `.user What is the weather like?` |
| `.tool_call` | **Represent an LLM tool call** (manual/replay/debug) | `.tool_call readfile(filename="data.txt") id=call_abc123` |
| `.tool_result` | **Represent a tool result** (manual/replay/debug) | `.tool_result id=call_abc123 name=readfile` |
| `.exec` | Send to AI and get response | `.exec` |
| `.cmd` | Call a function | `.cmd readfile(filename="data.txt")` |
| `.print` | Output to console | `.print The result is: <<last_response>>` |
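
Put together, a minimal prompt file using several of these commands might look like this (the model name is only an example default):

```text
.prompt "name":"Quick Demo", "version":"1.0.0", "params":{"model":"gpt-4o-mini"}
.llm {"model": "<<model>>"}
.system You are a concise assistant.
.user What are the three laws of thermodynamics, in one sentence each?
.exec
.print The model said: <<last_response>>
```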

### Representing LLM Tool Calls and Tool Results

KePrompt supports two *statement* types that let you represent **tool calls produced by the LLM API** and the corresponding **tool responses**.

These are most useful for:
- Replaying/debugging conversations
- Creating test fixtures / example chats
- Manually reconstructing a conversation that includes tool use

They are distinct from `.cmd`, which **executes** a local function during prompt execution.

#### `.tool_call` (LLM → tool)

**Syntax**
```text
.tool_call function_name(param=value, ...) id=call_id
```

**Example**
```text
.tool_call readfile(filename="data.txt") id=call_001
```

#### `.tool_result` (tool → LLM)

**Syntax**
```text
.tool_result id=call_id name=function_name
<result text can be multi-line>
```

**Example**
```text
.tool_result id=call_001 name=readfile
File contents:
Hello from data.txt
```

#### End-to-end example (manual reconstruction)
```text
.prompt "name":"Toolcall Example", "version":"1.0.0", "params":{"model":"gpt-4o-mini"}
.user Please read data.txt and summarize it.

.# These two statements represent what the *LLM API* would have produced,
.# and the corresponding tool response KePrompt would send back:
.tool_call readfile(filename="data.txt") id=call_001
.tool_result id=call_001 name=readfile
Hello from data.txt

.assistant Summary: The file contains a short greeting.
```

### Prompt Metadata (Required)

**Every prompt file must start with a `.prompt` statement** that defines metadata:

```text
.prompt "name":"My Prompt Name", "version":"1.0.0", "params":{"model":"gpt-4o-mini"}
```

**Required fields:**
- `name`: Human-readable prompt name (used in cost tracking)
- `version`: Semantic version for tracking changes

**Optional fields:**
- `params`: Default parameters and documentation

**Examples:**
```text
.# Simple prompt
.prompt "name":"Hello World", "version":"1.0.0"

.# With parameters
.prompt "name":"Code Reviewer", "version":"2.1.0", "params":{"model":"gpt-4o", "language":"python"}

.# Research assistant
.prompt "name":"Research Assistant", "version":"1.5.0", "params":{"model":"claude-3-5-sonnet-20241022", "depth":"comprehensive"}
```

### Variables
Use `<<variable>>` syntax for substitution:
```bash
# In your prompt file
.user Hello <<name>>, today is <<date>>

# Run with parameters
keprompt chats create --prompt greeting --set name "Alice" --set date "Monday"
```
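
Conceptually, `<<variable>>` substitution behaves like simple template replacement. The sketch below illustrates the semantics only; it is not KePrompt's actual implementation (here, unknown placeholders are left untouched):

```python
import re

def substitute(text: str, variables: dict) -> str:
    """Replace each <<name>> placeholder with its value.

    Conceptual sketch only, not KePrompt's implementation;
    placeholders with no matching variable are left as-is.
    """
    def repl(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"<<(\w+)>>", repl, text)
```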

### Built-in Functions
- `readfile(filename)` - Read file contents
- `writefile(filename, content)` - Write to file (with backup)
- `wwwget(url)` - Fetch web content
- `execcmd(cmd)` - Execute shell command

### Function Access Control

By default, **the model has NO access to any functions**. You must explicitly declare which functions the model can use with `.functions`:

```text
.prompt "name":"Example", "version":"1.0.0", "params":{"model":"gpt-4o"}
.functions readfile, wwwget
.system You are a research assistant.
.user Analyze this file and fetch related info from the web.
.exec
```

This is a security feature — it prevents models (especially delegated sub-agents) from accessing capabilities they don't need.

## Common Workflows

### Research Assistant
```bash
cat > prompts/research.prompt << 'EOF'
.prompt "name":"Research Assistant", "version":"1.0.0", "params":{"model":"claude-3-5-sonnet-20241022", "topic":"research_topic"}
.llm {"model": "<<model>>"}
.system You are a research assistant. Provide thorough, well-sourced information.
.user Research this topic: <<topic>>
.cmd wwwget(url="https://en.wikipedia.org/wiki/<<topic>>")
Based on this information, provide a comprehensive overview with key facts and recent developments.
.exec
EOF

keprompt chats create --prompt research --set topic "Artificial_Intelligence"
```

### Code Review
```bash
cat > prompts/review.prompt << 'EOF'
.prompt "name":"Code Reviewer", "version":"1.0.0", "params":{"model":"gpt-4o", "codefile":"path/to/file"}
.llm {"model": "<<model>>"}
.system You are a senior software engineer. Provide constructive code reviews.
.user Please review this code file:

.include <<codefile>>

Focus on: code quality, potential bugs, performance, and best practices.
.exec
EOF

keprompt chats create --prompt review --set codefile "src/main.py"
```

### Interactive Chat Session
```bash
# Start a conversation
CHAT_ID=$(keprompt chats create --prompt hello --json | jq -r '.data.chat_id')

# Continue the conversation
keprompt chats reply $CHAT_ID "Can you explain that in more detail?"
keprompt chats reply $CHAT_ID "What about edge cases?"

# View full conversation
keprompt chats get $CHAT_ID
```

## Working with Models

### List available models
```bash
# See all models
keprompt models get

# Filter by provider
keprompt models get --company OpenAI
keprompt models get --company Anthropic

# Search by name
keprompt models get --name "gpt-4*"
keprompt models get --name "*sonnet*"
```

### Compare costs
```bash
# Show pricing for all GPT models
keprompt models get --name "gpt*" --company OpenAI
```

### Update model registry
```bash
# Fetch latest models from providers
keprompt models update
```

## Cost Tracking & Analysis

KePrompt automatically tracks all API usage with comprehensive cost analysis.

### View Conversation Costs
```bash
# List recent conversations with costs
keprompt chats get --limit 20

# View specific chat details
keprompt chats get <chat-id>

# Get cost summary
sqlite3 prompts/chats.db "SELECT SUM(total_cost) FROM chats"
```
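
Since the database is plain SQLite, you can also script reports. The sketch below relies only on the `chats` table and `total_cost` column used above; inspect the real schema with `sqlite3 prompts/chats.db ".schema chats"` before querying other columns:

```python
import sqlite3

def cost_report(db_path: str = "prompts/chats.db") -> dict:
    """Summarize spend from the chats table.

    Only total_cost is confirmed by this README; check the actual
    schema before grouping by other (assumed) columns.
    """
    with sqlite3.connect(db_path) as conn:
        total, count = conn.execute(
            "SELECT COALESCE(SUM(total_cost), 0), COUNT(*) FROM chats"
        ).fetchone()
    return {"conversations": count, "total_cost": total}
```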

### Database Management
```bash
# View database info
keprompt database get

# Clean up old conversations
keprompt chats delete --days 30

# Keep only recent conversations
keprompt chats delete --count 100
```

## Custom Functions

KePrompt supports custom functions written in any language. See the detailed guide at [ks/creating-keprompt-functions.context.md](ks/creating-keprompt-functions.context.md).

### Quick Example

Create an executable in `prompts/functions/`:

```python
#!/usr/bin/env python3
import json, sys

def get_weather(city: str) -> str:
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 72°F"

FUNCTIONS = [
    {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"],
            "additionalProperties": False
        }
    }
]

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("Usage: weather --list-functions | <function_name>")
    if sys.argv[1] == "--list-functions":
        print(json.dumps(FUNCTIONS))
    elif sys.argv[1] == "get_weather":
        args = json.loads(sys.stdin.read())
        print(get_weather(**args))
```

Make it executable:
```bash
chmod +x prompts/functions/weather
```

Use in prompts:
```bash
cat > prompts/weather_check.prompt << 'EOF'
.prompt "name":"Weather Check", "version":"1.0.0", "params":{"city":"default_city"}
.llm {"model": "gpt-4o-mini"}
.functions get_weather
.user What's the weather like in <<city>>? Based on the weather, suggest appropriate clothing.
.exec
EOF

keprompt chats create --prompt weather_check --set city "San Francisco"
```

For comprehensive documentation on creating custom functions, see [ks/creating-keprompt-functions.context.md](ks/creating-keprompt-functions.context.md).

## Conversation Management

### Create and Continue Conversations
```bash
# Start a new conversation
keprompt chats create --prompt hello

# Continue with a chat ID
keprompt chats reply a1b2c3d4 "Tell me more about that"

# Show full conversation history
keprompt chats reply a1b2c3d4 --full "Thanks for the explanation"
```

### List and View Conversations
```bash
# List all conversations
keprompt chats get

# View specific conversation
keprompt chats get a1b2c3d4

# List with filters
keprompt chats get --limit 10
```

### View Conversation Formats

KePrompt offers multiple viewing formats to help you understand and debug your conversations:

```bash
# View conversation messages (default)
keprompt chats get a1b2c3d4 --format=messages

# View prompt source code (statements)
keprompt chats get a1b2c3d4 --format=statements

# View cost summary and metadata
keprompt chats get a1b2c3d4 --format=summary

# View raw JSON data
keprompt chats get a1b2c3d4 --format=raw
```

**Format Aliases** (shortcuts for convenience):
- `--format=msg` / `msgs` / `message` → messages view
- `--format=stmt` / `stmts` / `statement` → statements view  
- `--format=sum` → summary view
- `--format=json` → raw JSON view

**Format Descriptions:**

| Format | Purpose | Shows |
|--------|---------|-------|
| `messages` | View the conversation | User/assistant dialogue with model info |
| `statements` | View source code | Prompt statements (`.user`, `.exec`, `.set`, etc.) |
| `summary` | View metrics | Costs, tokens, API calls, timing |
| `raw` | View complete data | Full JSON with all metadata |

**Examples:**
```bash
# Debug which statements were executed
keprompt chats get a1b2c3d4 --format=stmt --pretty

# Quick cost check
keprompt chats get a1b2c3d4 --format=sum --pretty

# Export conversation for analysis
keprompt chats get a1b2c3d4 --format=json > conversation.json
```

### Clean Up
```bash
# Delete specific conversation
keprompt chats delete a1b2c3d4

# Delete old conversations
keprompt chats delete --days 30

# Keep only recent conversations
keprompt chats delete --count 100
```

## Server Management

### Start Server
```bash
# Start with web GUI
keprompt server start --web-gui

# Specify port
keprompt server start --web-gui --port 8080

# Development mode with auto-reload
keprompt server start --web-gui --reload

# Start in specific directory
keprompt server start --web-gui --directory /path/to/project
```

### Manage Servers
```bash
# List running servers
keprompt server list --active-only

# Check status
keprompt server status

# Stop server
keprompt server stop

# Stop all servers
keprompt server stop --all
```

## Output Formats

### Human-Readable (Default in Terminal)
Rich formatted tables with colors and alignment.

### Machine-Readable (JSON)
```bash
# Get JSON output
keprompt chats get --json

# Use with jq
keprompt chats get --json | jq '.data[] | select(.total_cost > 0.01)'

# Get chat ID programmatically
CHAT_ID=$(keprompt chats create --prompt hello --json | jq -r '.data.chat_id')

# Extract a value from the JSON output (example: last_response)
# (Requires jq: https://stedolan.github.io/jq/)
keprompt chats create --prompt Test --json | jq -r '.meta.variables.last_response'

# If you don't have jq, you can do the same with python3:
keprompt chats create --prompt Test --json | \
  python3 -c 'import json,sys; d=json.load(sys.stdin); print(d["meta"]["variables"]["last_response"])'
```

## Tips & Best Practices

### 1. Start Simple
Begin with basic prompts and gradually add complexity.

### 2. Use the Web GUI for Development
The web interface provides a better development experience with real-time feedback.

```bash
keprompt server start --web-gui --reload
```

### 3. Manage Costs
- Use cheaper models for development (`gpt-4o-mini`, `claude-3-haiku`)
- Monitor costs with `keprompt chats get`
- Check model pricing with `keprompt models get`

### 4. Organize Your Prompts
```
prompts/
├── research/
│   ├── academic.prompt
│   └── market.prompt
├── coding/
│   ├── review.prompt
│   └── debug.prompt
└── content/
    ├── blog.prompt
    └── social.prompt
```

### 5. Version Control
Keep your prompts in git to track what works best.

### 6. Test Across Models
The same prompt may work differently with different models. Test and compare.

## Troubleshooting

### Common Issues

**"No models found"**
```bash
keprompt models update
```

**"API key not found"**
```bash
# Add to .env file
echo 'OPENAI_API_KEY=sk-...' >> .env
# Or export directly
export OPENAI_API_KEY="sk-..."
```

**"Prompt not found"**
```bash
# List available prompts
keprompt prompts get
# Check prompts directory exists
ls prompts/
```

**"Server already running"**
```bash
# Check status
keprompt server status
# Stop existing server
keprompt server stop
```

### Getting Help

```bash
# Show all options
keprompt --help

# Get help for specific object
keprompt chats --help
keprompt server --help
```

## Documentation

- [Prompt Language](ks/01-prompt-language.md) - Writing .prompt files
- [Knowledge Engineer's Guide](ks/02-knowledge-engineers-guide.md) - Comprehensive guide
- [Statements & Messages](ks/03-statements-and-messages.md) - Architecture reference
- [Creating Functions](ks/creating-keprompt-functions.context.md) - Custom function development

## What's Next?

- **Explore the examples** in the `prompts/` directory
- **Try the web GUI** for interactive development
- **Create custom functions** for your specific needs
- **Integrate with your workflow** using the JSON API

## Contributing

KePrompt is open source! Contributions welcome at [GitHub](https://github.com/JerryWestrick/keprompt).

## License

[MIT](LICENSE)

---

*KePrompt: Making AI interaction simple, powerful, and cost-effective.*


