Metadata-Version: 2.4
Name: sdd-kit
Version: 2.2.4
Summary: Spec-Driven Development CLI with IDE-native AI
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: anthropic>=0.31.0
Requires-Dist: databricks-sdk>=0.20.0
Requires-Dist: sentence-transformers>=2.2.2
Requires-Dist: tiktoken>=0.7
Requires-Dist: pyyaml>=6.0
Requires-Dist: typer>=0.12.0
Requires-Dist: rich>=13.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: mcp>=1.0.0

# sdd-kit v2.2.4

**The Spec-Driven Development (SDD) Toolkit for AI-Native Engineering.**
*Intelligently assemble AI context from enterprise data lakehouses (Databricks/Snowflake) directly into your IDE.*

[![PyPI version](https://badge.fury.io/py/sdd-kit.svg)](https://badge.fury.io/py/sdd-kit)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

---

## 🚀 Quick Install

```bash
# 1. Install the CLI
pip install sdd-kit

# 2. Install the VS Code extension (No API key required!)
sdd install-extension
```

---

## ✨ What makes sdd-kit different?

`sdd-kit` isn't just another AI tool. It's a **context orchestration engine** that bridges the gap between your enterprise data (Gold/Silver schemas) and your AI agent (Cursor/Copilot/ChatGPT).

- **IDE-Native AI**: Use your existing IDE's AI credentials. No more `ANTHROPIC_API_KEY` errors in the terminal.
- **Lakehouse Aware**: Pull live schema definitions and documentation from Databricks or Snowflake.
- **Token Budgeting**: Automatically fits massive codebases into tight 8,000-token context windows.
- **Zero-Extension Mode**: Full support for Cursor/Windsurf via MCP (Model Context Protocol).
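
For Zero-Extension Mode, the MCP client is pointed at the CLI. As an illustrative sketch only (the exact server subcommand is an assumption, not taken from sdd-kit's docs), a Cursor `.cursor/mcp.json` might look like:

```json
{
  "mcpServers": {
    "sdd": {
      "command": "sdd",
      "args": ["mcp-serve"]
    }
  }
}
```

Check your installed version's help (`sdd --help`) for the actual server entry point.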

---

## 🛠️ Main Workflows

### 1. Existing Projects (Audit & Evolve)
Perfect for onboarding an existing, maintenance-mode project into SDD without touching its files.

```bash
cd my-legacy-project
sdd onboard          # surgically adds .sdd/ and .cursor/
@sdd /audit          # Run in VS Code Chat to audit the codebase
@sdd /specify-next   # Define a new feature based on existing code
```

### 2. New Projects (Spec-to-Code)
Initialize a project from a domain-specific "Gold" knowledge base.

```bash
sdd init my-app --domain banking
@sdd /specify        # Generate a complete spec.md
@sdd /plan           # Generate plan.md and tasks.md
```

---

## 💬 Slash Commands

When you use **VS Code** (after running `sdd install-extension`) or **Cursor**, you get these powerful commands directly in your AI Chat panel:

| Command | Purpose |
|---------|---------|
| `/audit` | Scan existing code for architecture and technical debt |
| `/specify` | Generate a comprehensive master specification |
| `/plan` | Create a step-by-step implementation plan and checklist |
| `/integrate` | Reverse-engineer or generate live Lakehouse integrations |
| `/doctor` | Run diagnostics on your local development environment |
| `/sync-kb` | Mirror enterprise knowledge for offline development |

---

## 🛡️ The SDD Rules
The toolkit enforces a rigorous methodology for AI-assisted engineering. Core rules include:

1. **Think Before Coding** — State assumptions. Ask if unclear.
2. **Simplicity First** — Minimum code that solves the problem.
3. **Surgical Changes** — Touch only what the task requires.
4. **Context Budget Discipline** — No full files. 8K max tokens.
5. **Existing Projects Are Not Broken** — Recommend delta only.
6. **Offline is First-Class** — All commands work without internet.
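
Rule 4's budget discipline can be illustrated with a minimal sketch. This is not sdd-kit's internal implementation; it is a hypothetical greedy packer that approximates tokens as `len(text) // 4` (sdd-kit itself depends on `tiktoken` for exact counts):

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate (~4 characters per token for English text)."""
    return max(1, len(text) // 4)


def pack_context(snippets: list[str], budget: int = 8000) -> list[str]:
    """Greedily keep snippets, in priority order, until the budget is spent."""
    packed, used = [], 0
    for snippet in snippets:  # assumed pre-sorted by relevance
        cost = approx_tokens(snippet)
        if used + cost > budget:
            continue  # skip anything that would blow the budget
        packed.append(snippet)
        used += cost
    return packed
```

The key property is that no full files are ever emitted: an oversized snippet is simply skipped rather than truncating the whole context.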

---

## 🔧 Configuration (Optional)

If you want to pull live data from your lakehouse, set these variables:

```bash
# Databricks
export DATABRICKS_HOST=https://adb-xxx.azuredatabricks.net
export DATABRICKS_TOKEN=dapi...

# Snowflake
export SNOWFLAKE_ACCOUNT=xxx
export SNOWFLAKE_USER=xxx
```
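
As a hedged sketch of what a live pull could look like: the `databricks-sdk` calls below use the real `WorkspaceClient` and Unity Catalog `tables.list` API, but the helper function and the catalog/schema names are illustrative, not sdd-kit's actual code.

```python
def table_context_line(name: str, columns: list[tuple[str, str]]) -> str:
    """Render one table as a compact, token-cheap context line."""
    cols = ", ".join(f"{c}:{t}" for c, t in columns)
    return f"TABLE {name} ({cols})"


def pull_databricks_schemas(catalog: str, schema: str) -> list[str]:
    """List tables in a Unity Catalog schema using env-var credentials."""
    from databricks.sdk import WorkspaceClient  # lazy import; needs credentials
    w = WorkspaceClient()  # picks up DATABRICKS_HOST / DATABRICKS_TOKEN
    lines = []
    for table in w.tables.list(catalog_name=catalog, schema_name=schema):
        columns = [(c.name, c.type_text) for c in (table.columns or [])]
        lines.append(table_context_line(table.full_name, columns))
    return lines
```

Rendering schemas as one-line summaries keeps live lakehouse context cheap against the 8K token budget.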

---

## 📄 License

MIT © 2026 Your Company
