Metadata-Version: 2.4
Name: my-dev-team
Version: 0.1.0
Summary: An autonomous, LangGraph-powered AI development agency.
Author-email: Alexander Bobrovsky <bobrovsky@seznam.cz>
License: MIT
Project-URL: Homepage, https://github.com/bobrovsky420/my-dev-team
Project-URL: Bug Tracker, https://github.com/bobrovsky420/my-dev-team/issues
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: langgraph>=1.0.9
Requires-Dist: langgraph-checkpoint-sqlite>=3.0.3
Requires-Dist: langchain>=1.2.10
Requires-Dist: aiosqlite>=0.22.1
Requires-Dist: pydantic>=2.12.5
Requires-Dist: python-dotenv>=1.2.1
Requires-Dist: pyyaml>=6.0.3

# My Dev Team 🚀

An autonomous, LangGraph-powered AI development agency. **My Dev Team** takes raw project requirements and processes them through a multi-agent workflow (Product Manager, System Architect, Developers, and QA) to incrementally build, test, and deliver production-ready code.

## Features

* **Multi-Agent Architecture:** Specialized AI agents handle distinct phases of the software development lifecycle.
* **Semantic Model Routing:** Automatically routes tasks to the most cost-effective or capable LLMs based on the task type (reasoning, coding, or fast-utility).
* **Strict Test-Driven Development (TDD):** Testing is never an afterthought. Tasks are generated with embedded testing criteria, and the Developer writes unit tests alongside implementation code for immediate QA validation.
* **State Recovery & Resiliency:** Powered by asynchronous SQLite checkpointing. If an API rate limit is hit or a workflow is interrupted, you can resume the exact thread without losing a single token of progress.
* **Incremental Development:** The System Architect breaks down requirements into a manageable backlog of strictly formatted JSON tasks.
* **Self-Healing Code:** The Developer, Reviewer, and QA Engineer agents continuously loop until unit tests pass and code meets specifications.
* **Structured Outputs:** Powered by Pydantic and LangChain, ensuring zero "Markdown spillage" and robust state management.
* **Extensible:** Easily add custom tools like `HumanInTheLoop` or `WorkspaceSaver`.
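
The "strictly formatted JSON tasks" and Pydantic-backed structured outputs can be pictured with a minimal sketch. The `DevTask` model below is illustrative only — its field names are assumptions, not the package's actual schema:

```python
from pydantic import BaseModel, Field

class DevTask(BaseModel):
    """Illustrative backlog-task shape; field names are assumptions."""
    task_id: int
    title: str
    description: str
    testing_criteria: list[str] = Field(default_factory=list)

# A structured LLM response parses straight into the model -- no Markdown spillage.
raw = '{"task_id": 1, "title": "CLI parser", "description": "Add --url flag", "testing_criteria": ["parses a valid URL"]}'
task = DevTask.model_validate_json(raw)
print(task.title)  # CLI parser
```

Because the model validates types and required fields up front, malformed agent output fails loudly at the boundary instead of corrupting workflow state.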

## Installation

You can install the package directly via pip:

```sh
pip install my-dev-team
```

(For local development, clone the repository and run `pip install -e .`)

## 1. Preparing Your Project File

The crew requires a text file outlining your project requirements. By default, it looks for a specific header format to extract the project name and thread ID.

Create a file named `project.txt`:

```
Subject: NEW PROJECT: Web Scraper CLI

I need a Python command-line tool that scrapes articles from a given URL.
It should extract the title, author, and main body text, and save the output as a JSON file.

Requirements:
- Use BeautifulSoup4 for parsing.
- Include a `--url` argument and an `--output` argument.
- Write unit tests for the parsing logic.
```
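
To illustrate how a `Subject:` header can yield both a project name and a timestamped thread ID (like the `web_scraper_cli_20260312_083500` IDs used with `--resume`), here is a hypothetical parsing sketch — `parse_subject` is not the package's actual function:

```python
import re
from datetime import datetime

def parse_subject(first_line: str) -> tuple[str, str]:
    """Hypothetical sketch: derive project name and thread ID from the header."""
    m = re.match(r"Subject:\s*NEW PROJECT:\s*(.+)", first_line)
    if not m:
        raise ValueError("Missing 'Subject: NEW PROJECT:' header")
    name = m.group(1).strip()
    # Slugify the name and append a timestamp to form a resumable thread ID.
    slug = re.sub(r"\W+", "_", name).strip("_").lower()
    thread_id = f"{slug}_{datetime.now():%Y%m%d_%H%M%S}"
    return name, thread_id

name, thread_id = parse_subject("Subject: NEW PROJECT: Web Scraper CLI")
print(name)       # Web Scraper CLI
print(thread_id)  # e.g. web_scraper_cli_20260312_083500
```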

## 2. Usage (CLI)

The fastest way to use the framework is via the terminal command included in the package.

```sh
devteam project.txt
```

### Advanced CLI Options

You can easily switch between cloud providers and local models, and adjust rate limits based on your API tier:

```sh
# Run entirely locally for free using Ollama, with no rate limit!
devteam project.txt --provider ollama

# Run using OpenAI's flagship models, limited to 15 requests per minute
devteam project.txt --provider openai --rpm 15

# Resume an interrupted run exactly where it left off
devteam --resume web_scraper_cli_20260312_083500
```

#### Available Arguments

* `project_file`: (Optional if resuming) Path to your project requirements text file.
* `--resume`: Resume a specific thread ID (e.g., `my_app_20260312_083500`).
* `--provider`: Choose the LLM backend. Options: `groq`, `ollama` (default), `openai`.
* `--rpm`: API requests per minute. Set to `0` to disable rate limiting (default: `0`).

Note: Ensure you have the corresponding API keys (e.g., `GROQ_API_KEY`, `OPENAI_API_KEY`) set in your `.env` file, or ensure your local Ollama instance is running.
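
For example, a minimal `.env` with placeholder values:

```sh
GROQ_API_KEY=gsk_your_key_here
OPENAI_API_KEY=sk-your_key_here
```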

## 3. Intelligent Model Routing (LLM Factory)

**My Dev Team** doesn't just use one model for everything. It uses an advanced **Semantic Routing** architecture via `LLMFactory`.

Instead of hardcoding a specific model (like `gpt-5.3-codex`), each agent requests a specific capability category and temperature. The Factory evaluates your chosen `--provider` and dynamically spins up the most cost-effective, capable model for that exact task.

#### The Categories

* `reasoning`: For the System Architect and Product Manager. Maps to deep-thinking models.
* `code-generator`: For the Senior Developer. Maps to strict, syntax-heavy models.
* `code-analyzer`: For the QA and Reviewer agents. Maps to deep-context evaluation models.
* `fast-utility`: For the Reporter. Maps to blazing-fast, ultra-cheap models for simple text summarization.
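
The routing idea can be sketched as a per-provider lookup table. This is a toy illustration — the table contents are placeholders, not the package's real model choices or its actual `LLMFactory` implementation:

```python
# Placeholder routing table: one sub-table per provider, one entry per category.
ROUTING_TABLE = {
    "openai": {
        "reasoning": "deep-thinking-model",
        "code-generator": "strict-code-model",
        "code-analyzer": "long-context-model",
        "fast-utility": "small-cheap-model",
    },
    # ... additional sub-tables for "groq", "ollama", etc.
}

def model_for(provider: str, category: str) -> str:
    """Resolve a capability category to a concrete model name."""
    return ROUTING_TABLE[provider][category]

print(model_for("openai", "fast-utility"))  # small-cheap-model
```

Because agents only name a category, swapping `--provider` changes every model in the pipeline without touching agent code.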

## 4. Usage (Python API)

If you want to integrate the crew into your own application, customize the LLM Factory's routing table, or override specific agent behaviors, use the clean Python API:

```python
import asyncio
import aiosqlite
from pathlib import Path
from dotenv import load_dotenv

from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver
from devteam import VirtualCrew, ProjectManager, LLMFactory, RateLimiter
from devteam.agents import (
    ProductManager, SystemArchitect, SeniorDeveloper,
    CodeReviewer, QAEngineer, FinalQAEngineer, Reporter
)
from devteam.extensions import HumanInTheLoop, WorkspaceSaver

load_dotenv()

def build_crew(project_folder: Path, llm_factory: LLMFactory, checkpointer: AsyncSqliteSaver, rpm: int = 0) -> VirtualCrew:
    # Initialize agents using built-in prompt templates
    agents = {
        'pm': ProductManager.from_config('product-manager.md'),
        'architect': SystemArchitect.from_config('system-architect.md'),
        'developer': SeniorDeveloper.from_config('senior-developer.md'),
        'reviewer': CodeReviewer.from_config('code-reviewer.md'),
        'qa': QAEngineer.from_config('qa-engineer.md'),
        'final_qa': FinalQAEngineer.from_config('final-qa-engineer.md'),
        # Example: Forcing the reporter to use a more creative reasoning model
        'reporter': Reporter.from_config('reporter.md', model_category='reasoning', temperature=0.7)
    }

    # Add extensions like saving files to disk or requiring human approval
    extensions = [
        WorkspaceSaver(workspace_dir=project_folder),
        HumanInTheLoop()
    ]

    return VirtualCrew(
        manager=ProjectManager(),
        agents=agents,
        extensions=extensions,
        llm_factory=llm_factory,
        checkpointer=checkpointer,
        rate_limiter=RateLimiter(requests_per_minute=rpm) if rpm > 0 else None
    )

async def main():
    requirements = "Build a simple Python calculator CLI with basic arithmetic."
    workspace = Path('./workspaces/calculator_app')
    workspace.mkdir(parents=True, exist_ok=True)

    db_path = workspace / 'state.db'

    async with aiosqlite.connect(db_path) as conn:
        checkpointer = AsyncSqliteSaver(conn)
        crew = build_crew(workspace, llm_factory=LLMFactory(provider='groq'), checkpointer=checkpointer, rpm=30)

        print("🚀 Starting the AI Dev Team...")
        final_state = await crew.execute(
            thread_id="calc_run_01",
            requirements=requirements
        )

    if final_state.abort_requested:
        print("❌ Workflow aborted by user or validation failure.")
    elif final_state.success:
        print("🎉 Project completed successfully!")
        print(f"Total Revisions: {final_state.total_revisions}")
        if final_state.final_report:
            print(final_state.final_report)
    else:
        print("🚨 Release failed: Integration bugs found!")
        for bug in final_state.integration_bugs:
            print(f" - {bug}")

if __name__ == "__main__":
    asyncio.run(main())
```

## AI Agents

1) **Product Manager:** Analyzes requirements, asks clarifying questions, and writes detailed Technical Specifications.
2) **System Architect:** Breaks specifications down into a cohesive backlog of developer tasks.
3) **Senior Developer:** Incrementally writes code and unit tests for the current task.
4) **Code Reviewer:** Analyzes the generated code for security, style, and logic issues.
5) **QA Engineer:** Mentally simulates execution and evaluates the code against the task requirements.
6) **Final QA Engineer:** Performs a full-repository integration test once all tasks are complete.
7) **Reporter:** Generates a comprehensive final Markdown report for stakeholders.
