Metadata-Version: 2.4
Name: aicodestat
Version: 0.0.1
Summary: A local-first metrics tool that analyzes how you use AI coding assistants.
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: fastapi>=0.104.0
Requires-Dist: uvicorn[standard]>=0.24.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: rich>=13.0.0
Requires-Dist: questionary>=2.0.0
Requires-Dist: httpx>=0.25.0
Requires-Dist: mcp>=1.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"

## CodeStat · AI Code Metrics

> Quantify how much AI actually contributes to your codebase.

[![CI](https://img.shields.io/github/actions/workflow/status/2hangchen/CodeStat/ci.yml?style=flat-square&label=CI)](https://github.com/2hangchen/CodeStat/actions)
[![License](https://img.shields.io/github/license/2hangchen/CodeStat?style=flat-square)](LICENSE)

`CodeStat` is a local-first metrics tool that analyzes how you use AI coding assistants:
how many lines the AI generates, how many of those you keep, and how these numbers evolve over time.

> Chinese documentation: [`README.zh-CN.md`](./README.zh-CN.md)

---

## Features

- **Global dashboard for all data**  
  - AI-generated lines, adopted lines, adoption & generation rates  
  - File count, session count, quick bar chart overview

- **Multi‑dimension queries**  
  - **By file**: see how much of a file comes from AI and how much you kept  
  - **By session**: analyze one coding session with detailed diff lines  
  - **By project**: aggregate metrics for an entire repository

- **Agent / model comparison**  
  - Compare multiple sessions (agents / models / settings) side‑by‑side  
  - See which one actually produces more adopted code instead of just more tokens

- **Local‑first & privacy‑friendly**  
  - All metrics are computed locally from your own diffs  
  - No source code or prompts are sent to any remote service

- **Nice CLI UX**  
  - Rich‑based tables & colors, arrow‑key navigation  
  - Minimal but informative header (MCP status + repo info)
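The dashboard metrics above relate to each other in a simple way. As an illustrative sketch (these names and counts are hypothetical, not CodeStat's actual API): the adoption rate is the share of AI-generated lines still present after your edits, and the generation rate is the share of all changed lines that came from AI.

```python
# Illustrative sketch of the dashboard metrics; field names are
# hypothetical and not CodeStat's actual data model.
from dataclasses import dataclass

@dataclass
class SessionStats:
    ai_generated_lines: int   # lines the assistant produced
    ai_adopted_lines: int     # of those, lines still present after edits
    total_changed_lines: int  # all lines changed during the session

def adoption_rate(s: SessionStats) -> float:
    """Share of AI-generated lines that were kept."""
    return s.ai_adopted_lines / s.ai_generated_lines if s.ai_generated_lines else 0.0

def generation_rate(s: SessionStats) -> float:
    """Share of all changed lines that came from AI."""
    return s.ai_generated_lines / s.total_changed_lines if s.total_changed_lines else 0.0

s = SessionStats(ai_generated_lines=120, ai_adopted_lines=90, total_changed_lines=200)
print(f"adoption: {adoption_rate(s):.0%}, generation: {generation_rate(s):.0%}")
```

A high generation rate with a low adoption rate usually means the assistant writes a lot that you later delete, which is exactly the pattern the dashboard is meant to surface.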

---

## Demo

> TODO: add real screenshots / GIFs from your terminal

- **Global dashboard**

  *(insert GIF or screenshot here)*

- **Session metrics with diff lines**

  *(insert GIF or screenshot here)*

---

## Quickstart

### Install

```bash
git clone https://github.com/2hangchen/CodeStat.git
cd CodeStat
pip install -r requirements.txt
```

> Once published to PyPI you can alternatively run:  
> `pip install codestat-ai`

### Start the CLI

```bash
python cli/main.py
```

Use `↑/↓` to move, `Enter` to confirm.  
Choose **“📈 Global Dashboard (All Data)”** to see an overview of your local metrics.

---

## Typical Workflows

- **Measure your own AI usage**  
  - Record one or more coding sessions with your IDE + MCP server  
  - Run `CodeStat` and inspect:
    - AI-generated vs adopted lines
    - Which files receive the most AI help

- **Compare agents / models / prompts**  
  - Map different sessions to different agents / models  
  - Use **Compare Agents** to get a per‑session comparison table

- **Project‑level health check**  
  - For a given repo, run project metrics to see:
    - Where AI contributes the most
    - Whether AI‑generated code is actually being kept
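
The agent-comparison workflow boils down to grouping sessions by agent and ranking by adopted lines rather than raw generated lines. A minimal sketch of that idea (the session dicts and agent names here are hypothetical, not CodeStat's real storage format):

```python
# Hypothetical sketch of the "Compare Agents" aggregation: group
# per-session counts by agent and rank by adopted (kept) lines.
from collections import defaultdict

sessions = [
    {"agent": "agent-a", "generated": 300, "adopted": 120},
    {"agent": "agent-a", "generated": 150, "adopted": 90},
    {"agent": "agent-b", "generated": 500, "adopted": 110},
]

totals: dict[str, dict[str, int]] = defaultdict(lambda: {"generated": 0, "adopted": 0})
for s in sessions:
    totals[s["agent"]]["generated"] += s["generated"]
    totals[s["agent"]]["adopted"] += s["adopted"]

# Rank by adopted lines: the agent that generates the most is not
# necessarily the one whose code survives your review.
for agent, t in sorted(totals.items(), key=lambda kv: kv[1]["adopted"], reverse=True):
    rate = t["adopted"] / t["generated"]
    print(f"{agent}: {t['adopted']} adopted ({rate:.0%} of generated)")
```

Here `agent-b` generates the most lines but `agent-a` has both more adopted code and a higher adoption rate, which is the kind of distinction the comparison table is designed to show.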
