Metadata-Version: 2.4
Name: coverity-metrics
Version: 1.0.3
Summary: Comprehensive metrics and dashboard generator for Coverity static analysis
Author: Jouni Lehto
License: MIT
Project-URL: Homepage, https://github.com/lejouni/coverity_metrics
Project-URL: Documentation, https://github.com/lejouni/coverity_metrics/blob/main/README.md
Project-URL: Repository, https://github.com/lejouni/coverity_metrics
Project-URL: Bug Tracker, https://github.com/lejouni/coverity_metrics/issues
Keywords: coverity,static-analysis,metrics,dashboard,code-quality,security
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: psycopg2-binary>=2.9.0
Requires-Dist: pandas>=2.0.0
Requires-Dist: matplotlib>=3.7.0
Requires-Dist: seaborn>=0.12.0
Requires-Dist: python-dateutil>=2.8.0
Requires-Dist: openpyxl>=3.1.0
Requires-Dist: jinja2>=3.1.0
Requires-Dist: plotly>=5.18.0
Requires-Dist: tqdm>=4.66.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: flake8>=6.0.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Dynamic: license-file

# Coverity Metrics

A Python tool that generates comprehensive metrics and interactive dashboards from Coverity's PostgreSQL database.

## Overview

This tool analyzes Coverity static analysis data stored in PostgreSQL and generates various metrics to help you understand code quality, defect trends, and development team activity.

**Quick Start:**
```bash
# Install
pip install -e .

# Configure
cp config.json.example config.json
# Edit config.json with your database credentials

# Generate interactive dashboard
coverity-dashboard

# View technical debt and security metrics
# Check the "Trends & Progress" tab for technical debt estimation
# Check the "OWASP Top 10" and "CWE Top 25" tabs (project-level) for security
# Check the "Leaderboards" tab for team performance rankings
```

**What You Get:**
- 📊 Interactive HTML dashboards with Plotly visualizations
- 💰 Technical debt estimation (estimated hours/days to remediate)
- 🔒 OWASP Top 10 2025 security compliance mapping
- 🛡️ CWE Top 25 2025 dangerous weakness tracking
- 🏆 Team and project leaderboards for gamification
- 📈 Defect velocity and trend analysis
- 🎯 File hotspots and complexity metrics
- 👥 User activity and triage progress

## Features

**🆕 Latest Enhancements (2025):**
- **💰 Technical Debt Estimation**: Automated calculation of remediation effort (hours/days/weeks) based on defect severity
- **🔒 OWASP Top 10 2025**: Map defects to the latest OWASP web application security risks using CWE codes
- **🛡️ CWE Top 25 2025**: Track MITRE's most dangerous software weaknesses with industry rankings
- **🏆 Competitive Leaderboards**: Rank projects and users by fix velocity, improvements, and triage activity
- **📊 Enhanced Trends**: Defect velocity, cumulative trends, and fix-vs-introduction rate analysis

---

The tool provides the following metric categories:

### 1. **Defect Metrics**
- **Total Defects by Project**: Count of defects grouped by project with active/fixed breakdown
- **Defects by Severity**: Distribution across High/Medium/Low impact levels
- **Defects by Category**: Top defect categories (e.g., Security, Null pointer, Resource leak)
- **Defects by Checker**: Specific checkers finding the most defects
- **Defect Density**: Defects per 1000 lines of code (KLOC) by project/stream
- **File Hotspots**: Files with the highest concentration of defects

### 2. **Triage Metrics**
- **Defects by Triage Status**: Distribution by action (Fix Required, Ignore, etc.)
- **Defects by Classification**: Bug, False Positive, Intentional, etc.
- **Defects by Owner**: Defect ownership and assignment statistics

### 3. **Code Quality Metrics**
- **Code Metrics by Stream**: Lines of code, comment ratios, file counts
- **Function Complexity**: Distribution of cyclomatic complexity
- **Most Complex Functions**: Identify high-complexity functions needing refactoring
- **Comment Ratio**: Code documentation percentage

### 4. **Trend Metrics**
- **Weekly Defect Trend**: Defect count trends over time
- **Weekly File Count Trend**: Codebase growth tracking
- **Snapshot History**: Analysis run history with defect changes
- **Defect Velocity Trends**: Introduction vs fix rates over time
- **Cumulative Trend Analysis**: Long-term defect accumulation patterns
- **Technical Debt Estimation**: Hours/days/weeks to remediate all defects
  - Based on defect impact levels (High=4h, Medium=2h, Low=1h, Unspecified=0.5h)
  - Breakdown by severity with visual indicators
  - Total person-weeks capacity needed
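The effort weights above make the estimate straightforward to reproduce. A minimal sketch of the calculation, using hypothetical defect counts (the actual implementation lives in `get_technical_debt_summary()`; the dictionary shape here is illustrative):

```python
# Per-severity effort weights as documented above (hours per defect).
EFFORT_HOURS = {"High": 4.0, "Medium": 2.0, "Low": 1.0, "Unspecified": 0.5}

def estimate_technical_debt(defect_counts, hours_per_day=8, days_per_week=5):
    """Return total remediation effort for active defect counts by impact."""
    breakdown = {
        impact: {"defects": count, "hours": count * EFFORT_HOURS[impact]}
        for impact, count in defect_counts.items()
    }
    total_hours = sum(item["hours"] for item in breakdown.values())
    return {
        "breakdown": breakdown,
        "total_hours": total_hours,
        "total_days": total_hours / hours_per_day,
        "total_weeks": total_hours / (hours_per_day * days_per_week),
    }

# Hypothetical counts: 12 High, 30 Medium, 25 Low, 8 Unspecified
debt = estimate_technical_debt({"High": 12, "Medium": 30, "Low": 25, "Unspecified": 8})
print(debt["total_hours"])  # 12*4 + 30*2 + 25*1 + 8*0.5 = 137.0
```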

### 5. **User Activity Metrics**
- **Login Statistics**: User engagement with the system
- **Active Triagers**: Most active users in defect triage
- **Session Analytics**: Average session duration per user

### 6. **Security Compliance Metrics** (NEW!)
- **OWASP Top 10 2025**: Map defects to OWASP security categories
  - CWE-based mapping to 10 critical web application security risks
  - Severity breakdown (High/Medium/Low) per category
  - Coverage analysis showing which OWASP categories affect your code
  - Project-level security dashboards
- **CWE Top 25 2025**: Track MITRE's Most Dangerous Software Weaknesses
  - 25 ranked weaknesses based on real-world vulnerability data
  - Defect counts mapped to specific CWE IDs
  - Industry-standard danger scores and rankings
  - Helps prioritize remediation by recognized danger levels
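The CWE-based grouping works by looking up each defect's CWE code in a mapping table. A sketch of the idea with a tiny illustrative sample (the full mapping ships in `owasp_mapping.py`; the CWE IDs and category labels below are examples, not the real table):

```python
# Illustrative sample of a CWE -> OWASP category mapping (not the full table).
CWE_TO_OWASP = {
    "89":  "A03: Injection",                  # SQL injection
    "79":  "A03: Injection",                  # Cross-site scripting
    "287": "A07: Authentication Failures",    # Improper authentication
}

def owasp_breakdown(defect_cwes):
    """Count defects per OWASP category from a list of CWE id strings."""
    counts = {}
    for cwe in defect_cwes:
        category = CWE_TO_OWASP.get(cwe, "Unmapped")
        counts[category] = counts.get(category, 0) + 1
    return counts

print(owasp_breakdown(["89", "79", "89", "287", "400"]))
```

Defects whose CWE is absent from the mapping fall into an "Unmapped" bucket rather than being dropped, so totals stay reconcilable with the defect counts elsewhere in the dashboard.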

### 7. **Competitive Leaderboards** (NEW!)
- **Top Projects by Fix Rate**: Projects ranked by defect elimination velocity
- **Most Improved Projects**: Projects with best defect reduction trends
- **Top Projects by Triage Activity**: Most active triage engagement
- **Top Fixers (Users)**: Developers who eliminated the most defects
- **Top Triagers (Users)**: Most active users in defect classification
- **Most Collaborative Users**: Users working across multiple projects

### 8. **Performance Metrics**
- **Database Statistics**: Database size and growth tracking
- **Commit Performance**: Analysis duration (min/max/average times)
- **Snapshot Performance**: Recent commit performance with queue times
- **Defect Discovery Rate**: Daily/weekly defect discovery trends
- **System Analytics**: Largest tables, resource utilization

### 9. **Summary Metrics**
- Overall counts: projects, streams, defects, files, functions, LOC
- High severity defect counts
- Active user counts

## Installation

### From Source (Recommended)

```bash
# Clone or download this repository
git clone https://github.com/lejouni/coverity_metrics.git
cd coverity_metrics

# Install the package with all dependencies
pip install -e .
```

This installs the package in editable mode and makes the CLI commands (`coverity-dashboard`, `coverity-metrics`, `coverity-export`) available on your PATH within the active Python environment.

### From PyPI (Future)

```bash
# When published to PyPI
pip install coverity-metrics
```

### Requirements

The package includes these dependencies (automatically installed):
- `psycopg2-binary` - PostgreSQL database adapter
- `pandas` - Data analysis and manipulation
- `matplotlib` - Plotting library
- `seaborn` - Statistical data visualization
- `python-dateutil` - Date/time utilities
- `openpyxl` - Excel file support for CSV exports
- `jinja2` - HTML template engine for dashboard generation
- `plotly` - Interactive charts and visualizations
- `tqdm` - Progress bars

## Configuration

The tool requires configuration through `config.json`. Create this file with your Coverity instance(s) connection details:

```bash
cp config.json.example config.json
# Edit config.json with your database credentials
```

### Configuration File Format

```json
{
  "instances": [
    {
      "name": "Production",
      "description": "Production Coverity Instance",
      "enabled": true,
      "database": {
        "host": "coverity-server.company.com",
        "port": 5432,
        "database": "cim",
        "user": "coverity_ro",
        "password": "your_password_here"
      },
      "color": "#2c3e50"
    }
  ]
}
```

**Important:** 
- Add at least one instance with `"enabled": true`
- For single-instance mode: Configure one instance
- For multi-instance mode: Configure 2+ instances (auto-detected)
- Add `config.json` to `.gitignore` to protect credentials
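A minimal sketch of how the instance list drives mode detection, parsing an inline sample instead of `config.json` for illustration (field names match the format above):

```python
import json

# Inline sample mirroring config.json; the real CLI reads the file instead.
sample_config = json.loads("""
{
  "instances": [
    {"name": "Production", "enabled": true,
     "database": {"host": "coverity-server.company.com", "port": 5432,
                  "database": "cim", "user": "coverity_ro", "password": "..."}}
  ]
}
""")

enabled = [i for i in sample_config["instances"] if i.get("enabled")]
if not enabled:
    raise ValueError('At least one instance must have "enabled": true')

# 2+ enabled instances -> multi-instance mode; exactly 1 -> single-instance mode
mode = "multi-instance" if len(enabled) > 1 else "single-instance"
print(mode)  # single-instance
```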

## Database Schema

The tool works with the following key Coverity database tables:

- **defect**, **stream_defect**, **defect_instance** - Defect information
- **checker**, **checker_properties** - Checker and severity data (includes CWE codes)
- **triage_state**, **defect_triage** - Triage information
- **stream**, **stream_file**, **stream_function** - Code structure
- **snapshot**, **snapshot_element** - Analysis snapshots and defect lifecycle
- **project**, **project_stream** - Project organization
- **users**, **user_login** - User activity
- **weekly_issue_count**, **weekly_file_count** - Trend data
- **dynamic_enum** - Classification, action, and severity enumerations

**NEW - Security Metrics Support:**
- **checker_properties.cwe** - CWE (Common Weakness Enumeration) codes used for OWASP Top 10 and CWE Top 25 mapping
- **dynamic_enum** - Severity values (Major, Moderate, Minor, Unspecified) mapped to security risk levels
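The severity-to-risk-level translation can be pictured as a small lookup table. The mapping below is an assumption for illustration (verify it against the `dynamic_enum` contents in your own database):

```python
# Assumed mapping from Coverity severity enum values to the High/Medium/Low
# impact levels used throughout the dashboards (illustrative, not verified
# against every Coverity version).
SEVERITY_TO_IMPACT = {
    "Major": "High",
    "Moderate": "Medium",
    "Minor": "Low",
    "Unspecified": "Unspecified",
}

def impact_level(severity):
    """Translate a severity enum value, defaulting unknowns to Unspecified."""
    return SEVERITY_TO_IMPACT.get(severity, "Unspecified")

print(impact_level("Major"))  # High
```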

## Usage

After installation, you can use the package in two ways: **Command-Line Interface (CLI)** or **Python Library**.

### Command-Line Interface (CLI)

The package provides three CLI commands for different use cases:

| Command | Purpose | Output | Best For |
|---------|---------|--------|----------|
| **coverity-dashboard** | Visual HTML dashboard | Interactive HTML files with charts | Presentations, visual analysis, sharing |
| **coverity-metrics** | Console text report | Terminal output (stdout) | Quick checks, CI/CD, piping |
| **coverity-export** | Data export | CSV files | Excel analysis, archiving, integrations |

**Key Differences:**

- **coverity-dashboard**: Creates beautiful interactive HTML dashboards with Plotly charts, saved to `output/` directory. Auto-opens in browser for easy viewing. Supports multi-instance aggregation.

- **coverity-metrics**: Prints all metrics as formatted text tables directly to your terminal. No files created. Great for quick command-line checks or redirecting to log files (`coverity-metrics > report.txt`).

- **coverity-export**: Exports raw metric data to timestamped CSV files in `exports/` directory. Perfect for importing into Excel, Power BI, or custom analysis tools.

**Note**: All three tools require direct PostgreSQL database access. CSV exports cannot be used as input to generate dashboards—they're export-only for external analysis.

---

#### 1. Generate Dashboard (Main Tool)

```bash
# Basic usage - auto-detects instance type from config.json
coverity-dashboard

# Filter by specific project across all instances
coverity-dashboard --project "MyProject"

# Generate for specific instance only
coverity-dashboard --instance Production

# Change trend analysis period (default: 365 days)
coverity-dashboard --days 180

# Custom output folder
coverity-dashboard --output reports/2026

# Enable caching for better performance (TTL is given in hours)
coverity-dashboard --cache --cache-ttl 24

# Generate without opening browser
coverity-dashboard --no-browser  

# Use different configuration file
coverity-dashboard --config my-config.json
```

**Auto-Detection Behavior:**
- **config.json is required** with at least one enabled instance configured
- If `config.json` has **2+ enabled instances**: Multi-instance mode (generates aggregated + per-instance + per-project dashboards)
- If `config.json` has **1 enabled instance**: Single-instance mode (generates dashboard for that instance)
- Use `--project` to filter by specific project only
- Use `--instance` to generate for specific instance only (multi-instance mode)
- Use `--single-instance-mode` to force single-instance behavior even with multiple instances

### CLI Parameters Reference

#### coverity-dashboard Parameters

| Parameter | Short | Type | Default | Description |
|-----------|-------|------|---------|-------------|
| `--project` | `-p` | string | None | Filter metrics by specific project name |
| `--output` | `-o` | string | `output` | Output folder path for dashboard files |
| `--no-browser` | - | flag | False | Do not open dashboard in browser automatically |
| `--config` | `-c` | string | `config.json` | Path to configuration file |
| `--instance` | `-i` | string | None | Generate dashboard for specific instance only |
| `--single-instance-mode` | - | flag | False | Force single-instance mode even with multiple instances in config |
| `--cache` | - | flag | False | Enable caching to speed up subsequent generations |
| `--cache-dir` | - | string | `cache` | Directory for cache files |
| `--cache-ttl` | - | integer | `24` | Cache time-to-live in hours |
| `--clear-cache` | - | flag | False | Clear all cached data before generating |
| `--cache-stats` | - | flag | False | Display cache statistics and exit |
| `--no-cache` | - | flag | False | Force refresh data from database, bypass cache |
| `--days` | `-d` | integer | `365` | Number of days for trend analysis |
| `--track-progress` | - | flag | False | Enable progress tracking for large operations |
| `--resume` | - | string | None | Resume from interrupted session (provide session ID) |

**Examples:**
```bash
# Basic dashboard with caching
coverity-dashboard --cache

# Filter by project with 180-day trends
coverity-dashboard --project "MyApp" --days 180

# Generate without browser, custom output
coverity-dashboard --no-browser --output reports/weekly

# Clear cache and regenerate
coverity-dashboard --clear-cache --no-cache

# View cache statistics
coverity-dashboard --cache-stats
```

#### coverity-metrics Parameters

**No command-line parameters available.** This tool runs with default settings and outputs to the terminal.

The tool:
- Automatically uses the first enabled instance from `config.json`
- Prints formatted tables directly to stdout
- Can be redirected to files: `coverity-metrics > report.txt`

#### coverity-export Parameters

**No command-line parameters available.** This tool runs with default settings.

The tool:
- Automatically uses the first enabled instance from `config.json`
- Exports to `exports/` directory with timestamped filenames
- Creates CSV files for all available metrics

---

#### 2. Console Metrics Report

**Outputs**: Text tables printed to terminal (no files created)

```bash
# Generate console metrics report
coverity-metrics

# Redirect to file
coverity-metrics > daily-report.txt

# Redirect with timestamp
coverity-metrics > "report-$(date +%Y%m%d).txt"
```

**Use Cases:**
- Quick command-line checks
- Automated CI/CD pipelines
- SSH sessions without GUI
- Piping to log files or other tools

**Note:** This tool has no command-line parameters. To filter by project or instance, modify `config.json` before running.

#### 3. CSV Export

**Outputs**: Timestamped CSV files in `exports/` directory

```bash
# Export metrics to CSV
coverity-export
```

**Files Created:**
- `defects_by_project_YYYYMMDD_HHMMSS.csv`
- `defects_by_severity_YYYYMMDD_HHMMSS.csv`
- `defect_density_YYYYMMDD_HHMMSS.csv`
- `file_hotspots_YYYYMMDD_HHMMSS.csv`
- `code_metrics_YYYYMMDD_HHMMSS.csv`
- ...and more

**Use Cases:**
- Excel pivot tables and analysis
- Power BI / Tableau dashboards
- Custom Python/R data analysis
- Archiving historical metrics
- Third-party tool integrations

**Note:** This tool has no command-line parameters. Files are always saved to the `exports/` directory with timestamps.
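Because the filenames embed a `YYYYMMDD_HHMMSS` timestamp, they sort chronologically, which makes picking up the latest export easy. A sketch of loading the newest `defects_by_project` export with pandas (the helper name is ours, not part of the package):

```python
import glob

import pandas as pd

def latest_export(pattern="exports/defects_by_project_*.csv"):
    """Return the newest matching export file; timestamped names sort
    chronologically, so the lexicographically last file is the newest."""
    files = sorted(glob.glob(pattern))
    return files[-1] if files else None

path = latest_export()
if path:
    df = pd.read_csv(path)
    print(df.head())
```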

---

### Typical Workflow

**Daily Quick Check:**
```bash
# Fast terminal check
coverity-metrics
```

**Weekly Team Review:**
```bash
# Generate visual dashboard for presentation
coverity-dashboard --cache
# Opens interactive HTML in browser
```

**Monthly Executive Report:**
```bash
# Visual dashboard
coverity-dashboard --days 90 --cache

# Export data for custom Excel charts
coverity-export
```

**Complete Analysis Workflow:**
```bash
# 1. Quick overview in terminal
coverity-metrics

# 2. Generate interactive dashboard
coverity-dashboard --cache --no-browser

# 3. Export raw data for deep analysis
coverity-export

# Now you have:
# - Console output for quick reference
# - HTML dashboard (output/dashboard.html) for presentations
# - CSV files (exports/*.csv) for custom Excel analysis
```

### Python Library Usage

You can also use the package programmatically in your Python code:

```python
from coverity_metrics import CoverityMetrics, MultiInstanceMetrics, InstanceConfig

# Single instance usage
metrics = CoverityMetrics(
    connection_params={
        'host': 'localhost',
        'port': 5432,
        'database': 'coverity',
        'user': 'postgres',
        'password': 'your_password'
    },
    project_name='MyProject'  # Optional project filter
)

# Get metrics with default limits (top N results)
top_categories = metrics.get_defects_by_checker_category(limit=10)  # Top 10
file_hotspots = metrics.get_file_hotspots(limit=20)  # Top 20

# Get ALL data using fetch_all parameter
all_categories = metrics.get_defects_by_checker_category(fetch_all=True)  # All categories
all_hotspots = metrics.get_file_hotspots(fetch_all=True)  # All files with defects
all_snapshots = metrics.get_snapshot_history(fetch_all=True)  # All snapshot history

# NEW! Technical Debt Estimation
tech_debt = metrics.get_technical_debt_summary()
print(f"Total effort: {tech_debt['total_hours']} hours ({tech_debt['total_days']} days)")
print(f"High impact: {tech_debt['breakdown']['High']['hours']} hours")

# NEW! Security Compliance Metrics
owasp_metrics = metrics.get_owasp_top10_metrics()  # OWASP Top 10 2025
cwe_metrics = metrics.get_cwe_top25_metrics()      # CWE Top 25 2025

# NEW! Leaderboard Metrics
top_fixers = metrics.get_top_users_by_fixes(days=30, limit=10)
top_projects = metrics.get_top_projects_by_fix_rate(days=30, limit=10)
improved_projects = metrics.get_most_improved_projects(days=90, limit=10)

# Other methods with fetch_all support:
# - get_defects_by_checker_name(limit=20, fetch_all=False)
# - get_defects_by_owner(limit=20, fetch_all=False)
# - get_most_complex_functions(limit=20, fetch_all=False)

# Multi-instance usage (connection parameters take the same shape as the
# single-instance example above)
instances = [
    InstanceConfig("Production", {'host': 'coverity-prod.company.com', 'port': 5432,
                                  'database': 'cim', 'user': 'coverity_ro',
                                  'password': 'your_password'}),
    InstanceConfig("Development", {'host': 'coverity-dev.company.com', 'port': 5432,
                                   'database': 'cim', 'user': 'coverity_ro',
                                   'password': 'your_password'})
]

multi = MultiInstanceMetrics(instances)
aggregated = multi.get_aggregated_metrics()
```

See [INSTALL.md](INSTALL.md) for detailed API examples.

### Dashboard Features
- **Project Filtering**: View metrics for all projects or filter by specific project
- **Project Navigation**: Easy navigation between project-specific dashboards
- **Tabbed Interface**: Organized into multiple specialized views:
  - **Overview**: Summary metrics, defect distribution, severity analysis
  - **Code Quality**: Complexity metrics, hotspots, code coverage
  - **Performance & Analytics**: Database stats, commit performance
  - **Trends & Progress**: Velocity trends, triage progress, **technical debt estimation**
  - **Leaderboards**: 🏆 Competitive rankings (projects, users, fixers, triagers)
  - **OWASP Top 10**: 🔒 Security compliance (project-level only)
  - **CWE Top 25**: 🛡️ Dangerous weakness tracking (project-level only)
- Summary cards with key metrics and visual indicators
- Interactive Plotly charts for severity distribution, project comparison
- File hotspots with detailed tables and defects per KLOC
- Code quality metrics visualization
- Function complexity distribution
- Top defect checkers and categories
- **Technical Debt Metrics** (NEW!):
  - Total estimated hours/days/weeks to fix all defects
  - Breakdown by impact level (High/Medium/Low/Unspecified)
  - Industry-standard effort estimates per severity
  - Visual cards with color-coded severity indicators
- **Security Compliance** (NEW!):
  - OWASP Top 10 2025 categories with CWE mappings
  - CWE Top 25 2025 most dangerous weaknesses
  - Severity breakdown per category/weakness
  - Project-level security dashboards only
- **Leaderboard Rankings** (NEW!):
  - Top 10 projects by fix velocity, improvement, triage activity
  - Top 10 users by actual fixes (code eliminations)
  - Top 10 triagers by classification activity
  - Most collaborative users across projects
- **Performance metrics**:
  - Database size and statistics
  - Commit/analysis performance (min/max/average times)
  - Recent snapshot performance with queue times
  - Defect discovery rate trends
  - Largest database tables
- Responsive design for mobile/tablet viewing
- Print-friendly layout

**Dashboard Files Generated:**
- `output/dashboard.html` - Global view of all projects
- `output/dashboard_{ProjectName}.html` - Project-specific dashboards

### Multi-Instance Support

**For environments with multiple Coverity instances, the tool now auto-detects your configuration:**

Configure multiple Coverity instances in `config.json`:

```json
{
  "instances": [
    {
      "name": "Production",
      "description": "Production Coverity Instance",
      "enabled": true,
      "database": {
        "host": "coverity-prod.company.com",
        "port": 5432,
        "database": "cim",
        "user": "coverity_ro",
        "password": "your_password"
      },
      "color": "#2c3e50"
    },
    {
      "name": "Development",
      "description": "Development Coverity Instance",
      "enabled": true,
      "database": {
        "host": "coverity-dev.company.com",
        "port": 5432,
        "database": "cim",
        "user": "coverity_ro",
        "password": "your_password"
      },
      "color": "#3498db"
    }
  ],
  "aggregated_view": {
    "enabled": true,
    "name": "All Instances"
  }
}
```

**Simplified Multi-Instance Commands:**

```bash
# Generate everything - automatically creates:
#   - Aggregated dashboard across all instances
#   - Individual dashboard for each instance
#   - Project dashboards for all projects in each instance
coverity-dashboard

# Filter by specific project across all instances
coverity-dashboard --project MyApp

# Generate for specific instance only (with all its projects)
coverity-dashboard --instance Production

# Generate specific project on specific instance only
coverity-dashboard --instance Production --project MyApp

# Use custom configuration file
coverity-dashboard --config my-config.json
```

**What Gets Generated Automatically:**

When you run `coverity-dashboard` with a multi-instance config.json:
1. **Aggregated Dashboard** (`output/dashboard_aggregated.html`) - Combined view of all instances
2. **Instance Dashboards** (`output/{InstanceName}/dashboard.html`) - One per instance
3. **Project Dashboards** (`output/{InstanceName}/dashboard_{ProjectName}.html`) - All projects for each instance

**Multi-Instance Dashboard Features:**
- **Aggregated View**: Combined metrics from all Coverity instances
- **Instance Comparison Charts**: Side-by-side defect count comparison
- **Color-Coded Instances**: Visual differentiation of instances
- **Cross-Instance Project List**: All projects with instance attribution
- **Per-Instance Dashboards**: Individual dashboards for each instance
- **Instance Filtering**: Navigate between instances easily

For detailed multi-instance setup and usage, see [MULTI_INSTANCE_GUIDE.md](MULTI_INSTANCE_GUIDE.md).

### Performance & Caching

**For large deployments with many instances/projects, enable caching to dramatically improve performance:**

```bash
# Enable caching (24-hour TTL by default)
coverity-dashboard --cache

# Custom cache TTL (48 hours)
coverity-dashboard --cache --cache-ttl 48

# View cache statistics
coverity-dashboard --cache-stats

# Clear expired cache entries
coverity-dashboard --clear-cache

# Force refresh (bypass cache)
coverity-dashboard --no-cache
```

**Performance Benefits:**
- **First run**: Same time as without caching (cache is built)
- **Subsequent runs**: 90-95% faster (uses cached data)
- **Example**: 30 minutes → 2 minutes for 10 instances × 100 projects

**Progress Tracking for Large Operations:**

```bash
# Enable progress tracking (for resumable operations)
coverity-dashboard --cache --track-progress

# Resume interrupted session
coverity-dashboard --cache --resume SESSION_ID
```

For detailed caching configuration, performance tuning, and troubleshooting, see [CACHING_GUIDE.md](CACHING_GUIDE.md).
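Conceptually, a TTL-based file cache treats an entry as fresh when its modification time falls within the TTL window. An illustrative sketch of that check (not the actual `metrics_cache` implementation):

```python
import os
import time

def is_fresh(path, ttl_hours=24):
    """A cache entry is fresh if the file exists and its modification
    time is within the TTL window."""
    if not os.path.exists(path):
        return False
    age_seconds = time.time() - os.path.getmtime(path)
    return age_seconds < ttl_hours * 3600
```

On a cache miss (stale or missing entry) the tool falls back to querying the database and rewrites the entry, which is why the first run pays full cost and subsequent runs are fast.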

### Export to CSV

Export all metrics to CSV files:

```bash
coverity-export
```

This creates timestamped CSV files in the `exports/` directory for Excel analysis.

### Use Individual Metrics

You can also use the metrics module programmatically:

```python
from coverity_metrics import CoverityMetrics

# Initialize with connection parameters
connection_params = {
    'host': 'localhost',
    'port': 5432,
    'database': 'coverity',
    'user': 'postgres',
    'password': 'your_password'
}

metrics = CoverityMetrics(connection_params=connection_params)

# Get specific metrics (top N results)
defects_by_severity = metrics.get_defects_by_severity()
print(defects_by_severity)

# Get defect density
density = metrics.get_defect_density_by_project()
print(density)

# Get top 10 file hotspots
hotspots = metrics.get_file_hotspots(limit=10)
print(hotspots)

# Get ALL file hotspots (not just top 10)
all_hotspots = metrics.get_file_hotspots(fetch_all=True)
print(f"Found {len(all_hotspots)} files with defects")

# Get overall summary
summary = metrics.get_overall_summary()
for key, value in summary.items():
    print(f"{key}: {value}")
```

### Available Metric Methods

All methods return pandas DataFrames for easy manipulation:

**Defect Metrics:**
- `get_total_defects_by_project()`
- `get_defects_by_severity()`
- `get_defects_by_checker_category(limit=20, fetch_all=False)`
- `get_defects_by_checker_name(limit=20, fetch_all=False)`
- `get_defect_density_by_project()`
- `get_file_hotspots(limit=20, fetch_all=False)`

**Triage Metrics:**
- `get_defects_by_triage_status()`
- `get_defects_by_classification()`
- `get_defects_by_owner(limit=20, fetch_all=False)`

**Code Quality Metrics:**
- `get_code_metrics_by_stream()`
- `get_function_complexity_distribution()`
- `get_most_complex_functions(limit=20, fetch_all=False)`

**Trend Metrics:**
- `get_defect_trend_weekly(weeks=12)`
- `get_file_count_trend_weekly(weeks=12)`
- `get_snapshot_history(stream_name=None, limit=20, fetch_all=False)`
- `get_defect_velocity_trend(days=90)` - NEW! Introduction vs fix rates
- `get_cumulative_defect_trend(days=90)` - NEW! Long-term accumulation
- `get_defect_trend_summary(days=90)` - NEW! Velocity metrics and trend direction
- `get_technical_debt_summary()` - NEW! Estimated remediation effort

**Security Compliance Metrics:**
- `get_owasp_top10_metrics()` - NEW! OWASP Top 10 2025 category mapping
- `get_cwe_top25_metrics()` - NEW! CWE Top 25 2025 dangerous weaknesses

**Leaderboard Metrics:**
- `get_top_projects_by_fix_rate(days=30, limit=10)` - NEW! Projects by fix velocity
- `get_most_improved_projects(days=90, limit=10)` - NEW! Best improvement trends
- `get_top_projects_by_triage_activity(days=30, limit=10)` - NEW! Most active triage
- `get_top_users_by_fixes(days=30, limit=10)` - NEW! Users by actual code fixes
- `get_top_triagers(days=30, limit=10)` - NEW! Most active triagers
- `get_most_collaborative_users(days=30, limit=10)` - NEW! Cross-project activity

**User Activity:**
- `get_user_login_statistics(days=30)`
- `get_most_active_triagers(days=30, limit=10)`

**Performance Metrics:**
- `get_database_statistics()` - Database size and statistics
- `get_largest_tables(limit=10)` - Largest database tables by size
- `get_snapshot_performance(limit=20)` - Recent commit/analysis performance
- `get_commit_time_statistics()` - Commit time averages and statistics
- `get_defect_discovery_rate(days=30)` - Defect discovery trends over time

**Summary:**
- `get_overall_summary()`
- `get_available_projects()` - List all available projects

**Note on `fetch_all` parameter:**
- When `fetch_all=False` (default): Returns top N results based on the `limit` parameter
- When `fetch_all=True`: Returns ALL available results (ignores `limit`)
- Use `fetch_all=True` for complete data exports or comprehensive analysis
- Example: `metrics.get_file_hotspots(fetch_all=True)` returns ALL files with defects, not just top 20
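The `limit`/`fetch_all` convention boils down to a simple rule, sketched here as a simplified stand-in (not the library's internals):

```python
def apply_limit(rows, limit=20, fetch_all=False):
    """Return all rows when fetch_all is True; otherwise the top `limit` rows."""
    return rows if fetch_all else rows[:limit]

rows = list(range(100))
assert len(apply_limit(rows)) == 20              # default limit
assert len(apply_limit(rows, limit=5)) == 5      # explicit limit
assert len(apply_limit(rows, fetch_all=True)) == 100  # limit ignored
```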

## Recommended Metrics for Different Use Cases

### For Management/Executive Reports:
1. **Overall Summary** - High-level statistics
2. **Defects by Severity** - Risk assessment
3. **Defect Density by Project** - Quality comparison across projects
4. **Weekly Defect Trend** - Progress over time
5. **Defects by Triage Status** - Workload and backlog
6. **Technical Debt Summary** - NEW! Estimated remediation effort
7. **Top Projects by Fix Rate** - NEW! Team performance ranking

### For Development Teams:
1. **File Hotspots** - Identify problematic files
2. **Most Complex Functions** - Refactoring candidates
3. **Defects by Category** - Common error patterns
4. **Defects by Owner** - Individual workload
5. **Snapshot History** - Analysis run results
6. **Top Fixers** - NEW! Recognize high performers
7. **CWE Top 25** - NEW! Focus on dangerous weaknesses

### For Quality Assurance:
1. **Defects by Checker** - Tool effectiveness
2. **Defects by Classification** - False positive rate
3. **Code Metrics by Stream** - Code coverage
4. **Function Complexity** - Code maintainability
5. **Defect Density** - Quality benchmarks
6. **Technical Debt Summary** - NEW! Remediation planning

### For Security Teams:
1. **OWASP Top 10 Metrics** - NEW! Web application security risks
2. **CWE Top 25 Metrics** - NEW! Most dangerous weaknesses
3. **Defects by Severity** - Critical vulnerability counts
4. **Security Category Defects** - Security-specific findings
5. **Technical Debt (High Severity)** - NEW! Security fix effort estimation

### For Team Leads:
1. **Active Triagers** - Team engagement
2. **Defects by Owner** - Work distribution
3. **User Login Statistics** - Tool adoption
4. **Weekly Trends** - Team velocity
5. **Top Fixers and Triagers** - NEW! Team performance metrics
6. **Most Improved Projects** - NEW! Progress recognition

## Project Structure

```
coverity_metrics/
├── config.json                # Database configuration (create from config.json.example)
├── config.json.example        # Configuration template
├── __init__.py                # Package initialization
├── __version__.py             # Version information
├── db_connection.py           # Database connection handling
├── metrics.py                 # Core metrics calculation logic
├── metrics_cache.py           # Caching implementation for performance
├── multi_instance_metrics.py  # Multi-instance support
├── owasp_mapping.py           # NEW! OWASP Top 10 2025 CWE mappings (494 CWEs)
├── cwe_top25_mapping.py       # NEW! CWE Top 25 2025 rankings and scores
├── cli/
│   ├── dashboard.py           # Dashboard generator (main CLI)
│   ├── report.py              # CLI metrics report
│   └── export.py              # CSV export utility
├── templates/                 # HTML dashboard templates
│   └── dashboard.html         # Main dashboard template with all tabs
├── static/                    # CSS/JS assets for dashboards
│   ├── css/
│   └── js/
├── cache/                     # Cache directory (auto-created)
├── output/                    # Generated dashboards (auto-created)
├── exports/                   # CSV exports (auto-created)
├── requirements.txt           # Python dependencies
├── setup.py                   # Package setup
├── pyproject.toml             # Modern Python packaging
├── README.md                  # This file
├── INSTALL.md                 # Detailed installation guide
├── USAGE_GUIDE.md             # Comprehensive usage examples
├── MULTI_INSTANCE_GUIDE.md    # Multi-instance setup and usage
├── CACHING_GUIDE.md           # Performance optimization guide
└── RELEASE_NOTES.md           # Version history and changelog
```

## Extending the Tool

You can easily add new metrics by extending the `CoverityMetrics` class:

```python
class CoverityMetrics:
    # ... existing methods ...
    
    def get_custom_metric(self):
        """Your custom metric description"""
        query = """
            SELECT ...
            FROM ...
        """
        results = self.db.execute_query_dict(query)
        return pd.DataFrame(results)
```
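Assuming `execute_query_dict` returns one dict per row (as the pattern above suggests), the `pd.DataFrame(results)` conversion behaves like this, shown with hypothetical rows in place of real query output:

```python
import pandas as pd

# Hypothetical query output: one dict per row, keys become column names.
rows = [
    {"project": "MyApp", "defects": 42},
    {"project": "Backend", "defects": 17},
]
df = pd.DataFrame(rows)
print(df.sort_values("defects", ascending=False))
```

Returning DataFrames keeps custom metrics consistent with the built-in methods, so they can be filtered, joined, and exported the same way.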

## Troubleshooting

### Database Connection Issues
- Verify PostgreSQL is running: Check Coverity services
- Check credentials in `config.json`
- Ensure PostgreSQL port (default 5432) is accessible
- Verify at least one instance is enabled in config.json

### Missing Data
- Some metrics may return empty if:
  - No snapshots have been committed
  - Streams haven't been analyzed
  - Defects haven't been triaged

### Performance
- For large databases, some queries may take time
- Consider adding database indexes on frequently queried columns
- Use the `limit` parameter to restrict result sizes

## Security Notes

- Database passwords are stored in `config.json` in plain text
- **Always** add `config.json` to `.gitignore` before committing
- Use read-only database credentials when possible
- Set appropriate file system permissions on `config.json`
- Use environment variables or a secure vault in production
- Never commit database credentials to version control

```bash
# Recommended file permissions (Linux/Mac)
chmod 600 config.json

# Add to .gitignore
echo "config.json" >> .gitignore
```
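One way to keep the password out of `config.json` entirely: leave the field blank and inject it from an environment variable at load time. An illustrative pattern (the variable name `COVERITY_DB_PASSWORD` and the `load_config` helper are our own, not part of the package):

```python
import json
import os

def load_config(path="config.json"):
    """Load config.json, overriding each instance's password from the
    COVERITY_DB_PASSWORD environment variable when it is set."""
    with open(path) as f:
        config = json.load(f)
    for inst in config.get("instances", []):
        db = inst.get("database", {})
        db["password"] = os.environ.get("COVERITY_DB_PASSWORD",
                                        db.get("password"))
    return config
```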

## License

Licensed under the MIT License (see the `LICENSE` file). The tool is provided as-is for use with Coverity installations.

## Support

For issues or questions:
1. Check the Coverity documentation for database schema details
2. Review the SQL queries in `metrics.py` to understand data sources
3. Use `schema_explorer.py` to investigate your specific database structure
