{% endif %}
{% if current_instance and instance_info %}
⚙️ Instance Information
{% if instance_info.coverity_version %}
Coverity Version
{{ instance_info.coverity_version }}
{% if instance_info.coverity_build %}
Build: {{ instance_info.coverity_build }}
{% endif %}
{% endif %}
{% if instance_info.db_uptime_formatted %}
Database Uptime
{{ instance_info.db_uptime_formatted }}
{% if instance_info.db_start_time %}
Started: {{ (instance_info.db_start_time if instance_info.db_start_time is string else instance_info.db_start_time.strftime('%Y-%m-%d %H:%M'))[:16] if instance_info.db_start_time else 'N/A' }}
{% endif %}
{% endif %}
{% if instance_info.first_snapshot %}
System Active Period
{{ instance_info.usage_period_days }} days
Since: {{ (instance_info.first_snapshot if instance_info.first_snapshot is string else instance_info.first_snapshot.strftime('%Y-%m-%d'))[:10] if instance_info.first_snapshot else 'N/A' }}
{% endif %}
{% if instance_info.last_activity_formatted %}
Last Activity
{{ instance_info.last_activity_formatted }}
{% if instance_info.last_snapshot %}
{{ (instance_info.last_snapshot if instance_info.last_snapshot is string else instance_info.last_snapshot.strftime('%Y-%m-%d %H:%M'))[:16] if instance_info.last_snapshot else 'N/A' }}
{% endif %}
{% endif %}
{% if instance_info.database_name %}
Database
{{ instance_info.database_name }}
{% if instance_info.active_connections %}
{{ instance_info.active_connections }} active connections
{% endif %}
{% endif %}
{% if instance_info.system_unique_id %}
System ID
{{ instance_info.system_unique_id }}
{% endif %}
{% endif %}
{% if not current_project %}
{% endif %}
{% if current_project and owasp_metrics %}
{% endif %}
{% if current_project and cwe_top25_metrics %}
{% endif %}
Total Projects
ℹ
Number of distinct Coverity projects configured in this instance. Projects represent separate codebases or applications being analyzed.
{{ summary.total_projects|default(0) }}
Active Streams
ℹ
Number of active analysis streams. A stream is a code branch or version being tracked. Each project can have multiple streams (e.g., main, develop, release branches).
{{ summary.total_streams|default(0) }}
Active Defects
ℹ
Total count of unresolved defects found by Coverity analysis. This includes all defects not marked as Fixed or dismissed. Excludes false positives and intentional issues.
{{ summary.total_defects|default(0) }}
High Severity
ℹ
Count of defects classified as 'Major' impact level. These represent serious issues requiring immediate attention. Severity is determined by the checker's impact classification.
{{ summary.high_severity_defects|default(0) }}
Total Files
ℹ
Total number of source code files analyzed across all streams. This count includes all files submitted for analysis in the latest snapshots.
{{ summary.total_files|default(0) }}
Lines of Code
ℹ
Total lines of code (LOC) counted across all analyzed files. This includes code lines only, excluding comments and blank lines. Summed from all active streams.
{{ "{:,}".format(summary.total_loc|default(0)) }}
Functions
ℹ
Total number of functions/methods identified in the analyzed codebase. Functions are counted across all files and streams currently being tracked.
{{ summary.total_functions|default(0) }}
Active Users
ℹ
{% if current_project %}Number of unique users who have triage, comment, or snapshot commit activity on this project. Excludes 'system' and 'reporter' users.{% else %}Total number of user accounts in the Coverity system, excluding deleted and system users.{% endif %}
{% if ver.first_used %}
{{ (ver.first_used if ver.first_used is string else ver.first_used.strftime('%Y-%m-%d %H:%M'))[:16] if ver.first_used else 'N/A' }}
{% else %}
N/A
{% endif %}
{% if ver.last_used %}
{{ (ver.last_used if ver.last_used is string else ver.last_used.strftime('%Y-%m-%d %H:%M'))[:16] if ver.last_used else 'N/A' }}
{% else %}
N/A
{% endif %}
{% endfor %}
{% endif %}
Defects by Severity
ℹ
Distribution of defects across severity levels (Major, Moderate, Minor, Unspecified). Severity is based on the checker's impact classification in Coverity. Major = High impact, Moderate = Medium impact, Minor = Low impact.
{% if current_project %}Defects by Stream{% else %}Defects by Project{% endif %}
ℹ
{% if current_project %}Total defect count per stream within this project, showing both active defects and fixed defects.{% else %}Total defect count per project, showing both active defects and fixed defects. Calculated by counting all defects (including triaged ones) grouped by project name.{% endif %}
{% if current_project %}Stream Name{% else %}Project Name{% endif %}
Total Defects
Active
Fixed
{% for row in defects_by_project %}
{{ row.project_name }}
{{ row.defect_count }}
{{ row.active_defects }}
{{ row.fixed_defects }}
{% endfor %}
Top Defect Categories
ℹ
Most common defect categories found in the codebase. Categories group related checkers (e.g., Security, Null pointer, Resource leak). Count represents defects in each category, ranked by frequency.
Category
Count
{% for row in defects_by_category[:10] %}
{{ row.category }}
{{ row.defect_count }}
{% endfor %}
Defect Density (per KLOC)
ℹ
Defects per thousand lines of code (KLOC). Calculated as: (Total Defects / Lines of Code) × 1000. Lower values indicate better code quality. Useful for comparing projects of different sizes.
Stream
Defects/KLOC
Total LOC
{% for row in defect_density %}
{{ row.stream_name }}
{{ row.defects_per_kloc }}
{{ "{:,}".format(row.total_loc) }}
{% endfor %}
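{# Developer note, kept as a Jinja comment so it never renders: the density
tooltip above defines Defects/KLOC as (Total Defects / Lines of Code) × 1000.
A minimal Python sketch of that calculation; the two-decimal rounding and the
zero-LOC guard are assumptions, not necessarily what the report backend does:

```python
def defects_per_kloc(defect_count: int, total_loc: int) -> float:
    """Defect density: (defects / lines of code) * 1000."""
    if total_loc <= 0:
        return 0.0  # guard against empty streams (assumed behavior)
    return round(defect_count / total_loc * 1000, 2)

# Hypothetical stream: 42 active defects across 120,000 lines of code
print(defects_per_kloc(42, 120_000))  # -> 0.35
```
#}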
File Hotspots - Top Problem Files
ℹ
Files with the highest defect concentration. Shows total defects per file and density (defects per KLOC). Calculated by: Defects per KLOC = (File Defects / File LOC) × 1000. Helps identify files needing refactoring.
{% if file_hotspots|length > 0 %}
Attention Required: These files have the highest defect concentrations.
File Path
Defect Count
LOC
Defects/KLOC
{% for row in file_hotspots[:15] %}
{{ row.file_path }}
{{ row.defect_count }}
{{ row.loc }}
{{ row.defects_per_kloc }}
{% endfor %}
{% else %}
No significant file hotspots detected. Great job!
{% endif %}
Code Quality Metrics by Stream
ℹ
Code metrics for each stream: LOC (lines of code), files, functions, and comment ratio. Comment ratio = (Comment lines / Code lines) × 100%. Higher comment ratios indicate better documentation.
Stream
Files
Total LOC
Avg File LOC
Comment Ratio
{% for row in code_metrics %}
{{ row.stream_name }}
{{ row.file_count }}
{{ "{:,}".format(row.total_loc) }}
{{ row.avg_file_loc|round(0)|int }}
{{ row.comment_ratio_pct }}%
{% endfor %}
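{# Developer note (template comment, not rendered): the comment-ratio tooltip
above defines the metric as (Comment lines / Code lines) × 100%. A sketch of
that formula in Python; the one-decimal rounding is an assumption:

```python
def comment_ratio_pct(comment_lines: int, code_lines: int) -> float:
    """Comment ratio: (comment lines / code lines) * 100."""
    if code_lines <= 0:
        return 0.0  # assumed guard for streams with no counted code lines
    return round(comment_lines / code_lines * 100, 1)

# Hypothetical stream: 1,500 comment lines against 10,000 code lines
print(comment_ratio_pct(1_500, 10_000))  # -> 15.0
```
#}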
{% if complexity_distribution|length > 0 %}
Function Complexity Distribution
ℹ
Distribution of cyclomatic complexity across all functions. Complexity measures the number of independent paths through code. Generally: 1-10 = Simple, 11-20 = Moderate, 21-50 = Complex, >50 = Very Complex.
Complexity Range
Function Count
Average Complexity
{% for row in complexity_distribution %}
{{ row.complexity_range }}
{{ row.function_count }}
{{ row.avg_complexity }}
{% endfor %}
{% endif %}
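{# Developer note (template comment, not rendered): the tooltip above gives the
bucket thresholds 1-10 Simple, 11-20 Moderate, 21-50 Complex, >50 Very Complex.
A sketch of how a function's cyclomatic complexity could be bucketed; the label
strings are illustrative, not the exact range labels the backend emits:

```python
def complexity_range(cyclomatic: int) -> str:
    """Bucket cyclomatic complexity using the thresholds from the tooltip."""
    if cyclomatic <= 10:
        return "1-10 (Simple)"
    if cyclomatic <= 20:
        return "11-20 (Moderate)"
    if cyclomatic <= 50:
        return "21-50 (Complex)"
    return ">50 (Very Complex)"

print(complexity_range(14))  # -> 11-20 (Moderate)
```
#}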
Top Defect Checkers
ℹ
Checkers finding the most defects. Each checker detects specific issue types (e.g., NULL_RETURNS, RESOURCE_LEAK). Count shows how many defects each checker has identified across all code.
Checker Name
Category
Impact
Count
{% for row in top_checkers[:15] %}
{{ row.checker_name }}
{{ row.category }}
{{ row.impact }}
{{ row.defect_count }}
{% endfor %}
{% if triage_summary %}
Current Triage Progress
ℹ
Breakdown of defects by triage status (e.g., Fix Required, Ignore, False Positive). Shows how defects are classified and what action is recommended. Helps track triage backlog.
Total Outstanding
{{ triage_summary.total_defects }}
Classified
{{ triage_summary.classified_count }}
Unclassified
{{ triage_summary.unclassified_count }}
Triage Completion
{{ (triage_summary.triage_completion_percentage|round(1)) if triage_summary.triage_completion_percentage else '0' }}%
Bugs Confirmed
{{ triage_summary.bug_count }}
False Positives
{{ triage_summary.false_positive_count }}
Intentional
{{ triage_summary.intentional_count }}
Action Assigned
{{ triage_summary.action_assigned_count }}
{% endif %}
{% if tech_debt_summary and tech_debt_summary.total_defects > 0 %}
💰 Estimated Technical Debt
ℹ
Estimated effort to fix all defects. Calculated using: Major = 4 hours, Moderate = 2 hours, Minor = 1 hour, Unspecified = 0.5 hours. Total converted to person-days (8h/day) and person-weeks (40h/week).
Estimated effort required to remediate all outstanding defects based on impact levels.
Formula: Major (4h), Moderate (2h), Minor (1h), Unspecified (0.5h)
💡 Note: These estimates are based on industry-standard effort levels per defect impact.
Actual remediation time may vary based on code complexity, team experience, and defect context.
{% endif %}
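{# Developer note (template comment, not rendered): the tooltip above states the
effort model Major = 4h, Moderate = 2h, Minor = 1h, Unspecified = 0.5h, with
totals converted to person-days (8h) and person-weeks (40h). A sketch of that
arithmetic; treating unknown severity labels as 0.5h is an assumption:

```python
HOURS_PER_DEFECT = {"Major": 4.0, "Moderate": 2.0, "Minor": 1.0, "Unspecified": 0.5}

def tech_debt(severity_counts: dict) -> tuple:
    """Return (hours, person_days, person_weeks) for a defect backlog."""
    hours = sum(HOURS_PER_DEFECT.get(sev, 0.5) * n  # 0.5h fallback is assumed
                for sev, n in severity_counts.items())
    return hours, hours / 8, hours / 40

# Hypothetical backlog: 10 Major, 20 Moderate, 30 Minor, 4 Unspecified
print(tech_debt({"Major": 10, "Moderate": 20, "Minor": 30, "Unspecified": 4}))
# -> (112.0, 14.0, 2.8)
```
#}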
{% if trend_summary and trend_summary.total_new %}
📊 Defect Introduction vs Fix Velocity ({{ trend_period_text }})
ℹ
Shows rate of new defects introduced vs. defects fixed over time. Calculated from snapshot history showing defect state changes. Positive fix rate indicates improving code quality.
Trend Status:
{% if trend_summary.trend_direction == 'improving' %}
✅ IMPROVING - Fixing defects faster than introducing new ones
{% elif trend_summary.trend_direction == 'declining' %}
⚠️ DECLINING - Introducing defects faster than fixing them
{% else %}
ℹ️ STABLE - Introduction and fix rates are balanced
{% endif %}
Avg New Defects/Day
{{ (trend_summary.avg_new_per_day|round(1)) if trend_summary.avg_new_per_day else '0' }}
↑
Avg Fixed Defects/Day
{{ (trend_summary.avg_fixed_per_day|round(1)) if trend_summary.avg_fixed_per_day else '0' }}
↓
Net Change
{{ '+' if (trend_summary.net_change or 0) > 0 else '' }}{{ trend_summary.net_change or 0 }}
Fix Rate Efficiency
{{ (trend_summary.fix_rate_pct|round(1)) if trend_summary.fix_rate_pct else '0' }}%
Total New ({{ trend_period_text }})
{{ trend_summary.total_new }}
Total Fixed ({{ trend_period_text }})
{{ trend_summary.total_fixed }}
{% endif %}
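{# Developer note (template comment, not rendered): the cards above show per-day
averages, net change, and fix-rate efficiency. A sketch of how those figures
relate; defining fix-rate efficiency as fixed/new × 100 is an assumption based
on the surrounding labels, not a confirmed backend formula:

```python
def velocity_summary(total_new: int, total_fixed: int, days: int) -> dict:
    """Per-day averages, net change, and trend direction for a period."""
    net = total_new - total_fixed
    return {
        "avg_new_per_day": round(total_new / days, 1),
        "avg_fixed_per_day": round(total_fixed / days, 1),
        "net_change": net,
        # assumed definition: fixed as a percentage of newly introduced
        "fix_rate_pct": round(total_fixed / total_new * 100, 1) if total_new else 0.0,
        "trend": "improving" if net < 0 else "declining" if net > 0 else "stable",
    }

# Hypothetical 90-day window: 180 new defects introduced, 225 fixed
summary = velocity_summary(180, 225, 90)
print(summary["net_change"], summary["trend"])  # -> -45 improving
```
#}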
{% if cumulative_trends|length > 0 %}
Cumulative Defect Introduction & Fix Trends
ℹ
Cumulative count of defects introduced and fixed over time. Shows total defects added and removed since tracking began. Gap between lines represents net technical debt accumulation or reduction.
Daily Defect Introduction & Fix Velocity
ℹ
Daily counts of new defects found and existing defects fixed. Calculated from snapshot-to-snapshot comparisons. Shows team's day-to-day responsiveness to code quality issues.
Avg Fix Time
{{ (fix_rate_metrics.avg_days_to_fix|round(1)) if fix_rate_metrics.avg_days_to_fix is not none else 'N/A' }} days
Median Fix Time
{{ (fix_rate_metrics.median_days_to_fix|round(1)) if fix_rate_metrics.median_days_to_fix is not none else 'N/A' }} days
Fastest Fix
{{ (fix_rate_metrics.min_days_to_fix|round(1)) if fix_rate_metrics.min_days_to_fix is not none else 'N/A' }} days
Slowest Fix
{{ (fix_rate_metrics.max_days_to_fix|round(1)) if fix_rate_metrics.max_days_to_fix is not none else 'N/A' }} days
{% endif %}
{% if defect_trends|length > 0 %}
Defect Trends Over Time ({{ trend_period_text }})
ℹ
Total defect count over time from snapshot history. Shows whether defect backlog is growing, shrinking, or stable. Downward trend indicates improving quality.
Period
New Defects
Fixed Defects
Outstanding
Net Change
{% for row in defect_trends %}
{{ row.period }}
{{ row.new_defects or 0 }}
{{ row.fixed_defects or 0 }}
{{ row.outstanding_defects|int }}
{{ '+' if (row.net_change or 0) > 0 else '' }}{{ row.net_change or 0 }}
{% endfor %}
{% endif %}
{% if defect_aging|length > 0 %}
Outstanding Defect Age Distribution
ℹ
Age distribution of unresolved defects. Shows how long defects have been outstanding (0-30 days, 30-90 days, 90+ days). Calculated from first detection date to current date. Older defects may be harder to fix.
Info: This shows how long outstanding defects have been open.
Age Range
Defect Count
Avg Age (Days)
High
Medium
Low
{% for row in defect_aging %}
{{ row.age_range }}
{{ row.defect_count }}
{{ row.avg_age_days }}
{{ row.high_severity }}
{{ row.medium_severity }}
{{ row.low_severity }}
{% endfor %}
{% endif %}
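{# Developer note (template comment, not rendered): the tooltip above describes
ages computed from first detection date to the current date, bucketed into
0-30, 30-90, and 90+ days. A sketch of that bucketing; boundary handling at
exactly 30 or 90 days is an assumption:

```python
from datetime import date

def age_bucket(first_detected: date, today: date) -> str:
    """Assign an outstanding defect to an age range by detection date."""
    age_days = (today - first_detected).days
    if age_days <= 30:
        return "0-30 days"
    if age_days <= 90:
        return "30-90 days"
    return "90+ days"

print(age_bucket(date(2025, 1, 1), date(2025, 3, 15)))  # -> 30-90 days
```
#}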
{% if triage_trends|length > 0 and current_project %}
Triage Classification by Stream
ℹ
All currently outstanding defects broken down by stream and their current triage classification. Streams with the most Unclassified defects appear first — these are where triage attention is most urgently needed. Green = resolved (False Positive / Intentional), Red = confirmed Bug, Grey = not yet triaged.
Stream
{% endif %}
{% if checker_classification|length > 0 %}
Checker Classification Breakdown
ℹ
Top checker rules ranked by the number of False Positive + Intentional classifications. Only explicitly classified defects are included. A checker with many False Positives is a tuning candidate (too noisy); high Intentional counts signal accepted technical debt patterns that may warrant a policy review.
Checker
{% endif %}
{% if top_projects_classification|length > 0 %}
{% if current_project %}Top Streams by Triage Classification{% else %}Top Projects by Triage Classification{% endif %}
ℹ
{% if current_project %}Streams within this project{% else %}Projects{% endif %} ranked by Intentional classification count. A high Intentional ratio — especially relative to total defect count — may indicate defects are being marked Intentional to pass a security quality gate rather than being genuinely addressed. Sorted by most Intentional first.
{% if current_project %}Stream{% else %}Project{% endif %}
{% endif %}
{% if not current_project %}
{% if user_activity_stats %}
👥 User Activity & License Statistics ({{ trend_period_text }})
ℹ
User login activity and license utilization metrics. Shows active users, login frequency, and license consumption. Helps optimize license allocation and identify inactive accounts.
Licensed Users
{{ user_activity_stats.total_licensed_users }}
Total user licenses
Users with Login
{{ user_activity_stats.users_with_login }}
{{ user_activity_stats.login_user_percentage }}% of licenses
Active Users
{{ user_activity_stats.active_users }}
{{ user_activity_stats.active_user_percentage }}% of licenses
Inactive Licenses
{{ user_activity_stats.total_licensed_users - user_activity_stats.active_users }}
{{ (100 - user_activity_stats.active_user_percentage)|round(1) }}% of licenses
Activity Summary:
{% if user_activity_stats.active_user_percentage >= 50 %}
Good license utilization! {{ user_activity_stats.active_user_percentage }}% of users are actively using Coverity (commits or triage) in the last {{ trend_period_text.lower() }}.
{% elif user_activity_stats.active_user_percentage >= 30 %}
Moderate license utilization. {{ user_activity_stats.active_user_percentage }}% of users are actively using Coverity. Consider user engagement improvements.
{% else %}
Low license utilization detected. Only {{ user_activity_stats.active_user_percentage }}% of users are actively using Coverity. Review license allocation and user training.
{% endif %}
📊 Definition of Activity:
Licensed Users: All user accounts in the system (excludes deleted and internal users: 'system', 'reporter')
Users with Login: Users who have logged in at least once (ever)
Active Users: Users who logged in OR performed triage actions within {{ trend_period_text.lower() }}
Inactive Licenses: Licensed users without recent activity
{% endif %}
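{# Developer note (template comment, not rendered): the activity summary above
switches wording at the 50% and 30% active-user thresholds. A sketch of the
percentage and banding logic; the band names are illustrative:

```python
def license_utilization(total_licensed: int, active_users: int) -> tuple:
    """Active-user percentage and the utilization band from the thresholds above."""
    pct = round(active_users / total_licensed * 100, 1) if total_licensed else 0.0
    band = "good" if pct >= 50 else "moderate" if pct >= 30 else "low"
    return pct, band

# Hypothetical instance: 200 licensed users, 84 active in the period
print(license_utilization(200, 84))  # -> (42.0, 'moderate')
```
#}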
Database Statistics
ℹ
Coverity PostgreSQL database size and growth metrics. Shows total database size, table count, and disk usage. Useful for capacity planning and performance monitoring.
Database Size
{{ db_stats.db_size }}
Total Snapshots
{{ db_stats.total_snapshots }}
Avg Commit Time
{{ commit_stats.avg_duration_seconds }}s
Total Commits
{{ commit_stats.total_commits }}
Largest Database Tables
ℹ
Top database tables by disk size. Shows table name, row count, and size in MB/GB. Helps identify tables consuming most storage and candidates for archiving or optimization.
Table Name
Size
{% for row in largest_tables %}
{{ row.table_name }}
{{ row.size }}
{% endfor %}
Analysis/Commit Performance
ℹ
Analysis execution performance metrics: minimum, maximum, and average commit times. Calculated from snapshot commit timestamps. Shows how long analyses take to complete.
{% if commit_activity and commit_activity.total_commits > 0 %}
Commit Activity Patterns
ℹ
Analysis of when commits/snapshots occur: busiest/quietest times (3-hour blocks) and days of week. Shows commit count, average duration, files changed, and defects per hour/day. Helps optimize CI/CD schedules.
ℹ️ Activity Analysis: Based on {{ commit_activity.total_commits }} commits/snapshots, showing when development activity is most and least active.
Recent Snapshot Performance
ℹ
Performance details for recent analysis commits: queue time, analysis duration, and total time. Calculated from snapshot timestamps (submit_time, queue_time, commit_time). Helps identify performance bottlenecks.
Date ▲▼
Stream ▲▼
Total Defects ▲▼
New/Eliminated ▲▼
Files ▲▼
Duration ▲▼
Queue Time ▲▼
{% for row in snapshot_performance %}
{{ (row.date_created if row.date_created is string else row.date_created.strftime('%Y-%m-%d %H:%M'))[:16] if row.date_created else 'N/A' }}
{{ row.stream_name }}
{{ row.total_defect_count or 0 }}
+{{ row.new_defect_count or 0 }} / -{{ row.eliminated_defect_count or 0 }}
{{ row.total_file_count or 0 }}
{{ row.duration_seconds or 0 }} sec
{{ row.queue_time_seconds or 0 }} sec
{% endfor %}
{% if defect_discovery|length > 0 %}
Defect Discovery Rate ({{ trend_period_text }})
ℹ
Rate at which new defects are discovered over time. Calculated as count of newly introduced defects per day/week. Helps identify periods of high defect injection and quality degradation.
Date
Snapshots
New Defects
Eliminated
Files Analyzed
{% for row in defect_discovery[:30] %}
{{ row.snapshot_date }}
{{ row.snapshot_count }}
{{ row.new_defects or 0 }}
{{ row.eliminated_defects or 0 }}
{{ row.files_analyzed or 0 }}
{% endfor %}
{% endif %}
{% endif %}
{% if top_projects_by_fix_rate or top_projects_by_triage or top_users_by_fixes or top_triagers or most_collaborative_users %}
{% if current_project %}
🏆 Individual Contributors
ℹ
Top individual contributors ranked by defect fixes and triage activity. Shows who is actively improving code quality and engaging with Coverity findings.
Recognizing top contributors in defect resolution, triage activity, and collaboration for this project.
{% else %}
🏆 Team Leaderboards
ℹ
Project rankings by various quality metrics: fix rate, improvement trends, and triage activity. Calculated from snapshot comparisons and triage statistics. Helps identify high-performing teams.
Recognizing top performers in defect resolution, triage activity, and continuous improvement.
{% endif %}
{% if not current_project %}
📊 Project Performance
ℹ
Project-level leaderboards showing top teams by fix velocity (last 30 days), improvement percentage (last 90 days), and triage completion. Rankings calculated from snapshot comparisons and triage activity metrics.
{% if top_projects_by_fix_rate and top_projects_by_fix_rate|length > 0 %}
🚀 Fastest Fix Velocity
Projects that eliminated the most defects within the analysis period, ranked by total defects removed. Average fixes per day is calculated across active snapshot dates ({{ trend_period_text }})
👥 Individual Contributors
ℹ
User-level leaderboards showing top fixers (defect eliminations, last 30 days), top triagers (classification actions, last 30 days), and team champions (collaboration activity). Rankings based on actual code changes and triage submissions.
{% endif %}
{% if top_users_by_fixes and top_users_by_fixes|length > 0 %}
🔧 Top Fixers
Users with most defect fixes submitted/released (Last 30 Days)
Fix Velocity: Defects with "Fix Submitted" or "Fix Released" action
Triage Completion: Percentage of defects with classification assigned
Collaboration: Comments added to defects for team communication
{% else %}
ℹ️ No Leaderboard Data Available
Leaderboards will appear when projects have defect fix activity, improvements, or triage progress.
{% endif %}
{% if current_project %}
🔒 OWASP Top 10 2025 Security Analysis
ℹ
Maps defects to OWASP Top 10 2025 web security risks using CWE codes. Shows PASS/FAILED status for each category. FAILED = has defects requiring attention. Helps prioritize security fixes.
Defects mapped to OWASP Top 10 2025 categories based on CWE (Common Weakness Enumeration) codes.
This analysis helps prioritize security remediation efforts based on the most critical web application security risks.
{% if owasp_metrics and owasp_metrics|length > 0 %}
{% set failed_count = owasp_metrics|selectattr('status', 'equalto', 'FAILED')|list|length %}
{% set passed_count = owasp_metrics|selectattr('status', 'equalto', 'PASS')|list|length %}
Categories with Defects (FAILED)
{{ failed_count }} / 10
Categories Passed
{{ passed_count }} / 10
{% for item in owasp_metrics %}
{% if ':' in item.category %}{{ item.category.split(':', 1)[0] }}: {{ item.category.split(':', 1)[1] }}{% else %}{{ item.category }}{% endif %}
{% if owasp_details and item.category in owasp_details %}
{% set details = owasp_details[item.category] %}
{% if details.all_defects and details.all_defects|length > 0 %}
{% if defect.severity == 'Major' %}
High
{% elif defect.severity == 'Moderate' %}
Med
{% elif defect.severity == 'Minor' %}
Low
{% else %}
-
{% endif %}
{{ defect.file|e }}
{{ defect.function|e }}
{% endfor %}
{% endif %}
{% if details.checker_breakdown and details.checker_breakdown|length > 0 %}
🔍 Top Checkers ({{ details.checker_breakdown|length }} of {{ details.total_checkers }} total)
{% for checker in details.checker_breakdown %}
{{ checker.checker }}: {{ checker.defect_count }}
{% endfor %}
{% endif %}
{% endif %}
{% endif %}
{% endfor %}
📚 About OWASP Top 10 2025:
The OWASP Top 10 is a standard awareness document representing a broad consensus about the most critical security risks to web applications.
This report maps your defects to these categories using CWE codes to help prioritize security remediation.
PASS: No outstanding defects found in this OWASP category
FAILED: Outstanding defects detected in this category requiring attention
CWEs: Number of unique CWE codes contributing to this category
{% else %}
ℹ️ No OWASP Data Available
OWASP Top 10 analysis is only available for project-level dashboards. Please select a specific project to view this data.
{% endif %}
{% endif %}
{% if current_project and cwe_top25_metrics %}
🛡️ CWE Top 25 Most Dangerous Software Weaknesses (2025)
ℹ
Maps defects to MITRE's CWE Top 25 2025 most dangerous weaknesses. Shows PASS/FAILED status and danger rankings (1-25). Based on real-world vulnerability data from NVD. Lower rank = more dangerous.
Defects mapped to MITRE's CWE Top 25 list - the most widespread and critical software weaknesses.
Rankings are based on real-world vulnerability data and help prioritize remediation by industry-recognized danger level.
🎯 Security Focus: {{ cwe_top25_metrics|selectattr('status', 'equalto', 'FAILED')|list|length }} of {{ cwe_top25_metrics|length }} CWE Top 25 weaknesses detected in this project.
💡 Tip: Click on any FAILED row to view detailed defect information.
{% if defect.severity == 'Major' %}
High
{% elif defect.severity == 'Moderate' %}
Med
{% elif defect.severity == 'Minor' %}
Low
{% else %}
-
{% endif %}
{{ defect.file|e }}
{{ defect.function|e }}
{% endfor %}
{% endif %}
{% endif %}
{% endfor %}
📚 About CWE Top 25 2025:
The CWE Top 25 Most Dangerous Software Weaknesses is a demonstrative list of the most widespread and critical weaknesses
that can lead to serious vulnerabilities in software. Compiled by MITRE using real-world vulnerability data from the
National Vulnerability Database (NVD), this list helps developers and security teams prioritize their remediation efforts.
Status: PASS (no defects) or FAILED (has defects for this CWE)
Rank: Position in MITRE's CWE Top 25 list (1-25, lower rank = more dangerous)
Score: MITRE's calculated danger score based on prevalence and impact
Total: Number of outstanding defects for this specific CWE in your project
Coverage: This table shows all 25 CWE entries ({{ cwe_top25_metrics|selectattr('status', 'equalto', 'FAILED')|list|length }} failed, {{ cwe_top25_metrics|selectattr('status', 'equalto', 'PASS')|list|length }} passed)
Expand Details: Click on any FAILED row to view all defects, including CID, checker, severity, file, and function