{% extends "base.html" %} {% from "components/help_macros.html" import tooltip, help_panel, help_step, help_tip, privacy_note %} {% set active_page = 'metrics' %} {% block title %}Cost Analytics - Deep Research System{% endblock %} {% block extra_head %} {% endblock %} {% block content %}
Calculating costs and generating insights
Important: These cost estimates are based on current public pricing from LLM providers and your actual token usage. Actual costs may vary due to pricing changes, promotional rates, or billing adjustments. Always refer to your official provider bills for exact charges. Local models (Ollama, self-hosted) show $0.00 as they do not incur API costs.
Local Model Savings Disclaimer: The "Local Models Savings" calculation is a rough estimate comparing your local model usage to hypothetical commercial API costs.
This estimate uses conservative baseline pricing (~$0.0015 per 1K tokens) and should be viewed as an approximation only.
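The estimate described above amounts to multiplying locally processed tokens by the baseline rate. A minimal sketch of that arithmetic, with a hypothetical function name (the page itself does not expose this code):

```python
# Conservative baseline rate quoted above: ~$0.0015 per 1K tokens.
BASELINE_USD_PER_1K_TOKENS = 0.0015

def estimated_savings(local_tokens: int) -> float:
    """Rough hypothetical API cost avoided by running tokens through a
    local model instead of a commercial API, at the baseline rate."""
    return (local_tokens / 1000) * BASELINE_USD_PER_1K_TOKENS

# Example: 2M tokens processed locally
print(f"${estimated_savings(2_000_000):.2f}")  # $3.00
```

This is why the figure should be read as an order-of-magnitude guide: it assumes a single flat rate, while real commercial pricing varies by model and by input versus output tokens.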
Important Caveats:
• Quality Differences: Commercial models may provide different output quality, accuracy, or capabilities
• Hidden Costs: Local models require hardware investment, electricity, maintenance, and setup time
• Performance Variations: Speed, reliability, and availability can differ between local and commercial models
• Scale Considerations: Commercial APIs may be more cost-effective for very high or very low usage patterns
• Feature Differences: Commercial services often include additional features, support, and guarantees
Use these estimates as a general guide only. The true value of local models includes privacy, control, and independence benefits beyond just cost savings.