Metadata-Version: 2.4
Name: django-mercury-performance
Version: 0.1.1b1
Summary: A performance testing framework for Django that helps you understand and fix performance issues, not just detect them
Author-email: Django Mercury Team <mathewstormdev@gmail.com>
Maintainer-email: Mathew Storm <mathewstormdev@gmail.com>
License: GPL-3.0-or-later
Project-URL: Homepage, https://pypi.org/project/django-mercury-performance/
Project-URL: Documentation, https://github.com/80-20-Human-In-The-Loop/Django-Mercury-Performance-Testing/wiki
Project-URL: Repository, https://github.com/80-20-Human-In-The-Loop/Django-Mercury-Performance-Testing
Project-URL: Bug Tracker, https://github.com/80-20-Human-In-The-Loop/Django-Mercury-Performance-Testing/issues
Project-URL: Changelog, https://github.com/Django-Mercury/Performance-Testing/blob/main/CHANGELOG.md
Keywords: django,performance,testing,monitoring,optimization,n+1,queries,profiling,mercury,rest-framework,api,benchmarking
Classifier: Development Status :: 3 - Alpha
Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
Classifier: Framework :: Django
Classifier: Framework :: Django :: 5.0
Classifier: Framework :: Django :: 5.1
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Testing :: Unit
Classifier: Topic :: Software Development :: Quality Assurance
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE.txt
Requires-Dist: Django<6.0,>=3.2
Requires-Dist: djangorestframework>=3.12.0
Requires-Dist: psutil>=5.8.0
Requires-Dist: memory-profiler>=0.60.0
Requires-Dist: colorlog>=6.6.0
Requires-Dist: jsonschema>=4.0.0
Requires-Dist: toml>=0.10.2
Requires-Dist: rich>=12.0.0
Provides-Extra: rich
Requires-Dist: rich>=12.0.0; extra == "rich"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-cov>=3.0.0; extra == "dev"
Requires-Dist: black>=22.0.0; extra == "dev"
Requires-Dist: isort>=5.10.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Requires-Dist: django-stubs[compatible-mypy]>=4.2.0; extra == "dev"
Requires-Dist: djangorestframework-stubs>=3.14.0; extra == "dev"
Requires-Dist: types-psutil>=5.9.0; extra == "dev"
Requires-Dist: rich>=12.0.0; extra == "dev"
Requires-Dist: flake8>=4.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: coverage>=6.0.0; extra == "dev"
Requires-Dist: twine>=5.0.0; extra == "dev"
Requires-Dist: setuptools>=70.0; extra == "dev"
Requires-Dist: build>=1.0.0; extra == "dev"
Provides-Extra: docs
Requires-Dist: sphinx>=4.5.0; extra == "docs"
Requires-Dist: sphinx-rtd-theme>=1.0.0; extra == "docs"
Provides-Extra: debug
Requires-Dist: django-debug-toolbar>=3.2.0; extra == "debug"
Requires-Dist: django-silk>=4.3.0; extra == "debug"
Dynamic: license-file

# Django Mercury Performance Testing

[![PyPI version](https://badge.fury.io/py/django-mercury-performance.svg)](https://badge.fury.io/py/django-mercury-performance)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![Django 3.2-5.1](https://img.shields.io/badge/django-3.2--5.1-green.svg)](https://docs.djangoproject.com/)
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-red.svg)](https://www.gnu.org/licenses/gpl-3.0)

**Simple, powerful performance monitoring for Django tests.**

```python
from django_mercury import monitor

with monitor() as result:
    response = client.get('/api/users/')

# Automatic threshold checking - raises AssertionError on violations
# Full report included in exception message
```

## Why Mercury?

**Most performance tools just detect problems.** Mercury explains them in your test output, with clear context and actionable fixes.

**No configuration required.** Works out of the box with sensible defaults. Customize when you need to.

**Built for real Django projects.** Detects N+1 queries, slow responses, and excessive database calls automatically.

## Installation

```bash
pip install django-mercury-performance
```

## Quick Start

### Basic Usage

```python
from django_mercury import monitor
from django.test import TestCase

class UserAPITest(TestCase):
    def test_user_list_performance(self):
        """Monitor performance with zero configuration."""
        with monitor() as result:
            response = self.client.get('/api/users/')

        # If thresholds exceeded, AssertionError with full report is raised
        # Otherwise, check metrics manually:
        print(f"Response time: {result.response_time_ms:.2f}ms")
        print(f"Queries: {result.query_count}")
```

### Custom Thresholds

```python
# Override defaults inline
with monitor(response_time_ms=50, query_count=5) as result:
    response = self.client.get('/api/users/')

# Or configure per-file
MERCURY_PERFORMANCE_THRESHOLDS = {
    'response_time_ms': 100,
    'query_count': 10,
    'n_plus_one_threshold': 8,
}

# Or in Django settings.py
MERCURY_PERFORMANCE_THRESHOLDS = {
    'response_time_ms': 200,
    'query_count': 20,
    'n_plus_one_threshold': 10,
}
```

**Configuration hierarchy:** Inline > File-level > Django settings > Defaults

### Detailed Reports

```python
with monitor() as result:
    response = self.client.get('/api/users/')

# Print full performance breakdown
result.explain()
```

**Example output:**

```
============================================================
MERCURY PERFORMANCE REPORT
============================================================

📊 METRICS:
   Response time: 156.32ms (threshold: 100ms)
   Query count:   45 (threshold: 10)

🔄 N+1 PATTERNS DETECTED:
   ❌ FAIL [23x] SELECT * FROM "auth_user" WHERE "id" = ?
        → SELECT * FROM "auth_user" WHERE "id" = 1
        → SELECT * FROM "auth_user" WHERE "id" = 2
        → SELECT * FROM "auth_user" WHERE "id" = 3

   ⚠️  WARN [8x] SELECT * FROM "user_profile" WHERE "user_id" = ?

❌ FAILURES:
   ⏱️  Response time 156.32ms exceeded threshold 100ms (+56.32ms over)
   🔢 Query count 45 exceeded threshold 10 (+35 extra queries)
   🔄 N+1 pattern detected: 23 similar queries (threshold: 10)
      Pattern: SELECT * FROM "auth_user" WHERE "id" = ?

============================================================
```

## What Gets Monitored

### Response Time
Measures end-to-end execution time using high-precision `perf_counter()`.

**Default threshold:** 200ms
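
The measurement above can be sketched with only the standard library. The names here (`timed`, `Timing`) are illustrative stand-ins, not Mercury's actual API:

```python
import time
from contextlib import contextmanager
from dataclasses import dataclass


@dataclass
class Timing:
    """Holds the elapsed wall-clock time for a monitored block (illustrative)."""
    response_time_ms: float = 0.0


@contextmanager
def timed():
    """Measure end-to-end execution time with the high-precision perf_counter()."""
    result = Timing()
    start = time.perf_counter()
    try:
        yield result
    finally:
        # perf_counter() returns seconds; convert the delta to milliseconds
        result.response_time_ms = (time.perf_counter() - start) * 1000


with timed() as t:
    time.sleep(0.01)  # stand-in for the request under test
```

Because `perf_counter()` is monotonic, the delta is immune to system clock adjustments during the test.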

### Query Count
Tracks all database queries executed during the monitored block using Django's `CaptureQueriesContext`.

**Default threshold:** 20 queries

### N+1 Query Detection
Automatically normalizes SQL queries and detects repeated patterns:

```sql
-- These are detected as the same pattern:
SELECT * FROM users WHERE id = 1
SELECT * FROM users WHERE id = 2
SELECT * FROM users WHERE id = 999

-- Normalized to:
SELECT * FROM users WHERE id = ?
```

**Detection levels:**
- **Failure:** Count >= threshold (default: 10)
- **Warning:** Count >= 80% of threshold
- **Notice:** Count >= 50% of threshold (minimum 3)

### Smart SQL Normalization
Handles:
- String literals: `'hello'` → `?`
- Numbers: `123`, `45.67` → `?`
- UUIDs: `'550e8400-e29b-41d4-a716-446655440000'` → `?`
- IN clauses: `IN (1, 2, 3)` → `IN (?)`
- Boolean values: `TRUE`, `FALSE` → `?`
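
A minimal regex-based sketch of this normalization (Mercury's own normalizer may handle more cases; this just illustrates the listed substitutions):

```python
import re


def normalize_sql(sql: str) -> str:
    """Collapse literals so repeated queries reduce to one shared pattern."""
    # String literals (this also covers quoted UUIDs)
    sql = re.sub(r"'[^']*'", "?", sql)
    # Integer and float literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)
    # Boolean literals
    sql = re.sub(r"\b(TRUE|FALSE)\b", "?", sql, flags=re.IGNORECASE)
    # Collapse already-replaced IN lists: IN (?, ?, ?) -> IN (?)
    sql = re.sub(r"IN \((?:\?,?\s*)+\)", "IN (?)", sql)
    return sql
```

Two queries that differ only in their literals normalize to the same string, which is exactly what makes N+1 patterns countable.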

## Configuration Options

```python
MERCURY_PERFORMANCE_THRESHOLDS = {
    # Response time in milliseconds
    'response_time_ms': 200,

    # Maximum number of queries
    'query_count': 20,

    # N+1 pattern failure threshold
    'n_plus_one_threshold': 10,
}
```

**Priority order (highest to lowest):**
1. **Inline:** `monitor(response_time_ms=100)`
2. **File-level:** `MERCURY_PERFORMANCE_THRESHOLDS` in test module
3. **Django settings:** `settings.MERCURY_PERFORMANCE_THRESHOLDS`
4. **Defaults:** Built-in sensible values
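
The four layers can be sketched as a merge from lowest to highest priority, so later layers override earlier ones. The function name and signature here are illustrative, not Mercury's `config.py` API:

```python
DEFAULTS = {
    "response_time_ms": 200,
    "query_count": 20,
    "n_plus_one_threshold": 10,
}


def resolve_thresholds(inline=None, module_level=None, django_settings=None):
    """Merge thresholds: defaults < Django settings < file-level < inline."""
    merged = dict(DEFAULTS)
    # Apply layers from lowest to highest priority; each update wins
    for layer in (django_settings, module_level, inline):
        if layer:
            merged.update(layer)
    return merged
```

A key set inline always wins, while keys no layer mentions fall through to the built-in defaults.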

## Advanced Usage

### Inspect Results Programmatically

```python
with monitor() as result:
    response = self.client.get('/api/users/')

# Access metrics
assert result.response_time_ms < 100
assert result.query_count <= 10
assert len(result.n_plus_one_patterns) == 0

# Export to JSON
metrics = result.to_dict()
```

### Custom Assertions

```python
from django_mercury import monitor

with monitor() as result:
    response = self.client.get('/api/users/')

# Custom business logic
if result.query_count > 15 and len(result.n_plus_one_patterns) > 0:
    result.explain()
    raise AssertionError("Too many queries with N+1 patterns detected")
```

### Disable Auto-Failures (Manual Checking)

```python
# Catch the exception to prevent test failure
try:
    with monitor() as result:
        response = self.client.get('/api/users/')
except AssertionError as e:
    # Full report is in the exception
    print(e)
    # Decide what to do...
```

## Architecture

Mercury follows SOLID principles with clean separation of concerns:

**Core Modules:**
- `monitor.py` - Context manager orchestration
- `config.py` - 4-layer threshold resolution
- `n_plus_one.py` - SQL normalization and pattern detection

**Design Principles:**
- **Pure functions** for easy testing
- **Immutable dataclasses** for results
- **No side effects** except Django query capture
- **Type hints** throughout
- **Minimal core dependencies** - the monitoring logic needs only Django

## Real-World Example

```python
from django_mercury import monitor
from django.test import TestCase
from myapp.models import User

class UserAPIPerformanceTest(TestCase):
    def setUp(self):
        # Create test data
        User.objects.bulk_create([
            User(username=f'user{i}') for i in range(100)
        ])

    def test_user_list_without_optimization(self):
        """This will fail - demonstrates N+1 problem."""
        with monitor(query_count=5) as result:
            # Bad: N+1 queries (1 + 100 profile lookups)
            users = User.objects.all()
            for user in users:
                _ = user.profile.bio  # Triggers query per user

        # AssertionError raised with N+1 pattern details

    def test_user_list_with_optimization(self):
        """This passes - select_related prevents N+1."""
        with monitor(query_count=5) as result:
            # Good: 1 query with JOIN
            users = User.objects.select_related('profile').all()
            for user in users:
                _ = user.profile.bio  # No additional queries

        # ✅ Passes threshold checks
```

## Testing Mercury Itself

Mercury has comprehensive test coverage:

```bash
# Run all tests
python -m unittest discover tests

# Run specific test module
python -m unittest tests.test_monitor

# With coverage
coverage run -m unittest discover tests
coverage report
```

**Current test suite:**
- 41 tests covering all core functionality
- Unit tests for pure functions
- Integration tests for Django components
- Edge case validation

## Contributing

We welcome contributions! Mercury is designed for extensibility:

### Project Structure
```
django_mercury/
├── __init__.py          # Public API exports
├── monitor.py           # Main context manager (400 lines)
├── config.py            # Threshold resolution (78 lines)
└── n_plus_one.py        # Pattern detection (96 lines)

tests/
├── test_monitor.py      # Monitor tests (27 tests)
├── test_config.py       # Config tests (5 tests)
└── test_n_plus_one.py   # N+1 tests (9 tests)
```

### Development Setup
```bash
# Clone repo
git clone https://github.com/80-20-Human-In-The-Loop/Django-Mercury-Performance-Testing.git
cd Django-Mercury-Performance-Testing

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
python -m unittest discover tests

# Format code
black django_mercury tests --line-length 100
isort django_mercury tests --profile black
```

### Code Standards
- **Type hints required** for all new code
- **Pure functions** preferred for testability
- **Docstrings** with examples for public APIs
- **Tests** for all new functionality

See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.

## Philosophy

**Mercury follows the 80-20 Human-in-the-Loop principle:**

- **80% automation:** Detect issues, measure metrics, normalize SQL
- **20% human control:** Understand problems, make decisions, fix code

**We believe:**
- Tools should teach, not just detect
- Automation should preserve understanding
- Performance testing should be accessible to all skill levels

Part of the [80-20 Human-in-the-Loop](https://github.com/80-20-Human-In-The-Loop) ecosystem.

## License

GNU General Public License v3.0 or later (GPL-3.0-or-later)

We chose GPL to ensure Mercury remains:
- **Free** - No cost barriers to learning
- **Open** - Transparent development and review
- **Fair** - Improvements benefit the entire community

See [LICENSE](LICENSE) for full text.

## FAQ

**Q: Do I need to configure anything?**
A: No. Mercury works with sensible defaults. Configure only when you need stricter/looser thresholds.

**Q: Does it work with pytest?**
A: Yes. Mercury works with any test runner - it's just a context manager.

**Q: What's the performance overhead?**
A: Minimal. Django's `CaptureQueriesContext` is already optimized. SQL normalization adds ~1ms per 100 queries.

**Q: Can I use this in production?**
A: Mercury is designed for tests, not production monitoring. Use Django Debug Toolbar or APM tools for production.

**Q: Does it work with async views?**
A: Not yet. Async support is planned for v0.2.0.

**Q: Can I customize the report format?**
A: Yes. Use `result.to_dict()` and format however you want. Custom formatters can be contributed as plugins.

## Roadmap

### v0.1.x (Current - MVP)
- ✅ Context manager monitoring
- ✅ N+1 query detection
- ✅ 4-layer configuration
- ✅ Comprehensive test suite

### v0.2.0 (Next)
- 🔨 Async view support
- 🔨 Custom formatters API
- 🔨 Performance trend tracking
- 🔨 Memory profiling

### v1.0.0 (Future)
- 🤖 CLI with test discovery
- 🤖 Educational mode with explanations
- 🤖 Plugin system for extensibility
- 🤖 MCP server for AI integration

## Acknowledgments

- **Django Community** - For the incredible framework
- **EduLite Project** - Where Mercury was born
- **80-20 Human-in-the-Loop** - For the guiding philosophy
- **Contributors** - Thank you for making Mercury better!

---

<div align="center">

**Django Mercury: Simple, powerful performance testing.**

*Because every Django developer deserves fast, understandable applications.*

[Get Started](#quick-start) • [Documentation](https://github.com/80-20-Human-In-The-Loop/Django-Mercury-Performance-Testing/wiki) • [Contributing](CONTRIBUTING.md)

</div>
