Metadata-Version: 2.4
Name: django-vcache
Version: 1.0.0
Summary: A specialized, lightweight Django cache backend for Valkey.
Project-URL: Homepage, https://gitlab.com/glitchtip/django-vcache
Project-URL: Bug Tracker, https://gitlab.com/glitchtip/django-vcache/issues
Author-email: David Burke <david@burkesoftware.com>
License: MIT License
        
        Copyright (c) 2025 David Burke, Burke Software and Consulting
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
License-File: LICENSE
Classifier: Development Status :: 4 - Beta
Classifier: Framework :: Django
Classifier: Framework :: Django :: 6.0
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.12
Requires-Dist: django>=5.0
Requires-Dist: ormsgpack
Requires-Dist: valkey[libvalkey]
Requires-Dist: zstd; python_version < '3.14'
Description-Content-Type: text/markdown

# django-vcache

A fast, async-native Django cache backend for Valkey (and Redis). Opinionated and secure by default.

It powers the [GlitchTip](https://glitchtip.com) open-source error tracking platform.

## Why django-vcache?

- **Fast** — Uses msgpack serialization (via ormsgpack) instead of pickle. 2-3x faster than Django's built-in RedisCache for typical workloads, up to 5x faster under concurrent async load.
- **Async-native** — Real async implementations for `aget`, `aset`, etc. Unlike Django's built-in `RedisCache`, async calls use native async I/O rather than `sync_to_async` thread-pool wrappers.
- **Secure by default** — No pickle. Msgpack cannot execute arbitrary code on deserialization. No special configuration needed.
- **Efficient** — At most two connections (one sync, one async) per backend. Lazy-loaded. Automatic zstd compression for large values. Uses libvalkey C parser out of the box.
- **Raw Access** — Borrow the underlying valkey-py client for advanced operations (locking, pipelines, pub/sub) without spinning up new connections. Use with [django-vtask](https://gitlab.com/glitchtip/django-vtask).
- **Python 3.14 ready** — Uses stdlib `compression.zstd` on 3.14+, no third-party compression dependency needed.
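
The "no pickle" point above is easy to demonstrate: pickle will invoke an arbitrary callable during deserialization, whereas msgpack-family serializers such as ormsgpack only rebuild plain data types. A minimal stdlib-only sketch of the pickle risk (the class and expression are illustrative):

```python
import pickle


class Evil:
    # pickle honors __reduce__, so loading this payload calls
    # eval("21 * 2") -- any callable could be substituted here.
    def __reduce__(self):
        return (eval, ("21 * 2",))


payload = pickle.dumps(Evil())
print(pickle.loads(payload))  # → 42: code ran during deserialization
```

A msgpack deserializer given untrusted bytes can only ever produce dicts, lists, strings, and numbers, never a function call.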

## Benchmarks

Measured on Python 3.14 against Django's built-in `RedisCache`, both hitting the same local Valkey instance. No special tuning on either side — just drop-in configuration.

**Sequential operations** (5,000 iterations):

| Payload | Mode | Django RedisCache | django-vcache | Speedup |
|---------|------|-------------------|---------------|---------|
| Small (dict, 20 items) | sync | 1,438 ops/s | 4,041 ops/s | **2.8x** |
| Small (dict, 20 items) | async | 784 ops/s | 2,147 ops/s | **2.7x** |
| Medium (user session) | sync | 1,508 ops/s | 3,837 ops/s | **2.5x** |
| Medium (user session) | async | 767 ops/s | 2,118 ops/s | **2.8x** |
| Large (2KB+, compressed) | sync | 1,789 ops/s | 2,784 ops/s | **1.6x** |
| Large (2KB+, compressed) | async | 863 ops/s | 2,216 ops/s | **2.6x** |

**Concurrent async** (50 tasks, 1,000 set+get pairs):

| | ops/sec | Speedup |
|---|---------|---------|
| Django RedisCache | 1,979 | — |
| django-vcache | 11,477 | **5.8x** |

Django's `RedisCache` wraps every async call in `sync_to_async`, which pushes work to a thread pool. django-vcache uses native async I/O — the difference grows with concurrency.

**Status:** Stable and used in production.

## Installation

```bash
pip install django-vcache
```

## Usage

Update your `settings.py` to configure the cache backend:

```python
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": "valkey://your-valkey-host:6379/1",
        "OPTIONS": {
            "max_connections": 200,  # Example: limit the number of connections in the pool
            "connection_pool_timeout": 5, # Example: time to wait for a connection before raising an error
            "socket_connect_timeout": 5,  # Example: set a connection timeout
            "retry_on_timeout": True,     # Example: enable retry on timeout
        }
    },
}
```

The `max_connections` and `connection_pool_timeout` options enable sensible blocking behavior: once `max_connections` is reached, subsequent requests for a connection wait up to `connection_pool_timeout` seconds for one to become available before raising an error. Setting both is recommended to prevent connection exhaustion.

You can then use Django's cache framework as usual:

```python
from django.core.cache import cache

cache.set('my_key', 'my_value', 30)
value = cache.get('my_key')
```

To access the underlying raw `valkey-py` client instance, you can use the `get_raw_client` method:

```python
# Get the synchronous client
sync_client = cache.get_raw_client()

# Get the asynchronous client
async_client = cache.get_raw_client(async_client=True)
```
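
The borrowed client supports any raw `valkey-py` operation without opening new connections. For example, several commands can be batched into one round trip with a pipeline (a sketch assuming a reachable Valkey instance; the key name is illustrative):

```python
from django.core.cache import cache

# Borrow the already-open sync client; no new connection is created.
client = cache.get_raw_client()

# Queue several raw commands and send them in a single round trip.
pipe = client.pipeline()
pipe.incr("hits:homepage")
pipe.expire("hits:homepage", 3600)
hits, _ = pipe.execute()
```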

## Async usage

The asynchronous methods (`aget`, `aset`, and friends) require an ASGI server such as `granian` or `uvicorn`. Due to limitations in the `valkey-py` library, they will not run reliably under WSGI.

Example equivalent of Django `runserver`:

```bash
granian --interface asgi --host 0.0.0.0 --port 8000 sample.asgi:application --reload
```
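
Once running under ASGI, the async cache API can be awaited directly in async views; a minimal sketch (the view name, key, and timeout are illustrative):

```python
from django.core.cache import cache
from django.http import JsonResponse


async def profile(request, user_id):
    # Native async read -- no thread-pool hop.
    key = f"profile:{user_id}"
    data = await cache.aget(key)
    if data is None:
        data = {"id": user_id}  # stand-in for a real lookup
        await cache.aset(key, data, timeout=60)
    return JsonResponse(data)
```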

## WSGI Compatibility

The primary `ValkeyCache` backend is designed for modern ASGI applications and provides native async support. However, for legacy systems running in a synchronous WSGI environment (like Gunicorn or uWSGI with default workers), calling async cache methods can be problematic.

For these specific cases, a WSGI-compatible backend is available. It ensures that async cache methods are safely wrapped, preventing errors related to event loop management in a synchronous context.

To use it, update your `settings.py`:

```python
CACHES = {
    "default": {
        "BACKEND": "django_vcache.wsgi.ValkeyWSGICache",
        "LOCATION": "valkey://your-valkey-host:6379/1",
        # ... other options
    },
}
```

> **Note:** `django-vcache` is optimized for ASGI. If your project is primarily WSGI-based, you may find that other cache backends like `django-redis` better suit your needs. The `ValkeyWSGICache` is provided as a compatibility layer, not a performance-focused feature.

## Contributing

### Development Environment

This project uses Docker for development. To get started:

1.  Clone the repository.
2.  Build and start the services:

    ```bash
    docker compose up -d --build
    ```

This will start a Valkey container and an `app` container with the Django sample project running on `http://localhost:8000`. The development server uses `granian` with auto-reload, so changes you make to the code will be reflected automatically.

#### Using Valkey Sentinel

To run the development environment with Valkey Sentinel enabled, use the override compose file:

```bash
docker compose -f compose.yml -f compose.sentinel.yml up -d --build
```

You will also need to configure your `sample/settings.py` to use the Sentinel URL. The recommended way is to set the `VALKEY_URL` environment variable before starting the services:

```bash
export VALKEY_URL="sentinel://localhost:26379/mymaster/1"
```

The application will then be available at `http://localhost:8000`.

### Using Valkey Cluster

To use `django-vcache` with a Valkey Cluster, set the `CLUSTER_MODE` option to `True` in your cache configuration. The `LOCATION` should point to one of the cluster's nodes; `valkey-py` will automatically discover the rest of the cluster nodes.

```python
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": "valkey://your-cluster-node-1:6379/1",
        "OPTIONS": {
            "CLUSTER_MODE": True,
            "socket_connect_timeout": 5,
            "retry_on_timeout": True,
        }
    },
}
```

Note that distributed locking (via `cache.lock()` and `cache.alock()`) is not supported when `CLUSTER_MODE` is enabled, as this functionality is not provided by the underlying `valkey-py` library in cluster environments. Attempting to use these methods will raise a `NotImplementedError`.
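
Outside cluster mode, the lock helpers mentioned above can guard a critical section across processes; a sketch (the lock name and `timeout` parameter are assumptions, and the exact signature may differ):

```python
from django.core.cache import cache

# Distributed lock backed by the underlying valkey-py lock primitive.
# Raises NotImplementedError when CLUSTER_MODE is enabled.
with cache.lock("report-rebuild", timeout=30):
    ...  # only one process at a time runs this block
```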

To run the development environment with Valkey Cluster enabled, use the override compose file and environment variables:

```bash
export VALKEY_URL='valkey://valkey-1:6379/1'
export VALKEY_CLUSTER_MODE='true'
docker compose -f compose.yml -f compose.cluster.yml up -d --build
```

The application will then be available at `http://localhost:8000`.

### Running Tests

To run the test suite, execute the following command:

```bash
docker compose run --rm app bash -c "python sample/manage.py test"
```

## Credits

Inspired by the excellent work of django-valkey and django-redis, but re-architected for strict resource efficiency and modern async/sync hybrid stacks.
