Metadata-Version: 2.4
Name: backoff-simulator
Version: 0.1.1
Summary: Simulates backoff strategies for contending writes over the network.
Author: Cosmo Grant
Author-email: Cosmo Grant <cosmodgrant@gmail.com>
License-Expression: MIT
License-File: LICENSE
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: OS Independent
Requires-Dist: matplotlib>=3.10.8
Requires-Dist: tabulate>=0.10.0
Requires-Python: >=3.12
Project-URL: Homepage, https://github.com/cosmo-grant/backoff-simulator
Description-Content-Type: text/markdown

# Backoff Simulator

Many clients request over the network that the server write a particular value.
If writes contend, only one commits.
Clients back off and retry until their write commits.

You want to keep low:
- the time until all writes commit (**duration**)
- the total number of requests (**work**)

You can keep the duration low by making clients retry rapidly.
But then writes often contend, so the work is high.

You can keep the work low by making clients retry sporadically.
But then the server is often idle, so the duration is high.

So there’s a **tradeoff**.

The **cost** is a combined measure of duration and work.
It's set by a work-to-duration exchange rate:
how much you weight work compared to duration.
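As a sketch of that idea (the function name and the linear form are my illustration, not necessarily how this package defines cost): a higher exchange rate penalizes work more heavily relative to duration.

```python
def cost(duration: float, work: float, exchange_rate: float) -> float:
    """Combine duration and total requests into one number.

    exchange_rate is the work-to-duration exchange rate:
    how many units of duration one request is "worth".
    (Illustrative linear form, not necessarily the package's exact formula.)
    """
    return duration + exchange_rate * work


# At rate 0.5, 100 requests cost as much as 50 time units:
print(cost(duration=10.0, work=100, exchange_rate=0.5))  # → 60.0
```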

There's a well-known AWS blog post and simulation script about this:

- https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter
- https://github.com/aws-samples/aws-arch-backoff-simulator

This simulator is based on those,
but I re-implemented the simulation and added a few bells and whistles.
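For reference, the strategy the blog post recommends is "Full Jitter": cap exponential backoff, then pick a uniformly random delay below the cap. A minimal sketch (parameter names and defaults are mine, not this package's API):

```python
import random


def full_jitter_delay(attempt: int, base: float = 0.1, cap: float = 10.0) -> float:
    """Delay before retry number `attempt` (0-indexed), per "Full Jitter":

        sleep = random between 0 and min(cap, base * 2 ** attempt)

    The randomness spreads retries out so contending clients
    are less likely to collide again on the next attempt.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

Without the jitter (i.e. plain exponential backoff), every client that collided once retries on the same schedule and tends to collide again; that is the effect the simulations measure.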

The blog post is clear:

> The return on implementation complexity of using jittered backoff is huge,
> and it should be considered a standard approach for remote clients.

But is that true of **your use case**?

**Let's explore.**
