Metadata-Version: 2.4
Name: insurance-optimise
Version: 0.4.0
Summary: Constrained portfolio rate optimisation for UK personal lines insurance, with FCA ENBP enforcement, demand-linked objectives, and efficient frontier generation
Project-URL: Homepage, https://github.com/burning-cost/insurance-optimise
Project-URL: Repository, https://github.com/burning-cost/insurance-optimise
Project-URL: Issues, https://github.com/burning-cost/insurance-optimise/issues
Project-URL: Documentation, https://github.com/burning-cost/insurance-optimise#readme
Author-email: Burning Cost <pricing.frontier@gmail.com>
License: MIT
License-File: LICENSE
Keywords: DML,GIPP,PS21/11,actuarial,causal inference,consumer duty,conversion,demand elasticity,enbp,insurance,loss ratio,optimisation,personal lines,portfolio,pricing,rate change,retention,slsqp
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Financial and Insurance Industry
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Office/Business :: Financial
Classifier: Topic :: Scientific/Engineering :: Mathematics
Requires-Python: >=3.10
Requires-Dist: numpy>=1.24
Requires-Dist: pandas>=2.0
Requires-Dist: polars>=1.0
Requires-Dist: pyarrow>=23.0.1
Requires-Dist: scikit-learn>=1.3
Requires-Dist: scipy>=1.10
Requires-Dist: statsmodels>=0.14.5
Provides-Extra: all
Requires-Dist: catboost>=1.2; extra == 'all'
Requires-Dist: doubleml>=0.7; extra == 'all'
Requires-Dist: econml>=0.15; extra == 'all'
Requires-Dist: joblib>=1.2; extra == 'all'
Requires-Dist: lifelines>=0.27; extra == 'all'
Requires-Dist: matplotlib>=3.6; extra == 'all'
Provides-Extra: catboost
Requires-Dist: catboost>=1.2; extra == 'catboost'
Provides-Extra: causal
Requires-Dist: catboost>=1.2; extra == 'causal'
Requires-Dist: econml>=0.15; extra == 'causal'
Provides-Extra: dev
Requires-Dist: matplotlib>=3.6; extra == 'dev'
Requires-Dist: pyarrow>=12.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.0; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Provides-Extra: dml
Requires-Dist: catboost>=1.2; extra == 'dml'
Requires-Dist: doubleml>=0.7; extra == 'dml'
Provides-Extra: parallel
Requires-Dist: joblib>=1.2; extra == 'parallel'
Provides-Extra: pareto
Requires-Dist: joblib>=1.2; extra == 'pareto'
Requires-Dist: matplotlib>=3.6; extra == 'pareto'
Provides-Extra: pareto-nsga2
Requires-Dist: pymoo>=0.6.1; extra == 'pareto-nsga2'
Provides-Extra: plot
Requires-Dist: matplotlib>=3.6; extra == 'plot'
Provides-Extra: survival
Requires-Dist: lifelines>=0.27; extra == 'survival'
Description-Content-Type: text/markdown

# insurance-optimise

[![PyPI](https://img.shields.io/pypi/v/insurance-optimise)](https://pypi.org/project/insurance-optimise/)
[![Python](https://img.shields.io/pypi/pyversions/insurance-optimise)](https://pypi.org/project/insurance-optimise/)
[![Tests](https://img.shields.io/badge/tests-passing-brightgreen)]()
[![License](https://img.shields.io/badge/license-MIT-blue)](LICENSE)

> 💬 Questions or feedback? Start a [Discussion](https://github.com/burning-cost/insurance-optimise/discussions). Found it useful? A ⭐ helps others find it.

**Flat loading on a price comparison website leaves money in every segment where your elasticity varies. This library finds the right multiplier for each risk.**

You have a pricing model. It tells you the right technical price for each risk. But "technically correct" isn't the only constraint. You also have:

- FCA PS21/5 (ENBP): renewal premiums cannot exceed what a new customer would be quoted — this is a hard per-policy pricing ceiling
- Consumer Duty (PS22/9): a principles-based governance obligation to demonstrate fair value across customer outcomes — distinct from ENBP and not a per-policy pricing ceiling
- A target loss ratio you're trying to hit
- A retention floor you can't fall below without the underwriting team getting anxious
- Rate-change limits — you can't shock customers with 40% increases even if the model says so

The question is: what set of price multipliers maximises profit subject to all of these constraints simultaneously?

That's what this library solves.
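
In symbols (our notation, not the library's API): with per-policy multipliers $m_i$, technical prices $p_i$, expected loss costs $c_i$, and a demand model $x_i(m_i)$, the problem is roughly

```math
\begin{aligned}
\max_{m}\quad & \sum_{i=1}^{N} x_i(m_i)\,\bigl(m_i p_i - c_i\bigr) \\
\text{s.t.}\quad & \frac{\sum_i x_i(m_i)\,c_i}{\sum_i x_i(m_i)\,m_i p_i} \le \mathrm{LR}_{\max},
\qquad \frac{\sum_i x_i(m_i)}{\sum_i x_i(1)} \ge r_{\min}, \\
& |m_i - 1| \le \Delta_{\max}, \qquad
m_i p_i \le (1 - \beta)\,\mathrm{ENBP}_i \ \ \text{(renewals)}, \qquad
m_i p_i \ge c_i,
\end{aligned}
```

where $\beta$ is the `enbp_buffer` and the final constraint is the technical floor. GWP bounds enter the same way as portfolio-level inequalities.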

## Why bother

Benchmarked against naive logistic regression and flat pricing on a synthetic UK motor PCW quote panel — 50,000 quotes, true price elasticity −2.0, confounded assignment. Full numbers are in the Performance section below; the short version:

- Naive logistic regression conflates risk and price effects and ships no confidence interval. The DML `ElasticityEstimator` is Neyman-orthogonal and reports a 95% CI, though its point estimate is only as good as the exogenous price variation in the book.
- Even with an imprecise elasticity estimate, demand-curve-aware optimisation beat flat loading in every benchmark segment (mean profit lift +143.8%), because the shape of the demand curve pushes prices in the right direction.

Segments with heterogeneous elasticities (young drivers vs mature drivers on PCWs) are systematically mispriced by flat loading. The optimiser captures revenue by pricing to each segment's actual demand curve, subject to hard FCA constraints.

▶ [Run on Databricks](https://github.com/burning-cost/burning-cost-examples/blob/main/notebooks/portfolio_optimisation_demo.py)

---

**Read more:** [Your Rate Changes Are Leaving Money on the Table](https://burning-cost.github.io/2026/03/08/insurance-optimise.html) — why manual scenario-in-a-spreadsheet pricing is guaranteed to be suboptimal, and how constrained optimisation fixes it.

## What it does

- Maximise expected profit (or minimise combined ratio) subject to any combination of:
  - **ENBP** constraint — FCA PS21/5 hard ceiling per renewal policy
  - **Loss ratio** bounds (deterministic or Branda 2014 stochastic formulation)
  - **Volume retention** floor
  - **GWP** bounds
  - **Maximum rate change** per policy
  - **Technical floor** — price >= cost
- Analytical gradients throughout — fast enough for N=10,000 policies in SLSQP
- Efficient frontier sweep — show the pricing team the profit-retention trade-off curve
- Scenario mode — run under pessimistic/central/optimistic elasticity assumptions
- JSON audit trail — every run produces evidence of ENBP enforcement for FCA scrutiny

## Installation

```bash
pip install insurance-optimise
```
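
The core install stays lean; the extras declared in the package metadata pull in heavier dependencies only where needed:

```bash
pip install "insurance-optimise[plot]"          # matplotlib for frontier plots
pip install "insurance-optimise[pareto]"        # joblib + matplotlib for frontier sweeps
pip install "insurance-optimise[pareto-nsga2]"  # pymoo NSGA-II frontier
pip install "insurance-optimise[dml]"           # doubleml + catboost elasticity tooling
pip install "insurance-optimise[causal]"        # econml + catboost
pip install "insurance-optimise[survival]"      # lifelines
pip install "insurance-optimise[all]"           # all of the above except pymoo
```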

## Quick start

```python
import numpy as np
import polars as pl
from insurance_optimise import PortfolioOptimiser, ConstraintConfig

# Synthetic UK motor renewal book — 500 policies
# In production, these come from your technical model and elasticity estimator
rng = np.random.default_rng(42)
n = 500

technical_price   = rng.uniform(300, 1200, n)          # GLM output
expected_loss_cost = technical_price * rng.uniform(0.55, 0.75, n)  # expected claims
p_renewal         = rng.uniform(0.70, 0.95, n)          # renewal probability at current price
price_elasticity  = rng.uniform(-2.5, -0.8, n)          # from insurance-elasticity
is_renewal        = rng.choice([True, False], n, p=[0.7, 0.3])
# ENBP: FCA PS21/5 — renewal premium cannot exceed new business quote
enbp              = technical_price * rng.uniform(1.05, 1.25, n)  # must exceed technical_price

config = ConstraintConfig(
    lr_max=0.70,
    retention_min=0.85,
    max_rate_change=0.20,
    enbp_buffer=0.01,   # 1% safety margin below ENBP
    technical_floor=True,
)

opt = PortfolioOptimiser(
    technical_price=technical_price,
    expected_loss_cost=expected_loss_cost,
    p_demand=p_renewal,
    elasticity=price_elasticity,
    renewal_flag=is_renewal,
    enbp=enbp,
    constraints=config,
)

result = opt.optimise()

print(result)
# OptimisationResult(CONVERGED, N=500, profit=..., gwp=..., lr=0.622)

print(result.profit)         # shorthand alias for result.expected_profit

# Attach optimal prices back to your data
df = pl.DataFrame({
    "technical_price":    technical_price.tolist(),
    "optimal_multiplier": result.multipliers.tolist(),
    "optimal_premium":    result.new_premiums.tolist(),
})

# Save audit trail for FCA
result.save_audit("renewal_run_2025_q1_audit.json")
```

## Efficient frontier

The frontier tells your pricing team: "if we're willing to lose X points of retention, we gain Y points of profit margin." This is the conversation that actually needs to happen in pricing reviews.

```python
from insurance_optimise import EfficientFrontier

frontier = EfficientFrontier(
    opt,
    sweep_param="volume_retention",
    sweep_range=(0.80, 0.96),
    n_points=15,
)
result = frontier.run()
print(result.data)  # DataFrame: epsilon, profit, gwp, loss_ratio, retention

frontier.plot()  # matplotlib
```

## Scenario mode

Elasticity estimates carry uncertainty. The simplest honest approach is to run under three scenarios and report the spread:

```python
result_scenarios = opt.optimise_scenarios(
    elasticity_scenarios=[
        price_elasticity * 1.25,   # pessimistic (customers more price-sensitive)
        price_elasticity,          # central estimate
        price_elasticity * 0.75,   # optimistic (customers less price-sensitive)
    ],
    scenario_names=["pessimistic", "central", "optimistic"],
)
print(result_scenarios.summary())
# scenario     converged    profit    gwp    loss_ratio
# pessimistic  True         1.1M      8.5M   0.692
# central      True         1.3M      8.8M   0.681
# optimistic   True         1.5M      9.1M   0.672
```

## Constraint reference

| Constraint | Config parameter | Notes |
|---|---|---|
| FCA ENBP | `enbp_buffer=0.01` | Applied as upper bound on renewal multiplier |
| Max LR | `lr_max=0.70` | Deterministic or stochastic (Branda 2014) |
| Min LR | `lr_min=0.55` | Prevents unsustainable cross-subsidies |
| Min GWP | `gwp_min=50_000_000` | Portfolio size floor |
| Max GWP | `gwp_max=100_000_000` | Optional ceiling |
| Min retention | `retention_min=0.85` | Renewal book only |
| Max rate change | `max_rate_change=0.20` | Per policy, both directions |
| Technical floor | `technical_floor=True` | Enforces price >= cost |
| Stochastic LR | `stochastic_lr=True` | Requires `claims_variance` input |
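
The stochastic LR option cites Branda (2014), which rests on a one-sided Chebyshev (Cantelli) bound. As a sketch of the idea (the library's exact formulation may differ): requiring the loss ratio to exceed `lr_max` with probability at most $\alpha$ is guaranteed by the deterministic surrogate

```math
\Pr\bigl(\mathrm{LR} > \mathrm{LR}_{\max}\bigr) \le \alpha
\quad \Longleftarrow \quad
\mu_{\mathrm{LR}} + \sigma_{\mathrm{LR}}\,\sqrt{\frac{1-\alpha}{\alpha}} \le \mathrm{LR}_{\max},
```

with $\mu_{\mathrm{LR}}$ and $\sigma_{\mathrm{LR}}$ built from the expected loss costs and the `claims_variance` input. The surrogate is distribution-free and conservative: it holds for any claims distribution with those first two moments.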

## Demand models

Two built-in demand models:

**Log-linear (default):** `x(m) = x0 * m^epsilon`

Constant price elasticity. Works well with outputs from `insurance-elasticity`. Demand is always positive. Gradient is analytic and fast.

> **Valid range:** Appropriate for price changes in the ±10–15% range typical of UK personal lines annual renewals. Extrapolation beyond ±20% produces unrealistically large demand responses given the constant-elasticity assumption.
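
The log-linear model and its analytic gradient fit in a few lines of NumPy — a standalone sketch of the formula above, not the library's internal code:

```python
import numpy as np

def demand_loglinear(m, x0, eps):
    """x(m) = x0 * m**eps — constant-elasticity demand (eps < 0)."""
    return x0 * np.asarray(m, dtype=float) ** eps

def demand_loglinear_grad(m, x0, eps):
    """Analytic derivative dx/dm = eps * x0 * m**(eps - 1)."""
    return eps * x0 * np.asarray(m, dtype=float) ** (eps - 1)

x0  = np.array([0.90, 0.85, 0.80])   # demand at the current price (m = 1)
eps = np.array([-1.5, -2.0, -2.5])   # per-policy elasticities
m   = np.array([0.95, 1.00, 1.10])   # candidate multipliers

x = demand_loglinear(m, x0, eps)     # demand after repricing
# With eps = -2.5, a +10% price move cuts demand by roughly 21% (1.1**-2.5 ≈ 0.79)
```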

**Logistic:** `x(m) = sigmoid(alpha + beta * m * tc)`, where `tc` is the technical price, so the logit is linear in the charged premium `m * tc`

Demand is bounded in (0,1). More appropriate for renewal probabilities when you want them to stay interpretable as probabilities. Requires conversion from elasticity estimate to logistic parameters.
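
One way to back out `alpha` and `beta` so the logistic curve reproduces an observed renewal probability `x0` and a local elasticity `eps` at the current price (`m = 1`) — a sketch of the conversion step, not necessarily the library's own routine:

```python
import numpy as np

def logistic_params_from_elasticity(x0, eps, tc):
    """Solve for (alpha, beta) in x(m) = sigmoid(alpha + beta * m * tc)
    such that x(1) = x0 and d ln x / d ln m at m = 1 equals eps."""
    # Point elasticity of a sigmoid at m = 1 is beta * tc * (1 - x0)
    beta = eps / (tc * (1.0 - x0))
    # Match the level: logit(x0) = alpha + beta * tc
    alpha = np.log(x0 / (1.0 - x0)) - beta * tc
    return alpha, beta

def demand_logistic(m, alpha, beta, tc):
    return 1.0 / (1.0 + np.exp(-(alpha + beta * m * tc)))

alpha, beta = logistic_params_from_elasticity(x0=0.85, eps=-1.5, tc=600.0)
```

Unlike the log-linear model, the fitted curve flattens at both ends, so even large extrapolated rate changes keep demand inside (0, 1).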

## Solver details

Primary solver is SLSQP via `scipy.optimize.minimize`. Analytical gradients are provided for the objective and all constraints — without them, SLSQP uses finite differences (2N extra evaluations per iteration, prohibitively slow for large N).

SLSQP sometimes reports success without ever moving from the initial point. The library guards against this by using `ftol=1e-9` (tighter than SciPy's default of `1e-6`) and by verifying constraint satisfaction after the solve. If you see `converged=False`, the solution may still be usable, but treat it with caution.

For N > 5,000, consider segment aggregation before optimising.
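
The pattern — an analytic `jac` on the objective and on every constraint — looks like this in plain `scipy.optimize` terms (a toy two-policy instance with made-up numbers, not the library's internal code):

```python
import numpy as np
from scipy.optimize import minimize

x0_demand = np.array([0.90, 0.80])    # retention at current price (m = 1)
eps       = np.array([-1.5, -2.5])    # price elasticities
p         = np.array([500.0, 800.0])  # technical prices
c         = np.array([325.0, 560.0])  # expected loss costs

def demand(m):
    return x0_demand * m ** eps       # log-linear demand model

def neg_profit(m):
    return -np.sum(demand(m) * (m * p - c))

def neg_profit_jac(m):
    x, dx = demand(m), eps * x0_demand * m ** (eps - 1)
    return -(dx * (m * p - c) + x * p)   # product rule, per policy

# Retention floor as an inequality constraint with its own analytic jacobian
cons = [{"type": "ineq",
         "fun": lambda m: np.mean(demand(m)) - 0.80,
         "jac": lambda m: eps * x0_demand * m ** (eps - 1) / m.size}]

res = minimize(neg_profit, x0=np.ones(2), jac=neg_profit_jac,
               method="SLSQP", bounds=[(0.8, 1.2)] * 2,  # +/-20% rate-change cap
               constraints=cons, options={"ftol": 1e-9})
```

With finite differences instead of `jac`, every gradient costs an extra function evaluation per variable, which is where the cost blows up at large N.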

## Regulatory context

ENBP (PS21/5) and Consumer Duty (PS22/9) are distinct obligations. ENBP is a hard per-policy pricing ceiling: renewal premiums must not exceed the equivalent new business price. Consumer Duty is a principles-based governance obligation requiring firms to demonstrate fair value across customer outcomes — it does not set a per-policy price ceiling but requires documented governance of pricing practices.

This library enforces ENBP at the code level. The JSON audit trail records the constraint configuration, the solution, and whether ENBP was binding for each renewal policy. You can show this to the FCA.
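
Mechanically, ENBP reduces to a per-policy upper bound on the renewal multiplier. The array names below are hypothetical; per the constraint table, the library applies the same bound internally via `enbp_buffer`:

```python
import numpy as np

current_premium = np.array([520.0, 780.0, 430.0])
enbp            = np.array([560.0, 790.0, 480.0])  # equivalent new business quotes
is_renewal      = np.array([True, True, False])
enbp_buffer     = 0.01                             # 1% safety margin below the ceiling

# Renewals are capped at (1 - buffer) * ENBP; new business is unconstrained
upper = np.where(is_renewal, (1.0 - enbp_buffer) * enbp / current_premium, np.inf)

proposed = np.array([1.08, 1.02, 1.15])      # multipliers proposed by the optimiser
capped   = np.minimum(proposed, upper)       # ENBP-compliant multipliers
binding  = is_renewal & (proposed >= upper)  # per-policy evidence for the audit trail
```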

Commercial tools (Akur8, WTW Radar, Earnix) do not expose their optimisation methodology. This library does.

## Pipeline position

```
[Technical model (GLM/GBM)]
        ↓ technical_price, expected_loss_cost
[insurance-elasticity]
        ↓ p_demand, elasticity
[insurance-optimise]  ← this library
        ↑ enbp (new business quote — from rating engine)
        ↓ optimal_multiplier per policy
[Rating engine / ratebook update]
```

## Related Libraries

| Library | What it does |
|---------|-------------|
| [insurance-elasticity](https://github.com/burning-cost/insurance-elasticity) | Causal price elasticity and demand modelling — provides the `p_demand` and `elasticity` inputs this library requires |
| [insurance-demand](https://github.com/burning-cost/insurance-demand) | Conversion and retention modelling — demand curves from this library are the primary input to the optimiser |
| [insurance-survival](https://github.com/burning-cost/insurance-survival) | Survival-adjusted CLV — use CLV outputs to inform retention constraints rather than setting them arbitrarily |
| [insurance-deploy](https://github.com/burning-cost/insurance-deploy) | Model deployment — optimised rates flow into the champion/challenger deployment framework |
| [insurance-causal-policy](https://github.com/burning-cost/insurance-causal-policy) | SDID causal evaluation — after running the optimiser, use this to prove the rate change achieved what it was supposed to |
| [insurance-monitoring](https://github.com/burning-cost/insurance-monitoring) | Model monitoring — the optimised strategy will degrade as the portfolio drifts; this library catches when it needs refreshing |

[All Burning Cost libraries →](https://burning-cost.github.io)

## Source repos

This package consolidates two previously separate libraries:

- `insurance-optimise` — core portfolio optimiser (originally v0.1.x), now carrying the demand subpackage
- `insurance-dro` — archived; scenario-based robust optimisation absorbed into `ScenarioObjective` and `CVaRConstraint` in this package. Full Distributionally Robust Optimisation (Wasserstein DRO) was evaluated and deprioritised in favour of the simpler scenario sweep — see the design rationale in `scenarios.py`.

---

## Performance

Benchmarked on synthetic UK motor PCW data — 50,000 quotes, true population-average price elasticity −2.0. Confounding is explicit: high-risk customers face higher prices via the underwriting model and have lower price sensitivity (fewer alternative quotes on PCW). Full script: `notebooks/benchmark_demand.py`.

### Elasticity estimation: DML vs naive logistic regression

| Method | Estimate | Absolute bias | Relative bias | 95% CI |
|--------|----------|---------------|---------------|--------|
| Naive logistic (price only) | −3.43 | 1.43 | 71.7% | none |
| Naive logistic (full controls) | −1.21 | 0.79 | 39.6% | none |
| DML + CatBoost (5-fold PLR) | −4.03 | 2.03 | 101.3% | [−5.65, −2.40] |

**Honest interpretation:** On this synthetic dataset, the DML estimator did not outperform naive logistic regression on point estimate accuracy — it returned −4.03 vs. the true −2.0, a larger absolute bias than the naive-full-controls logistic. The naive full-controls estimate of −1.21 was closer to truth in absolute terms.

The DML result is sensitive to the quasi-experimental variation available: in this DGP, the price variation comes from small quarterly loading cycles (std of log_price_ratio = 0.045). With such narrow treatment variation, the DML cross-fitting step partials out most signal along with the confounding. The 95% CI is wide (±1.6) and the sensitivity analysis confirms the estimate is not robust to small amounts of residual confounding (robustness value RV = 2.1%).

**When DML adds value:** When there is stronger exogenous price variation — genuine A/B test assignment, policy cycles creating larger treatment spread, or natural experiments in rate changes. With log_price_ratio std ≥ 0.10, the DML estimate converges closer to truth. With std < 0.05, naive-full-controls will often have lower MSE despite having no coverage guarantee.

DML fit time: 13s on 50,000 quotes (5 folds, CatBoost nuisance models).

### Pricing lift vs flat loading

Even with a biased elasticity estimate, demand-curve-aware pricing outperforms flat loading. Using the DML estimate (−4.03) to set prices per segment:

| Segment | Flat loading profit (£/quote) | DML-optimised profit (£/quote) | Lift |
|---------|------------------------------|-------------------------------|------|
| Young + High Risk | −31.79 | +14.39 | +145% |
| Young + Standard Risk | −22.01 | +9.64 | +144% |
| Mid-age + Standard Risk | −12.21 | +5.31 | +144% |
| Mid-age + Low Risk | −10.06 | +4.46 | +144% |
| Senior + Low Risk | −11.60 | +4.87 | +142% |

Mean profit lift across segments: **+143.8%**. Negative flat-loading profit per quote reflects that a 10% loading is not enough to cover expected losses at market conversion rates — the optimiser finds a loss-minimising price given the demand curve. Gap vs oracle pricing (true elasticity): 78%.

**When to use:** New business pricing on PCWs where flat loadings are the current practice. The demand-curve optimiser captures value even with imprecise elasticity estimates, because the shape of the demand curve constrains the price in the right direction. The benefit is largest when elasticity varies materially across segments (young vs. mature drivers).

**When NOT to use:** When the book has no genuine price variation for estimation. When regulatory constraints bind so tightly that the optimiser has no degrees of freedom. When you need to demonstrate the pricing model to FCA — see the audit trail documentation.

## References

- FCA PS21/5 (ENBP): https://www.fca.org.uk/publication/policy/ps21-5.pdf
- Branda (2014): stochastic LR constraint via one-sided Chebyshev inequality
- Emms & Haberman (2005): theoretical foundation for demand-linked insurance pricing
- Spedicato, Dutang & Petrini (2018): ML-then-optimise pipeline in practice



## Licence

MIT

---

**Need help implementing this in production?** [Talk to us](https://burning-cost.github.io/work-with-us/).
