Metadata-Version: 2.4
Name: fairlearn-fhe
Version: 0.2.1
Summary: Drop-in encrypted (CKKS) Fairlearn metrics. Same API surface; ciphertext arithmetic via TenSEAL.
Author-email: Bader Alissaei <b@vaultbytes.com>
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/BAder82t/fairlearn-fhe
Project-URL: Repository, https://github.com/BAder82t/fairlearn-fhe
Project-URL: Issues, https://github.com/BAder82t/fairlearn-fhe/issues
Project-URL: Documentation, https://bader82t.github.io/fairlearn-fhe/
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Security :: Cryptography
Classifier: Topic :: Scientific/Engineering
Classifier: Development Status :: 3 - Alpha
Requires-Python: <3.13,>=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.26
Requires-Dist: fairlearn>=0.10
Requires-Dist: scikit-learn>=1.3
Requires-Dist: pandas>=2.0
Requires-Dist: tenseal>=0.3.14
Provides-Extra: openfhe
Requires-Dist: openfhe>=1.2; extra == "openfhe"
Provides-Extra: signing
Requires-Dist: cryptography>=42; extra == "signing"
Provides-Extra: dev
Requires-Dist: pytest>=8; extra == "dev"
Requires-Dist: pytest-cov>=5; extra == "dev"
Requires-Dist: ruff>=0.8; extra == "dev"
Requires-Dist: build>=1.2; extra == "dev"
Requires-Dist: twine>=5; extra == "dev"
Provides-Extra: docs
Requires-Dist: mkdocs>=1.6; extra == "docs"
Requires-Dist: mkdocs-material>=9.5; extra == "docs"
Requires-Dist: mkdocstrings[python]>=0.26; extra == "docs"
Dynamic: license-file

# fairlearn-fhe

Drop-in encrypted Fairlearn metrics. Identical API surface; ciphertext arithmetic over CKKS via TenSEAL.

`fairlearn-fhe` is an early-stage project maintained at
<https://github.com/BAder82t/fairlearn-fhe>.

```python
# plaintext
from fairlearn.metrics import demographic_parity_difference
disp = demographic_parity_difference(y_true, y_pred, sensitive_features=A)

# encrypted (one import change)
from fairlearn_fhe.metrics import demographic_parity_difference
from fairlearn_fhe import build_context, encrypt
ctx = build_context()
y_p_enc = encrypt(ctx, y_pred)
disp = demographic_parity_difference(y_true, y_p_enc, sensitive_features=A)
```

`disp` is numerically equivalent to the plaintext result within CKKS noise tolerance (absolute error `< 1e-4` with default settings).
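For reference, the quantity being matched can be computed directly in NumPy. This is a hedged sketch of Fairlearn's plaintext definition (max minus min per-group selection rate) on toy data; the variable names are illustrative, not part of the package API.

```python
import numpy as np

# Toy audit data: binary predictions and a sensitive feature with two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
A      = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Per-group selection rate: P(y_pred = 1 | group).
rates = {g: y_pred[A == g].mean() for g in np.unique(A)}

# Demographic parity difference: max - min selection rate over groups.
dp_diff = max(rates.values()) - min(rates.values())
print(rates, dp_diff)  # {'a': 0.75, 'b': 0.25} 0.5
```

The encrypted path should reproduce `dp_diff` to within the stated tolerance.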

## Trust models

Two modes are supported. The default follows the regaudit-fhe convention (encrypted predictions only); the second also encrypts the sensitive-feature masks.

### Mode A — encrypted predictions, plaintext sensitive features (default)

- **Encrypted:** `y_pred`.
- **Plaintext:** `y_true`, `sensitive_features`, group counts.
- **Cost:** one ct×pt multiply + slot-sum per group → depth 1.
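In plaintext-analog terms, Mode A's per-group circuit is an elementwise product with a 0/1 group mask followed by a sum. A minimal NumPy sketch of that shape (illustrative names, not the package API):

```python
import numpy as np

y_pred = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
A      = np.array(["a", "a", "b", "b", "b", "b"])

rates = {}
for g in np.unique(A):
    mask = (A == g).astype(float)       # plaintext mask (Mode A)
    selected = (y_pred * mask).sum()    # one ct×pt multiply + slot-sum → depth 1
    n_g = mask.sum()                    # plaintext group count
    rates[g] = selected / n_g           # per-group selection rate
```

In the encrypted version only `y_pred` is a ciphertext; the mask, the group count, and the final division stay in plaintext.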

### Mode B — fully-encrypted predictions and sensitive features

```python
from fairlearn_fhe import build_context, encrypt, encrypt_sensitive_features
from fairlearn_fhe.metrics import demographic_parity_difference

ctx = build_context()
y_pred_enc = encrypt(ctx, y_pred)
sf_enc     = encrypt_sensitive_features(ctx, sensitive_features, y_true=y_true)

disp = demographic_parity_difference(y_true, y_pred_enc, sensitive_features=sf_enc)
```

- **Encrypted:** `y_pred`, the per-row group-membership masks.
- **Plaintext (auditor metadata):** group counts, per-group positive/negative counts (passed via `y_true=` at encryption time).
- **Cost:** ct×ct + ct×pt + slot-sum per group → depth 2.

`y_true` remains plaintext in both modes (it is the auditor's ground truth). The denominators of TPR/FPR-style metrics — per-group positive/negative counts — are always revealed: there is no fairness signal without them.

Per-group rates are decrypted at the audit boundary; the final aggregation (`max`, `min`, ratio, difference) runs on those K plaintext scalars, where K is the number of sensitive groups.
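The post-decryption step is ordinary scalar arithmetic. A sketch with illustrative values for K = 3 groups:

```python
# After decryption: K per-group selection rates as plain Python floats.
rates = [0.42, 0.58, 0.50]  # illustrative values, K = 3

dp_difference = max(rates) - min(rates)  # demographic_parity_difference
dp_ratio      = min(rates) / max(rates)  # demographic_parity_ratio
```

Nothing here touches ciphertexts, so this stage adds no multiplicative depth.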

## Supported metrics

| Plaintext name | Encrypted? | Mechanism |
| --- | --- | --- |
| `selection_rate` | yes | sum(y_pred·mask)/n_g |
| `true_positive_rate` | yes | sum(y_pred·y_true·mask)/sum(y_true·mask) |
| `true_negative_rate` | yes | sum((1-y_pred)·(1-y_true)·mask)/sum((1-y_true)·mask) |
| `false_positive_rate` | yes | sum(y_pred·(1-y_true)·mask)/sum((1-y_true)·mask) |
| `false_negative_rate` | yes | sum((1-y_pred)·y_true·mask)/sum(y_true·mask) |
| `mean_prediction` | yes | sum(y_pred·mask)/n_g |
| `demographic_parity_difference` | yes | max-min selection_rate over groups |
| `demographic_parity_ratio` | yes | min/max selection_rate over groups |
| `equalized_odds_difference` | yes | max(tpr_diff, fpr_diff) |
| `equalized_odds_ratio` | yes | min(tpr_ratio, fpr_ratio) |
| `equal_opportunity_difference` | yes | tpr max-min |
| `equal_opportunity_ratio` | yes | tpr min/max |

Plus `MetricFrame.fhe()` returning an `EncryptedMetricFrame`.
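The TPR/FPR mechanisms in the table are sums of elementwise products divided by the revealed per-group counts. A hedged NumPy sketch of `equalized_odds_difference` assembled from those formulas (toy data, illustrative names):

```python
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0])
A      = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

tpr, fpr = {}, {}
for g in np.unique(A):
    mask = (A == g).astype(float)
    # Numerators: sums of elementwise products (encrypted in Modes A/B).
    # Denominators: per-group positive/negative counts (always revealed).
    tpr[g] = (y_pred * y_true * mask).sum() / (y_true * mask).sum()
    fpr[g] = (y_pred * (1 - y_true) * mask).sum() / ((1 - y_true) * mask).sum()

tpr_diff = max(tpr.values()) - min(tpr.values())
fpr_diff = max(fpr.values()) - min(fpr.values())
eo_diff = max(tpr_diff, fpr_diff)  # equalized_odds_difference
```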

## Backends

Two CKKS backends share a single API:

```python
from fairlearn_fhe import build_context

ctx_tenseal = build_context(backend="tenseal")  # default; pip-installable
ctx_openfhe = build_context(backend="openfhe")  # native OpenFHE backend, opt-in
```

Benchmarked on n=1024, 3 sensitive groups, depth-6 circuit:

| backend | ctx build | encrypt | dp_diff | dp abs err | eo_diff | eo abs err |
|---|---|---|---|---|---|---|
| tenseal | 888 ms | 7.5 ms | 284 ms | 1e-7 | 562 ms | 2e-7 |
| openfhe | 321 ms | 13.5 ms | 505 ms | 2e-10 | 1015 ms | 4e-11 |

On the included benchmark, OpenFHE gives lower numeric error; TenSEAL is faster
per metric and ships via pip on every supported platform.

## Install

```bash
pip install fairlearn-fhe            # tenseal backend
pip install "fairlearn-fhe[openfhe]" # add openfhe backend (requires a C++ build)
pip install "fairlearn-fhe[signing]" # add Ed25519 envelope signing helpers
```

(The quotes around the extras keep shells such as zsh from treating the brackets as glob patterns.)

Verify an audit envelope without importing an FHE backend:

```bash
fairlearn-fhe-verify envelope.json
```

## License

Apache-2.0. Compatible with Fairlearn (MIT).
