Metadata-Version: 2.4
Name: vfairness
Version: 0.0.1
Summary: End-to-end ML fairness library: bias detection, fairness-aware training, calibration, threshold optimization, evaluation metrics with confidence intervals, drift monitoring, CI/CD gates, and A/B testing for fairness interventions.
Project-URL: Homepage, https://validant.ai
Project-URL: Documentation, https://vfairness.validant.ai
Project-URL: Repository, https://github.com/validantai/vfairness
Author-email: Daniel Glinz <daniel@glinz.co>
License: MIT
License-File: LICENSE
Keywords: ai-fairness,ai-governance,algorithmic-fairness,bias-detection,demographic-parity,drift-monitoring,equalized-odds,fairness-calibration,fairness-metrics,ml-fairness,model-auditing,responsible-ai
Classifier: Development Status :: 1 - Planning
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown

# vfairness — AI Fairness Assessment Library

**vfairness** is a Python library that provides end-to-end fairness tooling across the entire ML pipeline — from pre-training bias detection through post-deployment monitoring. It helps data scientists and ML engineers measure whether models treat different groups fairly, identify sources of bias, apply mitigation strategies, and continuously monitor fairness in production.

> This is a placeholder release to reserve the package name on PyPI.
> The first functional release is coming soon.

## Modules

### Preprocessing
- **Bias Detection** — Historical pattern matching (43+ risk patterns across US, EU, and Swiss jurisdictions), representation bias analysis, statistical disparity testing (sketched after this list), and proxy variable identification.
- **Feature Engineering** — Fairness-aware feature transformations including correlation reduction, feature suppression, residual methods, and intersectional balancing.
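
While the library is in planning, the statistical disparity testing mentioned above can be previewed with plain pandas and SciPy. A minimal sketch, assuming a binary outcome column and a single sensitive attribute; the column names and data are illustrative, not part of the vfairness API:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# Illustrative data: a binary outcome and a sensitive attribute.
df = pd.DataFrame({
    "hired": rng.integers(0, 2, 1000),
    "gender": rng.choice(["F", "M"], 1000),
})

# Selection rate per group; a large gap is a first bias signal.
print(df.groupby("gender")["hired"].mean())

# Chi-square test of independence between group membership and outcome.
table = pd.crosstab(df["gender"], df["hired"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```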

### In-Processing
- **Loss Functions** — PyTorch-based fairness-aware losses: demographic parity (sketched after this list), equalized odds, equal opportunity, adversarial debiasing, and counterfactual fairness.
- **Constraints** — Training-time constraint enforcement via exponentiated gradient, grid search, and threshold optimization.
- **Regularizers** — Statistical parity and Hilbert-Schmidt independence criterion (HSIC) penalties.
- **Wrappers** — Scikit-learn compatible `FairClassifier` and `FairRegressor` estimators.
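
The demographic parity loss referenced above follows a common recipe: a standard task loss plus a differentiable penalty on the gap in mean predicted positive rates between groups. A minimal PyTorch sketch of that idea, not the library's actual loss classes; all names below are illustrative:

```python
import torch
import torch.nn.functional as F

def demographic_parity_penalty(logits, groups):
    """Squared gap between the groups' mean predicted positive rates."""
    probs = torch.sigmoid(logits)
    return (probs[groups == 0].mean() - probs[groups == 1].mean()) ** 2

def fair_bce_loss(logits, targets, groups, lam=1.0):
    # Task loss plus a differentiable demographic-parity penalty.
    task = F.binary_cross_entropy_with_logits(logits, targets)
    return task + lam * demographic_parity_penalty(logits, groups)

logits = torch.randn(8, requires_grad=True)
targets = torch.tensor([0., 1., 0., 1., 1., 0., 1., 0.])
groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = fair_bce_loss(logits, targets, groups, lam=0.5)
loss.backward()  # the penalty is differentiable, so it trains end to end
```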

### Post-Processing
- **Calibration** — Group-specific probability calibration (Platt scaling, isotonic regression, beta calibration, temperature scaling, histogram binning) with fairness-calibration trade-off analysis.
- **Threshold Optimization** — Single, per-group, and multi-objective threshold optimization under fairness constraints (demographic parity, equalized odds, equal opportunity, predictive parity); the per-group case is sketched after this list.
- **Reweighting** — Prediction reweighting, rejection option classification, calibrated equalization, and distribution matching.
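
The per-group case referenced above picks one decision threshold per group so that a chosen operating point, here the true positive rate, lines up across groups (the equal opportunity criterion). A minimal NumPy sketch of that recipe, with illustrative names rather than the vfairness optimizer:

```python
import numpy as np

def threshold_for_tpr(scores, labels, target_tpr):
    """Smallest threshold whose true positive rate still meets the target."""
    pos_scores = np.sort(scores[labels == 1])
    k = int(np.ceil(target_tpr * len(pos_scores)))  # positives to keep
    return pos_scores[-k] if k > 0 else np.inf

rng = np.random.default_rng(0)
scores = rng.random(1000)
labels = (scores + rng.normal(0, 0.3, 1000) > 0.5).astype(int)
groups = rng.integers(0, 2, 1000)

# One threshold per group, each targeting ~80% TPR, so the TPR gap shrinks.
thresholds = {
    g: threshold_for_tpr(scores[groups == g], labels[groups == g], 0.8)
    for g in (0, 1)
}
print(thresholds)
```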

### Evaluation
- **Fairness Metrics** — Classification, regression, and ranking metrics with bootstrap and Bayesian confidence intervals, effect sizes, and multiple testing corrections; the bootstrap recipe is sketched after this list.
- **Intersectional Analysis** — Multi-attribute group disparity analysis and automatic discovery of protected attributes and proxy features.
- **FairExplAIner** — Plain-language explanations of fairness metrics with actionable recommendations.
- **Visualization** — Plotly-based dashboards, radar charts, heatmaps, and confidence interval plots.
- **Robustness Testing** — Permutation tests, sensitivity analysis, stress testing, and subgroup audits.
- **MLOps Integration** — Native logging to MLflow and Weights & Biases, plus pytest-style fairness assertions.
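
The bootstrap confidence intervals mentioned under Fairness Metrics follow the standard resampling recipe: recompute the metric on rows drawn with replacement, then take quantiles. A minimal NumPy sketch for the demographic parity difference; the function names are illustrative, not the vfairness API:

```python
import numpy as np

def dp_difference(y_pred, groups):
    """Demographic parity difference: gap in positive prediction rates."""
    return y_pred[groups == 0].mean() - y_pred[groups == 1].mean()

def bootstrap_ci(y_pred, groups, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_pred)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample rows with replacement
        stats[b] = dp_difference(y_pred[idx], groups[idx])
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(1)
y_pred = rng.integers(0, 2, 500)
groups = rng.integers(0, 2, 500)
print(bootstrap_ci(y_pred, groups))  # interval should straddle 0 here
```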

### Operations
- **CI/CD** — Pre-training data validation, model fairness deployment gates, hierarchical checking, pytest integration (sketched after this list), and pre-commit hooks.
- **Monitoring** — Sliding-window fairness tracking, multi-scale drift detection via maximum mean discrepancy (MMD), adaptive thresholds, and prioritized alerting.
- **Reporting** — Tiered report generation (executive, operational, technical) in HTML, Markdown, and JSON with interactive Dash dashboards.
- **Experimentation** — A/B testing framework for fairness interventions with power analysis, sequential testing (SPRT), multi-objective Pareto analysis, and causal decomposition.
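
The pytest integration referenced above boils down to asserting a fairness metric against an explicit budget so the pipeline fails fast. A sketch of that pattern using plain pytest conventions; the synthetic data and the 0.10 budget are assumptions for illustration:

```python
import numpy as np

def selection_rate_gap(y_pred, groups):
    """Absolute gap in positive prediction rates between two groups."""
    return abs(y_pred[groups == 0].mean() - y_pred[groups == 1].mean())

def test_demographic_parity_gate():
    # In a real pipeline these would be loaded from an evaluation artifact.
    rng = np.random.default_rng(0)
    y_pred = rng.integers(0, 2, 1000)
    groups = rng.integers(0, 2, 1000)
    # Deployment gate: fail the build if the gap exceeds the budget.
    assert selection_rate_gap(y_pred, groups) < 0.10
```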

### Rendering
- SVG template engine with Jinja2 for generating publication-quality fairness report visuals.
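
As a flavor of what Jinja2-templated SVG output looks like, here is a minimal sketch that renders per-group selection rates as a bar chart; the template and variable names are illustrative, not the library's shipped templates:

```python
from jinja2 import Template

SVG = Template("""<svg xmlns="http://www.w3.org/2000/svg" width="220" height="120">
{% for name, rate in rates.items() %}
  <rect x="{{ loop.index0 * 100 + 20 }}" y="{{ 100 - rate * 100 }}"
        width="60" height="{{ rate * 100 }}" fill="steelblue"/>
  <text x="{{ loop.index0 * 100 + 50 }}" y="115"
        text-anchor="middle" font-size="12">{{ name }}</text>
{% endfor %}
</svg>""")

# One bar per group; bar height encodes the selection rate.
svg = SVG.render(rates={"F": 0.42, "M": 0.55})
with open("selection_rates.svg", "w") as f:
    f.write(svg)
```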

## Installation

```bash
pip install vfairness
```

Optional extras:

```bash
pip install vfairness[viz]          # matplotlib, seaborn
pip install vfairness[dashboard]    # plotly
pip install vfairness[training]     # pytorch
pip install vfairness[mlops]        # mlflow, wandb
pip install vfairness[all]          # everything
```

## Quick Start

```python
from vfairness.evaluation import FairnessAnalyzer

# y_test and y_pred come from your usual evaluation split;
# df_test["gender"] supplies the sensitive attribute for grouping.
analyzer = FairnessAnalyzer(
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=df_test["gender"],
)
report = analyzer.evaluate()
```

## Requirements

- Python 3.9+
- NumPy, Pandas, SciPy (core)

## License

MIT License — see [LICENSE](LICENSE) for details.

## Links

- Documentation: https://vfairness.validant.ai
- Repository: https://github.com/validantai/vfairness
