Metadata-Version: 2.4
Name: fairhealth
Version: 0.1.0
Summary: Trustworthy Healthcare AI: federated learning, fairness auditing, and explainability for clinical settings
Author-email: Farjana Yesmin <farjanayesmin76@gmail.com>
License: MIT License
        
        Copyright (c) 2026 Farjana Yesmin
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Project-URL: Homepage, https://github.com/Farjana-Yesmin/fairhealth
Project-URL: Documentation, https://fairhealth.readthedocs.io
Project-URL: Repository, https://github.com/Farjana-Yesmin/fairhealth
Project-URL: Issues, https://github.com/Farjana-Yesmin/fairhealth/issues
Keywords: healthcare,fairness,federated-learning,explainable-ai,trustworthy-ai,bangladesh,maternal-health,differential-privacy
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Healthcare Industry
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Medical Science Apps.
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.24
Requires-Dist: pandas>=2.0
Requires-Dist: scikit-learn>=1.3
Requires-Dist: xgboost>=1.7
Requires-Dist: matplotlib>=3.7
Requires-Dist: scipy>=1.10
Requires-Dist: shap>=0.42
Requires-Dist: scikit-fuzzy>=0.4
Provides-Extra: federated
Requires-Dist: tenseal>=0.3.14; extra == "federated"
Provides-Extra: nlp
Requires-Dist: transformers>=4.35; extra == "nlp"
Requires-Dist: nltk>=3.8; extra == "nlp"
Provides-Extra: dev
Requires-Dist: pytest>=7.4; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"
Requires-Dist: twine>=4.0; extra == "dev"
Requires-Dist: build>=0.10; extra == "dev"
Dynamic: license-file

# FairHealth

**Trustworthy Healthcare AI — built from peer-reviewed research.**

[![PyPI version](https://badge.fury.io/py/fairhealth.svg)](https://pypi.org/project/fairhealth/)
[![Python 3.9+](https://img.shields.io/badge/python-3.9%2B-blue.svg)](https://www.python.org/)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

---

FairHealth is an open-source Python library for building **fair, explainable,
and privacy-preserving** machine learning models for healthcare.

Built by [Farjana Yesmin](https://farjana-yesmin.github.io/) from
peer-reviewed research on maternal health, biosignals, and federated learning.

---

## Install

```bash
pip install fairhealth
```

---

## What It Does

| Module | What | Paper |
|---|---|---|
| `fairhealth.fairness` | Demographic parity, equalized odds, disparate impact | MobiHealth 2026 |
| `fairhealth.explain` | SHAP wrappers + Fuzzy-XGBoost hybrid explainer | ICAIHE 2026 |
| `fairhealth.federated` | FedAvg + differential privacy + sparsification | MedHE, CIBB 2026 |
| `fairhealth.datasets` | Maternal health, dengue, flood PDNA — all public | Multiple |
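
The fairness metrics in the table reduce to simple comparisons of per-group positive-prediction rates. The sketch below is illustrative plain NumPy, not the library's own implementation; the function names here are hypothetical stand-ins:

```python
import numpy as np

def group_rates(y_pred, sensitive):
    """Positive-prediction rate for each sensitive group."""
    return {g: y_pred[sensitive == g].mean() for g in np.unique(sensitive)}

def demographic_parity_diff(y_pred, sensitive):
    """Largest gap in positive-prediction rates across groups."""
    rates = list(group_rates(y_pred, sensitive).values())
    return max(rates) - min(rates)

def disparate_impact(y_pred, sensitive):
    """Ratio of lowest to highest group rate (the '80% rule' checks >= 0.8)."""
    rates = list(group_rates(y_pred, sensitive).values())
    return min(rates) / max(rates)

y_pred    = np.array([1, 1, 0, 1, 0, 0, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_diff(y_pred, sensitive))  # 0.75 - 0.25 = 0.5
print(disparate_impact(y_pred, sensitive))         # 0.25 / 0.75 ≈ 0.333
```
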

---

## Quick Example

```python
import fairhealth as fh
import numpy as np

# ── Fairness audit ───────────────────────────────────────────────
from fairhealth.fairness.metrics import demographic_parity_diff

y_pred    = np.array([1, 0, 1, 0, 1, 0])
sensitive = np.array([0, 0, 0, 1, 1, 1])   # 0=young, 1=older

dpd = demographic_parity_diff(y_pred, sensitive)
print(f"Demographic Parity Difference: {dpd:.4f}")
# → 0.3333  (gap between age groups)

# ── Fuzzy risk explanation ───────────────────────────────────────
from fairhealth.explain.fuzzy import get_fired_rules, score_to_label

rules = get_fired_rules(age=42, sbp=145, bs=12, hr=88)
for rule in rules:
    print(f"Rule {rule['id']}: {rule['condition']} → {rule['outcome']}")
# → Rule 1: High BP AND High Blood Sugar → HIGH RISK
# → Rule 5: High Heart Rate AND High BP  → HIGH RISK

# ── Federated privacy ────────────────────────────────────────────
from fairhealth.federated.privacy import sparsify, add_gaussian_noise

weights       = np.array([0.4, 0.3, 0.15, 0.1, 0.03, 0.02])
sparse, rate  = sparsify(weights, sparsity=0.975)
print(f"Communication reduced by {rate:.1%}")
# → Communication reduced by 83.3%  (97.5% sparsity on 6 weights keeps just 1)
```
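
The mechanism behind the sparsify step is standard top-k magnitude sparsification, optionally followed by Gaussian noise for differential privacy. A minimal self-contained sketch (function names and signatures here are illustrative assumptions, not the library's API — in particular, noise scale must be calibrated to a real privacy budget before production use):

```python
import numpy as np

def topk_sparsify(weights, sparsity):
    """Keep the largest-magnitude (1 - sparsity) fraction of weights; zero
    the rest. Always keeps at least one weight, so small vectors round up."""
    k = max(1, int(round(len(weights) * (1 - sparsity))))
    keep = np.argsort(np.abs(weights))[-k:]
    sparse = np.zeros_like(weights)
    sparse[keep] = weights[keep]
    reduction = 1 - k / len(weights)   # fraction of entries not transmitted
    return sparse, reduction

def gaussian_noise(weights, sigma, seed=None):
    """Gaussian mechanism: add N(0, sigma^2) noise to each coordinate."""
    rng = np.random.default_rng(seed)
    return weights + rng.normal(0.0, sigma, size=weights.shape)

w = np.array([0.4, 0.3, 0.15, 0.1, 0.03, 0.02])
sparse, rate = topk_sparsify(w, sparsity=0.975)
print(f"{rate:.1%}")  # 83.3% — 6 weights round down to a single survivor
```

This also explains the Quick Example's output: at 97.5% sparsity, a 6-element vector keeps exactly one weight, so the realized reduction is 5/6 ≈ 83.3% rather than 97.5%.
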

---

## Real Results (from My Papers)

| Finding | Value |
|---|---|
| Maternal health model accuracy | 79.3% (Fuzzy-XGBoost hybrid) |
| Clinicians preferring hybrid explanation | 71% (14 clinicians, ICAIHE 2026) |
| Demographic parity difference (age groups) | 0.1011 |
| Federated vs central accuracy gap | 9.3% |
| Communication reduction (sparsification) | 83–97.5% |
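
The federated numbers above come from FedAvg-style training, whose core aggregation step is just a dataset-size-weighted average of client parameters. A minimal sketch of that step (illustrative only; the library's `fairhealth.federated` internals may differ):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients; the second holds 3x more data, so it dominates the average.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes   = [10, 30]
print(fedavg(clients, sizes))  # [2.5 3.5]
```
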

---

## Datasets Used (All Public — No Hospital Access Needed)

| Dataset | Domain | Source |
|---|---|---|
| Maternal Health Risk | Risk prediction | UCI / Kaggle |
| Bangladesh Dengue | Symptom triage | DGHS Bangladesh |
| Bangladesh Flood PDNA 2022 | Disaster equity | Government open data |
| PTB-XL | ECG biosignals | PhysioNet (free) |

---

## Research Papers

If my library helps your work, please cite:

```bibtex
@software{fairhealth2026,
  author = {Yesmin, Farjana},
  title  = {FairHealth: Trustworthy Healthcare AI},
  year   = {2026},
  url    = {https://github.com/Farjana-Yesmin/fairhealth}
}
```

**Related papers:**
- Yesmin, F. (2026). *Fairness-Aware Representation Learning for ECG-Based Disease Prediction.* MobiHealth 2026.
- Yesmin, F. et al. (2026). *Explainable AI for Maternal Health Risk Prediction in Bangladesh.* ICAIHE 2026.
- Yesmin, F. (2026). *MedHE: Communication-Efficient Privacy-Preserving Federated Learning.* CIBB 2026.
- Yesmin, F. & Akter, R. (2026). *Toward Equitable Recovery.* CCAI 2026 (IEEE).

---

## Author

**Farjana Yesmin** — ML Researcher, Trustworthy AI for Healthcare  
Website: [farjana-yesmin.github.io](https://farjana-yesmin.github.io/)  
Email: farjanayesmin76@gmail.com

---

## License

MIT © Farjana Yesmin
