Metadata-Version: 2.4
Name: decon_dtn_toolkit
Version: 0.1.1
Summary: A toolkit for addressing confounding effects in text classification problems
Author: DeconDTN Research Team
License-Expression: MIT
License-File: LICENSE
Keywords: bias-mitigation,causal-inference,confounding,machine-learning,nlp,text-classification
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Text Processing :: Linguistic
Requires-Python: >=3.10
Requires-Dist: datasets>=4.5.0
Requires-Dist: matplotlib>=3.9.4
Requires-Dist: pandas>=2.3.3
Requires-Dist: prettytable>=3.16.0
Requires-Dist: scikit-learn>=1.6.1
Requires-Dist: sentence-transformers>=5.2.0
Requires-Dist: statsmodels>=0.14.6
Requires-Dist: tensorboard>=2.20.0
Requires-Dist: torch>=2.8.0
Requires-Dist: torchmetrics>=1.8.2
Requires-Dist: tqdm>=4.67.1
Requires-Dist: transformers>=4.57.3
Requires-Dist: wandb>=0.23.1
Provides-Extra: test
Requires-Dist: parameterized>=0.9.0; extra == 'test'
Description-Content-Type: text/markdown

# DeconDTN-Toolkit
<div align="center">
  <img src="assets/icon.png" alt="DeconDTN-Toolkit icon" />
</div>

[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/decon-dtn-toolkit?pypiBaseUrl=https%3A%2F%2Ftest.pypi.org)
![PyPI - Version](https://img.shields.io/pypi/v/decon-dtn-toolkit?pypiBaseUrl=https%3A%2F%2Ftest.pypi.org)


DeconDTN Toolkit is a PyTorch suite of benchmark datasets and algorithms for evaluating and mitigating confounding effects in text classification.

[**[Docs]**](/docs/) [**[Dataset Access]**](DATASETS.md)

### Features
If a dataset is drawn from two different sources, one may be enriched for the outcome of interest (i.e., 
$P(Y \mid \text{source1}) \neq P(Y \mid \text{source2})$
). In this situation a model may learn to recognize the data source and make predictions in accordance with its class distribution, rather than on the basis of relevant features. This scenario, which we refer to as **provenance shift**, was the primary motivating use case for the development of the DeconDTN toolkit, though the same evaluation framework and mitigation methods also apply to other confounding variables.
- An **evaluation framework** for assessing robustness to confounding shifts, in which the proportion of positive examples changes with a confounding variable.
- A range of [**algorithms**](src/decon_dtn_toolkit/algorithms.py) with the potential to mitigate confounding shift.
- A range of [**benchmark datasets**](src/decon_dtn_toolkit/datasets.py) on which to evaluate performance.
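As a toy illustration of the provenance-shift setting described above (plain Python, not toolkit API; all names are hypothetical), the sketch below builds a two-source corpus in which the positive rate differs sharply by source:

```python
from collections import Counter

# Hypothetical two-source dataset: source1 is enriched for the positive class.
data = (
    [("source1", 1)] * 80 + [("source1", 0)] * 20   # P(Y=1 | source1) = 0.8
    + [("source2", 1)] * 20 + [("source2", 0)] * 80  # P(Y=1 | source2) = 0.2
)

# Estimate P(Y = 1 | source): a classifier that merely detects the source
# can exploit this gap instead of learning outcome-relevant features.
counts = Counter(data)
for src in ("source1", "source2"):
    pos, neg = counts[(src, 1)], counts[(src, 0)]
    print(f"P(Y=1 | {src}) = {pos / (pos + neg):.2f}")
```

A model that predicts "positive" whenever it detects source1 already achieves 80% accuracy on this data without using any outcome-relevant feature, which is exactly the failure mode the evaluation framework is designed to expose.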

### Available Algorithms

The [**currently available algorithms**](src/decon_dtn_toolkit/algorithms.py) are:

* Empirical Risk Minimization (**ERM**, [Vapnik, 1998](https://www.wiley.com/en-fr/Statistical+Learning+Theory-p-9780471030034))
* Data Re-Sampling (**ReSample**, [Japkowicz, 2000](https://site.uottawa.ca/~nat/Papers/ic-ai-2000.ps))
* Domain Adversarial Neural Network (**DANN**, [Ganin et al., 2015](https://arxiv.org/abs/1505.07818))
* Conditional Domain Adversarial Neural Network (**CDANN**, [Li et al., 2018](https://openaccess.thecvf.com/content_ECCV_2018/papers/Ya_Li_Deep_Domain_Generalization_ECCV_2018_paper.pdf))
* Deep Correlation Alignment (**CORAL**, [Sun and Saenko, 2016](https://arxiv.org/abs/1607.01719))
* Maximum Mean Discrepancy (**MMD**, [Li et al., 2018](https://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Domain_Generalization_With_CVPR_2018_paper.pdf))
* Mixup (**Mixup**, [Zhang et al., 2018](https://arxiv.org/abs/1710.09412))
* Learning Invariant Predictors with Selective Augmentation (**LISA**, [Yao et al., 2022](https://arxiv.org/abs/2201.00299))
* Invariant Risk Minimization (**IRM**, [Arjovsky et al., 2019](https://arxiv.org/abs/1907.02893))
* Group Distributionally Robust Optimization (**GroupDRO**, [Sagawa et al., 2020](https://arxiv.org/abs/1911.08731))
* Gradient Matching for Domain Generalization (**Fish**, [Shi et al., 2021](https://arxiv.org/pdf/2104.09937.pdf))
* Learning from Failure (**LfF**, [Nam et al., 2020](https://proceedings.neurips.cc/paper/2020/file/eddc3427c5d77843c2253f1e799fe933-Paper.pdf))
* Just Train Twice (**JTT**, [Liu et al., 2021](http://proceedings.mlr.press/v139/liu21f.html))
* Deep Feature Reweighting (**DFR**, [Kirichenko et al., 2022](https://arxiv.org/abs/2204.02937))
* Optimal Representations for Covariate Shift (**CAD** & **CondCAD**, [Ruan et al., 2022](https://arxiv.org/abs/2201.00057))
* Backdoor Adjustment (**BackDoor**, [Ding et al., 2024](https://pmc.ncbi.nlm.nih.gov/articles/PMC10785933/))
* Dual Filter (**DualFilter**, [Sheng et al., 2025](https://aclanthology.org/2025.acl-long.514/))

Send us a PR to add your algorithm! Our implementations use the hyper-parameter grids [described here](src/decon_dtn_toolkit/hparams_registry.py).
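To give a flavour of the simplest approach in the list above, here is a stdlib-only sketch of the re-sampling idea behind **ReSample**: oversample each (source, label) group to the size of the largest group, so that label frequency no longer varies with the source. This is an illustrative sketch with hypothetical data, not the toolkit's implementation:

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical examples keyed by (source, label); source "A" is enriched
# for the positive class and source "B" for the negative class.
examples = (
    [{"source": "A", "label": 1}] * 80 + [{"source": "A", "label": 0}] * 20
    + [{"source": "B", "label": 1}] * 20 + [{"source": "B", "label": 0}] * 80
)

groups = defaultdict(list)
for ex in examples:
    groups[(ex["source"], ex["label"])].append(ex)

# Oversample every (source, label) group up to the largest group's size,
# removing the source/label correlation a model could otherwise exploit.
target = max(len(g) for g in groups.values())
balanced = []
for g in groups.values():
    balanced.extend(g)
    balanced.extend(random.choices(g, k=target - len(g)))
```

After re-sampling, each of the four (source, label) groups contributes the same number of examples, so knowing the source carries no information about the label.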

### Available Datasets

The [**currently available datasets**](src/decon_dtn_toolkit/datasets.py) are:

* CivilComments ([Borkan et al., 2019](https://arxiv.org/abs/1903.04561)) from the [WILDS benchmark](https://arxiv.org/abs/2012.07421)
* MultiNLI ([Williams et al., 2017](https://arxiv.org/abs/1704.05426))
* MIMICNotes ([Johnson et al., 2016](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4878278/)) from the [SubpopBench benchmark](https://arxiv.org/abs/2302.12254)
* AmazonReviews ([Ni et al., 2019](https://aclanthology.org/D19-1018/)) from [Veith et al., 2021](https://arxiv.org/abs/2106.00545)
* SHAC ([Lybarger et al., 2021](https://www.sciencedirect.com/science/article/pii/S1532046420302598))
* HateSpeech ([Vidgen et al., 2021](https://aclanthology.org/2021.acl-long.132) & [Gibert et al., 2018](https://arxiv.org/abs/1809.04444)) from [Ding et al., 2023](https://arxiv.org/abs/2312.05435)


## Installation

### Prerequisites
- Ubuntu 18.04 or higher
- CUDA 12.1 or higher
- Python 3.10 or higher
- pip

### Python Package
```bash
pip install --index-url https://test.pypi.org/simple/ \
            --extra-index-url https://pypi.org/simple/ \
            decon-dtn-toolkit
```

### From source

#### Option 1: uv (**Recommended**)
```bash
# Create virtual environment
uv venv dedtn-env
source dedtn-env/bin/activate  # On Linux/Mac
# dedtn-env\Scripts\activate  # On Windows

# Install the package in editable mode (for development)
uv pip install -e .
```

<details>
<summary style="display: list-item; font-size: 1em; font-weight: bold; cursor: pointer; outline: none;">
    Option 2: conda
</summary>
Conda is ideal for managing complex dependencies, especially with CUDA/PyTorch installations. It provides both package and environment management.

```bash
# Create environment with Python 3.12
conda create -n dedtn-tool python=3.12
conda activate dedtn-tool

# Install the package in editable mode
pip install -e .
```
</details>

<details>
<summary style="display: list-item; font-size: 1em; font-weight: bold; cursor: pointer; outline: none;">
    Option 3: venv (Python Built-in)
</summary>
venv is Python's built-in virtual environment tool; it is lightweight and requires no additional installation. Good for standard Python projects.

```bash
# Create virtual environment
python -m venv dedtn-env

# Activate environment
# On Linux/Mac:
source dedtn-env/bin/activate
# On Windows:
# dedtn-env\Scripts\activate

# Install the package
pip install -e .
```
</details>

### Verify Installation
After activating your chosen environment, you can verify the installation with:
```bash
python -c "import decon_dtn_toolkit; print('DeconDTN-Toolkit installed successfully')"
``` 

## Quick Start
To train `DANN` on the `Amazon_Reviews_2018` dataset:
```python
from decon_dtn_toolkit import datasets
from decon_dtn_toolkit.trainer import TrainConfig, Trainer

# Point this at your local copy of the Amazon Reviews 2018 data
data_dir = "PATH_TO_Amazon_Reviews_2018"
dataset = getattr(datasets, "Amazon_Reviews_2018")(root=data_dir)
config = TrainConfig(algorithm='DANN')
model = Trainer(dataset=dataset, config=config)
model.train()
```

## Unittest
```bash
python -m unittest discover
```

## Acknowledgement
This project is built upon
- [DomainBed](https://github.com/facebookresearch/DomainBed) - A PyTorch suite containing benchmark datasets and algorithms for domain generalization in computer vision - MIT license
- [WILDS](https://github.com/p-lambda/wilds) - A benchmark of in-the-wild distribution shifts spanning diverse data modalities and applications - MIT license
- [SubpopBench](https://github.com/YyzHarry/SubpopBench) - A benchmark of subpopulation shift - MIT license


## Citation
Below are citations for the DeconDTN line of work.
```bib
@inproceedings{ding2024backdoor,
  title={Backdoor adjustment of confounding by provenance for robust text classification of multi-institutional clinical notes},
  author={Ding, Xiruo and Sheng, Zhecheng and Yeti{\c{s}}gen, Meliha and Pakhomov, Serguei and Cohen, Trevor},
  booktitle={AMIA Annual Symposium Proceedings},
  volume={2023},
  pages={923},
  year={2024}
}
@article{ding2025tailoring,
  title={Tailoring task arithmetic to address bias in models trained on multi-institutional datasets},
  author={Ding, Xiruo and Sheng, Zhecheng and Hur, Brian and Tauscher, Justin and Ben-Zeev, Dror and Yeti{\c{s}}gen, Meliha and Pakhomov, Serguei and Cohen, Trevor},
  journal={Journal of Biomedical Informatics},
  pages={104858},
  year={2025},
  publisher={Elsevier}
}
@inproceedings{sheng2025mitigating,
  title={Mitigating confounding in speech-based dementia detection through weight masking},
  author={Sheng, Zhecheng and Ding, Xiruo and Hur, Brian and Li, Changye and Cohen, Trevor and Pakhomov, Serguei VS},
  booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={10419--10434},
  year={2025}
}
```

## Key Contributors (listed alphabetically)
- Trevor Cohen
- Xiruo Ding
- Yongsen Tan
- Zhecheng Sheng
