Metadata-Version: 2.1
Name: trustworthyai
Version: 0.35.0
Summary: SDK API to explain models, generate counterfactual examples, analyze causal effects and analyze errors in Machine Learning models.
Home-page: https://github.com/affectlog/trustworthy-ai-toolbox
Author: AL360°
Author-email: developer@affectlog.com
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Development Status :: 3 - Alpha
Requires-Python: >=3.7
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: dice-ml<0.12,>=0.11
Requires-Dist: econml>=0.14.1
Requires-Dist: statsmodels<0.14.0
Requires-Dist: jsonschema
Requires-Dist: affectlog-erroranalysis>=0.5.4
Requires-Dist: interpret-community>=0.31.0
Requires-Dist: lightgbm>=2.0.11
Requires-Dist: numpy<=1.26.2,>=1.17.2
Requires-Dist: numba<=0.58.1
Requires-Dist: pandas<2.0.0,>=0.25.1
Requires-Dist: scikit-learn!=1.1,<=1.5.1,>=0.22.1
Requires-Dist: scipy>=1.4.1
Requires-Dist: semver~=2.13.0
Requires-Dist: ml-wrappers
Requires-Dist: networkx<=2.5
Requires-Dist: ipykernel<=6.8.0; python_version <= "3.7"
Requires-Dist: ipykernel>=6.22.0; python_version > "3.7"
Requires-Dist: affectlog_utils>=0.4.2

# Trustworthy AI Model Analysis SDK for Python

### This package has been tested with Python 3.7, 3.8, 3.9 and 3.10

The Trustworthy AI Model Analysis SDK enables users to analyze their machine learning models through a single API: analyze errors, explain the most important features, compute counterfactuals and run causal analysis.

Highlights of the package include:

- `explainer.add()` explains the model
- `counterfactuals.add()` computes counterfactuals
- `error_analysis.add()` runs error analysis
- `causal.add()` runs causal analysis
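The four calls above are attached to a single insights object. A minimal end-to-end sketch is shown below; the `RAIInsights` class name, its constructor arguments, and the `compute()` call are assumptions based on the Responsible AI Toolbox conventions this package follows, not confirmed `trustworthyai` API:

```python
def run_analysis(model, train_df, test_df, target, treatments):
    """Attach all four Trustworthy AI insights and compute them.

    Hypothetical sketch: the entry-point class and argument names below
    are assumptions, not the documented trustworthyai API.
    """
    from trustworthyai import RAIInsights  # assumed entry point

    insights = RAIInsights(model, train_df, test_df, target,
                           task_type='classification')
    insights.explainer.add()        # model explanations
    insights.counterfactuals.add(   # counterfactual examples
        total_CFs=10, desired_class='opposite')
    insights.error_analysis.add()   # error analysis
    insights.causal.add(            # causal effects
        treatment_features=treatments)
    insights.compute()              # run all queued analyses
    return insights
```

Each `add()` call only queues an analysis; nothing is computed until `compute()` runs them all in one pass.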

### Supported scenarios, models and datasets

`trustworthyai` supports computation of Trustworthy AI insights for `scikit-learn` models that are trained on `pandas.DataFrame`. `trustworthyai` accepts both models and pipelines as input, as long as the model or pipeline implements a `predict` or `predict_proba` function that conforms to the `scikit-learn` convention. If your model is not compatible, you can wrap its prediction function in a wrapper class that transforms the output into the supported format (`predict` or `predict_proba` of `scikit-learn`) and pass that wrapper class to the modules in `trustworthyai`.
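A minimal sketch of such a wrapper, assuming a hypothetical model (`RawModel` below is invented for illustration) whose prediction method returns only positive-class scores rather than the `scikit-learn`-style outputs:

```python
import numpy as np

class RawModel:
    """Stand-in for an incompatible model: returns positive-class scores only."""
    def raw_scores(self, X):
        X = np.asarray(X)
        return 1.0 / (1.0 + np.exp(-X[:, 0]))  # sigmoid of the first feature

class SklearnStyleWrapper:
    """Adapts RawModel's output to the scikit-learn predict/predict_proba convention."""
    def __init__(self, model):
        self.model = model

    def predict_proba(self, X):
        # scikit-learn convention: one column per class, rows sum to 1
        pos = self.model.raw_scores(X)
        return np.column_stack([1.0 - pos, pos])

    def predict(self, X):
        # scikit-learn convention: one predicted label per sample
        return (self.model.raw_scores(X) >= 0.5).astype(int)

wrapped = SklearnStyleWrapper(RawModel())
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
print(wrapped.predict(X))               # [1 0]
print(wrapped.predict_proba(X).shape)   # (2, 2)
```

An instance like `wrapped` can then be passed to the `trustworthyai` modules in place of the raw model.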

Currently, we support datasets with numerical and categorical features. The following table lists the scenarios supported for each of the four Trustworthy AI insights:

| Trustworthy AI insight | Binary classification | Multi-class classification | Multilabel classification | Regression | Timeseries forecasting | Categorical features | Text features | Image features | Recommender systems | Reinforcement learning |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Explainability | Yes | Yes | No | Yes | No | Yes | No | No | No | No |
| Error Analysis | Yes | Yes | No | Yes | No | Yes | No | No | No | No |
| Causal Analysis | Yes | No | No | Yes | No | Yes (up to 5 features, due to computational cost) | No | No | No | No |
| Counterfactual | Yes | Yes | No | Yes | No | Yes | No | No | No | No |


The source code can be found here:
https://github.com/affectlog/trustworthy-ai-toolbox/tree/main/trustworthyai
