Metadata-Version: 2.4
Name: crystal_ml_pipeline
Version: 0.5
Summary: End-to-end interpretable binary-classification pipeline
Author-email: Raffaele Mariosa <mraffaele87@gmail.com>
License: The MIT License (MIT)
        Copyright © 2025 <Raffaele Mariosa>
        
        Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Project-URL: Homepage, https://github.com/yourusername/crystal-ml
Project-URL: Repository, https://github.com/yourusername/crystal-ml
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.7,<3.13
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pyyaml
Requires-Dist: pandas
Requires-Dist: numpy
Requires-Dist: scikit-learn
Requires-Dist: xgboost
Requires-Dist: imbalanced-learn
Requires-Dist: autogluon
Requires-Dist: openpyxl
Requires-Dist: SupervisedDiscretization
Requires-Dist: gosdt
Requires-Dist: graphviz
Requires-Dist: matplotlib
Requires-Dist: seaborn
Requires-Dist: joblib
Requires-Dist: gurobipy
Dynamic: license-file

# crystal-ml

An **end-to-end interpretable binary-classification pipeline**.  
`crystal-ml` provides configurable data ingestion, model training (SVM, Balanced Random Forest, XGBoost, AutoGluon),  
an SVM-based downsampling algorithm, supervised discretization (FCCA), and optimal decision-tree induction (GOSDT).

---

## 🚀 Features

- **Data ingestion** from CSV/XLSX, with train/test split or pre-split datasets  
- **Balanced Random Forest**, **SVM**, **XGBoost**, and **AutoGluon** model training with hyperparameter search  
- **SVM-based undersampling**: identifies "free" support vectors to downsample the training set, with a validation step
- **FCCA discretization**: supervised discretization of continuous features
- **GOSDT** (Generalized and Scalable Optimal Sparse Decision Trees) for interpretable optimal decision trees
- Fully **YAML-driven configuration**

---

### 🔗 Official Documentation

- **SVM**: https://scikit-learn.org/stable/api/sklearn.svm.html  
- **Balanced Random Forest**: https://imbalanced-learn.org/stable/references/generated/imblearn.ensemble.BalancedRandomForestClassifier.html  
- **XGBoost**: https://xgboost.readthedocs.io/en/latest/index.html  
- **AutoGluon**: https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html  
- **FCCA**: https://github.com/ceciliasalvatore/sfcca  
- **GOSDT**: https://github.com/ubc-systopia/gosdt-guesses  

---

## 🛠️ Prerequisites

- Python **3.7** – **3.12** (recommended: **3.10**)  
- `git`, `pip`, and optionally `conda`  
- An active Gurobi license, required to run the code (specifically, to execute the FCCA discretization procedure)

---

## 📦 Installation

### From PyPI

    

    # (Optional) Create & activate a fresh conda env with Python 3.10
    conda create -n crystal_ml python=3.10 -y
    conda activate crystal_ml

    # Install
    pip install crystal_ml_pipeline

    
---

### From source

    

    git clone https://github.com/yourusername/crystal-ml.git
    cd crystal-ml
    pip install .

    
---

## 🎯 Quickstart

###  1. Create a script, e.g. run.py:

    from crystal_ml.pipeline import run_pipeline

    if __name__ == "__main__":
        run_pipeline("config.yaml")

---

### 2. Prepare config.yaml and place your train/test files alongside.
All pipeline options live in a single config.yaml at your project root. Copy the template from the repo ([config.yaml](https://gitlab.com/mraffaele87/crystal-ml/-/blob/master/config.yaml?ref_type=heads)) and tweak sections as needed (see the section "Configuration of Pipeline’s Parameters" below for details).

### 3. Execute:

    python run.py

Alternatively, run the script from your favourite IDE.

---

### 4. Inspect the `logs/` folder for:
- Excel reports (`*_Performance.xlsx`, `*_Results.xlsx`)
- Pickled objects (`*.pkl`)
- PNG charts (`*.png`)
- Optimal decision-tree diagrams
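
The pickled objects can be reloaded for further inspection with the standard library. A minimal sketch (the artifact below is an illustrative stand-in created on the fly, not an actual pipeline output):

```python
import pickle
from pathlib import Path

# Illustrative stand-in for an object the pipeline might pickle under logs/.
artifact = {"model": "BRF", "train_accuracy": 0.91}
path = Path("example_artifact.pkl")
path.write_bytes(pickle.dumps(artifact))

# Reload the object exactly as it was saved.
loaded = pickle.loads(path.read_bytes())
print(loaded["model"])  # prints "BRF"
path.unlink()  # remove the illustrative file
```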

---

## Configuration of Pipeline’s Parameters

All pipeline parameters are configured through a single YAML file named `config.yaml`, organized into sections corresponding to the pipeline stages. Here, we will not detail every individual parameter, as many of them (particularly those related to base models and external algorithms) are already thoroughly described in their official documentation:

- **Scikit-learn**: https://scikit-learn.org/stable/api/index.html  
- **Imbalanced-learn (BRF)**: https://imbalanced-learn.org/stable/references/generated/imblearn.ensemble.BalancedRandomForestClassifier.html  
- **SVM (Sklearn)**: https://scikit-learn.org/stable/api/sklearn.svm.html  
- **XGBoost**: https://xgboost.readthedocs.io/en/latest/index.html  
- **AutoGluon**: https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html  
- **FCCA**: https://github.com/ceciliasalvatore/supervised-discretization (see also the FCCA paper)  
- **GOSDT**: https://github.com/ubc-systopia/gosdt-guesses  

Below is a concise overview of the main configuration options, following the structure of the YAML file:

### Starting Dataset (`Data_Ingestion`)
- `enable`: enables or disables this phase. Must be enabled if pre-processed data (already discretized for GOSDT) is not provided.  
- `input data paths`: file paths to either the complete dataset or pre-split training and testing datasets.  
- `target_column`: name of the binary target variable to predict (e.g., `y720`).  
- `train/test split params`: parameters used for splitting the dataset into training and testing subsets (see the official scikit-learn docs for details).
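
Put together, this section might look like the following sketch. Key names other than `enable` and `target_column` are assumptions for illustration; the authoritative names are in the repository's `config.yaml` template:

```yaml
Data_Ingestion:
  enable: true
  input_path: data/dataset.xlsx   # or separate train/test file paths
  target_column: y720
  # train/test split parameters, as in sklearn.model_selection.train_test_split
  test_size: 0.3
  random_state: 42
```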

### Base Models
This section includes the four base models (BRF, XGBoost, SVM, AutoGluon), each configurable through:
- `enabled`: enables or disables the execution of the specific model.  
- `output_dir`: directory where the model’s performance metrics and results are saved.  
- `search params`: parameters used in hyperparameter optimization via cross-validation (BRF, XGB, SVM), or more generally for selecting the optimal model configuration (see the official docs).

### SVM-based Undersampling Algorithm
This section contains the parameters to configure the SVM-based downsampling procedure, aimed at reducing the size of the training dataset:
- **SVM_Downsampling**  
  - `enabled`: enables or disables the downsampling algorithm.  
  - `output_dir`: directory for results, including the undersampled dataset (saved with `pickle`).  
  - `CV search params`: parameters for SVM hyperparameter search (see official scikit-learn docs).  
  - `n_free_models`: number of SVM models used to select support vectors (lower values yield smaller datasets).  
  - `save_output` / `load_saved_output`: whether to save/load undersampled datasets (using `pickle`), preventing repeated downsampling runs.  
  - `percentage_performance_drop_threshold`: threshold percentage drop in model performance that triggers a user warning.  
  - `percentage_performance_drop_metric`: metric chosen by the user (`Accuracy`, `Recall`, `Precision`, `f1`, or `f2`) to evaluate performance drop—using BRF as reference, comparing training metrics before vs. after downsampling.

- **Undersampling Performance Assessment** (BRF, XGB, AutoGluon)  
  Parameters analogous to those in the **Base Models** section are employed to estimate the effectiveness of the undersampled dataset by retraining the base models (excluding SVM) and assessing their performance.
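
Putting the `SVM_Downsampling` keys above together, an illustrative fragment (all values are placeholders, not recommendations; consult the repository's `config.yaml` template for the exact schema):

```yaml
SVM_Downsampling:
  enabled: true
  output_dir: logs/svm_downsampling
  # plus the CV search params for the SVM hyperparameter search (see scikit-learn docs)
  n_free_models: 5
  save_output: true
  load_saved_output: false
  percentage_performance_drop_threshold: 5
  percentage_performance_drop_metric: f1
```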

### Data Discretization
This section handles the discretization of continuous features, required for GOSDT:
- **BRF_FCCA**  
  Parameters (same structure as the BRF in **Base Models**) configuring both the Balanced Random Forest model that FCCA uses to identify discretization thresholds, and the BRF models retrained at each FCCA iteration to evaluate predictive performance on the datasets discretized under each parameter combination. BRF results are saved into subfolders named after their parameter settings.
- **FCCA**  
  - `enabled`: enables or disables the discretization step.  
  - `output_dir`: directory where FCCA generates its results—one subfolder (named by parameter combo) per tested configuration, containing the discretized datasets.  
  - Additional FCCA-specific parameters (e.g., `lambda0_values`, `p0_values`, `tao_q_values`), detailed in the official FCCA documentation and paper.  

This stage also produces two visual plots to help users select the optimal trade-off between data compression and information loss:  
- **Compression Rate vs. Inconsistency Rate** across all parameter combinations  
- **Balanced RF performance** on each discretized dataset  

### Interpretable Models
This final stage generates interpretable optimal decision trees using GOSDT on the FCCA-discretized data:
- `enabled`: enables or disables this step.  
- `input_dir`: path to the directory containing the FCCA output files (`x_train_discr.xlsx`, `y_train_discr.xlsx`, `x_test_discr.xlsx`, `y_test_discr.xlsx`).  
- `output_dir`: directory where GOSDT saves model performance metrics and the optimal tree plot.  
- Additional GOSDT-specific parameters are described in the official GOSDT documentation.

---
## Data Requirements

- **Format**: Tabular `.csv` or `.xlsx` with a header row of feature names.  
- **Features**: Only continuous or binary variables (negatives allowed); **one-hot encode** any categoricals.  
- **Missing values**: Must be addressed **before** running the pipeline.  
- **Target**: One binary column with values `-1` and `1` (configured in `config.yaml`).  
- **Scaling**: Do **not** pre-scale—`MinMaxScaler` is applied internally and thresholds are converted back to the original domain for the final tree.
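
As a concrete illustration of these requirements, a short pandas sketch (column names are hypothetical) that one-hot encodes a categorical feature and recodes a 0/1 target to the expected `-1`/`1` convention:

```python
import pandas as pd

# Hypothetical raw data: one continuous feature, one categorical, a 0/1 target.
df = pd.DataFrame({
    "age":   [25, 40, 31],
    "color": ["red", "blue", "red"],
    "label": [0, 1, 0],
})

# One-hot encode the categorical column: the pipeline accepts only
# continuous or binary features.
df = pd.get_dummies(df, columns=["color"], dtype=int)

# Recode the binary target to the -1/1 coding the pipeline expects.
df["label"] = df["label"].map({0: -1, 1: 1})

print(sorted(df.columns))  # ['age', 'color_blue', 'color_red', 'label']
```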


## 📄 License
`crystal_ml_pipeline` is released under the MIT License. See `LICENSE` for details.

Built with ❤️ by Raffaele Mariosa.
PyPI: https://pypi.org/project/crystal-ml-pipeline/

For bug reports or feature suggestions, feel free to drop me a line at [raffaele.mariosa@uniroma1.it](mailto:raffaele.mariosa@uniroma1.it).

---
