10  HPT: sklearn SVC on Moons Data

spotPython can be installed via pip. Alternatively, the source code can be downloaded from GitHub: https://github.com/sequential-parameter-optimization/spotPython.

!pip install spotPython
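
To verify the installation, the installed version can be queried with the standard library (a minimal check; only the distribution name spotPython is assumed):

from importlib.metadata import version
print(version("spotPython"))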

10.1 Step 1: Setup

Before we consider the detailed experimental setup, we select the parameters that affect the run time and the size of the initial design.

Caution: Run time and initial design size should be increased for real experiments
  • MAX_TIME is set to one minute for demonstration purposes. For real experiments, this should be increased to at least 1 hour.
  • INIT_SIZE is set to 10 for demonstration purposes. For real experiments, this should be increased further.
MAX_TIME = 1
INIT_SIZE = 10
PREFIX = "10"

10.2 Step 2: Initialization of the Empty fun_control Dictionary

The fun_control dictionary is the central data structure that is used to control the optimization process. It is initialized as follows:

from spotPython.utils.init import fun_control_init
from spotPython.utils.file import get_experiment_name, get_spot_tensorboard_path

experiment_name = get_experiment_name(prefix=PREFIX)

fun_control = fun_control_init(
    task="classification",
    spot_tensorboard_path=get_spot_tensorboard_path(experiment_name),
    TENSORBOARD_CLEAN=True)
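
Since fun_control is an ordinary Python dictionary (it is updated via fun_control.update() throughout this chapter), its current contents can be inspected at any time:

print(sorted(fun_control.keys()))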

10.3 Step 3: sklearn Load Data (Classification)

Randomly generate classification data.

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons, make_circles, make_classification
n_features = 2
n_samples = 500
target_column = "y"
ds = make_moons(n_samples=n_samples, noise=0.5, random_state=0)
X, y = ds
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
train = pd.DataFrame(np.hstack((X_train, y_train.reshape(-1, 1))))
test = pd.DataFrame(np.hstack((X_test, y_test.reshape(-1, 1))))
train.columns = [f"x{i}" for i in range(1, n_features+1)] + [target_column]
test.columns = [f"x{i}" for i in range(1, n_features+1)] + [target_column]
train.head()
         x1        x2    y
0  1.960101  0.383172  0.0
1  2.354420 -0.536942  1.0
2  1.682186 -0.332108  0.0
3  1.856507  0.687220  1.0
4  1.925524  0.427413  1.0
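
Since make_moons generates (approximately) balanced classes, a quick sanity check of the class distribution in the two splits can be done with pandas:

print(train[target_column].value_counts())
print(test[target_column].value_counts())
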
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
cm = plt.cm.RdBu
cm_bright = ListedColormap(["#FF0000", "#0000FF"])
ax = plt.subplot(1, 1, 1)
ax.set_title("Input data")
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors="k")
# Plot the testing points
ax.scatter(
    X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors="k"
)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
plt.tight_layout()
plt.show()

n_samples = len(train)
# add the dataset to the fun_control
fun_control.update({"data": None, # dataset,
               "train": train,
               "test": test,
               "n_samples": n_samples,
               "target_column": target_column})

10.4 Step 4: Specification of the Preprocessing Model

Data preprocessing can be very simple, e.g., you can ignore it. In this case, the prep_model is set to None:

prep_model = None
fun_control.update({"prep_model": prep_model})

A default approach for numerical data is the StandardScaler (mean 0, variance 1). This can be selected as follows:

from sklearn.preprocessing import StandardScaler
prep_model = StandardScaler()
fun_control.update({"prep_model": prep_model})

Even more complicated preprocessing steps are possible, e.g., the following pipeline (note the additional imports for OneHotEncoder and ColumnTransformer):

from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
categorical_columns = []
one_hot_encoder = OneHotEncoder(handle_unknown="ignore", sparse_output=False)
prep_model = ColumnTransformer(
    transformers=[
        ("categorical", one_hot_encoder, categorical_columns),
    ],
    remainder=StandardScaler(),
)
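
As a quick sanity check (not part of the tuning workflow itself), a prep_model can be fitted and applied directly to the feature columns. The following sketch uses the StandardScaler from above:

X_cols = [f"x{i}" for i in range(1, n_features + 1)]
X_scaled = StandardScaler().fit_transform(train[X_cols])
# after scaling, each feature has (approximately) zero mean and unit variance
print(X_scaled.mean(axis=0).round(6), X_scaled.std(axis=0).round(6))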

10.5 Step 5: Select Model (algorithm) and core_model_hyper_dict

The selection of the algorithm (ML model) that should be tuned is done by specifying its name from the sklearn implementation. For example, the SVC support vector machine classifier is selected as follows:

from spotPython.hyperparameters.values import add_core_model_to_fun_control
from spotPython.data.sklearn_hyper_dict import SklearnHyperDict
from sklearn.svm import SVC
add_core_model_to_fun_control(core_model=SVC,
                              fun_control=fun_control,
                              hyper_dict=SklearnHyperDict,
                              filename=None)

Now fun_control has the information from the JSON file. The corresponding entries for the core_model class are shown below.

fun_control['core_model_hyper_dict']
{'C': {'type': 'float',
  'default': 1.0,
  'transform': 'None',
  'lower': 0.1,
  'upper': 10.0},
 'kernel': {'levels': ['linear', 'poly', 'rbf', 'sigmoid'],
  'type': 'factor',
  'default': 'rbf',
  'transform': 'None',
  'core_model_parameter_type': 'str',
  'lower': 0,
  'upper': 3},
 'degree': {'type': 'int',
  'default': 3,
  'transform': 'None',
  'lower': 3,
  'upper': 3},
 'gamma': {'levels': ['scale', 'auto'],
  'type': 'factor',
  'default': 'scale',
  'transform': 'None',
  'core_model_parameter_type': 'str',
  'lower': 0,
  'upper': 1},
 'coef0': {'type': 'float',
  'default': 0.0,
  'transform': 'None',
  'lower': 0.0,
  'upper': 0.0},
 'shrinking': {'levels': [0, 1],
  'type': 'factor',
  'default': 0,
  'transform': 'None',
  'core_model_parameter_type': 'bool',
  'lower': 0,
  'upper': 1},
 'probability': {'levels': [0, 1],
  'type': 'factor',
  'default': 0,
  'transform': 'None',
  'core_model_parameter_type': 'bool',
  'lower': 0,
  'upper': 1},
 'tol': {'type': 'float',
  'default': 0.001,
  'transform': 'None',
  'lower': 0.0001,
  'upper': 0.01},
 'cache_size': {'type': 'float',
  'default': 200,
  'transform': 'None',
  'lower': 100,
  'upper': 400},
 'break_ties': {'levels': [0, 1],
  'type': 'factor',
  'default': 0,
  'transform': 'None',
  'core_model_parameter_type': 'bool',
  'lower': 0,
  'upper': 1}}
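
Because the hyper_dict is an ordinary dictionary, a compact overview of the search space can be printed directly from it (plain dict operations, no additional spotPython API assumed):

hd = fun_control["core_model_hyper_dict"]
for name, spec in hd.items():
    print(f"{name}: type={spec['type']}, bounds=[{spec['lower']}, {spec['upper']}]")
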
sklearn Model Selection

The following sklearn models are supported by default:

  • RidgeCV
  • RandomForestClassifier
  • SVC
  • LogisticRegression
  • KNeighborsClassifier
  • GradientBoostingClassifier
  • GradientBoostingRegressor
  • ElasticNet

They can be imported as follows:

from sklearn.linear_model import RidgeCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import ElasticNet

10.6 Step 6: Modify hyper_dict Hyperparameters for the Selected Algorithm aka core_model

spotPython provides functions for modifying the hyperparameters, their bounds and factors as well as for activating and de-activating hyperparameters without re-compilation of the Python source code. These functions were described in Section 12.6.

10.6.1 Modify Hyperparameters of Type Numeric and Integer (Boolean)

Numeric and boolean values can be modified using the modify_hyper_parameter_bounds method.

sklearn Model Hyperparameters

The hyperparameters of the sklearn SVC model are described in the sklearn documentation.

  • For example, to change the tol hyperparameter of the SVC model to the interval [1e-5, 1e-3], the first call below can be used. The second call sets both bounds of probability to 0, which fixes this hyperparameter at its lower bound and thereby excludes it from the tuning:
from spotPython.hyperparameters.values import modify_hyper_parameter_bounds
modify_hyper_parameter_bounds(fun_control, "tol", bounds=[1e-5, 1e-3])
modify_hyper_parameter_bounds(fun_control, "probability", bounds=[0, 0])
fun_control["core_model_hyper_dict"]["tol"]
{'type': 'float',
 'default': 0.001,
 'transform': 'None',
 'lower': 1e-05,
 'upper': 0.001}

10.6.2 Modify hyperparameter of type factor

Factors can be modified with the modify_hyper_parameter_levels function. For example, to exclude the sigmoid kernel from the tuning, the kernel hyperparameter of the SVC model can be modified as follows:

from spotPython.hyperparameters.values import modify_hyper_parameter_levels
modify_hyper_parameter_levels(fun_control, "kernel", ["poly", "rbf"])
fun_control["core_model_hyper_dict"]["kernel"]
{'levels': ['poly', 'rbf'],
 'type': 'factor',
 'default': 'rbf',
 'transform': 'None',
 'core_model_parameter_type': 'str',
 'lower': 0,
 'upper': 1}

10.6.3 Optimizers

Optimizers are described in Section 12.6.1.

10.7 Step 7: Selection of the Objective (Loss) Function

There are two metrics:

  1. metric_river is used for the river based evaluation via eval_oml_iter_progressive.
  2. metric_sklearn is used for the sklearn based evaluation.
from sklearn.metrics import mean_absolute_error, accuracy_score, roc_curve, roc_auc_score, log_loss, mean_squared_error
fun_control.update({
               "metric_sklearn": log_loss,
               "weights": 1.0,
               })
metric_sklearn: Minimization and Maximization
  • Because metric_sklearn is used for the sklearn based evaluation, it is important to know whether the metric should be minimized or maximized.
  • The weights parameter indicates the direction of the optimization:
  • If weights is set to 1.0, the metric is minimized, e.g., weights = 1.0 for mean_absolute_error.
  • If weights is set to -1.0, the metric is maximized, e.g., weights = -1.0 for roc_auc_score.
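
For illustration only (the remainder of this chapter keeps log_loss): tuning with respect to the area under the ROC curve would combine the metric with weights = -1.0, typically together with predict_proba set to True (see Section 10.7.1):

from sklearn.metrics import roc_auc_score
# illustration only; not applied in this chapter
fun_control.update({
    "metric_sklearn": roc_auc_score,
    "weights": -1.0,  # roc_auc_score is maximized, hence -1.0
})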

10.7.1 Predict Classes or Class Probabilities

If the key "predict_proba" is set to True, the class probabilities are predicted. False is the default, i.e., the classes are predicted.

fun_control.update({
               "predict_proba": False,
               })

10.8 Step 8: Calling the SPOT Function

10.8.1 Preparing the SPOT Call

The following code passes the information about the parameter ranges and bounds to spot.

# extract the variable types, names, and bounds
from spotPython.hyperparameters.values import (    
    get_var_name,
    get_var_type,
    get_bound_values
    )
var_type = get_var_type(fun_control)
var_name = get_var_name(fun_control)
lower = get_bound_values(fun_control, "lower")
upper = get_bound_values(fun_control, "upper")
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control))
| name        | type   | default   |   lower |   upper | transform   |
|-------------|--------|-----------|---------|---------|-------------|
| C           | float  | 1.0       |   0.1   |  10     | None        |
| kernel      | factor | rbf       |   0     |   1     | None        |
| degree      | int    | 3         |   3     |   3     | None        |
| gamma       | factor | scale     |   0     |   1     | None        |
| coef0       | float  | 0.0       |   0     |   0     | None        |
| shrinking   | factor | 0         |   0     |   1     | None        |
| probability | factor | 0         |   0     |   0     | None        |
| tol         | float  | 0.001     |   1e-05 |   0.001 | None        |
| cache_size  | float  | 200.0     | 100     | 400     | None        |
| break_ties  | factor | 0         |   0     |   1     | None        |
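
The extracted objects are plain lists/arrays and can be inspected directly, e.g., to double-check the variable names and types against the table above:

print(list(zip(var_name, var_type)))
print(lower)
print(upper)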

10.8.2 The Objective Function

The objective function is selected next. It implements an interface from sklearn’s training, validation, and testing methods to spotPython.

from spotPython.fun.hypersklearn import HyperSklearn
fun = HyperSklearn().fun_sklearn
from spotPython.hyperparameters.values import get_default_hyperparameters_as_array
# X_start = get_default_hyperparameters_as_array(fun_control)

10.8.3 Run the Spot Optimizer

  • Run SPOT for approximately MAX_TIME minutes (here: 1 minute).
  • Note: the run takes longer, because the evaluation time of the initial design (here: init_size = 10 points) is not taken into account.

10.8.4 Starting the Hyperparameter Tuning

import numpy as np
from spotPython.spot import spot
from math import inf
spot_tuner = spot.Spot(fun=fun,
                       lower=lower,
                       upper=upper,
                       fun_evals=inf,
                       fun_repeats=1,
                       max_time=MAX_TIME,
                       noise=False,
                       tolerance_x=np.sqrt(np.spacing(1)),
                       var_type=var_type,
                       var_name=var_name,
                       infill_criterion="y",
                       n_points=1,
                       seed=123,
                       log_level=50,
                       show_models=False,
                       show_progress=True,
                       fun_control=fun_control,
                       design_control={"init_size": INIT_SIZE,
                                       "repeats": 1},
                       surrogate_control={"noise": True,
                                          "cod_type": "norm",
                                          "min_theta": -4,
                                          "max_theta": 3,
                                          "n_theta": len(var_name),
                                          "model_fun_evals": 10_000,
                                          "log_level": 50})
spot_tuner.run()
spotPython tuning: 5.734217584632275 [----------] 2.71% 
spotPython tuning: 5.734217584632275 [#---------] 6.10% 
spotPython tuning: 5.734217584632275 [#---------] 10.53% 
spotPython tuning: 5.734217584632275 [#---------] 14.48% 
spotPython tuning: 5.734217584632275 [##--------] 17.89% 
spotPython tuning: 5.734217584632275 [##--------] 20.99% 
spotPython tuning: 5.734217584632275 [##--------] 23.47% 
spotPython tuning: 5.734217584632275 [###-------] 31.10% 
spotPython tuning: 5.734217584632275 [####------] 39.07% 
spotPython tuning: 5.734217584632275 [#####-----] 45.23% 
spotPython tuning: 5.734217584632275 [#####-----] 52.81% 
spotPython tuning: 5.734217584632275 [######----] 59.34% 
spotPython tuning: 5.734217584632275 [#######---] 66.85% 
spotPython tuning: 5.734217584632275 [#######---] 72.20% 
spotPython tuning: 5.734217584632275 [########--] 79.70% 
spotPython tuning: 5.734217584632275 [#########-] 88.51% 
spotPython tuning: 5.734217584632275 [##########] 97.98% 
spotPython tuning: 5.734217584632275 [##########] 100.00% Done...
<spotPython.spot.spot.Spot at 0x10558b400>
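
spot_tuner.run() returns the tuner object itself, so results can be queried immediately. For example, the best objective value observed so far is the minimum of the recorded evaluations (spot_tuner.y is also used in Section 10.9.4 below):

print(min(spot_tuner.y))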

10.9 Step 9: Results

from spotPython.utils.file import save_pickle
save_pickle(spot_tuner, experiment_name)
from spotPython.utils.file import load_pickle
spot_tuner = load_pickle(experiment_name)
  • Show the progress of the hyperparameter tuning:

After the run is finished, the progress of the hyperparameter tuning can be visualized as follows.

spot_tuner.plot_progress(log_y=False,
    filename="./figures/" + experiment_name+"_progress.png")

Progress plot. Black dots denote results from the initial design. Red dots illustrate the improvement found by the surrogate model based optimization.
  • Print the results
print(gen_design_table(fun_control=fun_control,
    spot=spot_tuner))
| name        | type   | default   |   lower |   upper |                tuned | transform   |   importance | stars   |
|-------------|--------|-----------|---------|---------|----------------------|-------------|--------------|---------|
| C           | float  | 1.0       |     0.1 |    10.0 |    2.394471655384338 | None        |         5.97 | *       |
| kernel      | factor | rbf       |     0.0 |     1.0 |                  1.0 | None        |       100.00 | ***     |
| degree      | int    | 3         |     3.0 |     3.0 |                  3.0 | None        |         0.00 |         |
| gamma       | factor | scale     |     0.0 |     1.0 |                  0.0 | None        |         0.00 |         |
| coef0       | float  | 0.0       |     0.0 |     0.0 |                  0.0 | None        |         0.00 |         |
| shrinking   | factor | 0         |     0.0 |     1.0 |                  0.0 | None        |         0.00 |         |
| probability | factor | 0         |     0.0 |     0.0 |                  0.0 | None        |         0.00 |         |
| tol         | float  | 0.001     |   1e-05 |   0.001 | 0.000982585315792582 | None        |         0.00 |         |
| cache_size  | float  | 200.0     |   100.0 |   400.0 |    375.6371648003268 | None        |         0.00 |         |
| break_ties  | factor | 0         |     0.0 |     1.0 |                  0.0 | None        |         0.00 |         |

10.9.1 Show variable importance

spot_tuner.plot_importance(threshold=0.025, filename="./figures/" + experiment_name+"_importance.png")

Variable importance plot, threshold 0.025.

10.9.2 Get Default Hyperparameters

from spotPython.hyperparameters.values import get_default_values, transform_hyper_parameter_values
values_default = get_default_values(fun_control)
values_default = transform_hyper_parameter_values(fun_control=fun_control, hyper_parameter_values=values_default)
values_default
{'C': 1.0,
 'kernel': 'rbf',
 'degree': 3,
 'gamma': 'scale',
 'coef0': 0.0,
 'shrinking': 0,
 'probability': 0,
 'tol': 0.001,
 'cache_size': 200.0,
 'break_ties': 0}
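
Because the transformed default values map one-to-one onto the SVC constructor arguments, the core model could also be instantiated directly, without the preprocessing pipeline:

svc_default = SVC(**values_default)
print(svc_default)
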
from sklearn.pipeline import make_pipeline
model_default = make_pipeline(fun_control["prep_model"], fun_control["core_model"](**values_default))
model_default
Pipeline(steps=[('standardscaler', StandardScaler()),
                ('svc',
                 SVC(break_ties=0, cache_size=200.0, probability=0,
                     shrinking=0))])

10.9.3 Get SPOT Results

X = spot_tuner.to_all_dim(spot_tuner.min_X.reshape(1,-1))
print(X)
[[2.39447166e+00 1.00000000e+00 3.00000000e+00 0.00000000e+00
  0.00000000e+00 0.00000000e+00 0.00000000e+00 9.82585316e-04
  3.75637165e+02 0.00000000e+00]]
from spotPython.hyperparameters.values import assign_values, return_conf_list_from_var_dict
v_dict = assign_values(X, fun_control["var_name"])
return_conf_list_from_var_dict(var_dict=v_dict, fun_control=fun_control)
[{'C': 2.394471655384338,
  'kernel': 'rbf',
  'degree': 3,
  'gamma': 'scale',
  'coef0': 0.0,
  'shrinking': 0,
  'probability': 0,
  'tol': 0.000982585315792582,
  'cache_size': 375.6371648003268,
  'break_ties': 0}]
from spotPython.hyperparameters.values import get_one_sklearn_model_from_X
model_spot = get_one_sklearn_model_from_X(X, fun_control)
model_spot
Pipeline(steps=[('standardscaler', StandardScaler()),
                ('svc',
                 SVC(C=2.394471655384338, break_ties=0,
                     cache_size=375.6371648003268, probability=0, shrinking=0,
                     tol=0.000982585315792582))])
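
Both pipelines can also be fitted and compared manually on the train/test split from Step 3 (a sketch that uses only sklearn; for a classifier, Pipeline.score reports the accuracy on the given data):

X_cols = [f"x{i}" for i in range(1, n_features + 1)]
for name, model in [("Default", model_default), ("Spot", model_spot)]:
    model.fit(train[X_cols], train[target_column])
    print(name, model.score(test[X_cols], test[target_column]))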

10.9.4 Plot: Compare Predictions

from spotPython.plot.validation import plot_roc
plot_roc([model_default, model_spot], fun_control, model_names=["Default", "Spot"])

from spotPython.plot.validation import plot_confusion_matrix
plot_confusion_matrix(model_default, fun_control, title = "Default")

plot_confusion_matrix(model_spot, fun_control, title="SPOT")

min(spot_tuner.y), max(spot_tuner.y)
(5.734217584632275, 7.782152436286657)

10.9.5 Detailed Hyperparameter Plots

filename = "./figures/" + experiment_name
spot_tuner.plot_important_hyperparameter_contour(filename=filename)
C:  5.974249844745921
kernel:  100.0

10.9.6 Parallel Coordinates Plot

spot_tuner.parallel_plot()

10.9.7 Plot all Combinations of Hyperparameters

  • Warning: this may take a while.
PLOT_ALL = False
if PLOT_ALL:
    n = spot_tuner.k
    # use the observed range of the objective values for the color scale
    min_z, max_z = min(spot_tuner.y), max(spot_tuner.y)
    for i in range(n-1):
        for j in range(i+1, n):
            spot_tuner.plot_contour(i=i, j=j, min_z=min_z, max_z=max_z)