In this tutorial, we will show how spotPython
can be integrated into the PyTorch
training workflow.
This document refers to the following software versions:
- python: 3.10.10
- torch: 2.0.1
- torchvision: 0.15.0

pip list | grep "spot[RiverPython]"
spotPython 0.2.37
spotRiver 0.0.93
Note: you may need to restart the kernel to use updated packages.
spotPython can be installed via pip. Alternatively, the source code can be downloaded from GitHub: https://github.com/sequential-parameter-optimization/spotPython.
!pip install spotPython
The following commented lines can be used to build and install spotPython from GitHub:
# import sys
# !{sys.executable} -m pip install --upgrade build
# !{sys.executable} -m pip install --upgrade --force-reinstall spotPython
Before we consider the detailed experimental setup, we select the parameters that affect run time, initial design size and the device that is used.
The device is selected via the variable DEVICE:

- "cpu" is preferred (on Mac).
- If a GPU is available, "cuda:0" can be used instead.
- If DEVICE is set to None, spotPython will automatically select the device. Note: this might select "mps" on Macs, which is not the best choice for simple neural nets.

MAX_TIME = 1
INIT_SIZE = 5
DEVICE = "cpu" # "cuda:0" None
from spotPython.utils.device import getDevice
DEVICE = getDevice(DEVICE)
print(DEVICE)
cpu
import os
import copy
import socket
from datetime import datetime
from dateutil.tz import tzlocal
start_time = datetime.now(tzlocal())
HOSTNAME = socket.gethostname().split(".")[0]
experiment_name = '12-torch' + "_" + HOSTNAME + "_" + str(MAX_TIME) + "min_" + str(INIT_SIZE) + "init_" + str(start_time).split(".", 1)[0].replace(' ', '_')
experiment_name = experiment_name.replace(':', '-')
print(experiment_name)
if not os.path.exists('./figures'):
    os.makedirs('./figures')
12-torch_bartz09_1min_5init_2023-06-18_18-37-10
The fun_control Dictionary

spotPython uses a Python dictionary for storing the information required for the hyperparameter tuning process, which was described in Section 14.2.
from spotPython.utils.init import fun_control_init
fun_control = fun_control_init(task="classification",
    tensorboard_path="runs/12_spot_hpt_torch_cifar10",
    device=DEVICE)
from torchvision import datasets, transforms
import torchvision
def load_data(data_dir="./data"):
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])
    trainset = torchvision.datasets.CIFAR10(
        root=data_dir, train=True, download=True, transform=transform)
    testset = torchvision.datasets.CIFAR10(
        root=data_dir, train=False, download=True, transform=transform)
    return trainset, testset

train, test = load_data()
Files already downloaded and verified
Files already downloaded and verified
The datasets are added to the fun_control dictionary:

n_samples = len(train)
# add the dataset to the fun_control
fun_control.update({"data": None, # dataset,
                    "train": train,
                    "test": test,
                    "n_samples": n_samples,
                    "target_column": None})
After the training and test data are specified and added to the fun_control dictionary, spotPython allows the specification of a data preprocessing pipeline, e.g., for the scaling of the data or for the one-hot encoding of categorical variables, see Section 14.4. This feature is not used here, so we do not change the default value (which is None).
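For illustration, a preprocessing model could be attached roughly as follows. This is a minimal sketch, assuming that fun_control accepts a scikit-learn transformer under the "prep_model" key (the key name follows the pattern used in other spotPython examples and is an assumption here):

```python
# Hedged sketch: attach a scikit-learn scaler as the preprocessing model.
# Assumption: spotPython reads the transformer from fun_control["prep_model"].
from sklearn.preprocessing import StandardScaler

prep_model = StandardScaler()
fun_control.update({"prep_model": prep_model})
```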
The Algorithm (core_model) and core_model_hyper_dict

spotPython includes the Net_CIFAR10 class, which is implemented in the file netcifar10.py. The class is imported here. It inherits from the class Net_Core, which is implemented in the file netcore.py, see Section 14.5.1.
from spotPython.torch.netcifar10 import Net_CIFAR10
from spotPython.data.torch_hyper_dict import TorchHyperDict
from spotPython.hyperparameters.values import add_core_model_to_fun_control
fun_control = add_core_model_to_fun_control(core_model=Net_CIFAR10,
    fun_control=fun_control,
    hyper_dict=TorchHyperDict,
    filename=None)
The hyper_dict Hyperparameters for the Selected Algorithm

spotPython uses JSON files for the specification of the hyperparameters, which were described in Section 14.5.5. The corresponding entries for the Net_CIFAR10 class are shown below.
"Net_CIFAR10":
{
"l1": {
"type": "int",
"default": 5,
"transform": "transform_power_2_int",
"lower": 2,
"upper": 9},
"l2": {
"type": "int",
"default": 5,
"transform": "transform_power_2_int",
"lower": 2,
"upper": 9},
"lr_mult": {
"type": "float",
"default": 1.0,
"transform": "None",
"lower": 0.1,
"upper": 10.0},
"batch_size": {
"type": "int",
"default": 4,
"transform": "transform_power_2_int",
"lower": 1,
"upper": 4},
"epochs": {
"type": "int",
"default": 3,
"transform": "transform_power_2_int",
"lower": 3,
"upper": 4},
"k_folds": {
"type": "int",
"default": 1,
"transform": "None",
"lower": 1,
"upper": 1},
"patience": {
"type": "int",
"default": 5,
"transform": "None",
"lower": 2,
"upper": 10
},
"optimizer": {
"levels": ["Adadelta",
"Adagrad",
"Adam",
"AdamW",
"SparseAdam",
"Adamax",
"ASGD",
"NAdam",
"RAdam",
"RMSprop",
"Rprop",
"SGD"],
"type": "factor",
"default": "SGD",
"transform": "None",
"class_name": "torch.optim",
"core_model_parameter_type": "str",
"lower": 0,
"upper": 12},
"sgd_momentum": {
"type": "float",
"default": 0.0,
"transform": "None",
"lower": 0.0,
"upper": 1.0}
},
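The transform entries determine how the tuner's internal values are mapped to model parameters. For example, transform_power_2_int maps an integer x to 2^x, which is why the bounds [2, 9] for l1 correspond to layer widths between 4 and 512. A minimal re-implementation for illustration (not the spotPython source):

```python
# Illustrative re-implementation of the "transform_power_2_int" transform:
# the tuner searches over the exponent, the model receives 2**x.
def transform_power_2_int(x: int) -> int:
    return 2 ** x

print([transform_power_2_int(x) for x in range(2, 10)])
# -> [4, 8, 16, 32, 64, 128, 256, 512]
```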
Modify hyper_dict Hyperparameters for the Selected Algorithm aka core_model

spotPython provides functions for modifying the hyperparameters, their bounds and factors, as well as for activating and de-activating hyperparameters without re-compilation of the Python source code. These functions were described in Section 14.6.
The hyperparameter k_folds is not used; it is de-activated here by setting the lower and upper bound to the same value.

l1 and l2 as well as epochs and patience are set to small values for demonstration purposes. These values are too small for a real application. More reasonable values are, e.g.:

fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[2, 7])
fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[7, 9])

and

fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 7])
from spotPython.hyperparameters.values import modify_hyper_parameter_bounds
fun_control = modify_hyper_parameter_bounds(fun_control, "k_folds", bounds=[0, 0])
fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 2])
fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[2, 3])
fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[2, 5])
fun_control = modify_hyper_parameter_bounds(fun_control, "l2", bounds=[2, 5])
from spotPython.hyperparameters.values import modify_hyper_parameter_levels
fun_control = modify_hyper_parameter_levels(fun_control, "optimizer", ["Adam", "AdamW", "Adamax", "NAdam"])
Optimizers can be selected as described in Section 19.6.2.
Optimizers are described in Section 14.6.1.
fun_control = modify_hyper_parameter_bounds(fun_control,
    "lr_mult", bounds=[1e-3, 1e-3])
fun_control = modify_hyper_parameter_bounds(fun_control,
    "sgd_momentum", bounds=[0.9, 0.9])
The evaluation procedure requires the specification of two elements: the way in which the data is split into a train and a test set, and the loss function (plus a metric). These are described in Section 19.7.1.
The key "loss_function"
specifies the loss function which is used during the optimization, see Section 14.7.5.
We will use CrossEntropy loss for the multiclass-classification task.
from torch.nn import CrossEntropyLoss
loss_function = CrossEntropyLoss()
fun_control.update({"loss_function": loss_function,
"shuffle": True,
"eval": "train_hold_out"
})
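To make the role of the loss function concrete, CrossEntropyLoss can be exercised on dummy logits with plain PyTorch (illustration only, independent of spotPython):

```python
import torch
from torch.nn import CrossEntropyLoss

loss_function = CrossEntropyLoss()
logits = torch.randn(4, 10)           # batch of 4 samples, 10 classes (CIFAR10)
targets = torch.tensor([1, 0, 4, 9])  # ground-truth class indices
print(loss_function(logits, targets)) # scalar loss tensor
```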
import torchmetrics
metric_torch = torchmetrics.Accuracy(task="multiclass",
    num_classes=10).to(fun_control["device"])
fun_control.update({"metric_torch": metric_torch})
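A quick standalone check of the torchmetrics accuracy metric (illustration only):

```python
import torch
import torchmetrics

acc = torchmetrics.Accuracy(task="multiclass", num_classes=10)
preds = torch.tensor([0, 1, 2, 3])   # predicted class indices
target = torch.tensor([0, 1, 2, 0])  # true class indices
print(acc(preds, target))            # tensor(0.7500): 3 of 4 correct
```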
The following code passes the information about the parameter ranges and bounds to spot.
# extract the variable types, names, and bounds
from spotPython.hyperparameters.values import (get_bound_values,
    get_var_name,
    get_var_type,)
var_type = get_var_type(fun_control)
var_name = get_var_name(fun_control)
fun_control.update({"var_type": var_type,
                    "var_name": var_name})
lower = get_bound_values(fun_control, "lower")
upper = get_bound_values(fun_control, "upper")
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control))
| name | type | default | lower | upper | transform |
|--------------|--------|-----------|---------|---------|-----------------------|
| l1 | int | 5 | 2 | 5 | transform_power_2_int |
| l2 | int | 5 | 2 | 5 | transform_power_2_int |
| lr_mult | float | 1.0 | 0.001 | 0.001 | None |
| batch_size | int | 4 | 1 | 4 | transform_power_2_int |
| epochs | int | 3 | 2 | 3 | transform_power_2_int |
| k_folds | int | 1 | 0 | 0 | None |
| patience | int | 5 | 2 | 2 | None |
| optimizer | factor | SGD | 0 | 3 | None |
| sgd_momentum | float | 0.0 | 0.9 | 0.9 | None |
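Note how the factor optimizer is encoded: after restricting the levels above, the tuner searches integer codes between 0 and 3, and each code indexes one of the four remaining levels. An illustrative mapping (plain Python, not spotPython internals):

```python
# The four optimizer levels selected above, indexed by their integer code:
levels = ["Adam", "AdamW", "Adamax", "NAdam"]
for code, name in enumerate(levels):
    print(code, "->", name)
# 0 -> Adam, 1 -> AdamW, 2 -> Adamax, 3 -> NAdam
```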
The Objective Function fun_torch

The objective function fun_torch is selected next. It implements an interface from PyTorch's training, validation, and testing methods to spotPython.
from spotPython.fun.hypertorch import HyperTorch
fun = HyperTorch().fun_torch
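The call to spot_tuner.run() below passes a starting point X_start, which is not defined in the code shown here. A sketch of how it is typically obtained, assuming the helper get_default_hyperparameters_as_array is available in the installed spotPython version:

```python
# Assumption: X_start holds the default hyperparameters as a numpy array,
# obtained via spotPython's helper (adapt if the name differs in your version).
from spotPython.hyperparameters.values import get_default_hyperparameters_as_array
X_start = get_default_hyperparameters_as_array(fun_control)
```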
import numpy as np
from spotPython.spot import spot
from math import inf
spot_tuner = spot.Spot(fun=fun,
    lower = lower,
    upper = upper,
    fun_evals = inf,
    fun_repeats = 1,
    max_time = MAX_TIME,
    noise = False,
    tolerance_x = np.sqrt(np.spacing(1)),
    var_type = var_type,
    var_name = var_name,
    infill_criterion = "y",
    n_points = 1,
    seed = 123,
    log_level = 50,
    show_models = False,
    show_progress = True,
    fun_control = fun_control,
    design_control = {"init_size": INIT_SIZE,
                      "repeats": 1},
    surrogate_control = {"noise": True,
                         "cod_type": "norm",
                         "min_theta": -4,
                         "max_theta": 3,
                         "n_theta": len(var_name),
                         "model_fun_evals": 10_000,
                         "log_level": 50
                         })
spot_tuner.run(X_start=X_start)
config: {'l1': 16, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 16, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.1005999967455864 | Loss: 2.3320793640136719 | Acc: 0.1006000000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1005999967455864 | Loss: 2.3313088960647583 | Acc: 0.1006000000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1005000025033951 | Loss: 2.3305278373718261 | Acc: 0.1005000000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1004500016570091 | Loss: 2.3295538171768189 | Acc: 0.1004500000000000.
Epoch: 5 |
MulticlassAccuracy: 0.1027000024914742 | Loss: 2.3282799844741819 | Acc: 0.1027000000000000.
Epoch: 6 |
MulticlassAccuracy: 0.1048500016331673 | Loss: 2.3268387472152710 | Acc: 0.1048500000000000.
Epoch: 7 |
MulticlassAccuracy: 0.1062000021338463 | Loss: 2.3253345775604246 | Acc: 0.1062000000000000.
Epoch: 8 |
MulticlassAccuracy: 0.1062500029802322 | Loss: 2.3238091669082643 | Acc: 0.1062500000000000.
Returned to Spot: Validation loss: 2.3238091669082643
config: {'l1': 8, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 4, 'k_folds': 0, 'patience': 2, 'optimizer': 'Adamax', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.1001999974250793 | Loss: 2.3143725443840029 | Acc: 0.1002000000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1001999974250793 | Loss: 2.3127466676712034 | Acc: 0.1002000000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1001999974250793 | Loss: 2.3108304573059084 | Acc: 0.1002000000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1001999974250793 | Loss: 2.3086504985809326 | Acc: 0.1002000000000000.
Returned to Spot: Validation loss: 2.3086504985809326
config: {'l1': 32, 'l2': 16, 'lr_mult': 0.001, 'batch_size': 2, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.0838499963283539 | Loss: 2.2970648386478425 | Acc: 0.0838500000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1060499995946884 | Loss: 2.2672528599262236 | Acc: 0.1060500000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1532000005245209 | Loss: 2.2286043057560923 | Acc: 0.1532000000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1592999994754791 | Loss: 2.1975404310941697 | Acc: 0.1593000000000000.
Epoch: 5 |
MulticlassAccuracy: 0.1621000021696091 | Loss: 2.1720725005507471 | Acc: 0.1621000000000000.
Epoch: 6 |
MulticlassAccuracy: 0.1661500036716461 | Loss: 2.1485777876496317 | Acc: 0.1661500000000000.
Epoch: 7 |
MulticlassAccuracy: 0.1698500066995621 | Loss: 2.1261238770365716 | Acc: 0.1698500000000000.
Epoch: 8 |
MulticlassAccuracy: 0.2027000039815903 | Loss: 2.1048670004248620 | Acc: 0.2027000000000000.
Returned to Spot: Validation loss: 2.104867000424862
config: {'l1': 4, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 4, 'epochs': 4, 'k_folds': 0, 'patience': 2, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.1021500006318092 | Loss: 2.3173159094333649 | Acc: 0.1021500000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1021500006318092 | Loss: 2.3166419280052186 | Acc: 0.1021500000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1021500006318092 | Loss: 2.3159096458911894 | Acc: 0.1021500000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1021500006318092 | Loss: 2.3151150074481963 | Acc: 0.1021500000000000.
Returned to Spot: Validation loss: 2.3151150074481963
config: {'l1': 16, 'l2': 32, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'Adam', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.0977500006556511 | Loss: 2.3072158798217774 | Acc: 0.0977500000000000.
Epoch: 2 |
MulticlassAccuracy: 0.0977500006556511 | Loss: 2.3066844908714295 | Acc: 0.0977500000000000.
Epoch: 3 |
MulticlassAccuracy: 0.0977500006556511 | Loss: 2.3061227139472962 | Acc: 0.0977500000000000.
Epoch: 4 |
MulticlassAccuracy: 0.0977500006556511 | Loss: 2.3054926554679871 | Acc: 0.0977500000000000.
Epoch: 5 |
MulticlassAccuracy: 0.0977500006556511 | Loss: 2.3047465593338012 | Acc: 0.0977500000000000.
Epoch: 6 |
MulticlassAccuracy: 0.0977500006556511 | Loss: 2.3037577246665957 | Acc: 0.0977500000000000.
Epoch: 7 |
MulticlassAccuracy: 0.0977500006556511 | Loss: 2.3023899147033693 | Acc: 0.0977500000000000.
Epoch: 8 |
MulticlassAccuracy: 0.0977500006556511 | Loss: 2.3005272761344910 | Acc: 0.0977500000000000.
Returned to Spot: Validation loss: 2.300527276134491
config: {'l1': 8, 'l2': 16, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.0992000028491020 | Loss: 2.3124587755203247 | Acc: 0.0992000000000000.
Epoch: 2 |
MulticlassAccuracy: 0.0992000028491020 | Loss: 2.3107175286293029 | Acc: 0.0992000000000000.
Epoch: 3 |
MulticlassAccuracy: 0.0992000028491020 | Loss: 2.3079713578224181 | Acc: 0.0992000000000000.
Epoch: 4 |
MulticlassAccuracy: 0.0992000028491020 | Loss: 2.3040227384567262 | Acc: 0.0992000000000000.
Epoch: 5 |
MulticlassAccuracy: 0.0998999997973442 | Loss: 2.2972345275878907 | Acc: 0.0999000000000000.
Epoch: 6 |
MulticlassAccuracy: 0.1196999996900558 | Loss: 2.2885291831970216 | Acc: 0.1197000000000000.
Epoch: 7 |
MulticlassAccuracy: 0.1536000072956085 | Loss: 2.2788153867721559 | Acc: 0.1536000000000000.
Epoch: 8 |
MulticlassAccuracy: 0.1726000010967255 | Loss: 2.2669863924026488 | Acc: 0.1726000000000000.
Returned to Spot: Validation loss: 2.2669863924026488
spotPython tuning: 2.104867000424862 [##########] 100.00% Done...
<spotPython.spot.spot.Spot at 0x2acc26aa0>
The textual output shown in the console (or code cell) can be visualized with Tensorboard as described in Section 14.9, see also the description in the documentation: Tensorboard.
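For example, TensorBoard can be pointed at the tensorboard_path configured in fun_control_init above (standard TensorBoard CLI, launched here via notebook shell magic):

```python
# Launch TensorBoard on the log directory configured above,
# then open http://localhost:6006 in a browser.
!tensorboard --logdir=runs/12_spot_hpt_torch_cifar10
```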
After the hyperparameter tuning run is finished, the results can be analyzed as described in Section 14.10.
import pickle

SAVE = False
LOAD = False

if SAVE:
    result_file_name = "res_" + experiment_name + ".pkl"
    with open(result_file_name, 'wb') as f:
        pickle.dump(spot_tuner, f)

if LOAD:
    result_file_name = "ADD THE NAME here, e.g.: res_ch10-friedman-hpt-0_maans03_60min_20init_1K_2023-04-14_10-11-19.pkl"
    with open(result_file_name, 'rb') as f:
        spot_tuner = pickle.load(f)
After the hyperparameter tuning run is finished, the progress of the hyperparameter tuning can be visualized. The following code generates the progress plot.

spot_tuner.plot_progress(log_y=False,
    filename="./figures/" + experiment_name+"_progress.png")
print(gen_design_table(fun_control=fun_control,
    spot=spot_tuner))
| name | type | default | lower | upper | tuned | transform | importance | stars |
|--------------|--------|-----------|---------|---------|---------|-----------------------|--------------|---------|
| l1 | int | 5 | 2.0 | 5.0 | 5.0 | transform_power_2_int | 0.15 | . |
| l2 | int | 5 | 2.0 | 5.0 | 4.0 | transform_power_2_int | 0.00 | |
| lr_mult | float | 1.0 | 0.001 | 0.001 | 0.001 | None | 0.00 | |
| batch_size | int | 4 | 1.0 | 4.0 | 1.0 | transform_power_2_int | 0.26 | . |
| epochs | int | 3 | 2.0 | 3.0 | 3.0 | transform_power_2_int | 0.00 | |
| k_folds | int | 1 | 0.0 | 0.0 | 0.0 | None | 0.00 | |
| patience | int | 5 | 2.0 | 2.0 | 2.0 | None | 0.00 | |
| optimizer | factor | SGD | 0.0 | 3.0 | 3.0 | None | 100.00 | *** |
| sgd_momentum | float | 0.0 | 0.9 | 0.9 | 0.9 | None | 0.00 | |
spot_tuner.plot_importance(threshold=0.025, filename="./figures/" + experiment_name+"_importance.png")
The architecture of the spotPython
model can be obtained by the following code:
from spotPython.hyperparameters.values import get_one_core_model_from_X
X = spot_tuner.to_all_dim(spot_tuner.min_X.reshape(1,-1))
model_spot = get_one_core_model_from_X(X, fun_control)
model_spot
Net_CIFAR10(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=32, bias=True)
(fc2): Linear(in_features=32, out_features=16, bias=True)
(fc3): Linear(in_features=16, out_features=10, bias=True)
)
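As a quick sanity check of the tuned architecture, the number of trainable parameters can be counted with plain PyTorch:

```python
# Count the trainable parameters of the tuned network (plain PyTorch).
n_params = sum(p.numel() for p in model_spot.parameters() if p.requires_grad)
print(f"Trainable parameters: {n_params}")
```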
from spotPython.torch.traintest import (
    train_tuned,
    test_tuned,
    )

train_tuned(net=model_spot, train_dataset=train,
    loss_function=fun_control["loss_function"],
    metric=fun_control["metric_torch"],
    shuffle=True,
    device=fun_control["device"],
    path=None,
    task=fun_control["task"],)
Epoch: 1 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.315
MulticlassAccuracy: 0.0996999964118004 | Loss: 2.3080700983047486 | Acc: 0.0997000000000000.
Epoch: 2 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.302
MulticlassAccuracy: 0.1472000032663345 | Loss: 2.2859390279293059 | Acc: 0.1472000000000000.
Epoch: 3 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.275
MulticlassAccuracy: 0.1883500069379807 | Loss: 2.2514116731166838 | Acc: 0.1883500000000000.
Epoch: 4 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.239
MulticlassAccuracy: 0.1889500021934509 | Loss: 2.2111890979766846 | Acc: 0.1889500000000000.
Epoch: 5 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.195
MulticlassAccuracy: 0.1897000074386597 | Loss: 2.1724009061694147 | Acc: 0.1897000000000000.
Epoch: 6 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.158
MulticlassAccuracy: 0.1960500031709671 | Loss: 2.1386834780216217 | Acc: 0.1960500000000000.
Epoch: 7 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.126
MulticlassAccuracy: 0.2005500048398972 | Loss: 2.1079546381115914 | Acc: 0.2005500000000000.
Epoch: 8 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.093
MulticlassAccuracy: 0.2078000009059906 | Loss: 2.0786065553188324 | Acc: 0.2078000000000000.
Returned to Spot: Validation loss: 2.0786065553188324
If path is set to a filename, e.g., path = "model_spot_trained.pt", the weights of the trained model will be loaded from this file.
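A hedged sketch of that workflow, reusing the train_tuned signature shown above (assuming that a non-None path makes the trained weights persist to / load from this file, as the note above describes; the filename is arbitrary):

```python
# Assumption: with a non-None path, the trained weights are stored in /
# restored from the given file rather than kept only in memory.
train_tuned(net=model_spot, train_dataset=train,
            loss_function=fun_control["loss_function"],
            metric=fun_control["metric_torch"],
            shuffle=True,
            device=fun_control["device"],
            path="model_spot_trained.pt",
            task=fun_control["task"],)
```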
test_tuned(net=model_spot, test_dataset=test,
    shuffle=False,
    loss_function=fun_control["loss_function"],
    metric=fun_control["metric_torch"],
    device=fun_control["device"],
    task=fun_control["task"],)
MulticlassAccuracy: 0.2084999978542328 | Loss: 2.0775342788815498 | Acc: 0.2085000000000000.
Final evaluation: Validation loss: 2.07753427888155
Final evaluation: Validation metric: 0.2084999978542328
----------------------------------------------
(2.07753427888155, nan, tensor(0.2085))
The number of folds for cross validation can be set via the k_folds attribute of the model, e.g., setattr(model_spot, "k_folds", 10), as follows:

from spotPython.torch.traintest import evaluate_cv
# modify k-folds:
setattr(model_spot, "k_folds", 3)
df_eval, df_preds, df_metrics = evaluate_cv(net=model_spot,
    dataset=fun_control["data"],
    loss_function=fun_control["loss_function"],
    metric=fun_control["metric_torch"],
    task=fun_control["task"],
    writer=fun_control["writer"],
    writerId="model_spot_cv",
    device=fun_control["device"])
Error in Net_Core. Call to evaluate_cv() failed. err=TypeError("Expected sequence or array-like, got <class 'NoneType'>"), type(err)=<class 'TypeError'>
metric_name = type(fun_control["metric_torch"]).__name__
print(f"loss: {df_eval}, Cross-validated {metric_name}: {df_metrics}")
loss: nan, Cross-validated MulticlassAccuracy: nan
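The failure above occurs because fun_control["data"] is None in this setup, so evaluate_cv() has nothing to split. A minimal sketch of a fix, assuming evaluate_cv can split a torchvision dataset supplied under the "data" key:

```python
# Assumption: evaluate_cv performs its own k-fold split of fun_control["data"],
# so the full (unsplit) training dataset is stored under the "data" key.
fun_control.update({"data": train})
```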
= "./figures/" + experiment_name
filename =filename) spot_tuner.plot_important_hyperparameter_contour(filename
l1: 0.1505907656064017
batch_size: 0.2628721485249123
optimizer: 100.0
spot_tuner.parallel_plot()
Parallel coordinates plots
PLOT_ALL = False
if PLOT_ALL:
    n = spot_tuner.k
    for i in range(n-1):
        for j in range(i+1, n):
            # note: min_z and max_z must be defined before this loop is enabled
            spot_tuner.plot_contour(i=i, j=j, min_z=min_z, max_z=max_z)