12  HPT: PyTorch With CIFAR10 Data

In this tutorial, we will show how spotPython can be integrated into the PyTorch training workflow.

This document refers to the following software versions:

pip list | grep "spot[RiverPython]"
spotPython                 0.2.38
spotRiver                  0.0.93
Note: you may need to restart the kernel to use updated packages.

spotPython can be installed via pip. Alternatively, the source code can be downloaded from GitHub: https://github.com/sequential-parameter-optimization/spotPython.

!pip install spotPython
# import sys
# !{sys.executable} -m pip install --upgrade build
# !{sys.executable} -m pip install --upgrade --force-reinstall spotPython

12.1 Step 1: Setup

Before we consider the detailed experimental setup, we select the parameters that affect the run time (MAX_TIME), the initial design size (INIT_SIZE), and the device that is used (DEVICE).

Caution: Run time and initial design size should be increased for real experiments
  • MAX_TIME is set to one minute for demonstration purposes. For real experiments, this should be increased to at least 1 hour.
  • INIT_SIZE is set to 5 for demonstration purposes. For real experiments, this should be increased to at least 10.
Note: Device selection
  • The device can be selected by setting the variable DEVICE.
  • Since we are using a simple neural net, the setting "cpu" is preferred (on Mac).
  • If you have a GPU, you can use "cuda:0" instead.
  • If DEVICE is set to None, spotPython will automatically select the device.
    • This might result in "mps" on Macs, which is not the best choice for simple neural nets.
MAX_TIME = 1
INIT_SIZE = 5
DEVICE = "cpu" # "cuda:0" None
from spotPython.utils.device import getDevice
DEVICE = getDevice(DEVICE)
print(DEVICE)
cpu
import os
import copy
import socket
from datetime import datetime
from dateutil.tz import tzlocal
start_time = datetime.now(tzlocal())
HOSTNAME = socket.gethostname().split(".")[0]
experiment_name = '12-torch' + "_" + HOSTNAME + "_" + str(MAX_TIME) + "min_" + str(INIT_SIZE) + "init_" + str(start_time).split(".", 1)[0].replace(' ', '_')
experiment_name = experiment_name.replace(':', '-')
print(experiment_name)
if not os.path.exists('./figures'):
    os.makedirs('./figures')
12-torch_bartz09_1min_5init_2023-06-19_02-39-17

12.2 Step 2: Initialization of the Empty fun_control Dictionary

spotPython uses a Python dictionary for storing the information required for the hyperparameter tuning process, which was described in Section 14.2.

Caution: TensorBoard does not work under Windows
  • Since TensorBoard does not work under Windows, we recommend setting the parameter tensorboard_path to None if you are working under Windows.
from spotPython.utils.init import fun_control_init
fun_control = fun_control_init(task="classification",
    tensorboard_path="runs/12_spot_hpt_torch_cifar10",
    device=DEVICE)

12.3 Step 3: PyTorch Data Loading

12.3.1 Load the CIFAR10 Data

import torchvision
from torchvision import transforms
def load_data(data_dir="./data"):
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])

    trainset = torchvision.datasets.CIFAR10(
        root=data_dir, train=True, download=True, transform=transform)

    testset = torchvision.datasets.CIFAR10(
        root=data_dir, train=False, download=True, transform=transform)

    return trainset, testset
train, test = load_data()
Files already downloaded and verified
Files already downloaded and verified
  • Since this works fine, we can add the datasets to the fun_control dictionary:
n_samples = len(train)
# add the dataset to the fun_control
fun_control.update({"data": None, # dataset,
               "train": train,
               "test": test,
               "n_samples": n_samples,
               "target_column": None})

12.4 Step 4: Specification of the Preprocessing Model

After the training and test data are specified and added to the fun_control dictionary, spotPython allows the specification of a data preprocessing pipeline, e.g., for the scaling of the data or for the one-hot encoding of categorical variables, see Section 14.4. This feature is not used here, so we do not change the default value (which is None).
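For illustration only, a preprocessing model could be registered as follows. This is a sketch, left commented out; the "prep_model" key is an assumption based on the tabular-data examples in this book and is not used for the image data here.

# Sketch (not executed): register a scikit-learn transformer as the
# preprocessing model. The "prep_model" key is an assumption based on
# the tabular examples; the default None is kept in this tutorial.
# from sklearn.preprocessing import StandardScaler
# fun_control.update({"prep_model": StandardScaler()})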

12.5 Step 5: Select Model (algorithm) and core_model_hyper_dict

12.5.1 Implementing a Configurable Neural Network With spotPython

spotPython includes the Net_CIFAR10 class, which is implemented in the file netcifar10.py and imported here.

This class inherits from the class Net_Core, which is implemented in the file netcore.py; see Section 14.5.1.
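
As an illustration of such a configurable network, the following sketch (hypothetical, not spotPython's actual implementation) shows a LeNet-style net whose fully connected widths l1 and l2 are hyperparameters; it mirrors the architecture printed in Section 12.10.2 below:

import torch.nn as nn
import torch.nn.functional as F

class TinyCIFARNet(nn.Module):
    # Hypothetical sketch of a configurable net: l1 and l2 set the
    # widths of the two hidden fully connected layers.
    def __init__(self, l1=32, l2=16):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, l1)  # 400 input features for 32x32 images
        self.fc2 = nn.Linear(l1, l2)
        self.fc3 = nn.Linear(l2, 10)  # 10 CIFAR10 classes

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)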

from spotPython.torch.netcifar10 import Net_CIFAR10
from spotPython.data.torch_hyper_dict import TorchHyperDict
from spotPython.hyperparameters.values import add_core_model_to_fun_control
fun_control = add_core_model_to_fun_control(core_model=Net_CIFAR10,
                              fun_control=fun_control,
                              hyper_dict=TorchHyperDict,
                              filename=None)

12.5.2 The Search Space

12.5.3 Configuring the Search Space With spotPython

12.5.3.1 The hyper_dict Hyperparameters for the Selected Algorithm

spotPython uses JSON files for the specification of the hyperparameters, which were described in Section 14.5.5.

The corresponding entries for the core_model class are shown below.

fun_control['core_model_hyper_dict']
{'l1': {'type': 'int',
  'default': 5,
  'transform': 'transform_power_2_int',
  'lower': 2,
  'upper': 9},
 'l2': {'type': 'int',
  'default': 5,
  'transform': 'transform_power_2_int',
  'lower': 2,
  'upper': 9},
 'lr_mult': {'type': 'float',
  'default': 1.0,
  'transform': 'None',
  'lower': 0.1,
  'upper': 10.0},
 'batch_size': {'type': 'int',
  'default': 4,
  'transform': 'transform_power_2_int',
  'lower': 1,
  'upper': 4},
 'epochs': {'type': 'int',
  'default': 3,
  'transform': 'transform_power_2_int',
  'lower': 3,
  'upper': 4},
 'k_folds': {'type': 'int',
  'default': 1,
  'transform': 'None',
  'lower': 1,
  'upper': 1},
 'patience': {'type': 'int',
  'default': 5,
  'transform': 'None',
  'lower': 2,
  'upper': 10},
 'optimizer': {'levels': ['Adadelta',
   'Adagrad',
   'Adam',
   'AdamW',
   'SparseAdam',
   'Adamax',
   'ASGD',
   'NAdam',
   'RAdam',
   'RMSprop',
   'Rprop',
   'SGD'],
  'type': 'factor',
  'default': 'SGD',
  'transform': 'None',
  'class_name': 'torch.optim',
  'core_model_parameter_type': 'str',
  'lower': 0,
  'upper': 12},
 'sgd_momentum': {'type': 'float',
  'default': 0.0,
  'transform': 'None',
  'lower': 0.0,
  'upper': 1.0}}
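
The transform transform_power_2_int maps the raw integer value x to 2**x before it is passed to the model; e.g., the default l1 = 5 corresponds to a layer width of 2**5 = 32, and the default batch_size = 4 corresponds to a batch size of 16. A minimal sketch of this behavior (illustration, not the spotPython implementation):

def transform_power_2_int(x: int) -> int:
    return 2 ** x

print(transform_power_2_int(5))  # 32, the default width for l1
print(transform_power_2_int(4))  # 16, the default batch size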

12.6 Step 6: Modify hyper_dict Hyperparameters for the Selected Algorithm aka core_model

spotPython provides functions for modifying the hyperparameters, their bounds, and their factor levels, as well as for activating and deactivating hyperparameters without recompiling the Python source code. These functions were described in Section 14.6.

12.6.1 Modify Hyperparameters of Type numeric and integer (boolean)

The hyperparameter k_folds is not used; it is deactivated here by setting its lower and upper bounds to the same value.

Caution: Small net size, number of epochs, and patience for demonstration purposes
  • Net sizes l1 and l2 as well as epochs and patience are set to small values for demonstration purposes. These values are too small for a real application.
  • More reasonable values are, e.g.:
    • fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[2, 7])
    • fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[7, 9]) and
    • fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 7])
from spotPython.hyperparameters.values import modify_hyper_parameter_bounds
fun_control = modify_hyper_parameter_bounds(fun_control, "k_folds", bounds=[0, 0])
fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 2])
fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[2, 3])
fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[2, 5])
fun_control = modify_hyper_parameter_bounds(fun_control, "l2", bounds=[2, 5])

12.6.2 Modify Hyperparameters of Type factor

from spotPython.hyperparameters.values import modify_hyper_parameter_levels
fun_control = modify_hyper_parameter_levels(fun_control, "optimizer",["Adam", "AdamW", "Adamax", "NAdam"])

12.6.3 Optimizers

Optimizers are described in Section 14.6.1; their selection is described in Section 19.6.2. Since only Adam-type optimizers remain after the previous step, lr_mult and sgd_momentum are fixed to single values here:

# fix the learning-rate multiplier to a single value
fun_control = modify_hyper_parameter_bounds(fun_control,
    "lr_mult", bounds=[1e-3, 1e-3])
# fix sgd_momentum; it only affects the SGD optimizer, which is no
# longer among the selected levels
fun_control = modify_hyper_parameter_bounds(fun_control,
    "sgd_momentum", bounds=[0.9, 0.9])

12.7 Step 7: Selection of the Objective (Loss) Function

12.7.1 Evaluation

The evaluation procedure requires the specification of two elements:

  1. how the data is split into a train and a test set, and
  2. the loss function (and a metric).

These are described in Section 19.7.1.

The key "loss_function" specifies the loss function which is used during the optimization, see Section 14.7.5.

We will use the cross-entropy loss (CrossEntropyLoss) for the multiclass classification task.

from torch.nn import CrossEntropyLoss
loss_function = CrossEntropyLoss()
fun_control.update({
        "loss_function": loss_function,
        "shuffle": True,
        "eval":  "train_hold_out"
        })
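
As a quick illustration (not part of the tuning run), CrossEntropyLoss expects raw logits of shape (batch, classes) and integer class targets:

import torch
logits = torch.randn(4, 10)            # batch of 4 samples, 10 classes
targets = torch.tensor([1, 0, 4, 9])   # true class indices
print(loss_function(logits, targets))  # scalar tensor, roughly ln(10) ≈ 2.3 for random logits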

12.7.2 Metric

import torchmetrics
metric_torch = torchmetrics.Accuracy(task="multiclass",
     num_classes=10).to(fun_control["device"])
fun_control.update({"metric_torch": metric_torch})

12.8 Step 8: Calling the SPOT Function

12.8.1 Preparing the SPOT Call

The following code passes the information about the parameter ranges and bounds to spot.

# extract the variable types, names, and bounds
from spotPython.hyperparameters.values import (get_bound_values,
    get_var_name,
    get_var_type,)
var_type = get_var_type(fun_control)
var_name = get_var_name(fun_control)
fun_control.update({"var_type": var_type,
                    "var_name": var_name})
lower = get_bound_values(fun_control, "lower")
upper = get_bound_values(fun_control, "upper")
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control))
| name         | type   | default   |   lower |   upper | transform             |
|--------------|--------|-----------|---------|---------|-----------------------|
| l1           | int    | 5         |   2     |   5     | transform_power_2_int |
| l2           | int    | 5         |   2     |   5     | transform_power_2_int |
| lr_mult      | float  | 1.0       |   0.001 |   0.001 | None                  |
| batch_size   | int    | 4         |   1     |   4     | transform_power_2_int |
| epochs       | int    | 3         |   2     |   3     | transform_power_2_int |
| k_folds      | int    | 1         |   0     |   0     | None                  |
| patience     | int    | 5         |   2     |   2     | None                  |
| optimizer    | factor | SGD       |   0     |   3     | None                  |
| sgd_momentum | float  | 0.0       |   0.9   |   0.9   | None                  |

12.8.2 The Objective Function fun_torch

The objective function fun_torch is selected next. It implements an interface from PyTorch’s training, validation, and testing methods to spotPython.

from spotPython.fun.hypertorch import HyperTorch
fun = HyperTorch().fun_torch

12.8.3 Starting the Hyperparameter Tuning

import numpy as np
from spotPython.spot import spot
from math import inf
from spotPython.hyperparameters.values import get_default_hyperparameters_as_array
# use the default hyperparameters as the starting point for the tuner
X_start = get_default_hyperparameters_as_array(fun_control)
spot_tuner = spot.Spot(fun=fun,
                   lower = lower,
                   upper = upper,
                   fun_evals = inf,
                   fun_repeats = 1,
                   max_time = MAX_TIME,
                   noise = False,
                   tolerance_x = np.sqrt(np.spacing(1)),
                   var_type = var_type,
                   var_name = var_name,
                   infill_criterion = "y",
                   n_points = 1,
                   seed=123,
                   log_level = 50,
                   show_models= False,
                   show_progress= True,
                   fun_control = fun_control,
                   design_control={"init_size": INIT_SIZE,
                                   "repeats": 1},
                   surrogate_control={"noise": True,
                                      "cod_type": "norm",
                                      "min_theta": -4,
                                      "max_theta": 3,
                                      "n_theta": len(var_name),
                                      "model_fun_evals": 10_000,
                                      "log_level": 50
                                      })
spot_tuner.run(X_start=X_start)

config: {'l1': 16, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 16, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 | 
MulticlassAccuracy: 0.1006999984383583 | Loss: 2.3223705308914186 | Acc: 0.1007000000000000.
Epoch: 2 | 
MulticlassAccuracy: 0.1011999994516373 | Loss: 2.3199273319244385 | Acc: 0.1012000000000000.
Epoch: 3 | 
MulticlassAccuracy: 0.1028999984264374 | Loss: 2.3167388437271117 | Acc: 0.1029000000000000.
Epoch: 4 | 
MulticlassAccuracy: 0.1064499989151955 | Loss: 2.3128294769287110 | Acc: 0.1064500000000000.
Epoch: 5 | 
MulticlassAccuracy: 0.1097500026226044 | Loss: 2.3082741222381591 | Acc: 0.1097500000000000.
Epoch: 6 | 
MulticlassAccuracy: 0.1121499985456467 | Loss: 2.3036102567672732 | Acc: 0.1121500000000000.
Epoch: 7 | 
MulticlassAccuracy: 0.1137000024318695 | Loss: 2.2997931219100951 | Acc: 0.1137000000000000.
Epoch: 8 | 
MulticlassAccuracy: 0.1133000031113625 | Loss: 2.2963734565734861 | Acc: 0.1133000000000000.
Returned to Spot: Validation loss: 2.296373456573486

config: {'l1': 8, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 4, 'k_folds': 0, 'patience': 2, 'optimizer': 'Adamax', 'sgd_momentum': 0.9}
Epoch: 1 | 
MulticlassAccuracy: 0.0991500020027161 | Loss: 2.3228664064407347 | Acc: 0.0991500000000000.
Epoch: 2 | 
MulticlassAccuracy: 0.0988999977707863 | Loss: 2.3209327189445497 | Acc: 0.0989000000000000.
Epoch: 3 | 
MulticlassAccuracy: 0.1055999994277954 | Loss: 2.3194583466529846 | Acc: 0.1056000000000000.
Epoch: 4 | 
MulticlassAccuracy: 0.1221000030636787 | Loss: 2.3179262590408327 | Acc: 0.1221000000000000.
Returned to Spot: Validation loss: 2.3179262590408327

config: {'l1': 32, 'l2': 16, 'lr_mult': 0.001, 'batch_size': 2, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 | 
MulticlassAccuracy: 0.1402499973773956 | Loss: 2.2941725546360017 | Acc: 0.1402500000000000.
Epoch: 2 | 
MulticlassAccuracy: 0.1581500023603439 | Loss: 2.2664855610966681 | Acc: 0.1581500000000000.
Epoch: 3 | 
MulticlassAccuracy: 0.1878499984741211 | Loss: 2.2389586074113845 | Acc: 0.1878500000000000.
Epoch: 4 | 
MulticlassAccuracy: 0.2070000022649765 | Loss: 2.2139738660097121 | Acc: 0.2070000000000000.
Epoch: 5 | 
MulticlassAccuracy: 0.2111500054597855 | Loss: 2.1935498227000236 | Acc: 0.2111500000000000.
Epoch: 6 | 
MulticlassAccuracy: 0.2131499946117401 | Loss: 2.1767184089899061 | Acc: 0.2131500000000000.
Epoch: 7 | 
MulticlassAccuracy: 0.2197999954223633 | Loss: 2.1595202534437181 | Acc: 0.2198000000000000.
Epoch: 8 | 
MulticlassAccuracy: 0.2282499969005585 | Loss: 2.1417718604683875 | Acc: 0.2282500000000000.
Returned to Spot: Validation loss: 2.1417718604683875

config: {'l1': 4, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 4, 'epochs': 4, 'k_folds': 0, 'patience': 2, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 | 
MulticlassAccuracy: 0.1031500026583672 | Loss: 2.3165911318778991 | Acc: 0.1031500000000000.
Epoch: 2 | 
MulticlassAccuracy: 0.1031500026583672 | Loss: 2.3159911912679672 | Acc: 0.1031500000000000.
Epoch: 3 | 
MulticlassAccuracy: 0.1031500026583672 | Loss: 2.3148182560920714 | Acc: 0.1031500000000000.
Epoch: 4 | 
MulticlassAccuracy: 0.1031500026583672 | Loss: 2.3134243086099624 | Acc: 0.1031500000000000.
Returned to Spot: Validation loss: 2.3134243086099624

config: {'l1': 16, 'l2': 32, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'Adam', 'sgd_momentum': 0.9}
Epoch: 1 | 
MulticlassAccuracy: 0.0997999981045723 | Loss: 2.3052040863990784 | Acc: 0.0998000000000000.
Epoch: 2 | 
MulticlassAccuracy: 0.0997999981045723 | Loss: 2.3039341591835023 | Acc: 0.0998000000000000.
Epoch: 3 | 
MulticlassAccuracy: 0.0997999981045723 | Loss: 2.3019620117187500 | Acc: 0.0998000000000000.
Epoch: 4 | 
MulticlassAccuracy: 0.1004500016570091 | Loss: 2.2992475516319275 | Acc: 0.1004500000000000.
Epoch: 5 | 
MulticlassAccuracy: 0.1081499978899956 | Loss: 2.2960347411155699 | Acc: 0.1081500000000000.
Epoch: 6 | 
MulticlassAccuracy: 0.1198500022292137 | Loss: 2.2926005097389219 | Acc: 0.1198500000000000.
Epoch: 7 | 
MulticlassAccuracy: 0.1262000054121017 | Loss: 2.2889273350715635 | Acc: 0.1262000000000000.
Epoch: 8 | 
MulticlassAccuracy: 0.1288499981164932 | Loss: 2.2849002689361573 | Acc: 0.1288500000000000.
Returned to Spot: Validation loss: 2.2849002689361573

config: {'l1': 8, 'l2': 16, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 | 
MulticlassAccuracy: 0.0969000011682510 | Loss: 2.3096915546417236 | Acc: 0.0969000000000000.
Epoch: 2 | 
MulticlassAccuracy: 0.1062500029802322 | Loss: 2.3068276701927184 | Acc: 0.1062500000000000.
Epoch: 3 | 
MulticlassAccuracy: 0.1194500029087067 | Loss: 2.3036983233451842 | Acc: 0.1194500000000000.
Epoch: 4 | 
MulticlassAccuracy: 0.1287499964237213 | Loss: 2.3008342904090879 | Acc: 0.1287500000000000.
Epoch: 5 | 
MulticlassAccuracy: 0.1298500001430511 | Loss: 2.2975508972167971 | Acc: 0.1298500000000000.
Epoch: 6 | 
MulticlassAccuracy: 0.1298000067472458 | Loss: 2.2930915016174316 | Acc: 0.1298000000000000.
Epoch: 7 | 
MulticlassAccuracy: 0.1395500004291534 | Loss: 2.2868650953292846 | Acc: 0.1395500000000000.
Epoch: 8 | 
MulticlassAccuracy: 0.1609500050544739 | Loss: 2.2795675284385681 | Acc: 0.1609500000000000.
Returned to Spot: Validation loss: 2.279567528438568
spotPython tuning: 2.1417718604683875 [##########] 100.00% Done...
<spotPython.spot.spot.Spot at 0x16eaaae00>

12.9 Step 9: TensorBoard

The textual output shown in the console (or code cell) can be visualized with TensorBoard, as described in Section 14.9; see also the description in the documentation: Tensorboard.
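
TensorBoard can be started from the command line and pointed at the path configured in Step 2:

tensorboard --logdir="runs/12_spot_hpt_torch_cifar10"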

12.10 Step 10: Results

After the hyperparameter tuning run is finished, the results can be analyzed as described in Section 14.10.

import pickle

SAVE = False
LOAD = False

if SAVE:
    result_file_name = "res_" + experiment_name + ".pkl"
    with open(result_file_name, 'wb') as f:
        pickle.dump(spot_tuner, f)

if LOAD:
    result_file_name = "ADD THE NAME here, e.g.: res_ch10-friedman-hpt-0_maans03_60min_20init_1K_2023-04-14_10-11-19.pkl"
    with open(result_file_name, 'rb') as f:
        spot_tuner =  pickle.load(f)

After the hyperparameter tuning run is finished, the progress of the hyperparameter tuning can be visualized. The following code generates the progress plot shown below.

spot_tuner.plot_progress(log_y=False,
    filename="./figures/" + experiment_name+"_progress.png")

Progress plot. Black dots denote results from the initial design. Red dots illustrate the improvement found by the surrogate model based optimization.
  • Print the results
print(gen_design_table(fun_control=fun_control,
    spot=spot_tuner))
| name         | type   | default   |   lower |   upper |   tuned | transform             |   importance | stars   |
|--------------|--------|-----------|---------|---------|---------|-----------------------|--------------|---------|
| l1           | int    | 5         |     2.0 |     5.0 |     5.0 | transform_power_2_int |        85.47 | **      |
| l2           | int    | 5         |     2.0 |     5.0 |     4.0 | transform_power_2_int |         0.00 |         |
| lr_mult      | float  | 1.0       |   0.001 |   0.001 |   0.001 | None                  |         0.00 |         |
| batch_size   | int    | 4         |     1.0 |     4.0 |     1.0 | transform_power_2_int |       100.00 | ***     |
| epochs       | int    | 3         |     2.0 |     3.0 |     3.0 | transform_power_2_int |         0.00 |         |
| k_folds      | int    | 1         |     0.0 |     0.0 |     0.0 | None                  |         0.00 |         |
| patience     | int    | 5         |     2.0 |     2.0 |     2.0 | None                  |         0.00 |         |
| optimizer    | factor | SGD       |     0.0 |     3.0 |     3.0 | None                  |        51.34 | **      |
| sgd_momentum | float  | 0.0       |     0.9 |     0.9 |     0.9 | None                  |         0.00 |         |

12.10.1 Show Variable Importance

spot_tuner.plot_importance(threshold=0.025, filename="./figures/" + experiment_name+"_importance.png")

Variable importance plot, threshold 0.025.

12.10.2 Get the Tuned Architecture (SPOT Results)

The architecture of the spotPython model can be obtained by the following code:

from spotPython.hyperparameters.values import get_one_core_model_from_X
X = spot_tuner.to_all_dim(spot_tuner.min_X.reshape(1,-1))
model_spot = get_one_core_model_from_X(X, fun_control)
model_spot
Net_CIFAR10(
  (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=32, bias=True)
  (fc2): Linear(in_features=32, out_features=16, bias=True)
  (fc3): Linear(in_features=16, out_features=10, bias=True)
)

The tuned values l1 = 5 and l2 = 4 from the table above are transformed via 2**x, which yields the fully connected layer widths 32 and 16 shown in the printed architecture.

12.10.3 Evaluation of the Tuned Architecture

from spotPython.torch.traintest import (
    train_tuned,
    test_tuned,
    )
train_tuned(net=model_spot, train_dataset=train,
        loss_function=fun_control["loss_function"],
        metric=fun_control["metric_torch"],
        shuffle=True,
        device = fun_control["device"],
        path=None,
        task=fun_control["task"],)
Epoch: 1 | 
Batch: 10000. Batch Size: 2. Training Loss (running): 2.307
MulticlassAccuracy: 0.0995500013232231 | Loss: 2.3034106528520586 | Acc: 0.0995500000000000.
Epoch: 2 | 
Batch: 10000. Batch Size: 2. Training Loss (running): 2.298
MulticlassAccuracy: 0.1118000000715256 | Loss: 2.2878369047284126 | Acc: 0.1118000000000000.
Epoch: 3 | 
Batch: 10000. Batch Size: 2. Training Loss (running): 2.281
MulticlassAccuracy: 0.1352999955415726 | Loss: 2.2621154337048530 | Acc: 0.1353000000000000.
Epoch: 4 | 
Batch: 10000. Batch Size: 2. Training Loss (running): 2.252
MulticlassAccuracy: 0.1632499992847443 | Loss: 2.2318782879948618 | Acc: 0.1632500000000000.
Epoch: 5 | 
Batch: 10000. Batch Size: 2. Training Loss (running): 2.221
MulticlassAccuracy: 0.1868000030517578 | Loss: 2.2025451619029046 | Acc: 0.1868000000000000.
Epoch: 6 | 
Batch: 10000. Batch Size: 2. Training Loss (running): 2.191
MulticlassAccuracy: 0.2019499987363815 | Loss: 2.1765039404153823 | Acc: 0.2019500000000000.
Epoch: 7 | 
Batch: 10000. Batch Size: 2. Training Loss (running): 2.164
MulticlassAccuracy: 0.2197500020265579 | Loss: 2.1519887288570403 | Acc: 0.2197500000000000.
Epoch: 8 | 
Batch: 10000. Batch Size: 2. Training Loss (running): 2.134
MulticlassAccuracy: 0.2361000031232834 | Loss: 2.1273523331165314 | Acc: 0.2361000000000000.
Returned to Spot: Validation loss: 2.1273523331165314

If path is set to a filename, e.g., path = "model_spot_trained.pt", the weights of the trained model will be saved to this file; test_tuned can then reload them from the same path.
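
For example (a sketch, left commented out; the filename is arbitrary):

# train_tuned(net=model_spot, train_dataset=train,
#         loss_function=fun_control["loss_function"],
#         metric=fun_control["metric_torch"],
#         shuffle=True,
#         device=fun_control["device"],
#         path="model_spot_trained.pt",
#         task=fun_control["task"],)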

test_tuned(net=model_spot, test_dataset=test,
            shuffle=False,
            loss_function=fun_control["loss_function"],
            metric=fun_control["metric_torch"],
            device = fun_control["device"],
            task=fun_control["task"],)
MulticlassAccuracy: 0.2405000030994415 | Loss: 2.1225482901096342 | Acc: 0.2405000000000000.
Final evaluation: Validation loss: 2.122548290109634
Final evaluation: Validation metric: 0.24050000309944153
----------------------------------------------
(2.122548290109634, nan, tensor(0.2405))

12.10.4 Cross-validated Evaluations

Caution: Cross-validated Evaluations
  • The number of folds is set to 1 by default.
  • Here it was changed to 3 for demonstration purposes.
  • Set the number of folds to a reasonable value, e.g., 10.
  • This can be done by setting the k_folds attribute of the model as follows:
  • setattr(model_spot, "k_folds", 10)
from spotPython.torch.traintest import evaluate_cv
# modify k_folds:
setattr(model_spot, "k_folds",  3)
df_eval, df_preds, df_metrics = evaluate_cv(net=model_spot,
            dataset=fun_control["data"],
            loss_function=fun_control["loss_function"],
            metric=fun_control["metric_torch"],
            task=fun_control["task"],
            writer=fun_control["writer"],
            writerId="model_spot_cv",
            device = fun_control["device"])
Error in Net_Core. Call to evaluate_cv() failed. err=TypeError("Expected sequence or array-like, got <class 'NoneType'>"), type(err)=<class 'TypeError'>

The call fails because fun_control["data"] is None: in Step 3, the CIFAR10 data was stored under the train and test keys only, so no complete dataset is available for cross-validation. To actually run the cross-validation, pass a dataset, e.g., dataset=train.
metric_name = type(fun_control["metric_torch"]).__name__
print(f"loss: {df_eval}, Cross-validated {metric_name}: {df_metrics}")
loss: nan, Cross-validated MulticlassAccuracy: nan

12.10.5 Detailed Hyperparameter Plots

filename = "./figures/" + experiment_name
spot_tuner.plot_important_hyperparameter_contour(filename=filename)
l1:  85.47467788740146
batch_size:  100.0
optimizer:  51.338262439659886

Contour plots.

12.10.6 Parallel Coordinates Plot

spot_tuner.parallel_plot()

Parallel coordinates plots

12.10.7 Plot all Combinations of Hyperparameters

  • Warning: this may take a while.
PLOT_ALL = False
if PLOT_ALL:
    n = spot_tuner.k
    # derive the z-axis limits from the observed objective values
    min_z = spot_tuner.y.min()
    max_z = spot_tuner.y.max()
    for i in range(n-1):
        for j in range(i+1, n):
            spot_tuner.plot_contour(i=i, j=j, min_z=min_z, max_z=max_z)