In this tutorial, we will show how spotPython can be integrated into the PyTorch Lightning training workflow for a classification task. The data set ./data/VBDP/train.csv is used.
This document refers to the following software versions:

python: 3.10.10
torch: 2.0.1
torchvision: 0.15.0

pip list | grep "spot[RiverPython]"

spotPython 0.2.46
spotRiver 0.0.93
Note: you may need to restart the kernel to use updated packages.
spotPython can be installed via pip. Alternatively, the source code can be downloaded from GitHub: https://github.com/sequential-parameter-optimization/spotPython.

!pip install spotPython

The following commands can be used to install spotPython from GitHub.

# import sys
# !{sys.executable} -m pip install --upgrade build
# !{sys.executable} -m pip install --upgrade --force-reinstall spotPython
Before we consider the detailed experimental setup, we select the parameters that affect run time, initial design size and the device that is used.

DEVICE: Since we are using a simple neural net, the setting "cpu" is preferred (on Mac). If you have a GPU, you can use "cuda:0" instead. If DEVICE is set to "auto" or None, spotPython will automatically select the device. This might result in "mps" on Macs, which is not the best choice for simple neural nets.

PREFIX is used for the experiment name and the name of the log file.

MAX_TIME = 1
INIT_SIZE = 5
DEVICE = "auto" #"cpu" # "cuda:0"
WORKERS = 0
PREFIX = "30"

from spotPython.utils.device import getDevice
DEVICE = getDevice(DEVICE)
print(DEVICE)

mps
import os
if not os.path.exists('./figures'):
    os.makedirs('./figures')
The fun_control Dictionary

Set tensorboard_path to None if you are working under Windows.

spotPython uses a Python dictionary for storing the information required for the hyperparameter tuning process, which was described in Section 14.2, see Initialization of the fun_control Dictionary in the documentation.
from spotPython.utils.init import fun_control_init
from spotPython.utils.file import get_experiment_name
experiment_name = get_experiment_name(prefix=PREFIX)
fun_control = fun_control_init(task="classification",
                               tensorboard_path="./runs/" + experiment_name,
                               num_workers=WORKERS,
                               device=DEVICE)
The data loading and preprocessing is handled by Lightning and PyTorch. It comprises the following classes:

CSVDataset: A class that loads the data from a CSV file. [SOURCE]
CSVDataModule: A class that prepares the data for training and testing. [SOURCE]

import torch
from spotPython.light.csvdataset import CSVDataset
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor
# Create an instance of CSVDataset
dataset = CSVDataset(csv_file="./data/VBDP/train.csv", train=True)
# show the dimensions of the input data
print(dataset[0][0].shape)
# show the first element of the input data
print(dataset[0][0])
# show the size of the dataset
print(f"Dataset Size: {len(dataset)}")
torch.Size([64])
tensor([1., 1., 0., 1., 1., 1., 1., 0., 1., 1., 1., 1., 0., 0., 1., 1., 0., 0.,
1., 0., 1., 0., 1., 1., 1., 1., 1., 1., 1., 0., 0., 1., 0., 0., 0., 0.,
1., 0., 0., 0., 0., 0., 1., 0., 1., 0., 1., 0., 0., 0., 0., 1., 0., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
Dataset Size: 707
# Set batch size for DataLoader
batch_size = 3
# Create DataLoader
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

# Iterate over the data in the DataLoader
for batch in dataloader:
    inputs, targets = batch
    print(f"Batch Size: {inputs.size(0)}")
    print("---------------")
    print(f"Inputs: {inputs}")
    print(f"Targets: {targets}")
    break
Batch Size: 3
---------------
Inputs: tensor([[1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1., 0., 0., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 1., 0., 1.,
0., 0., 0., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1.,
1., 0., 1., 1., 0., 0., 0., 0., 1., 1., 1., 0., 1., 1., 1., 1., 1., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 1., 1., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
Targets: tensor([9, 5, 0])
The data sets are handled outside of the fun_control dictionary by Lightning and PyTorch. In contrast to the spotPython versions for torch, river and sklearn, the data sets are therefore not added to the fun_control dictionary.

In the fun_control dictionary, the torch, sklearn and river versions of spotPython allow the specification of a data preprocessing pipeline, e.g., for the scaling of the data or for the one-hot encoding of categorical variables, see Section 14.4. This feature is not used in the Lightning version. Lightning allows the data preprocessing to be specified in the LightningDataModule class. It is not considered here, because it should be computed at one location only.
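For illustration only, a minimal LightningDataModule in the spirit of the CSVDataModule mentioned above could look as follows. This is a sketch, not spotPython's actual implementation; the class name SketchCSVDataModule and the 80/20 split are assumptions.

# Hypothetical sketch of a LightningDataModule around CSVDataset.
from pytorch_lightning import LightningDataModule
from torch.utils.data import DataLoader, random_split
from spotPython.light.csvdataset import CSVDataset

class SketchCSVDataModule(LightningDataModule):
    def __init__(self, csv_file="./data/VBDP/train.csv", batch_size=64):
        super().__init__()
        self.csv_file = csv_file
        self.batch_size = batch_size

    def setup(self, stage=None):
        # load the full training data and split it 80/20 into train/validation
        full = CSVDataset(csv_file=self.csv_file, train=True)
        n_val = len(full) // 5
        self.train_set, self.val_set = random_split(full, [len(full) - n_val, n_val])

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)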
The core_model (algorithm) and core_model_hyper_dict
spotPython includes the NetLightBase class [SOURCE] for configurable neural networks. The class is imported here. It inherits from the class Lightning.LightningModule, which is the base class for all models in Lightning. Lightning.LightningModule is a subclass of torch.nn.Module and provides additional functionality for the training and testing of neural networks. The class Lightning.LightningModule is described in the Lightning documentation.
from spotPython.light.netlightbase import NetLightBase
from spotPython.data.light_hyper_dict import LightHyperDict
from spotPython.hyperparameters.values import add_core_model_to_fun_control
fun_control = add_core_model_to_fun_control(core_model=NetLightBase,
                                            fun_control=fun_control,
                                            hyper_dict=LightHyperDict)
The default entries for the core_model class are shown below.

fun_control['core_model_hyper_dict']
{'l1': {'type': 'int',
'default': 3,
'transform': 'transform_power_2_int',
'lower': 3,
'upper': 8},
'epochs': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 4,
'upper': 9},
'batch_size': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 1,
'upper': 4},
'act_fn': {'levels': ['Sigmoid', 'Tanh', 'ReLU', 'LeakyReLU', 'ELU', 'Swish'],
'type': 'factor',
'default': 'ReLU',
'transform': 'None',
'class_name': 'spotPython.torch.activation',
'core_model_parameter_type': 'instance()',
'lower': 0,
'upper': 2},
'optimizer': {'levels': ['Adadelta',
'Adagrad',
'Adam',
'AdamW',
'SparseAdam',
'Adamax',
'ASGD',
'NAdam',
'RAdam',
'RMSprop',
'Rprop',
'SGD'],
'type': 'factor',
'default': 'SGD',
'transform': 'None',
'class_name': 'torch.optim',
'core_model_parameter_type': 'str',
'lower': 0,
'upper': 11},
'dropout_prob': {'type': 'float',
'default': 0.01,
'transform': 'None',
'lower': 0.0,
'upper': 0.1},
'lr_mult': {'type': 'float',
'default': 1.0,
'transform': 'None',
'lower': 0.1,
'upper': 10.0}}
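The transform_power_2_int entries indicate that the tuned integer is transformed before it is passed to the model. Assuming the transform is x -> 2**x, as the name and the tuned configurations shown later suggest, the default l1 range [3, 8] corresponds to layer widths between 8 and 256:

# Sketch of the transform_power_2_int transform (assumed to be x -> 2**x):
def transform_power_2_int(x: int) -> int:
    return 2 ** x

# default l1 bounds [3, 8] map to the layer widths 8, 16, ..., 256
print([transform_power_2_int(x) for x in range(3, 9)])  # [8, 16, 32, 64, 128, 256]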
The NetLightBase is a configurable neural network. The hyperparameters of the model are specified in the core_model_hyper_dict dictionary [SOURCE].
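To see how these hyperparameters shape the network, here is a rough, hypothetical sketch of a Sequential body consistent with the models printed during tuning below: l1 sets the width of the first hidden layer, the widths roughly halve from layer to layer, and the final layer maps to the 11 classes. The actual NetLightBase implementation may differ.

# Hypothetical sketch, not the actual NetLightBase code.
import torch.nn as nn

def build_body(l1=8, act_fn=nn.ReLU, dropout_prob=0.01, n_features=64, n_classes=11):
    # layer widths: input, l1, then roughly halving
    widths = [n_features, l1, l1 // 2, l1 // 2, max(l1 // 4, 1)]
    layers = []
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        layers += [nn.Linear(w_in, w_out), act_fn(), nn.Dropout(dropout_prob)]
    layers.append(nn.Linear(widths[-1], n_classes))
    return nn.Sequential(*layers)

print(build_body(l1=8))  # compare with the Sequential models in the tuning output below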
Modify hyper_dict Hyperparameters for the Selected Algorithm aka core_model

spotPython provides functions for modifying the hyperparameters, their bounds and factors as well as for activating and de-activating hyperparameters without re-compilation of the Python source code. These functions were described in Section 14.6.

epochs and patience are set to small values for demonstration purposes. These values are too small for a real application. More reasonable values are, e.g.:
fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[7, 9]) and
fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 7])
from spotPython.hyperparameters.values import modify_hyper_parameter_bounds
fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[2, 3])
fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[1, 2])
fun_control = modify_hyper_parameter_bounds(fun_control, "batch_size", bounds=[6, 8])

from spotPython.hyperparameters.values import modify_hyper_parameter_levels
fun_control = modify_hyper_parameter_levels(fun_control, "optimizer", ["Adam", "AdamW", "Adamax", "NAdam"])
# fun_control = modify_hyper_parameter_levels(fun_control, "optimizer", ["Adam"])
The updated fun_control dictionary is shown below.

fun_control["core_model_hyper_dict"]
{'l1': {'type': 'int',
'default': 3,
'transform': 'transform_power_2_int',
'lower': 2,
'upper': 3},
'epochs': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 1,
'upper': 2},
'batch_size': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 6,
'upper': 8},
'act_fn': {'levels': ['Sigmoid', 'Tanh', 'ReLU', 'LeakyReLU', 'ELU', 'Swish'],
'type': 'factor',
'default': 'ReLU',
'transform': 'None',
'class_name': 'spotPython.torch.activation',
'core_model_parameter_type': 'instance()',
'lower': 0,
'upper': 2},
'optimizer': {'levels': ['Adam', 'AdamW', 'Adamax', 'NAdam'],
'type': 'factor',
'default': 'SGD',
'transform': 'None',
'class_name': 'torch.optim',
'core_model_parameter_type': 'str',
'lower': 0,
'upper': 3},
'dropout_prob': {'type': 'float',
'default': 0.01,
'transform': 'None',
'lower': 0.0,
'upper': 0.1},
'lr_mult': {'type': 'float',
'default': 1.0,
'transform': 'None',
'lower': 0.1,
'upper': 10.0}}
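Factor hyperparameters such as optimizer are encoded as integers: the tuner searches the range [lower, upper], and the integer is mapped back to the corresponding entry of the levels list (this mapping is an assumption about spotPython's internals, but it is consistent with the bounds shown above). A small check:

levels = fun_control["core_model_hyper_dict"]["optimizer"]["levels"]
print(len(levels))           # 4 levels after modify_hyper_parameter_levels
print(levels[0], levels[3])  # 'Adam' 'NAdam' -- bounds lower=0, upper=3 index into this list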
The evaluation procedure requires the specification of two elements: the way the data is split into train and validation sets, and the loss function (and metric). The data splitting is handled by Lightning. The loss function is specified in the configurable network class [SOURCE]. We will use CrossEntropy loss for the multiclass-classification task.
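As a brief illustration of the loss computation (a sketch, independent of NetLightBase): torch.nn.CrossEntropyLoss expects raw logits of shape (batch, n_classes) and integer class labels, as produced by the DataLoader above.

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(3, 11)        # raw model outputs for the 11 classes
labels = torch.tensor([9, 5, 0])   # integer class labels, as in the DataLoader example
print(criterion(logits, labels))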
from spotPython.torch.mapk import MAPK
import torch

mapk = MAPK(k=2)
target = torch.tensor([0, 1, 2, 2])
preds = torch.tensor(
    [
        [0.5, 0.2, 0.2],  # 0 is in top 2
        [0.3, 0.4, 0.2],  # 1 is in top 2
        [0.2, 0.4, 0.3],  # 2 is in top 2
        [0.7, 0.2, 0.1],  # 2 isn't in top 2
    ]
)
mapk.update(preds, target)
print(mapk.compute())  # tensor(0.6250)
tensor(0.6250)
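The value 0.6250 can be verified by hand: with k=2, the first target (class 0) is ranked first (precision 1), the second target (class 1) is ranked first (precision 1), the third target (class 2) appears at rank 2 (precision 1/2), and the fourth target is not among the top 2 (precision 0). The mean is (1 + 1 + 1/2 + 0)/4 = 0.625.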
Similar to the loss function, the metric is specified in the configurable network class [SOURCE].

The following code passes the information about the parameter ranges and bounds to spot. It extracts the variable types, names, and bounds.
from spotPython.hyperparameters.values import (get_bound_values,
    get_var_name,
    get_var_type,)
var_type = get_var_type(fun_control)
var_name = get_var_name(fun_control)
fun_control.update({"var_type": var_type,
                    "var_name": var_name})
lower = get_bound_values(fun_control, "lower")
upper = get_bound_values(fun_control, "upper")
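A quick sanity check of the extracted information; the printed values should match the design table shown below (the comments are illustrative, the exact container types may differ):

print(var_name)      # expected: ['l1', 'epochs', 'batch_size', 'act_fn', 'optimizer', 'dropout_prob', 'lr_mult']
print(lower, upper)  # expected bounds as in the design table, e.g. lower [2, 1, 6, 0, 0, 0.0, 0.1]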
Now, the dictionary fun_control contains all information needed for the hyperparameter tuning. Before the hyperparameter tuning is started, it is recommended to take a look at the experimental design. The method gen_design_table [SOURCE] generates a design table as follows:
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control))
| name | type | default | lower | upper | transform |
|--------------|--------|-----------|---------|---------|-----------------------|
| l1 | int | 3 | 2 | 3 | transform_power_2_int |
| epochs | int | 4 | 1 | 2 | transform_power_2_int |
| batch_size | int | 4 | 6 | 8 | transform_power_2_int |
| act_fn | factor | ReLU | 0 | 2 | None |
| optimizer | factor | SGD | 0 | 3 | None |
| dropout_prob | float | 0.01 | 0 | 0.1 | None |
| lr_mult | float | 1.0 | 0.1 | 10 | None |
This allows us to check whether all information is available and correct.
The Objective Function fun

The objective function fun from the class HyperLight [SOURCE] is selected next. It implements an interface from PyTorch's training, validation, and testing methods to spotPython.

from spotPython.light.hyperlight import HyperLight
fun = HyperLight().fun
The spotPython hyperparameter tuning is started by calling the Spot function [SOURCE] as described in Section 14.8.4.
import numpy as np
from spotPython.spot import spot
from math import inf
spot_tuner = spot.Spot(fun=fun,
                       lower=lower,
                       upper=upper,
                       fun_evals=inf,
                       fun_repeats=1,
                       max_time=MAX_TIME,
                       noise=False,
                       tolerance_x=np.sqrt(np.spacing(1)),
                       var_type=var_type,
                       var_name=var_name,
                       infill_criterion="y",
                       n_points=1,
                       seed=123,
                       log_level=50,
                       show_models=False,
                       show_progress=True,
                       fun_control=fun_control,
                       design_control={"init_size": INIT_SIZE,
                                       "repeats": 1},
                       surrogate_control={"noise": True,
                                          "cod_type": "norm",
                                          "min_theta": -4,
                                          "max_theta": 3,
                                          "n_theta": len(var_name),
                                          "model_fun_evals": 10_000,
                                          "log_level": 50
                                          })
spot_tuner.run()
config: {'l1': 4, 'epochs': 2, 'batch_size': 256, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.02375810986688453, 'lr_mult': 4.211776903906428}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.02375810986688453, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.02375810986688453, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.02375810986688453, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.02375810986688453, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.10247349739074707 │ │ val_loss │ 2.392517566680908 │ │ valid_mapk │ 0.14559221267700195 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.14559221267700195, 'val_loss': 2.392517566680908, 'val_acc': 0.10247349739074707}
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Sigmoid(), 'optimizer': 'AdamW', 'dropout_prob': 0.06351516807805178, 'lr_mult': 9.576749332517311}
model: NetLightBase(
(act_fn): Sigmoid()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.06351516807805178, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Sigmoid()
(5): Dropout(p=0.06351516807805178, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Sigmoid()
(8): Dropout(p=0.06351516807805178, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Sigmoid()
(11): Dropout(p=0.06351516807805178, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.08127208799123764 │ │ val_loss │ 2.4001107215881348 │ │ valid_mapk │ 0.13816551864147186 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.13816551864147186, 'val_loss': 2.4001107215881348, 'val_acc': 0.08127208799123764}
config: {'l1': 8, 'epochs': 4, 'batch_size': 64, 'act_fn': ReLU(), 'optimizer': 'NAdam', 'dropout_prob': 0.009636699262945718, 'lr_mult': 1.6215198628622067}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): ReLU()
(2): Dropout(p=0.009636699262945718, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): ReLU()
(5): Dropout(p=0.009636699262945718, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): ReLU()
(8): Dropout(p=0.009636699262945718, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): ReLU()
(11): Dropout(p=0.009636699262945718, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.10247349739074707 │ │ val_loss │ 2.3951730728149414 │ │ valid_mapk │ 0.15869984030723572 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.15869984030723572, 'val_loss': 2.3951730728149414, 'val_acc': 0.10247349739074707}
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'AdamW', 'dropout_prob': 0.0905773354221908, 'lr_mult': 2.618078210197142}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): ReLU()
(2): Dropout(p=0.0905773354221908, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): ReLU()
(5): Dropout(p=0.0905773354221908, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): ReLU()
(8): Dropout(p=0.0905773354221908, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): ReLU()
(11): Dropout(p=0.0905773354221908, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.10954063385725021 │ │ val_loss │ 2.3944597244262695 │ │ valid_mapk │ 0.1956179291009903 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1956179291009903, 'val_loss': 2.3944597244262695, 'val_acc': 0.10954063385725021}
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Sigmoid(), 'optimizer': 'Adamax', 'dropout_prob': 0.04267745138523515, 'lr_mult': 6.573111165984916}
model: NetLightBase(
(act_fn): Sigmoid()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.04267745138523515, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Sigmoid()
(5): Dropout(p=0.04267745138523515, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Sigmoid()
(8): Dropout(p=0.04267745138523515, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Sigmoid()
(11): Dropout(p=0.04267745138523515, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.11660777032375336 │ │ val_loss │ 2.392375946044922 │ │ valid_mapk │ 0.1861175298690796 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1861175298690796, 'val_loss': 2.392375946044922, 'val_acc': 0.11660777032375336}
config: {'l1': 4, 'epochs': 4, 'batch_size': 256, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.09383751300308583, 'lr_mult': 5.396497355422967}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.09383751300308583, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.09383751300308583, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.09383751300308583, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.09383751300308583, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.09540636092424393 │ │ val_loss │ 2.396305799484253 │ │ valid_mapk │ 0.1471956968307495 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1471956968307495, 'val_loss': 2.396305799484253, 'val_acc': 0.09540636092424393}
spotPython tuning: 2.392375946044922 [----------] 0.74%
config: {'l1': 4, 'epochs': 2, 'batch_size': 256, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.03234406670253607, 'lr_mult': 5.459139631731492}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.03234406670253607, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.03234406670253607, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.03234406670253607, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.03234406670253607, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.09187278896570206 │ │ val_loss │ 2.3971991539001465 │ │ valid_mapk │ 0.2027391940355301 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2027391940355301, 'val_loss': 2.3971991539001465, 'val_acc': 0.09187278896570206}
spotPython tuning: 2.392375946044922 [----------] 1.44%
config: {'l1': 4, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.08342010728469738, 'lr_mult': 6.574014495710398}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.08342010728469738, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.08342010728469738, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.08342010728469738, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.08342010728469738, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.1236749142408371 │ │ val_loss │ 2.396390676498413 │ │ valid_mapk │ 0.1730002611875534 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1730002611875534, 'val_loss': 2.396390676498413, 'val_acc': 0.1236749142408371}
spotPython tuning: 2.392375946044922 [----------] 2.14%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Sigmoid(), 'optimizer': 'Adam', 'dropout_prob': 0.042679770277123295, 'lr_mult': 6.573108036322781}
model: NetLightBase(
(act_fn): Sigmoid()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.042679770277123295, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Sigmoid()
(5): Dropout(p=0.042679770277123295, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Sigmoid()
(8): Dropout(p=0.042679770277123295, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Sigmoid()
(11): Dropout(p=0.042679770277123295, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.15547703206539154 │ │ val_loss │ 2.3940556049346924 │ │ valid_mapk │ 0.22715727984905243 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.22715727984905243, 'val_loss': 2.3940556049346924, 'val_acc': 0.15547703206539154}
spotPython tuning: 2.392375946044922 [----------] 3.00%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Sigmoid(), 'optimizer': 'Adam', 'dropout_prob': 0.042675416034995674, 'lr_mult': 6.540799591503147}
model: NetLightBase(
(act_fn): Sigmoid()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.042675416034995674, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Sigmoid()
(5): Dropout(p=0.042675416034995674, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Sigmoid()
(8): Dropout(p=0.042675416034995674, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Sigmoid()
(11): Dropout(p=0.042675416034995674, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.0989399328827858 │ │ val_loss │ 2.400780439376831 │ │ valid_mapk │ 0.16534851491451263 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.16534851491451263, 'val_loss': 2.400780439376831, 'val_acc': 0.0989399328827858}
spotPython tuning: 2.392375946044922 [----------] 3.86%
config: {'l1': 4, 'epochs': 2, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.07835430967167678, 'lr_mult': 4.211777632445757}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.07835430967167678, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.07835430967167678, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.07835430967167678, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.07835430967167678, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.07773851603269577 │ │ val_loss │ 2.4055542945861816 │ │ valid_mapk │ 0.13631688058376312 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.13631688058376312, 'val_loss': 2.4055542945861816, 'val_acc': 0.07773851603269577}
spotPython tuning: 2.392375946044922 [----------] 4.52%
config: {'l1': 8, 'epochs': 4, 'batch_size': 128, 'act_fn': Sigmoid(), 'optimizer': 'AdamW', 'dropout_prob': 0.0011121919799852175, 'lr_mult': 2.6180625977803973}
model: NetLightBase(
(act_fn): Sigmoid()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.0011121919799852175, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Sigmoid()
(5): Dropout(p=0.0011121919799852175, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Sigmoid()
(8): Dropout(p=0.0011121919799852175, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Sigmoid()
(11): Dropout(p=0.0011121919799852175, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.08833922445774078 │ │ val_loss │ 2.39721417427063 │ │ valid_mapk │ 0.2018711417913437 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2018711417913437, 'val_loss': 2.39721417427063, 'val_acc': 0.08833922445774078}
spotPython tuning: 2.392375946044922 [#---------] 5.41%
config: {'l1': 4, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.08340923811413072, 'lr_mult': 6.595923472115284}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.08340923811413072, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.08340923811413072, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.08340923811413072, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.08340923811413072, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.0742049440741539 │ │ val_loss │ 2.398049831390381 │ │ valid_mapk │ 0.16555748879909515 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.16555748879909515, 'val_loss': 2.398049831390381, 'val_acc': 0.0742049440741539}
spotPython tuning: 2.392375946044922 [#---------] 6.10%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.09057727446725157, 'lr_mult': 2.618082155615973}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.09057727446725157, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.09057727446725157, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.09057727446725157, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.09057727446725157, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.12720848619937897 │ │ val_loss │ 2.3924472332000732 │ │ valid_mapk │ 0.23045267164707184 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.23045267164707184, 'val_loss': 2.3924472332000732, 'val_acc': 0.12720848619937897}
spotPython tuning: 2.392375946044922 [#---------] 6.75%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.09057528342634573, 'lr_mult': 4.524699563158676}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.09057528342634573, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.09057528342634573, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.09057528342634573, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.09057528342634573, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.11660777032375336 │ │ val_loss │ 2.3946332931518555 │ │ valid_mapk │ 0.2086387723684311 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2086387723684311, 'val_loss': 2.3946332931518555, 'val_acc': 0.11660777032375336}
spotPython tuning: 2.392375946044922 [#---------] 7.39%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.0905773356451825, 'lr_mult': 4.5246995762084}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.0905773356451825, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.0905773356451825, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.0905773356451825, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.0905773356451825, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.0742049440741539 │ │ val_loss │ 2.3986656665802 │ │ valid_mapk │ 0.1410108059644699 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1410108059644699, 'val_loss': 2.3986656665802, 'val_acc': 0.0742049440741539}
spotPython tuning: 2.392375946044922 [#---------] 7.97%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.09057739385964624, 'lr_mult': 2.618078729552682}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.09057739385964624, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.09057739385964624, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.09057739385964624, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.09057739385964624, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.07773851603269577 │ │ val_loss │ 2.399670124053955 │ │ valid_mapk │ 0.15009324252605438 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.15009324252605438, 'val_loss': 2.399670124053955, 'val_acc': 0.07773851603269577}
spotPython tuning: 2.392375946044922 [#---------] 8.61%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.09057322554200092, 'lr_mult': 4.524699575810369}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.09057322554200092, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.09057322554200092, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.09057322554200092, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.09057322554200092, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.08480565249919891 │ │ val_loss │ 2.399465560913086 │ │ valid_mapk │ 0.18843235075473785 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.18843235075473785, 'val_loss': 2.399465560913086, 'val_acc': 0.08480565249919891}
spotPython tuning: 2.392375946044922 [#---------] 9.85%
config: {'l1': 4, 'epochs': 4, 'batch_size': 256, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.047569878989117555, 'lr_mult': 4.2068786297699425}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.047569878989117555, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.047569878989117555, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.047569878989117555, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.047569878989117555, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.0742049440741539 │ │ val_loss │ 2.3951728343963623 │ │ valid_mapk │ 0.19023677706718445 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.19023677706718445, 'val_loss': 2.3951728343963623, 'val_acc': 0.0742049440741539}
spotPython tuning: 2.392375946044922 [#---------] 11.17%
config: {'l1': 8, 'epochs': 2, 'batch_size': 256, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.07582060312768013, 'lr_mult': 4.2121994035003825}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): ReLU()
(2): Dropout(p=0.07582060312768013, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): ReLU()
(5): Dropout(p=0.07582060312768013, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): ReLU()
(8): Dropout(p=0.07582060312768013, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): ReLU()
(11): Dropout(p=0.07582060312768013, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.10600706934928894 │ │ val_loss │ 2.3960399627685547 │ │ valid_mapk │ 0.17059703171253204 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.17059703171253204, 'val_loss': 2.3960399627685547, 'val_acc': 0.10600706934928894}
spotPython tuning: 2.392375946044922 [#---------] 12.37%
config: {'l1': 8, 'epochs': 2, 'batch_size': 256, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.0684918081896617, 'lr_mult': 4.211409247687685}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.0684918081896617, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.0684918081896617, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.0684918081896617, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.0684918081896617, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.06713780760765076 │ │ val_loss │ 2.3998467922210693 │ │ valid_mapk │ 0.1612895429134369 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1612895429134369, 'val_loss': 2.3998467922210693, 'val_acc': 0.06713780760765076}
spotPython tuning: 2.392375946044922 [#---------] 13.47%
config: {'l1': 4, 'epochs': 4, 'batch_size': 256, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.0439763473406647, 'lr_mult': 4.203674001468645}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.0439763473406647, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.0439763473406647, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.0439763473406647, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.0439763473406647, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.08127208799123764 │ │ val_loss │ 2.398921489715576 │ │ valid_mapk │ 0.17997685074806213 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.17997685074806213, 'val_loss': 2.398921489715576, 'val_acc': 0.08127208799123764}
spotPython tuning: 2.392375946044922 [#---------] 14.73%
config: {'l1': 8, 'epochs': 2, 'batch_size': 256, 'act_fn': Sigmoid(), 'optimizer': 'Adamax', 'dropout_prob': 0.023758103863880525, 'lr_mult': 5.272913948790006}
model: NetLightBase(
(act_fn): Sigmoid()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.023758103863880525, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Sigmoid()
(5): Dropout(p=0.023758103863880525, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Sigmoid()
(8): Dropout(p=0.023758103863880525, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Sigmoid()
(11): Dropout(p=0.023758103863880525, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.0989399328827858 │ │ val_loss │ 2.399465322494507 │ │ valid_mapk │ 0.1902126669883728 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1902126669883728, 'val_loss': 2.399465322494507, 'val_acc': 0.0989399328827858}
spotPython tuning: 2.392375946044922 [##--------] 16.04%
config: {'l1': 8, 'epochs': 4, 'batch_size': 64, 'act_fn': ReLU(), 'optimizer': 'NAdam', 'dropout_prob': 0.009636434121308145, 'lr_mult': 0.8934456717486565}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): ReLU()
(2): Dropout(p=0.009636434121308145, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): ReLU()
(5): Dropout(p=0.009636434121308145, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): ReLU()
(8): Dropout(p=0.009636434121308145, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): ReLU()
(11): Dropout(p=0.009636434121308145, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.0989399328827858 │ │ val_loss │ 2.3927416801452637 │ │ valid_mapk │ 0.17910878360271454 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.17910878360271454, 'val_loss': 2.3927416801452637, 'val_acc': 0.0989399328827858}
spotPython tuning: 2.392375946044922 [##--------] 17.64%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Sigmoid(), 'optimizer': 'Adamax', 'dropout_prob': 0.04267980575260771, 'lr_mult': 3.0808762799905938}
model: NetLightBase(
(act_fn): Sigmoid()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Sigmoid()
(2): Dropout(p=0.04267980575260771, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Sigmoid()
(5): Dropout(p=0.04267980575260771, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Sigmoid()
(8): Dropout(p=0.04267980575260771, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Sigmoid()
(11): Dropout(p=0.04267980575260771, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.09187278896570206 │ │ val_loss │ 2.396101474761963 │ │ valid_mapk │ 0.17742091417312622 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.17742091417312622, 'val_loss': 2.396101474761963, 'val_acc': 0.09187278896570206}
spotPython tuning: 2.392375946044922 [##--------] 19.06%
config: {'l1': 4, 'epochs': 4, 'batch_size': 64, 'act_fn': Tanh(), 'optimizer': 'Adam', 'dropout_prob': 0.023398307538795366, 'lr_mult': 5.487132597299661}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.023398307538795366, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.023398307538795366, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.023398307538795366, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.023398307538795366, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.08833922445774078 │ │ val_loss │ 2.3954827785491943 │ │ valid_mapk │ 0.17085261642932892 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.17085261642932892, 'val_loss': 2.3954827785491943, 'val_acc': 0.08833922445774078}
spotPython tuning: 2.392375946044922 [##--------] 20.92%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.024473992001252436, 'lr_mult': 1.5143125796489798}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.024473992001252436, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.024473992001252436, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.024473992001252436, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.024473992001252436, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.04240282624959946 │ │ val_loss │ 2.405418872833252 │ │ valid_mapk │ 0.11908436566591263 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.11908436566591263, 'val_loss': 2.405418872833252, 'val_acc': 0.04240282624959946}
spotPython tuning: 2.392375946044922 [##--------] 22.32%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.042679172055940004, 'lr_mult': 4.594818188873495}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.042679172055940004, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.042679172055940004, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.042679172055940004, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.042679172055940004, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.12014134228229523 │ │ val_loss │ 2.3936750888824463 │ │ valid_mapk │ 0.19454090297222137 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.19454090297222137, 'val_loss': 2.3936750888824463, 'val_acc': 0.12014134228229523}
spotPython tuning: 2.392375946044922 [##--------] 24.04%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.09058537640216029, 'lr_mult': 4.524699575524505}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.09058537640216029, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.09058537640216029, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.09058537640216029, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.09058537640216029, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.07773851603269577 │ │ val_loss │ 2.3982136249542236 │ │ valid_mapk │ 0.14977173507213593 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.14977173507213593, 'val_loss': 2.3982136249542236, 'val_acc': 0.07773851603269577}
spotPython tuning: 2.392375946044922 [###-------] 25.21%
config: {'l1': 4, 'epochs': 2, 'batch_size': 256, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.023758228775555536, 'lr_mult': 8.160689925573742}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.023758228775555536, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.023758228775555536, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.023758228775555536, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.023758228775555536, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.08833922445774078 │ │ val_loss │ 2.4002761840820312 │ │ valid_mapk │ 0.14441068470478058 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.14441068470478058, 'val_loss': 2.4002761840820312, 'val_acc': 0.08833922445774078}
spotPython tuning: 2.392375946044922 [###-------] 26.34%
config: {'l1': 8, 'epochs': 4, 'batch_size': 64, 'act_fn': ReLU(), 'optimizer': 'NAdam', 'dropout_prob': 0.009636437329579526, 'lr_mult': 0.8934406725402682}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): ReLU()
(2): Dropout(p=0.009636437329579526, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): ReLU()
(5): Dropout(p=0.009636437329579526, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): ReLU()
(8): Dropout(p=0.009636437329579526, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): ReLU()
(11): Dropout(p=0.009636437329579526, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.07773851603269577 │ │ val_loss │ 2.3988335132598877 │ │ valid_mapk │ 0.15428242087364197 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.15428242087364197, 'val_loss': 2.3988335132598877, 'val_acc': 0.07773851603269577}
spotPython tuning: 2.392375946044922 [###-------] 27.92%
config: {'l1': 8, 'epochs': 4, 'batch_size': 64, 'act_fn': ReLU(), 'optimizer': 'NAdam', 'dropout_prob': 0.00963665349868121, 'lr_mult': 1.317170821902294}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): ReLU()
(2): Dropout(p=0.00963665349868121, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): ReLU()
(5): Dropout(p=0.00963665349868121, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): ReLU()
(8): Dropout(p=0.00963665349868121, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): ReLU()
(11): Dropout(p=0.00963665349868121, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.11660777032375336 │ │ val_loss │ 2.396798610687256 │ │ valid_mapk │ 0.17309029400348663 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.17309029400348663, 'val_loss': 2.396798610687256, 'val_acc': 0.11660777032375336}
spotPython tuning: 2.392375946044922 [###-------] 29.55%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'AdamW', 'dropout_prob': 0.0905771145907998, 'lr_mult': 1.4232862597414226}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): ReLU()
(2): Dropout(p=0.0905771145907998, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): ReLU()
(5): Dropout(p=0.0905771145907998, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): ReLU()
(8): Dropout(p=0.0905771145907998, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): ReLU()
(11): Dropout(p=0.0905771145907998, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.060070671141147614 │ │ val_loss │ 2.402383327484131 │ │ valid_mapk │ 0.14375965297222137 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.14375965297222137, 'val_loss': 2.402383327484131, 'val_acc': 0.060070671141147614}
spotPython tuning: 2.392375946044922 [###-------] 31.45%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.027243892964829854, 'lr_mult': 4.359921415842041}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.027243892964829854, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.027243892964829854, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.027243892964829854, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.027243892964829854, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.09540636092424393 │ │ val_loss │ 2.3996846675872803 │ │ valid_mapk │ 0.16882073879241943 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.16882073879241943, 'val_loss': 2.3996846675872803, 'val_acc': 0.09540636092424393}
spotPython tuning: 2.392375946044922 [###-------] 33.16%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.042680366961387904, 'lr_mult': 6.188892294233735}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.042680366961387904, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.042680366961387904, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.042680366961387904, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.042680366961387904, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.12720848619937897 │ │ val_loss │ 2.388392210006714 │ │ valid_mapk │ 0.2057291716337204 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2057291716337204, 'val_loss': 2.388392210006714, 'val_acc': 0.12720848619937897}
spotPython tuning: 2.388392210006714 [###-------] 34.92%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.04268137224425475, 'lr_mult': 6.188898026362269}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.04268137224425475, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.04268137224425475, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.04268137224425475, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.04268137224425475, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.13780918717384338 │ │ val_loss │ 2.390165090560913 │ │ valid_mapk │ 0.21349342167377472 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.21349342167377472, 'val_loss': 2.390165090560913, 'val_acc': 0.13780918717384338}
spotPython tuning: 2.388392210006714 [####------] 36.60%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.04268201744868218, 'lr_mult': 6.758308820333411}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.04268201744868218, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.04268201744868218, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.04268201744868218, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.04268201744868218, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.0742049440741539 │ │ val_loss │ 2.395200729370117 │ │ valid_mapk │ 0.13384132087230682 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.13384132087230682, 'val_loss': 2.395200729370117, 'val_acc': 0.0742049440741539}
spotPython tuning: 2.388392210006714 [####------] 38.40%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.04268098375843882, 'lr_mult': 5.930461484046648}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.04268098375843882, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.04268098375843882, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.04268098375843882, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.04268098375843882, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.08480565249919891 │ │ val_loss │ 2.4016621112823486 │ │ valid_mapk │ 0.14779449999332428 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.14779449999332428, 'val_loss': 2.4016621112823486, 'val_acc': 0.08480565249919891}
spotPython tuning: 2.388392210006714 [####------] 40.29%
config: {'l1': 4, 'epochs': 2, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'NAdam', 'dropout_prob': 0.06132344821080924, 'lr_mult': 6.234771443822758}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.06132344821080924, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.06132344821080924, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.06132344821080924, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.06132344821080924, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.14487633109092712 │ │ val_loss │ 2.392998456954956 │ │ valid_mapk │ 0.25366511940956116 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.25366511940956116, 'val_loss': 2.392998456954956, 'val_acc': 0.14487633109092712}
spotPython tuning: 2.388392210006714 [####------] 42.14%
config: {'l1': 4, 'epochs': 2, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'NAdam', 'dropout_prob': 0.06133687548769888, 'lr_mult': 6.235287856191432}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.06133687548769888, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.06133687548769888, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.06133687548769888, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.06133687548769888, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.05300353467464447 │ │ val_loss │ 2.401555299758911 │ │ valid_mapk │ 0.12557870149612427 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.12557870149612427, 'val_loss': 2.401555299758911, 'val_acc': 0.05300353467464447}
spotPython tuning: 2.388392210006714 [####------] 43.92%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.07031322932455912, 'lr_mult': 6.241013735439696}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.07031322932455912, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.07031322932455912, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.07031322932455912, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.07031322932455912, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.1342756152153015 │ │ val_loss │ 2.3932626247406006 │ │ valid_mapk │ 0.19912230968475342 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.19912230968475342, 'val_loss': 2.3932626247406006, 'val_acc': 0.1342756152153015}
spotPython tuning: 2.388392210006714 [#####-----] 46.12%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.07027660612195853, 'lr_mult': 6.238671520736446}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.07027660612195853, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.07027660612195853, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.07027660612195853, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.07027660612195853, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.060070671141147614 │ │ val_loss │ 2.3993563652038574 │ │ valid_mapk │ 0.13512732088565826 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.13512732088565826, 'val_loss': 2.3993563652038574, 'val_acc': 0.060070671141147614}
spotPython tuning: 2.388392210006714 [#####-----] 48.29%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.07537423942420608, 'lr_mult': 6.2470137058381345}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.07537423942420608, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.07537423942420608, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.07537423942420608, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.07537423942420608, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.13780918717384338 │ │ val_loss │ 2.3953957557678223 │ │ valid_mapk │ 0.21825166046619415 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.21825166046619415, 'val_loss': 2.3953957557678223, 'val_acc': 0.13780918717384338}
spotPython tuning: 2.388392210006714 [#####-----] 50.13%
config: {'l1': 8, 'epochs': 2, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.0426807198001501, 'lr_mult': 6.246013129522406}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.0426807198001501, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.0426807198001501, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.0426807198001501, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.0426807198001501, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.07067137956619263 │ │ val_loss │ 2.3999907970428467 │ │ valid_mapk │ 0.11789480596780777 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.11789480596780777, 'val_loss': 2.3999907970428467, 'val_acc': 0.07067137956619263}
spotPython tuning: 2.388392210006714 [#####-----] 52.51%
config: {'l1': 8, 'epochs': 4, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.06919458933286132, 'lr_mult': 6.230664799177474}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): ReLU()
(2): Dropout(p=0.06919458933286132, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): ReLU()
(5): Dropout(p=0.06919458933286132, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): ReLU()
(8): Dropout(p=0.06919458933286132, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): ReLU()
(11): Dropout(p=0.06919458933286132, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.07773851603269577 │ │ val_loss │ 2.401838541030884 │ │ valid_mapk │ 0.15969006717205048 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.15969006717205048, 'val_loss': 2.401838541030884, 'val_acc': 0.07773851603269577}
spotPython tuning: 2.388392210006714 [#####-----] 54.55%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.04268074830788072, 'lr_mult': 6.25835890562172}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.04268074830788072, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.04268074830788072, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.04268074830788072, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.04268074830788072, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.1236749142408371 │ │ val_loss │ 2.392627477645874 │ │ valid_mapk │ 0.19876866042613983 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.19876866042613983, 'val_loss': 2.392627477645874, 'val_acc': 0.1236749142408371}
spotPython tuning: 2.388392210006714 [######----] 56.68%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.042680744318734413, 'lr_mult': 6.255939820639847}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.042680744318734413, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.042680744318734413, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.042680744318734413, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.042680744318734413, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.16607773303985596 │ │ val_loss │ 2.389552116394043 │ │ valid_mapk │ 0.21810698509216309 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.21810698509216309, 'val_loss': 2.389552116394043, 'val_acc': 0.16607773303985596}
spotPython tuning: 2.388392210006714 [######----] 58.84%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.042680756800128734, 'lr_mult': 6.263160449685004}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.042680756800128734, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.042680756800128734, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.042680756800128734, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.042680756800128734, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.12014134228229523 │ │ val_loss │ 2.397331953048706 │ │ valid_mapk │ 0.1671007126569748 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1671007126569748, 'val_loss': 2.397331953048706, 'val_acc': 0.12014134228229523}
spotPython tuning: 2.388392210006714 [######----] 61.33%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'NAdam', 'dropout_prob': 0.031142060658407487, 'lr_mult': 6.225195462532344}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.031142060658407487, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.031142060658407487, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.031142060658407487, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.031142060658407487, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.09187278896570206 │ │ val_loss │ 2.3979434967041016 │ │ valid_mapk │ 0.18205054104328156 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.18205054104328156, 'val_loss': 2.3979434967041016, 'val_acc': 0.09187278896570206}
spotPython tuning: 2.388392210006714 [######----] 64.15%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.04268072510979589, 'lr_mult': 6.244593889759593}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.04268072510979589, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.04268072510979589, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.04268072510979589, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.04268072510979589, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.10600706934928894 │ │ val_loss │ 2.3938939571380615 │ │ valid_mapk │ 0.19799703359603882 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.19799703359603882, 'val_loss': 2.3938939571380615, 'val_acc': 0.10600706934928894}
spotPython tuning: 2.388392210006714 [#######---] 66.52%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.04268073161790258, 'lr_mult': 6.243100116385749}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.04268073161790258, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.04268073161790258, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.04268073161790258, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.04268073161790258, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.02826855145394802 │ │ val_loss │ 2.4027292728424072 │ │ valid_mapk │ 0.10932677984237671 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.10932677984237671, 'val_loss': 2.4027292728424072, 'val_acc': 0.02826855145394802}
spotPython tuning: 2.388392210006714 [#######---] 68.71%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.007353036627540256, 'lr_mult': 9.43023582919475}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.10600706934928894 │ │ val_loss │ 2.3805158138275146 │ │ valid_mapk │ 0.24842464923858643 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.24842464923858643, 'val_loss': 2.3805158138275146, 'val_acc': 0.10600706934928894}
spotPython tuning: 2.3805158138275146 [#######---] 71.26%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.0, 'lr_mult': 10.0}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.0, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.0, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.0, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.0, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.12014134228229523 │ │ val_loss │ 2.3950657844543457 │ │ valid_mapk │ 0.18378664553165436 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.18378664553165436, 'val_loss': 2.3950657844543457, 'val_acc': 0.12014134228229523}
spotPython tuning: 2.3805158138275146 [#######---] 73.62%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.0061350126003397705, 'lr_mult': 10.0}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.0061350126003397705, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.0061350126003397705, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.0061350126003397705, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.0061350126003397705, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.12014134228229523 │ │ val_loss │ 2.39585542678833 │ │ valid_mapk │ 0.2127218246459961 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2127218246459961, 'val_loss': 2.39585542678833, 'val_acc': 0.12014134228229523}
spotPython tuning: 2.3805158138275146 [########--] 75.71%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.007353036627540256, 'lr_mult': 9.431678191546903}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.12014134228229523 │ │ val_loss │ 2.3897430896759033 │ │ valid_mapk │ 0.2086387723684311 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2086387723684311, 'val_loss': 2.3897430896759033, 'val_acc': 0.12014134228229523}
spotPython tuning: 2.3805158138275146 [########--] 78.02%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.029433046422154416, 'lr_mult': 9.776647968853487}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.029433046422154416, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.029433046422154416, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.029433046422154416, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.029433046422154416, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.1236749142408371 │ │ val_loss │ 2.3934175968170166 │ │ valid_mapk │ 0.20507009327411652 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.20507009327411652, 'val_loss': 2.3934175968170166, 'val_acc': 0.1236749142408371}
spotPython tuning: 2.3805158138275146 [########--] 80.20%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.02943246910618171, 'lr_mult': 9.776587217454606}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.02943246910618171, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.02943246910618171, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.02943246910618171, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.02943246910618171, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.13780918717384338 │ │ val_loss │ 2.38862681388855 │ │ valid_mapk │ 0.20019932091236115 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.20019932091236115, 'val_loss': 2.38862681388855, 'val_acc': 0.13780918717384338}
spotPython tuning: 2.3805158138275146 [########--] 82.77%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'Adamax', 'dropout_prob': 0.02943300201323344, 'lr_mult': 9.776652942652227}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.02943300201323344, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.02943300201323344, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.02943300201323344, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.02943300201323344, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.07067137956619263 │ │ val_loss │ 2.4026272296905518 │ │ valid_mapk │ 0.14528678357601166 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.14528678357601166, 'val_loss': 2.4026272296905518, 'val_acc': 0.07067137956619263}
spotPython tuning: 2.3805158138275146 [#########-] 85.05%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.0, 'lr_mult': 10.0}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.0, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.0, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.0, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.0, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.06360424309968948 │ │ val_loss │ 2.398444414138794 │ │ valid_mapk │ 0.1343717873096466 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1343717873096466, 'val_loss': 2.398444414138794, 'val_acc': 0.06360424309968948}
spotPython tuning: 2.3805158138275146 [#########-] 87.15%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.0760258560240939, 'lr_mult': 9.559371950286717}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.0760258560240939, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.0760258560240939, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.0760258560240939, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.0760258560240939, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.16607773303985596 │ │ val_loss │ 2.3954763412475586 │ │ valid_mapk │ 0.2431037873029709 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2431037873029709, 'val_loss': 2.3954763412475586, 'val_acc': 0.16607773303985596}
spotPython tuning: 2.3805158138275146 [#########-] 89.81%
config: {'l1': 4, 'epochs': 2, 'batch_size': 128, 'act_fn': ReLU(), 'optimizer': 'AdamW', 'dropout_prob': 0.02540271713152167, 'lr_mult': 1.4972234873394543}
model: NetLightBase(
(act_fn): ReLU()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): ReLU()
(2): Dropout(p=0.02540271713152167, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): ReLU()
(5): Dropout(p=0.02540271713152167, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): ReLU()
(8): Dropout(p=0.02540271713152167, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): ReLU()
(11): Dropout(p=0.02540271713152167, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.0989399328827858 │ │ val_loss │ 2.4045815467834473 │ │ valid_mapk │ 0.17290382087230682 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.17290382087230682, 'val_loss': 2.4045815467834473, 'val_acc': 0.0989399328827858}
spotPython tuning: 2.3805158138275146 [#########-] 91.70%
config: {'l1': 8, 'epochs': 2, 'batch_size': 64, 'act_fn': Tanh(), 'optimizer': 'AdamW', 'dropout_prob': 0.07626766162980661, 'lr_mult': 6.123944109781775}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=8, bias=True)
(1): Tanh()
(2): Dropout(p=0.07626766162980661, inplace=False)
(3): Linear(in_features=8, out_features=4, bias=True)
(4): Tanh()
(5): Dropout(p=0.07626766162980661, inplace=False)
(6): Linear(in_features=4, out_features=4, bias=True)
(7): Tanh()
(8): Dropout(p=0.07626766162980661, inplace=False)
(9): Linear(in_features=4, out_features=2, bias=True)
(10): Tanh()
(11): Dropout(p=0.07626766162980661, inplace=False)
(12): Linear(in_features=2, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.11307420581579208 │ │ val_loss │ 2.3961756229400635 │ │ valid_mapk │ 0.2081790268421173 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2081790268421173, 'val_loss': 2.3961756229400635, 'val_acc': 0.11307420581579208}
spotPython tuning: 2.3805158138275146 [#########-] 93.93%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.0, 'lr_mult': 9.361788969362845}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.0, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.0, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.0, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.0, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.10247349739074707 │ │ val_loss │ 2.3959481716156006 │ │ valid_mapk │ 0.1913580298423767 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.1913580298423767, 'val_loss': 2.3959481716156006, 'val_acc': 0.10247349739074707}
spotPython tuning: 2.3805158138275146 [##########] 96.09%
config: {'l1': 4, 'epochs': 4, 'batch_size': 128, 'act_fn': Tanh(), 'optimizer': 'Adamax', 'dropout_prob': 0.0, 'lr_mult': 9.981833760267506}
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.0, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.0, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.0, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.0, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.12014134228229523 │ │ val_loss │ 2.394521951675415 │ │ valid_mapk │ 0.21760867536067963 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.21760867536067963, 'val_loss': 2.394521951675415, 'val_acc': 0.12014134228229523}
spotPython tuning: 2.3805158138275146 [##########] 100.00% Done...
<spotPython.spot.spot.Spot at 0x15b2f50c0>
The textual output shown in the console (or code cell) can be visualized with TensorBoard as described in Section 14.9; see also the description in the documentation: Tensorboard.
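To open these logs, TensorBoard can be pointed at the runs directory. The following is a minimal sketch, assuming the tensorboard command is installed in the current environment and that the logs were written below ./runs via the tensorboard_path passed to fun_control_init; it is equivalent to running tensorboard --logdir=./runs in a terminal.
# Sketch: launch TensorBoard for the tuning logs (assumption: the
# `tensorboard` executable is available on the PATH of this environment).
import subprocess

subprocess.Popen(["tensorboard", "--logdir", "./runs"])
# Then open http://localhost:6006 in a browser to browse the runs.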
After the hyperparameter tuning run is finished, the results can be analyzed as described in Section 14.10.
spot_tuner.plot_progress(log_y=False,
    filename="./figures/" + experiment_name + "_progress.png")
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control=fun_control, spot=spot_tuner))
| name | type | default | lower | upper | tuned | transform | importance | stars |
|--------------|--------|-----------|---------|---------|----------------------|-----------------------|--------------|---------|
| l1 | int | 3 | 2.0 | 3.0 | 2.0 | transform_power_2_int | 5.93 | * |
| epochs | int | 4 | 1.0 | 2.0 | 2.0 | transform_power_2_int | 0.02 | |
| batch_size | int | 4 | 6.0 | 8.0 | 7.0 | transform_power_2_int | 0.00 | |
| act_fn | factor | ReLU | 0.0 | 2.0 | 1.0 | None | 0.00 | |
| optimizer | factor | SGD | 0.0 | 3.0 | 2.0 | None | 100.00 | *** |
| dropout_prob | float | 0.01 | 0.0 | 0.1 | 0.007353036627540256 | None | 0.00 | |
| lr_mult | float | 1.0 | 0.1 | 10.0 | 9.43023582919475 | None | 0.18 | . |
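The integer hyperparameters in this table are tuned on a log2-encoded scale: the transform_power_2_int transform listed in the transform column maps a tuned value x to 2**x before it is passed to the model. The small sketch below decodes the tuned column; the helper is defined locally for illustration and is not imported from spotPython.
# Sketch: decode the "tuned" column of the design table. The helper mirrors
# the transform_power_2_int transform named in the table (defined locally
# here for illustration only).
def transform_power_2_int(x: float) -> int:
    return int(2 ** x)

print(transform_power_2_int(2.0))  # l1         -> 4
print(transform_power_2_int(2.0))  # epochs     -> 4
print(transform_power_2_int(7.0))  # batch_size -> 128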
spot_tuner.plot_importance(threshold=0.025,
    filename="./figures/" + experiment_name + "_importance.png")
from spotPython.hyperparameters.values import get_one_config_from_X
X = spot_tuner.to_all_dim(spot_tuner.min_X.reshape(1,-1))
config = get_one_config_from_X(X, fun_control)
config
{'l1': 4,
'epochs': 4,
'batch_size': 128,
'act_fn': Tanh(),
'optimizer': 'Adamax',
'dropout_prob': 0.007353036627540256,
'lr_mult': 9.43023582919475}
from spotPython.light.traintest import test_model
test_model(config, fun_control)
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Test metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ test_mapk_epoch │ 0.2233170121908188 │ │ val_acc │ 0.12022630870342255 │ │ val_loss │ 2.392791509628296 │ └───────────────────────────┴───────────────────────────┘
test_model result: {'test_mapk_epoch': 0.2233170121908188, 'val_loss': 2.392791509628296, 'val_acc': 0.12022630870342255}
(2.392791509628296, 0.12022630870342255)
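The test_mapk_epoch value reported above is the mean average precision at k (MAP@K) on the test data. For orientation, a minimal single-label MAP@K computation is sketched below; the cutoff k=3 and the exact handling of the predictions are assumptions and may differ from the MAPK metric class used by NetLightBase.
# Sketch: mean average precision at k for single-label classification.
# Assumptions: cutoff k=3; with one relevant class per sample, the average
# precision reduces to 1/rank if the true class is among the top-k predictions.
import torch

def mapk(logits: torch.Tensor, targets: torch.Tensor, k: int = 3) -> float:
    topk = logits.topk(k, dim=1).indices                  # (n, k) predicted classes
    hits = (topk == targets.unsqueeze(1)).float()         # (n, k) 1.0 where correct
    ranks = torch.arange(1, k + 1, dtype=torch.float32)   # 1-based ranks
    return (hits / ranks).sum(dim=1).mean().item()

# toy usage with 11 classes, matching the output layer of the tuned model
logits = torch.randn(4, 11)
targets = torch.randint(0, 11, (4,))
print(mapk(logits, targets))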
The data splitting for cross-validation is handled as follows: the KFold class from sklearn.model_selection is used to generate the folds for cross-validation, the CrossValidationDataModule class [SOURCE] uses these folds for the hyperparameter tuning process, and the cv_model function [SOURCE] performs the cross-validation of the selected configuration. A minimal sketch of the fold generation with plain scikit-learn is shown after the cross-validation output below.
from spotPython.light.traintest import cv_model
cv_model(config, fun_control)
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
k: 0
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.11267605423927307 │ │ val_loss │ 2.402125120162964 │ │ valid_mapk │ 0.17370891571044922 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.17370891571044922, 'val_loss': 2.402125120162964, 'val_acc': 0.11267605423927307}
k: 1
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.14084507524967194 │ │ val_loss │ 2.37508487701416 │ │ valid_mapk │ 0.2582159638404846 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2582159638404846, 'val_loss': 2.37508487701416, 'val_acc': 0.14084507524967194}
k: 2
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.07042253762483597 │ │ val_loss │ 2.3953492641448975 │ │ valid_mapk │ 0.18309858441352844 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.18309858441352844, 'val_loss': 2.3953492641448975, 'val_acc': 0.07042253762483597}
k: 3
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.23943662643432617 │ │ val_loss │ 2.3373219966888428 │ │ valid_mapk │ 0.33568075299263 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.33568075299263, 'val_loss': 2.3373219966888428, 'val_acc': 0.23943662643432617}
k: 4
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.14084507524967194 │ │ val_loss │ 2.3774917125701904 │ │ valid_mapk │ 0.2065727710723877 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.2065727710723877, 'val_loss': 2.3774917125701904, 'val_acc': 0.14084507524967194}
k: 5
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.19718310236930847 │ │ val_loss │ 2.3470654487609863 │ │ valid_mapk │ 0.31455397605895996 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.31455397605895996, 'val_loss': 2.3470654487609863, 'val_acc': 0.19718310236930847}
k: 6
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.19718310236930847 │ │ val_loss │ 2.336935043334961 │ │ valid_mapk │ 0.32863849401474 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.32863849401474, 'val_loss': 2.336935043334961, 'val_acc': 0.19718310236930847}
k: 7
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.18571428954601288 │ │ val_loss │ 2.336298942565918 │ │ valid_mapk │ 0.31904762983322144 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.31904762983322144, 'val_loss': 2.336298942565918, 'val_acc': 0.18571428954601288}
k: 8
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.1428571492433548 │ │ val_loss │ 2.3653533458709717 │ │ valid_mapk │ 0.26428571343421936 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.26428571343421936, 'val_loss': 2.3653533458709717, 'val_acc': 0.1428571492433548}
k: 9
model: NetLightBase(
(act_fn): Tanh()
(train_mapk): MAPK()
(valid_mapk): MAPK()
(test_mapk): MAPK()
(model): Sequential(
(0): Linear(in_features=64, out_features=4, bias=True)
(1): Tanh()
(2): Dropout(p=0.007353036627540256, inplace=False)
(3): Linear(in_features=4, out_features=2, bias=True)
(4): Tanh()
(5): Dropout(p=0.007353036627540256, inplace=False)
(6): Linear(in_features=2, out_features=2, bias=True)
(7): Tanh()
(8): Dropout(p=0.007353036627540256, inplace=False)
(9): Linear(in_features=2, out_features=1, bias=True)
(10): Tanh()
(11): Dropout(p=0.007353036627540256, inplace=False)
(12): Linear(in_features=1, out_features=11, bias=True)
)
)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Validate metric ┃ DataLoader 0 ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ val_acc │ 0.22857142984867096 │ │ val_loss │ 2.3240246772766113 │ │ valid_mapk │ 0.3499999940395355 │ └───────────────────────────┴───────────────────────────┘
train_model result: {'valid_mapk': 0.3499999940395355, 'val_loss': 2.3240246772766113, 'val_acc': 0.22857142984867096}
cv_model mapk result: 0.2733802795410156
0.2733802795410156
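As referenced above, the fold generation can be sketched with plain scikit-learn. The number of splits (10) matches the folds k: 0 to k: 9 printed above; shuffle and random_state are illustrative assumptions and not necessarily what CrossValidationDataModule uses internally.
# Sketch: k-fold index generation with scikit-learn's KFold.
import numpy as np
from sklearn.model_selection import KFold

n_samples = 707  # size of the training set loaded earlier
kfold = KFold(n_splits=10, shuffle=True, random_state=42)

for k, (train_idx, val_idx) in enumerate(kfold.split(np.arange(n_samples))):
    print(f"k: {k}, train: {len(train_idx)}, validation: {len(val_idx)}")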
= "./figures/" + experiment_name
filename =filename) spot_tuner.plot_important_hyperparameter_contour(filename
l1: 5.933960029953566
optimizer: 99.99999999999999
lr_mult: 0.17506941146610672
spot_tuner.parallel_plot()
Parallel coordinates plots
PLOT_ALL = False
if PLOT_ALL:
    n = spot_tuner.k
    for i in range(n-1):
        for j in range(i+1, n):
            # min_z and max_z are assumed to be defined in an earlier cell
            spot_tuner.plot_contour(i=i, j=j, min_z=min_z, max_z=max_z)