wbia.algo.verif.torch package

Submodules

wbia.algo.verif.torch.fit_harness module

wbia.algo.verif.torch.gpu_util module

wbia.algo.verif.torch.gpu_util.find_unused_gpu(min_memory=0)[source]

Finds GPU with the lowest memory usage by parsing output of nvidia-smi

python -c "from wbia.algo.verif.torch import gpu_util; print(gpu_util.find_unused_gpu())"
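
A minimal in-Python sketch of using this to pick a device (the return value being a GPU index or None, and the torch usage here, are assumptions rather than documented API):

import torch
from wbia.algo.verif.torch import gpu_util

# Assume find_unused_gpu returns the index of the least-used GPU, or None
# when no GPU satisfies the min_memory threshold.
gpu_num = gpu_util.find_unused_gpu(min_memory=4000)
if gpu_num is not None:
    device = torch.device('cuda:{}'.format(gpu_num))
else:
    device = torch.device('cpu')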

wbia.algo.verif.torch.gpu_util.gpu_info()[source]

Parses nvidia-smi

wbia.algo.verif.torch.gpu_util.have_gpu(min_memory=8000)[source]

Determine if we are on a machine with a good GPU

wbia.algo.verif.torch.lr_schedule module

class wbia.algo.verif.torch.lr_schedule.Exponential(init_lr=0.001, decay_rate=0.01, lr_decay_epoch=100)[source]

Bases: object

Decay learning rate by a factor of decay_rate every lr_decay_epoch epochs.

Example

>>> # DISABLE_DOCTEST
>>> from wbia.algo.verif.torch.lr_schedule import *
>>> import numpy as np
>>> lr_scheduler = Exponential(lr_decay_epoch=2)  # decay every 2 epochs so the target below matches
>>> rates = np.array([lr_scheduler(i) for i in range(6)])
>>> target = np.array([1E-3, 1E-3, 1E-5, 1E-5, 1E-7, 1E-7])
>>> assert all(list(np.isclose(target, rates)))
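
The target rates above follow a step-wise exponential decay; a minimal hypothetical reimplementation of that rule (for illustration only) is:

def exponential_lr(epoch, init_lr=0.001, decay_rate=0.01, lr_decay_epoch=2):
    # The learning rate is multiplied by decay_rate once per completed
    # lr_decay_epoch block, e.g. 1e-3 -> 1e-5 -> 1e-7 every two epochs here.
    return init_lr * (decay_rate ** (epoch // lr_decay_epoch))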

wbia.algo.verif.torch.models module

class wbia.algo.verif.torch.models.Siamese[source]

Bases: torch.nn.modules.module.Module

Example

>>> # DISABLE_DOCTEST
>>> from wbia.algo.verif.torch.models import *
>>> self = Siamese()

forward(input1, input2)[source]

Compute a resnet50 feature vector for each input and compare the two vectors by their L2 distance.
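
A rough sketch of that forward pattern (not the verified implementation; it assumes a torchvision resnet50 backbone stored as self.resnet):

def forward(self, input1, input2):
    # Embed both images with the shared resnet50 backbone.
    feat1 = self.resnet(input1)
    feat2 = self.resnet(input2)
    # L2 distance between the two embedding vectors.
    return torch.nn.PairwiseDistance(p=2)(feat1, feat2)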

wbia.algo.verif.torch.models.visualize()[source]

wbia.algo.verif.torch.netmath module

class wbia.algo.verif.torch.netmath.ContrastiveLoss(margin=1.0)[source]

Bases: torch.nn.modules.module.Module

Contrastive loss function.

References

https://github.com/delijati/pytorch-siamese/blob/master/contrastive.py

LaTeX:
$y E^2 + (1 - y) \max(m - E, 0)^2$

Example

>>> # DISABLE_DOCTEST
>>> from wbia.algo.verif.torch.netmath import *
>>> vecs1, vecs2, label = testdata_siam_desc()
>>> self = ContrastiveLoss()
>>> ut.exec_func_src(self.forward, globals())
>>> func = self.forward
>>> output = torch.nn.PairwiseDistance(p=2)(vecs1, vecs2)
>>> loss2x, dist_l2 = ut.exec_func_src(self.forward, globals(), globals(), keys=['loss2x', 'dist_l2'])
>>> ut.quit_if_noshow()
>>> loss2x, dist_l2, label = map(np.array, [loss2x, dist_l2, label])
>>> label = label.astype(bool)
>>> dist0_l2 = dist_l2[label]
>>> dist1_l2 = dist_l2[~label]
>>> loss0 = loss2x[label] / 2
>>> loss1 = loss2x[~label] / 2
>>> import wbia.plottool as pt
>>> pt.plot2(dist0_l2, loss0, 'x', color=pt.TRUE_BLUE, label='imposter_loss', y_label='loss')
>>> pt.plot2(dist1_l2, loss1, 'x', color=pt.FALSE_RED, label='genuine_loss', y_label='loss')
>>> pt.gca().set_xlabel('l2-dist')
>>> pt.legend()
>>> ut.show_if_requested()

forward(output, label, weight=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
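
Given the formula above, the loss computation is roughly the following sketch (assuming the input distance is the pairwise L2 distance E and label is 1 for genuine pairs and 0 for imposters; this is an illustration, not the exact source):

import torch

def contrastive_loss(dist, label, margin=1.0):
    # label == 1 pulls a pair together (penalize its distance); label == 0
    # pushes the pair apart until it is at least `margin` away.
    loss = label * dist.pow(2) + (1 - label) * torch.clamp(margin - dist, min=0).pow(2)
    return loss.mean()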

class wbia.algo.verif.torch.netmath.Criterions[source]

Bases: wbia.algo.verif.torch.netmath.NetMathParams

A collection of standard and custom loss criteria

class ContrastiveLoss(margin=1.0)

Bases: torch.nn.modules.module.Module

Contrastive loss function.

References

https://github.com/delijati/pytorch-siamese/blob/master/contrastive.py

LaTeX:
$y E^2 + (1 - y) \max(m - E, 0)^2$

Example

>>> # DISABLE_DOCTEST
>>> from wbia.algo.verif.torch.netmath import *
>>> vecs1, vecs2, label = testdata_siam_desc()
>>> self = ContrastiveLoss()
>>> ut.exec_func_src(self.forward, globals())
>>> func = self.forward
>>> output = torch.nn.PairwiseDistance(p=2)(vecs1, vecs2)
>>> loss2x, dist_l2 = ut.exec_func_src(self.forward, globals(), globals(), keys=['loss2x', 'dist_l2'])
>>> ut.quit_if_noshow()
>>> loss2x, dist_l2, label = map(np.array, [loss2x, dist_l2, label])
>>> label = label.astype(bool)
>>> dist0_l2 = dist_l2[label]
>>> dist1_l2 = dist_l2[~label]
>>> loss0 = loss2x[label] / 2
>>> loss1 = loss2x[~label] / 2
>>> import wbia.plottool as pt
>>> pt.plot2(dist0_l2, loss0, 'x', color=pt.TRUE_BLUE, label='imposter_loss', y_label='loss')
>>> pt.plot2(dist1_l2, loss1, 'x', color=pt.FALSE_RED, label='genuine_loss', y_label='loss')
>>> pt.gca().set_xlabel('l2-dist')
>>> pt.legend()
>>> ut.show_if_requested()

forward(output, label, weight=None)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

static cross_entropy2d(output, label, weight=None, size_average=True)[source]

https://github.com/ycszen/pytorch-seg/blob/master/loss.py
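
The referenced loss.py computes a per-pixel cross entropy over (N, C, H, W) logits; a typical sketch of that pattern (an approximation, not the verified code) is:

import torch.nn.functional as F

def cross_entropy2d(output, label, weight=None, size_average=True):
    # output: (N, C, H, W) logits, label: (N, H, W) integer class map.
    c = output.size(1)
    log_p = F.log_softmax(output, dim=1)
    log_p = log_p.permute(0, 2, 3, 1).reshape(-1, c)
    target = label.reshape(-1)
    reduction = 'mean' if size_average else 'sum'
    return F.nll_loss(log_p, target, weight=weight, reduction=reduction)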

class wbia.algo.verif.torch.netmath.LRSchedules[source]

Bases: wbia.algo.verif.torch.netmath.NetMathParams

A collection of standard and custom learning rate schedulers

static exp(optimizer, epoch, init_lr=0.001, lr_decay_epoch=2)[source]

Decay learning rate by a factor of 0.1 every lr_decay_epoch epochs.
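
A sketch of what such a step-decay hook usually does (assumed behavior, writing the decayed rate back into the optimizer's parameter groups; not the verified implementation):

def exp(optimizer, epoch, init_lr=0.001, lr_decay_epoch=2):
    # Multiply the initial rate by 0.1 for every completed lr_decay_epoch block.
    lr = init_lr * (0.1 ** (epoch // lr_decay_epoch))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
    return optimizer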

class wbia.algo.verif.torch.netmath.Metrics[source]

Bases: wbia.algo.verif.torch.netmath.NetMathParams

static tpr(output, label)[source]

true positive rate
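
As a formula, TPR = TP / P; a hedged sketch over tensors (the 0.5 thresholding of output is an assumption about how predictions are derived):

def tpr(output, label):
    # Fraction of actual positives that are predicted positive.
    pred = (output > 0.5).long()
    true_pos = ((pred == 1) & (label == 1)).sum().item()
    actual_pos = (label == 1).sum().item()
    return true_pos / max(actual_pos, 1)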

class wbia.algo.verif.torch.netmath.NetMathParams[source]

Bases: object

classmethod lookup(key_or_scheduler)[source]

Accepts either a string that encodes a known scheduler or a custom callable that is returned as-is.

Parameters: key_or_scheduler (str or func) – scheduler name or the func itself
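
Conceptually this is a small dispatch helper; a sketch of the idea (not the exact code, and the minimal class context here is only for illustration):

class NetMathParams:
    @classmethod
    def lookup(cls, key_or_scheduler):
        if callable(key_or_scheduler):
            # A custom scheduler function passes straight through.
            return key_or_scheduler
        # Otherwise the string names one of the functions defined on the class.
        return getattr(cls, key_or_scheduler)
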
class wbia.algo.verif.torch.netmath.Optimizers[source]

Bases: wbia.algo.verif.torch.netmath.NetMathParams

class Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

Bases: torch.optim.optimizer.Optimizer

Implements Adam algorithm.

It has been proposed in Adam: A Method for Stochastic Optimization.

Parameters:
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
  • lr (float, optional) – learning rate (default: 1e-3)
  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
  • eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
  • weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
  • amsgrad (boolean, optional) – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond
step(closure=None)

Performs a single optimization step.

Parameters: closure (callable, optional) – A closure that reevaluates the model and returns the loss.
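
Usage mirrors the SGD example shown further below; a minimal sketch (model, input, target, and loss_fn are placeholders):

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0)
optimizer.zero_grad()
loss_fn(model(input), target).backward()
optimizer.step()
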
class SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)

Bases: torch.optim.optimizer.Optimizer

Implements stochastic gradient descent (optionally with momentum).

Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning.

Parameters:
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
  • lr (float) – learning rate
  • momentum (float, optional) – momentum factor (default: 0)
  • weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
  • dampening (float, optional) – dampening for momentum (default: 0)
  • nesterov (bool, optional) – enables Nesterov momentum (default: False)

Example

>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()

Note

The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et al. and implementations in some other frameworks.

Considering the specific case of Momentum, the update can be written as

$v = \rho * v + g$
$p = p - lr * v$

where $p$, $g$, $v$ and $\rho$ denote the parameters, gradient, velocity, and momentum respectively.

This is in contrast to Sutskever et al. and other frameworks which employ an update of the form

$v = \rho * v + lr * g$
$p = p - v$

The Nesterov version is analogously modified.

step(closure=None)

Performs a single optimization step.

Parameters: closure (callable, optional) – A closure that reevaluates the model and returns the loss.

wbia.algo.verif.torch.netmath.testdata_siam_desc(num_data=128, desc_dim=8)[source]

wbia.algo.verif.torch.old_harness module

wbia.algo.verif.torch.siamese module

wbia.algo.verif.torch.train_main module

class wbia.algo.verif.torch.train_main.LRSchedule[source]

Bases: object

static exp(optimizer, epoch, init_lr=0.001, lr_decay_epoch=2)[source]

Decay learning rate by a factor of 0.1 every lr_decay_epoch epochs.

class wbia.algo.verif.torch.train_main.LabeledPairDataset(img1_fpaths, img2_fpaths, labels, transform='default')[source]

Bases: torch.utils.data.dataset.Dataset

transform = transforms.Compose([
    transforms.Scale(224),
    transforms.ToTensor(),
    torchvision.transforms.Normalize([0.5, 0.5, 0.5], [0.225, 0.225, 0.225]),
])

Ignore:
>>> from wbia.algo.verif.torch.train_main import *
>>> from wbia.algo.verif.vsone import *  # NOQA
>>> pblm = OneVsOneProblem.from_empty('PZ_MTEST')
>>> ibs = pblm.infr.ibs
>>> pblm.load_samples()
>>> samples = pblm.samples
>>> samples.print_info()
>>> xval_kw = pblm.xval_kw.asdict()
>>> skf_list = pblm.samples.stratified_kfold_indices(**xval_kw)
>>> train_idx, test_idx = skf_list[0]
>>> aids1, aids2 = pblm.samples.aid_pairs[train_idx].T
>>> labels = pblm.samples['match_state'].y_enc[train_idx]
>>> labels = (labels == 1).astype(np.int64)
>>> chip_config = {'resize_dim': 'wh', 'dim_size': (224, 224)}
>>> img1_fpaths = ibs.depc_annot.get('chips', aids1, read_extern=False, colnames='img', config=chip_config)
>>> img2_fpaths = ibs.depc_annot.get('chips', aids2, read_extern=False, colnames='img', config=chip_config)
>>> self = LabeledPairDataset(img1_fpaths, img2_fpaths, labels)
>>> img1, img2, label = self[0]

class_weights()[source]
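
class_weights presumably balances the matching and non-matching pairs; a common inverse-frequency sketch (an assumption about the intended behavior, not the verified implementation):

import numpy as np

def class_weights(labels):
    # Weight each class inversely to its frequency so that the rarer of the
    # matching/non-matching classes contributes equally to the loss.
    counts = np.bincount(np.asarray(labels))
    return counts.sum() / (len(counts) * np.maximum(counts, 1))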

wbia.algo.verif.torch.train_main.siam_vsone_train()[source]

CommandLine:
python -m wbia.algo.verif.torch.train_main siam_vsone_train

Example

>>> # DISABLE_DOCTEST
>>> from wbia.algo.verif.torch.train_main import *  # NOQA
>>> siam_vsone_train()

Module contents