OTW
Implementation of the OTW (Optimal Transport Warping) distance, as presented in [LLSH23]
- class dtw_loss_functions.otw.otw(m: float = 1, s: int | float = 0.5, beta: float = 1, reduction: str = 'mean')[source]
Bases:
Module

Methods

forward(x, y) – Computes the OTW distance between two time series.
- forward(x: Tensor, y: Tensor) → Tensor[source]
Computes the OTW distance between two time series.
- Parameters:
x (torch.Tensor) – First time series, of shape B x L, where B is the batch size and L is the length of the time series.
y (torch.Tensor) – Second time series, of shape B x L, where B is the batch size and L is the length of the time series.
- Returns:
OTW distance between the two time series
- Return type:
torch.Tensor
- dtw_loss_functions.otw.otw_distance(x: Tensor, y: Tensor, m: float = 1, s: int | float = 0.5, beta: float = 1, reduction: str = 'mean') → Tensor[source]
Implements the OTW distance between two time series, as defined in equation (9) of the paper.
- Parameters:
x (torch.Tensor) – First time series, of shape B x L, where B is the batch size and L is the length of the time series.
y (torch.Tensor) – Second time series, of shape B x L, where B is the batch size and L is the length of the time series.
m (float) – Waste cost parameter; default is 1.
s (int | float) – Window size parameter; either an integer or a float between 0 and 1. Default is 0.5. If a float, it is interpreted as a fraction of the length of the time series; if an integer, as a number of time steps.
beta (float) – Hyperparameter for the smooth l1 loss; default is 1.
- Returns:
OTW distance between the two time series
- Return type:
torch.Tensor
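Equation (9) itself is not reproduced on this page. Purely as an illustrative assumption (not the paper's exact formula), the documented ingredients (the waste cost m, the window parameter s, and the smooth l1 penalty) could combine along these lines; `otw_sketch` and `smooth_l1` are hypothetical names, and the whole shape of the computation is a guess to be checked against [LLSH23]:

```python
def smooth_l1(v, beta=1.0):
    # assumed standard smooth l1 on a scalar difference:
    # quadratic for |v| < beta, linear beyond
    a = abs(v)
    return 0.5 * a * a / beta if a < beta else a - 0.5 * beta

def otw_sketch(x, y, m=1.0, s=2, beta=1.0):
    """Illustrative guess at the shape of the OTW distance for one
    pair of equal-length series; NOT the paper's equation (9)."""
    # waste-cost term: penalise mass created or destroyed overall
    d = m * smooth_l1(sum(x) - sum(y), beta)
    # transport terms: compare windowed partial sums position by position
    for i in range(len(x)):
        lo = max(0, i - s + 1)
        d += smooth_l1(sum(x[lo:i + 1]) - sum(y[lo:i + 1]), beta)
    return d
```

Whatever the exact form of equation (9), the real `otw_distance` should share the basic properties this sketch has: it is symmetric in its arguments and zero when the two series coincide.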
- dtw_loss_functions.otw.smooth_l1_loss(x: Tensor, beta: float, reduction='mean') → Tensor[source]
Computes the smooth l1 loss of the input tensor x, as defined in equation (9) of the paper.
- Parameters:
x (torch.Tensor) – Input tensor of shape B. Each element corresponds to the difference between two time series.
beta (float) – Hyperparameter for the smooth l1 loss.
reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. Default: 'mean'.
- Returns:
Smooth l1 loss of the input tensor. If reduction is 'none', the output has the same shape as x. If reduction is 'mean' or 'sum', the output is a scalar.
- Return type:
torch.Tensor
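The page does not spell out the formula. Assuming the standard smooth l1 (quadratic for |v| < beta, linear beyond, as in `torch.nn.functional.smooth_l1_loss`), a dependency-free Python sketch operating on a flat list of differences:

```python
def smooth_l1_loss(x, beta=1.0, reduction="mean"):
    """Assumed standard smooth l1 (Huber-style) over a list of
    per-pair differences `x`."""
    losses = [
        0.5 * v * v / beta if abs(v) < beta else abs(v) - 0.5 * beta
        for v in x
    ]
    if reduction == "none":
        return losses
    if reduction == "sum":
        return sum(losses)
    return sum(losses) / len(losses)  # "mean"

smooth_l1_loss([0.5, 2.0], beta=1.0, reduction="none")  # [0.125, 1.5]
```

With beta = 1, a difference of 0.5 falls in the quadratic region (0.5 * 0.25 = 0.125) while a difference of 2.0 falls in the linear region (2.0 - 0.5 = 1.5), which is what makes the loss robust to large per-step differences while staying smooth near zero.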
- dtw_loss_functions.otw.window_cumsum(x: Tensor, s: int) → Tensor[source]
Computes the cumulative sum of the input tensor x as defined in equation (7) of the paper.
Given a time series A represented as an array of values [a1, a2, ..., aL], the window cumsum is computed as: window_cumsum(A) = cumsum(A) - cumsum(A[0:L-s]) (i.e. the cumsum of the whole array minus the cumsum of the array excluding the last s elements).
- Parameters:
x (torch.Tensor) – Input tensor of shape (B, L).
s (int) – Window size.
- Returns:
Cumulative sum over the sliding window, of shape (B).
- Return type:
torch.Tensor
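Read literally (with cumsum taken as a total sum, which matches the stated return shape (B)), the formula reduces to the sum of the last s elements of each series. A dependency-free sketch of that reading, with each batch row as a plain Python list:

```python
def window_cumsum(x, s):
    """Per-row windowed sum: sum(A) - sum(A[0:L-s]), i.e. the sum of
    the last s elements of each length-L series in the batch."""
    return [sum(row) - sum(row[: len(row) - s]) for row in x]

# example: batch of two length-5 series, window s = 2
batch = [[1, 2, 3, 4, 5], [5, 4, 3, 2, 1]]
window_cumsum(batch, 2)  # sums of the last two elements: [9, 3]
```

The real implementation would do this with `torch.cumsum` on a (B, L) tensor rather than Python loops, but the arithmetic per row is the same.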