pytomography.transforms.shared#

Package Contents#

Classes#

KEMTransform

Object to object transform used to take in a coefficient image \(\alpha\) and return an image estimate \(f = K\alpha\). This transform implements the matrix \(K\).

GaussianFilter

Applies a Gaussian smoothing filter to the reconstructed object with the specified full width at half maximum (FWHM).

RotationTransform

obj2obj transform used to rotate an object to angle \(\beta\) in the DICOM reference frame. (Note that the scanner angle \(\beta\) is related to the azimuthal angle \(\phi\) by \(\phi = 3\pi/2 - \beta\).)

DVFMotionTransform

obj2obj transform that deforms an object using a deformation vector field (DVF), e.g. for motion correction in reconstruction.

class pytomography.transforms.shared.KEMTransform(support_objects, support_kernels=None, support_kernels_params=None, distance_kernel=None, distance_kernel_params=None, size=5, top_N=None, kernel_on_gpu=False)[source]#

Bases: pytomography.transforms.Transform

Object to object transform used to take in a coefficient image \(\alpha\) and return an image estimate \(f = K\alpha\). This transform implements the matrix \(K\).

Parameters:
  • support_objects (Sequence[torch.tensor]) – Objects used for support when building each basis function. These may correspond to PET/CT/MRI images, for example.

  • support_kernels (Sequence[Callable], optional) – A list of functions corresponding to the support kernel of each support object. If none, defaults to \(k(v_i, v_j; \sigma) = \exp\left(-\frac{(v_i-v_j)^2}{2\sigma^2} \right)\) for each support object. Defaults to None.

  • support_kernels_params (Sequence[Sequence[float]], optional) – A list of lists, where each sublist contains the additional parameters for the corresponding support kernel (the parameters that follow the semicolon in the expression above). For example, if using the default support kernel for two different support objects (say CT and PET), one could give support_kernels_params=[[40],[5]]. If None, defaults to N*[[1]] where N is the number of support objects. Defaults to None.

  • distance_kernel (Callable, optional) – Kernel used to weight based on voxel-voxel distance. If None, defaults to \(k(x_i, x_j; \sigma) = \exp\left(-\frac{(x_i-x_j)^2}{2\sigma^2} \right)\). Defaults to None.

  • distance_kernel_params (Sequence[float], optional) – A list of parameters corresponding to additional parameters for the distance_kernel (i.e. the parameters that follow the semicolon in the expression above). If None, then defaults to \(\sigma=1\). Defaults to None.

  • size (int, optional) – The size of each kernel. Defaults to 5.

  • top_N (int | None) –

  • kernel_on_gpu (bool) –
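The action of \(K\) can be sketched in plain PyTorch. The following is a minimal 1D illustration of the default support and distance kernels described above, not the library's actual implementation (which restricts each kernel to a neighborhood of the given size); all tensor values here are made up for demonstration:

```python
import torch

def gaussian_kernel(vi, vj, sigma):
    # Default kernel from the docstring:
    # k(v_i, v_j; sigma) = exp(-(v_i - v_j)^2 / (2 sigma^2))
    return torch.exp(-(vi - vj) ** 2 / (2 * sigma ** 2))

# Tiny 1D example: 4 voxels with support-object values (e.g. CT numbers)
# and voxel positions for the distance kernel.
support = torch.tensor([10.0, 10.5, 50.0, 49.0])
positions = torch.tensor([0.0, 1.0, 2.0, 3.0])

# K_ij = support_kernel(v_i, v_j) * distance_kernel(x_i, x_j)
K = gaussian_kernel(support[:, None], support[None, :], sigma=5.0) \
  * gaussian_kernel(positions[:, None], positions[None, :], sigma=1.0)

alpha = torch.tensor([1.0, 0.0, 0.0, 2.0])  # coefficient image
f = K @ alpha      # forward:  f = K alpha
b = K.T @ alpha    # backward: K^T alpha; equal to f since K is symmetric
```

Because both kernels are symmetric in their arguments, \(K = K^T\), which is why backward below reuses the forward implementation.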

compute_kernel()[source]#
configure(object_meta, proj_meta)[source]#

Function used to initialize the transform using the corresponding object and projection metadata.

Parameters:
  • object_meta (ObjectMeta) – Object metadata.

  • proj_meta (ProjMeta) – Projections metadata.
Return type:

None

forward(object)[source]#

Forward transform corresponding to \(K\alpha\).

Parameters:

object (torch.Tensor) – Coefficient image \(\alpha\)

Returns:

Image \(K\alpha\)

Return type:

torch.Tensor

backward(object, norm_constant=None)[source]#

Backward transform corresponding to \(K^T\alpha\). Since the matrix is symmetric, the implementation is the same as forward.

Parameters:
  • object (torch.Tensor) – Coefficient image \(\alpha\)

  • norm_constant (torch.Tensor | None) –

Returns:

Image \(K^T\alpha\)

Return type:

torch.Tensor

class pytomography.transforms.shared.GaussianFilter(FWHM, n_sigmas=3)[source]#

Bases: pytomography.transforms.Transform

Applies a Gaussian smoothing filter to the reconstructed object with the specified full width at half maximum (FWHM).

Parameters:
  • FWHM (float) – Specifies the full width at half maximum of the Gaussian kernel.

  • n_sigmas (float) – Number of sigmas to include before truncating the kernel.
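The relationship between the FWHM parameter and the Gaussian standard deviation is \(\sigma = \mathrm{FWHM} / (2\sqrt{2\ln 2})\). A minimal sketch of building such a truncated, normalized 1D kernel (an illustration of the idea, not the library's internal `_get_kernels` implementation):

```python
import math
import torch

def gaussian_kernel_1d(fwhm, n_sigmas=3):
    """Build a normalized 1D Gaussian kernel truncated at n_sigmas."""
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))  # FWHM -> sigma
    radius = math.ceil(n_sigmas * sigma)             # truncate the tails
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    k = torch.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()  # normalize so smoothing preserves total counts

kernel = gaussian_kernel_1d(fwhm=4.0, n_sigmas=3)
```

Normalizing the kernel to unit sum ensures the filter preserves total activity, which matters when it is used inside an iterative reconstruction loop.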

configure(object_meta, proj_meta)[source]#

Configures the transform to the object/proj metadata. This is done after creating the network so that it can be adjusted to the system matrix.

Parameters:
  • object_meta (ObjectMeta) – Object metadata.

  • proj_meta (ProjMeta) – Projections metadata.

Return type:

None

_get_kernels()[source]#

Obtains required kernels for smoothing

__call__(object)[source]#

Alternative way to call forward.

forward(object)[source]#

Applies the Gaussian smoothing

Parameters:

object (torch.Tensor) – Object to smooth

Returns:

Smoothed object

Return type:

torch.Tensor

backward(object, norm_constant=None)[source]#

Applies Gaussian smoothing in back projection. Because the operation is symmetric, it is the same as the forward projection.

Parameters:
  • object (torch.Tensor) – Object to smooth

  • norm_constant (torch.Tensor, optional) – Normalization constant used in iterative algorithms. Defaults to None.

Returns:

Smoothed object

Return type:

torch.Tensor

class pytomography.transforms.shared.RotationTransform(mode='bilinear')[source]#

Bases: pytomography.transforms.Transform

obj2obj transform used to rotate an object to angle \(\beta\) in the DICOM reference frame. (Note that the scanner angle \(\beta\) is related to the azimuthal angle \(\phi\) by \(\phi = 3\pi/2 - \beta\).)

Parameters:

mode (str) – Interpolation mode used in the rotation.
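Batched rotation of this kind can be sketched with PyTorch's `affine_grid`/`grid_sample` machinery. This is a hedged illustration of rotating a stack of 2D slices with per-batch angles, not the library's exact implementation; the function name `rotate_slices` and the toy tensor are assumptions for the example:

```python
import torch
import torch.nn.functional as F

def rotate_slices(obj, angles, mode='bilinear'):
    """Rotate a [batch, Lx, Ly] stack by per-batch angles (radians)."""
    cos, sin = torch.cos(angles), torch.sin(angles)
    zero = torch.zeros_like(cos)
    # 2x3 affine matrix per batch element (pure rotation, no translation)
    theta = torch.stack([
        torch.stack([cos, -sin, zero], dim=-1),
        torch.stack([sin,  cos, zero], dim=-1),
    ], dim=1)  # shape [batch, 2, 3]
    obj = obj.unsqueeze(1)  # add channel dim: [batch, 1, Lx, Ly]
    grid = F.affine_grid(theta, obj.shape, align_corners=True)
    out = F.grid_sample(obj, grid, mode=mode, align_corners=True)
    return out.squeeze(1)

x = torch.zeros(2, 8, 8)
x[:, 2:6, 3:5] = 1.0
same = rotate_slices(x, torch.tensor([0.0, 0.0]))  # zero angle: identity
```

With `mode='bilinear'` (the default here, matching the class's default), rotation by a zero angle reproduces the input exactly when `align_corners=True`.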

forward(object, angles)[source]#

Rotates an object to angle \(\beta\) in the DICOM reference frame. Note that the scanner angle \(\beta\) is related to \(\phi\) (azimuthal angle) by \(\phi = 3\pi/2 - \beta\).

Parameters:
  • object (torch.Tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being rotated.

  • angles (torch.Tensor) – Tensor of size [batch_size] corresponding to the rotation angles.

Returns:

Tensor of size [batch_size, Lx, Ly, Lz] where each element in the batch dimension is rotated by the corresponding angle.

Return type:

torch.Tensor

backward(object, angles)[source]#

Rotates an object back from angle \(\beta\) to angle \(0\) in the DICOM reference frame; the inverse of forward.

Parameters:
  • object (torch.Tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being rotated.

  • angles (torch.Tensor) – Tensor of size [batch_size] corresponding to the rotation angles.

Returns:

Tensor of size [batch_size, Lx, Ly, Lz] where each element in the batch dimension is rotated by the corresponding angle.

Return type:

torch.Tensor

class pytomography.transforms.shared.DVFMotionTransform(dvf_forward=None, dvf_backward=None)[source]#

Bases: pytomography.transforms.Transform

obj2obj transform that deforms an object using a deformation vector field (DVF), e.g. for motion correction in reconstruction.

Parameters:
  • dvf_forward (torch.Tensor | None) – Deformation vector field used in the forward transform. Defaults to None.

  • dvf_backward (torch.Tensor | None) – Deformation vector field used in the backward transform. Defaults to None.
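Applying a DVF amounts to resampling the object at displaced voxel coordinates, which is what the helper methods below (coordinate meshgrid, new coordinates, resampling) break into steps. A self-contained sketch of that pipeline using `grid_sample`; the function `apply_dvf`, the voxel-displacement convention, and the toy shapes are assumptions for illustration, not the library's exact code:

```python
import torch
import torch.nn.functional as F

def apply_dvf(obj, dvf):
    """Warp a [Lx, Ly, Lz] object with a DVF of shape [Lx, Ly, Lz, 3].

    The DVF gives, per voxel, the displacement (in voxels) at which to
    sample the input object.
    """
    Lx, Ly, Lz = obj.shape
    # Meshgrid of original voxel coordinates (cf. _get_old_coordinates)
    xs = torch.meshgrid(
        torch.arange(Lx), torch.arange(Ly), torch.arange(Lz), indexing='ij')
    coords = torch.stack(xs, dim=-1).float() + dvf  # new coordinates
    # Normalize to [-1, 1] for grid_sample
    sizes = torch.tensor([Lx, Ly, Lz], dtype=torch.float32)
    grid = 2 * coords / (sizes - 1) - 1
    grid = grid.flip(-1)  # grid_sample expects (x=W, y=H, z=D) ordering
    out = F.grid_sample(obj[None, None], grid[None], mode='bilinear',
                        align_corners=True)
    return out[0, 0]

obj = torch.rand(4, 5, 6)
identity = apply_dvf(obj, torch.zeros(4, 5, 6, 3))  # zero DVF: no deformation
```

A zero displacement field leaves the object unchanged, which is a useful sanity check when loading DVFs from registration software.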

_get_vol_ratio(DVF)[source]#
_get_old_coordinates()[source]#

Obtain meshgrid of coordinates corresponding to the object

Returns:

Tensor of coordinates corresponding to input object

Return type:

torch.Tensor

_get_new_coordinates(old_coordinates, DVF)[source]#

Obtain the new coordinates of each voxel based on the DVF.

Parameters:
  • old_coordinates (torch.Tensor) – Old coordinates of each voxel

  • DVF (torch.Tensor) – Deformation vector field.

Returns:

New coordinates of each voxel.

Return type:

torch.Tensor

_apply_dvf(DVF, vol_ratio, object_i)[source]#

Applies the deformation vector field to the object

Parameters:
  • DVF (torch.Tensor) – Deformation vector field

  • object_i (torch.Tensor) – Old object.

Returns:

Deformed object.

Return type:

torch.Tensor

forward(object_i)[source]#

Forward transform of deformation vector field

Parameters:

object_i (torch.Tensor) – Original object.

Returns:

Deformed object corresponding to forward transform.

Return type:

torch.Tensor

backward(object_i)[source]#

Backward transform of deformation vector field

Parameters:

object_i (torch.Tensor) – Original object.

Returns:

Deformed object corresponding to backward transform.

Return type:

torch.Tensor