pytomography.transforms#

This module contains transform operations used to build the system matrix. Currently, the PET transforms only support 2D PET.

Subpackages#

Submodules#

Package Contents#

Classes#

Transform

The parent class for all transforms used in reconstruction (obj2obj, im2im, obj2im). Subclasses must implement the forward and backward methods.

SPECTAttenuationTransform

obj2obj transform used to model the effects of attenuation in SPECT. This transform accepts either an attenuation_map (which must be aligned with the SPECT projection data) or a filepath to a folder containing CT DICOM files, all pertaining to the same scan.

SPECTPSFTransform

obj2obj transform used to model the effects of PSF blurring in SPECT. The smoothing kernel used for PSF modeling is a Gaussian kernel with width \(\sigma\) dependent on the distance of the point to the detector; that information is specified in the SPECTPSFMeta parameter. There are a few potential arguments to initialize this transform: (i) psf_meta, which contains relevant collimator information to obtain a Gaussian PSF model that works for low/medium energy SPECT; (ii) kernel_f, a callable function that gives the kernel at any source-detector distance \(d\); or (iii) psf_net, a network configured to automatically apply full PSF modeling to a given object \(f\) at all source-detector distances. Only one of these arguments should be given.

CutOffTransform

proj2proj transformation used to set pixel values equal to zero at the first and last few z slices. This is often required when reconstructing DICOM data due to the finite field of view of the projection data, where additional axial slices with zero measured detection events are included at the top and bottom. This transform is included in the system matrix to model the sharp cutoff at the finite FOV.

KEMTransform

Object to object transform used to take in a coefficient image \(\alpha\) and return an image estimate \(f = K\alpha\). This transform implements the matrix \(K\).

GaussianFilter

Applies a Gaussian smoothing filter to the reconstructed object with the specified full-width-half-max (FWHM)

RotationTransform

obj2obj transform used to rotate an object to angle \(\beta\) in the DICOM reference frame.

DVFMotionTransform

obj2obj transform that deforms an object using deformation vector fields (DVFs), used to model motion.

class pytomography.transforms.Transform[source]#

The parent class for all transforms used in reconstruction (obj2obj, im2im, obj2im). Subclasses must implement the forward and backward methods.

Parameters:

device (str) – Pytorch device used for computation

configure(object_meta, proj_meta)[source]#

Configures the transform using the object/projection metadata. This is called after the transform is created so that it can be adjusted to the system matrix.

Parameters:
  • object_meta (ObjectMeta) – Object metadata.

  • proj_meta (ProjMeta) – Projections metadata.

Return type:

None

abstract forward(x)[source]#

Abstract method; must be implemented in subclasses to apply a correction to an object/proj and return it

Parameters:

x (torch.tensor) –

abstract backward(x)[source]#

Abstract method; must be implemented in subclasses to apply a correction to an object/proj and return it

Parameters:

x (torch.tensor) –
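
As a schematic illustration of the subclass pattern (not the actual pytomography implementation: ScaleTransform and its scaling behavior are hypothetical, and NumPy stands in for torch):

```python
import numpy as np

class Transform:
    """Schematic stand-in for pytomography.transforms.Transform."""
    def configure(self, object_meta, proj_meta):
        self.object_meta = object_meta
        self.proj_meta = proj_meta

    def forward(self, x):
        raise NotImplementedError

    def backward(self, x):
        raise NotImplementedError

class ScaleTransform(Transform):
    """Hypothetical diagonal transform: multiplies all voxel values by a constant."""
    def __init__(self, factor):
        self.factor = factor

    def forward(self, x):
        return self.factor * x

    def backward(self, x):
        # A diagonal matrix is symmetric, so the transpose acts identically.
        return self.factor * x

t = ScaleTransform(2.0)
obj = np.ones((1, 4, 4, 4))   # [batch_size, Lx, Ly, Lz]
out = t.forward(obj)
```

A diagonal transform like this is self-adjoint, which is why forward and backward coincide; several of the transforms below (attenuation, cutoff) follow the same pattern.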

class pytomography.transforms.SPECTAttenuationTransform(attenuation_map=None, filepath=None, mode='constant', assume_padded=True)[source]#

Bases: pytomography.transforms.Transform

obj2obj transform used to model the effects of attenuation in SPECT. This transform accepts either an attenuation_map (which must be aligned with the SPECT projection data) or a filepath to a folder containing CT DICOM files, all pertaining to the same scan.

Parameters:
  • attenuation_map (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] corresponding to the attenuation coefficient in \({\text{cm}^{-1}}\) at the photon energy corresponding to the particular scan

  • filepath (Sequence[str]) – Folder location of CT scan; all .dcm files must correspond to different slices of the same scan.

  • mode (str) – Mode used for extrapolation of CT beyond edges when aligning DICOM SPECT/CT data. Defaults to ‘constant’, which means the image is padded with zeros.

  • assume_padded (bool) – Assumes objects and projections fed into forward and backward methods are padded, as they will be in reconstruction algorithms

configure(object_meta, proj_meta)[source]#

Function used to initialize the transform using the corresponding object and projection metadata

Parameters:
  • object_meta (ObjectMeta) – Object metadata.

  • proj_meta (ProjMeta) – Projections metadata.

Return type:

None

forward(object_i, ang_idx)[source]#

Forward projection \(A:\mathbb{U} \to \mathbb{U}\) of attenuation correction.

Parameters:
  • object_i (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being projected along axis=1.

  • ang_idx (torch.Tensor) – The projection indices: used to find the angle in projection space corresponding to each rotated object in object_i.

Returns:

Tensor of size [batch_size, Lx, Ly, Lz] such that projection of this tensor along the first axis corresponds to an attenuation corrected projection.

Return type:

torch.tensor

backward(object_i, ang_idx, norm_constant=None)[source]#

Back projection \(A^T:\mathbb{U} \to \mathbb{U}\) of attenuation correction. Since the matrix is diagonal, the implementation is the same as forward projection. The only difference is the optional normalization parameter.

Parameters:
  • object_i (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being projected along axis=1.

  • ang_idx (torch.Tensor) – The projection indices: used to find the angle in projection space corresponding to each rotated object in object_i.

  • norm_constant (torch.tensor, optional) – A tensor used to normalize the output during back projection. Defaults to None.

Returns:

Tensor of size [batch_size, Lx, Ly, Lz] such that projection of this tensor along the first axis corresponds to an attenuation corrected projection.

Return type:

torch.tensor

compute_average_prob_matrix()[source]#
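
Conceptually, attenuation correction scales each voxel by the probability that a photon emitted there reaches the detector on the \(+x\) side: \(\exp\) of minus the line integral of \(\mu\) from the voxel to the detector. A minimal NumPy sketch of this factor (not the library implementation; the voxel width dx is an assumed parameter, and half-voxel effects are ignored):

```python
import numpy as np

def attenuation_probability(mu, dx):
    """For a detector on the +x side (axis 0 here), return the survival
    probability exp(-integral of mu) from each voxel to the detector."""
    # Reverse, cumulative-sum, reverse again so that each voxel accumulates
    # the attenuation along its full path toward the detector.
    path = np.cumsum(mu[::-1], axis=0)[::-1] * dx
    return np.exp(-path)

mu = np.full((4, 1, 1), 0.15)             # uniform attenuation map, cm^-1
prob = attenuation_probability(mu, dx=0.5)  # dx in cm (assumed voxel width)
```

Because this factor is a per-voxel multiplication (a diagonal matrix), forward and backward projection of the transform coincide, as noted in the backward method above.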
class pytomography.transforms.SPECTPSFTransform(psf_meta=None, kernel_f=None, psf_net=None, assume_padded=True)[source]#

Bases: pytomography.transforms.Transform

obj2obj transform used to model the effects of PSF blurring in SPECT. The smoothing kernel used for PSF modeling is a Gaussian kernel with width \(\sigma\) dependent on the distance of the point to the detector; that information is specified in the SPECTPSFMeta parameter. There are a few potential arguments to initialize this transform: (i) psf_meta, which contains relevant collimator information to obtain a Gaussian PSF model that works for low/medium energy SPECT; (ii) kernel_f, a callable function that gives the kernel at any source-detector distance \(d\); or (iii) psf_net, a network configured to automatically apply full PSF modeling to a given object \(f\) at all source-detector distances. Only one of these arguments should be given.

Parameters:
  • psf_meta (SPECTPSFMeta) – Metadata corresponding to the parameters of PSF blurring. In most cases (low/medium energy SPECT), this should be the only given argument.

  • kernel_f (Callable) – Function \(PSF(x,y,d)\) that gives PSF at every source-detector distance \(d\). It should be able to take in 1D numpy arrays as its first two arguments, and a single argument for the final argument \(d\). The function should return a corresponding 2D PSF kernel.

  • psf_net (Callable) – Network that takes in an object \(f\) and applies all necessary PSF correction to return a new object \(\tilde{f}\) that is PSF corrected, such that subsequent summation along the x-axis accurately models the collimator detector response.

  • assume_padded (bool) – Assumes objects and projections fed into forward and backward methods are padded, as they will be in reconstruction algorithms

_configure_gaussian_model()[source]#

Internal function to configure Gaussian modeling. This is called when psf_meta is given in initialization

_configure_kernel_model()[source]#

Internal function to configure arbitrary kernel modeling. This is called when kernel_f is given in initialization

_configure_manual_net()[source]#

Internal function to configure the PSF net. This is called when psf_net is given in initialization

configure(object_meta, proj_meta)[source]#

Function used to initialize the transform using the corresponding object and projection metadata

Parameters:
  • object_meta (ObjectMeta) – Object metadata.

  • proj_meta (ProjMeta) – Projections metadata.

Return type:

None

_compute_kernel_size(radius, axis)[source]#

Function used to compute the kernel size used for PSF blurring. In particular, uses the min_sigmas attribute of SPECTPSFMeta to determine what the kernel size should be such that the kernel encompasses at least min_sigmas at all points in the object.

Returns:

The corresponding kernel size used for PSF blurring.

Return type:

int

_get_sigma(radius)[source]#

Uses PSF Meta data information to get blurring \(\sigma\) as a function of distance from detector.

Parameters:

radius (float) – The distance from the detector.

Returns:

An array of length Lx corresponding to blurring at each point along the 1st axis in object space

Return type:

array

_apply_psf(object, ang_idx)[source]#

Applies PSF modeling to an object with corresponding angle indices

Parameters:
  • object (torch.tensor) – Tensor of shape [batch_size, Lx, Ly, Lz] corresponding to object rotated at different angles

  • ang_idx (Sequence[int]) – List of length batch_size corresponding to angle of each object in the batch

Returns:

Object with PSF modeling applied

Return type:

torch.tensor

forward(object_i, ang_idx)[source]#

Applies the PSF transform \(A:\mathbb{U} \to \mathbb{U}\) for the situation where an object is being detected by a detector at the \(+x\) axis.

Parameters:
  • object_i (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being projected along its first axis

  • ang_idx (int) – The projection indices: used to find the angle in projection space corresponding to each rotated object in object_i.

Returns:

Tensor of size [batch_size, Lx, Ly, Lz] such that projection of this tensor along the first axis corresponds to a PSF-corrected projection.

Return type:

torch.tensor

backward(object_i, ang_idx, norm_constant=None)[source]#

Applies the transpose of the PSF transform \(A^T:\mathbb{U} \to \mathbb{U}\) for the situation where an object is being detected by a detector at the \(+x\) axis. Since the PSF transform is a symmetric matrix, its implementation is the same as the forward method.

Parameters:
  • object_i (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being projected along its first axis

  • ang_idx (int) – The projection indices: used to find the angle in projection space corresponding to each rotated object in object_i.

  • norm_constant (torch.tensor, optional) – A tensor used to normalize the output during back projection. Defaults to None.

Returns:

Tensor of size [batch_size, Lx, Ly, Lz] such that projection of this tensor along the first axis corresponds to a PSF-corrected projection.

Return type:

torch.tensor
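
To illustrate the kernel_f option, the following is a hypothetical kernel function: an isotropic 2D Gaussian whose width grows linearly with source-detector distance \(d\) (the coefficients 0.02 and 0.1 are made-up collimator parameters, not values from the library). It accepts 1D coordinate arrays for its first two arguments and a scalar \(d\), and returns a normalized 2D kernel, matching the calling convention described above:

```python
import numpy as np

def kernel_f(x, y, d):
    """Hypothetical PSF kernel: isotropic Gaussian with sigma = 0.02*d + 0.1."""
    sigma = 0.02 * d + 0.1
    xx, yy = np.meshgrid(x, y, indexing='ij')
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()   # normalize so the kernel preserves total counts

x = np.linspace(-2, 2, 9)          # 1D coordinate grid (assumed units: cm)
kernel = kernel_f(x, x, d=25.0)    # kernel at source-detector distance 25
```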

class pytomography.transforms.CutOffTransform(proj=None, file_NM=None)[source]#

Bases: pytomography.transforms.Transform

proj2proj transformation used to set pixel values equal to zero at the first and last few z slices. This is often required when reconstructing DICOM data due to the finite field of view of the projection data, where additional axial slices are included on the top and bottom, with zero measured detection events. This transform is included in the system matrix, to model the sharp cutoff at the finite FOV.

Parameters:
  • proj (torch.tensor) – Measured projection data.

  • file_NM (str | None) –

forward(proj)[source]#

Forward projection \(B:\mathbb{V} \to \mathbb{V}\) of the cutoff transform.

Parameters:

proj (torch.Tensor) – Tensor of size [batch_size, Ltheta, Lr, Lz] to which the transform is applied

Returns:

Original projections, but with certain z-slices equal to zero.

Return type:

torch.tensor

backward(proj, norm_constant=None)[source]#

Back projection \(B^T:\mathbb{V} \to \mathbb{V}\) of the cutoff transform. Since this is a diagonal matrix, the implementation is the same as forward projection, but with the optional norm_constant argument.

Parameters:
  • proj (torch.Tensor) – Tensor of size [batch_size, Ltheta, Lr, Lz] to which the transform is applied

  • norm_constant (torch.Tensor | None, optional) – A tensor used to normalize the output during back projection. Defaults to None.

Returns:

Original projections, but with certain z-slices equal to zero.

Return type:

torch.tensor
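
A minimal sketch of what this transform does (NumPy stand-in, not the library implementation; the number of zeroed slices, here two at each end of the z axis, is an assumed value):

```python
import numpy as np

def cutoff(proj, n_cut=2):
    """Zero the first and last n_cut z-slices of [batch, Ltheta, Lr, Lz] data."""
    out = proj.copy()
    out[..., :n_cut] = 0    # first z-slices outside the FOV
    out[..., -n_cut:] = 0   # last z-slices outside the FOV
    return out

proj = np.ones((1, 8, 8, 10))
trimmed = cutoff(proj)
```

Since the operation is a diagonal matrix of zeros and ones, the backward method applies the identical masking, as stated above.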

class pytomography.transforms.KEMTransform(support_objects, support_kernels=None, support_kernels_params=None, distance_kernel=None, distance_kernel_params=None, size=5, top_N=None, kernel_on_gpu=False)[source]#

Bases: pytomography.transforms.Transform

Object to object transform used to take in a coefficient image \(\alpha\) and return an image estimate \(f = K\alpha\). This transform implements the matrix \(K\).

Parameters:
  • support_objects (Sequence[torch.tensor]) – Objects used for support when building each basis function. These may correspond to PET/CT/MRI images, for example.

  • support_kernels (Sequence[Callable], optional) – A list of functions corresponding to the support kernel of each support object. If none, defaults to \(k(v_i, v_j; \sigma) = \exp\left(-\frac{(v_i-v_j)^2}{2\sigma^2} \right)\) for each support object. Defaults to None.

  • support_kernels_params (Sequence[Sequence[float]], optional) – A list of lists, where each sublist contains the additional parameters corresponding to each support kernel (the parameters that follow the semi-colon in the expression above). As an example, if using the default configuration of support_kernels for two different support objects (say CT and PET), one could give support_kernels_params=[[40],[5]]. If None, defaults to a list of N*[[1]] where N is the number of support objects. Defaults to None.

  • distance_kernel (Callable, optional) – Kernel used to weight based on voxel-voxel distance. If None, defaults to \(k(x_i, x_j; \sigma) = \exp\left(-\frac{(x_i-x_j)^2}{2\sigma^2} \right)\). Defaults to None.

  • distance_kernel_params (Sequence[float], optional) – A list of parameters corresponding to additional parameters for the distance_kernel (i.e. the parameters that follow the semi-colon in the expression above). If none, then defaults to \(\sigma=1\). Defaults to None.

  • size (int, optional) – The size of each kernel. Defaults to 5.

  • top_N (int | None) –

  • kernel_on_gpu (bool) –

compute_kernel()[source]#
configure(object_meta, proj_meta)[source]#

Function used to initialize the transform using the corresponding object and projection metadata

Parameters:
  • object_meta (ObjectMeta) – Object metadata.

  • proj_meta (ProjMeta) – Projections metadata.

Return type:

None

forward(object)[source]#

Forward transform corresponding to \(K\alpha\)

Parameters:

object (torch.Tensor) – Coefficient image \(\alpha\)

Returns:

Image \(K\alpha\)

Return type:

torch.tensor

backward(object, norm_constant=None)[source]#

Backward transform corresponding to \(K^T\alpha\). Since the matrix is symmetric, the implementation is the same as forward.

Parameters:
  • object (torch.Tensor) – Coefficient image \(\alpha\)

  • norm_constant (torch.Tensor | None) –

Returns:

Image \(K^T\alpha\)

Return type:

torch.tensor
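
The kernel idea can be illustrated on a 1D toy example. Each row of \(K\) weights neighboring voxels by how similar their support-image values are (the default Gaussian support kernel above) and how spatially close they are (the distance kernel); rows are then normalized so that \(K\) maps a constant coefficient image to a constant image. The following dense sketch is for illustration only, with made-up sigmas, and is not how the library builds its (sparse, windowed) kernel:

```python
import numpy as np

def kem_matrix(support, sigma_v=1.0, sigma_x=1.0):
    """Dense toy kernel matrix K for a 1D support image."""
    v = support[:, None] - support[None, :]    # support-value differences
    idx = np.arange(len(support), dtype=float)
    x = idx[:, None] - idx[None, :]            # voxel-voxel distances
    K = np.exp(-v**2 / (2 * sigma_v**2)) * np.exp(-x**2 / (2 * sigma_x**2))
    return K / K.sum(axis=1, keepdims=True)    # row-normalize

support = np.array([0.0, 0.0, 5.0, 5.0])   # toy support image with an edge
K = kem_matrix(support)
alpha = np.ones(4)                          # constant coefficient image
f = K @ alpha                               # image estimate f = K alpha
```

Note how the edge in the support image suppresses mixing across it: the weight K[0,2] is far smaller than K[0,1], so the basis functions respect anatomical boundaries.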

class pytomography.transforms.GaussianFilter(FWHM, n_sigmas=3)[source]#

Bases: pytomography.transforms.Transform

Applies a Gaussian smoothing filter to the reconstructed object with the specified full-width-half-max (FWHM)

Parameters:
  • FWHM (float) – Specifies the width of the gaussian

  • n_sigmas (float) – Number of sigmas to include before truncating the kernel.

configure(object_meta, proj_meta)[source]#

Configures the transform using the object/projection metadata. This is called after the transform is created so that it can be adjusted to the system matrix.

Parameters:
  • object_meta (ObjectMeta) – Object metadata.

  • proj_meta (ProjMeta) – Projections metadata.

Return type:

None

_get_kernels()[source]#

Obtains required kernels for smoothing

__call__(object)[source]#

Alternative way to call the forward method

forward(object)[source]#

Applies the Gaussian smoothing

Parameters:

object (torch.tensor) – Object to smooth

Returns:

Smoothed object

Return type:

torch.tensor

backward(object, norm_constant=None)[source]#

Applies Gaussian smoothing in back projection. Because the operation is symmetric, it is the same as the forward projection.

Parameters:
  • object (torch.tensor) – Object to smooth

  • norm_constant (torch.tensor, optional) – Normalization constant used in iterative algorithms. Defaults to None.

Returns:

Smoothed object

Return type:

torch.tensor
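
The FWHM relates to the Gaussian \(\sigma\) by \(\sigma = \mathrm{FWHM} / (2\sqrt{2\ln 2}) \approx \mathrm{FWHM}/2.355\), and the kernel is truncated at n_sigmas. A sketch of building such a 1D kernel (NumPy, voxel spacing assumed to be 1; not the library's internal construction):

```python
import numpy as np

def gaussian_kernel_1d(fwhm, n_sigmas=3):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> sigma conversion
    radius = int(np.ceil(n_sigmas * sigma))       # truncate at n_sigmas
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                            # preserve total counts

k = gaussian_kernel_1d(fwhm=4.0)
```

By construction, the kernel value at a half-width from the center is exactly half the peak value, which is what the FWHM parameterization guarantees.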

class pytomography.transforms.RotationTransform(mode='bilinear')[source]#

Bases: pytomography.transforms.Transform

obj2obj transform used to rotate an object to angle \(\beta\) in the DICOM reference frame.

Parameters:

mode (str) – Interpolation mode used in the rotation.

forward(object, angles)[source]#

Rotates an object to angle \(\beta\) in the DICOM reference frame. Note that the scanner angle \(\beta\) is related to \(\phi\) (azimuthal angle) by \(\phi = 3\pi/2 - \beta\).

Parameters:
  • object (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being rotated.

  • angles (torch.Tensor) – Tensor of size [batch_size] corresponding to the rotation angles.

Returns:

Tensor of size [batch_size, Lx, Ly, Lz] where each element in the batch dimension is rotated by the corresponding angle.

Return type:

torch.tensor

backward(object, angles)[source]#

Inverse rotation: rotates an object back by angle \(\beta\) in the DICOM reference frame, undoing the forward rotation.

Parameters:
  • object (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being rotated.

  • angles (torch.Tensor) – Tensor of size [batch_size] corresponding to the rotation angles.

Returns:

Tensor of size [batch_size, Lx, Ly, Lz] where each element in the batch dimension is rotated by the corresponding angle.

Return type:

torch.tensor
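
The angle convention above can be checked numerically: for a detector angle \(\beta\), the corresponding azimuthal angle is \(\phi = 3\pi/2 - \beta\) (taken mod \(2\pi\)). A small helper (the function name is illustrative, not part of the library API):

```python
import numpy as np

def beta_to_phi(beta):
    """Convert DICOM detector angle beta (radians) to azimuthal angle phi."""
    return (3 * np.pi / 2 - beta) % (2 * np.pi)

phi0 = beta_to_phi(0.0)          # beta = 0       -> phi = 3*pi/2
phi90 = beta_to_phi(np.pi / 2)   # beta = 90 deg  -> phi = pi
```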

class pytomography.transforms.DVFMotionTransform(dvf_forward=None, dvf_backward=None)[source]#

Bases: pytomography.transforms.Transform

obj2obj transform that deforms an object using deformation vector fields (DVFs), used to model motion.

Parameters:
  • device (str) – Pytorch device used for computation

  • dvf_forward (torch.Tensor | None) –

  • dvf_backward (torch.Tensor | None) –

_get_vol_ratio(DVF)[source]#
_get_old_coordinates()[source]#

Obtain meshgrid of coordinates corresponding to the object

Returns:

Tensor of coordinates corresponding to input object

Return type:

torch.Tensor

_get_new_coordinates(old_coordinates, DVF)[source]#

Obtain the new coordinates of each voxel based on the DVF.

Parameters:
  • old_coordinates (torch.Tensor) – Old coordinates of each voxel

  • DVF (torch.Tensor) – Deformation vector field.

Returns:

New coordinates of each voxel.

Return type:

torch.Tensor

_apply_dvf(DVF, vol_ratio, object_i)[source]#

Applies the deformation vector field to the object

Parameters:
  • DVF (torch.Tensor) – Deformation vector field

  • object_i (torch.Tensor) – Old object.

Returns:

Deformed object.

Return type:

torch.Tensor

forward(object_i)[source]#

Forward transform of deformation vector field

Parameters:

object_i (torch.Tensor) – Original object.

Returns:

Deformed object corresponding to forward transform.

Return type:

torch.Tensor

backward(object_i)[source]#

Backward transform of deformation vector field

Parameters:

object_i (torch.Tensor) – Original object.

Returns:

Deformed object corresponding to backward transform.

Return type:

torch.Tensor
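
The steps above (a meshgrid of old coordinates, displacement by the DVF, resampling the object at the new coordinates) can be sketched with nearest-neighbor interpolation in NumPy. This is an illustration only: the actual transform operates on torch tensors and, it is assumed here, uses a smoother interpolation scheme:

```python
import numpy as np

def apply_dvf_nearest(obj, dvf):
    """Deform a 3D object by a voxel-displacement field (nearest-neighbor).

    obj: array of shape [Lx, Ly, Lz]; dvf: array of shape [Lx, Ly, Lz, 3]
    giving, for each output voxel, the displacement (in voxels) of the
    location in the input object to sample from.
    """
    # Meshgrid of old coordinates for every voxel.
    coords = np.stack(np.meshgrid(*[np.arange(s) for s in obj.shape],
                                  indexing='ij'), axis=-1)
    # New coordinates: old coordinates displaced by the DVF.
    new = np.rint(coords + dvf).astype(int)
    # Clamp to the object boundary instead of wrapping around.
    for ax, size in enumerate(obj.shape):
        new[..., ax] = np.clip(new[..., ax], 0, size - 1)
    return obj[new[..., 0], new[..., 1], new[..., 2]]

obj = np.zeros((3, 3, 3)); obj[1, 1, 1] = 1.0   # single hot voxel
dvf = np.zeros((3, 3, 3, 3)); dvf[..., 0] = 1.0  # sample from x+1: shifts along -x
moved = apply_dvf_nearest(obj, dvf)
```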