pytomography.transforms#
This module contains transform operations used to build the system matrix. Currently, the PET transforms only support 2D PET.
Subpackages#
Submodules#
Package Contents#
Classes#
Transform – The parent class for all transforms used in reconstruction (obj2obj, im2im, obj2im). Subclasses must implement the __call__ method.
SPECTAttenuationTransform – obj2obj transform used to model the effects of attenuation in SPECT. This transform accepts either an attenuation_map (which must be aligned with the SPECT projection data) or a filepath consisting of a folder containing CT DICOM files all pertaining to the same scan.
SPECTPSFTransform – obj2obj transform used to model the effects of PSF blurring in SPECT. The smoothing kernel is a Gaussian with width \(\sigma\) dependent on the distance of the point to the detector; that information is specified in the SPECTPSFMeta parameter.
CutOffTransform – proj2proj transformation used to set pixel values equal to zero at the first and last few z slices. This is often required when reconstructing DICOM data due to the finite field of view of the projection data, where additional axial slices are included on the top and bottom with zero measured detection events. This transform is included in the system matrix to model the sharp cutoff at the finite FOV.
PETAttenuationTransform – proj2proj mapping used to model the effects of attenuation in PET.
PETPSFTransform – proj2proj transform used to model the effects of PSF blurring in PET. The smoothing kernel is assumed to be independent of \(\theta\) and \(z\), but is dependent on \(r\).
KEMTransform – Object to object transform used to take in a coefficient image \(\alpha\) and return an image estimate \(f = K\alpha\). This transform implements the matrix \(K\).
- class pytomography.transforms.Transform[source]#
The parent class for all transforms used in reconstruction (obj2obj, im2im, obj2im). Subclasses must implement the __call__ method.
- Parameters:
device (str) – Pytorch device used for computation
- configure(object_meta, proj_meta)#
Configures the transform to the object/proj metadata. This is done after creating the network so that it can be adjusted to the system matrix.
- Parameters:
object_meta (ObjectMeta) – Object metadata.
proj_meta (ProjMeta) – Projections metadata.
- Return type:
None
- abstract forward(x)#
Abstract method; must be implemented in subclasses to apply a correction to an object/proj and return it
- Parameters:
x (torch.tensor) –
- abstract backward(x)#
Abstract method; must be implemented in subclasses to apply a correction to an object/proj and return it
- Parameters:
x (torch.tensor) –
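As a rough illustration of the interface above, the sketch below subclasses Transform with a trivial diagonal (elementwise scaling) operation. The class name ScaleTransform and its scale parameter are hypothetical; only the forward/backward signatures documented on this page are assumed.

```python
import torch
from pytomography.transforms import Transform

class ScaleTransform(Transform):
    """Hypothetical obj2obj transform that multiplies its input by a constant."""
    def __init__(self, scale: float = 2.0):
        super().__init__()  # assumes the base class supplies a default device
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Diagonal operator: scale every voxel of the object/projection
        return x * self.scale

    def backward(self, x: torch.Tensor) -> torch.Tensor:
        # The transpose of a diagonal matrix is itself, so backward == forward
        return x * self.scale
```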
- class pytomography.transforms.SPECTAttenuationTransform(attenuation_map=None, filepath=None)[source]#
Bases:
pytomography.transforms.Transform
obj2obj transform used to model the effects of attenuation in SPECT. This transform accepts either an attenuation_map (which must be aligned with the SPECT projection data) or a filepath consisting of a folder containing CT DICOM files all pertaining to the same scan.
- Parameters:
attenuation_map (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] corresponding to the attenuation coefficient in \({\text{cm}^{-1}}\) at the photon energy corresponding to the particular scan
filepath (Sequence[str]) – Folder location of CT scan; all .dcm files must correspond to different slices of the same scan.
- configure(object_meta, proj_meta)#
Function used to initialize the transform using corresponding object and projection metadata
- Parameters:
object_meta (SPECTObjectMeta) – Object metadata.
proj_meta (SPECTProjMeta) – Projection metadata.
- Return type:
None
- forward(object_i, ang_idx)#
Forward projection \(A:\mathbb{U} \to \mathbb{U}\) of attenuation correction.
- Parameters:
object_i (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being projected along axis=1.
ang_idx (torch.Tensor) – The projection indices: used to find the angle in projection space corresponding to each projection in object_i.
- Returns:
Tensor of size [batch_size, Lx, Ly, Lz] such that projection of this tensor along the first axis corresponds to an attenuation corrected projection.
- Return type:
torch.tensor
- backward(object_i, ang_idx, norm_constant=None)#
Back projection \(A^T:\mathbb{U} \to \mathbb{U}\) of attenuation correction. Since the matrix is diagonal, the implementation is the same as forward projection. The only difference is the optional normalization parameter.
- Parameters:
object_i (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being projected along axis=1.
ang_idx (torch.Tensor) – The projection indices: used to find the angle in projection space corresponding to each projection in object_i.
norm_constant (torch.tensor, optional) – A tensor used to normalize the output during back projection. Defaults to None.
- Returns:
Tensor of size [batch_size, Lx, Ly, Lz] such that projection of this tensor along the first axis corresponds to an attenuation corrected projection.
- Return type:
torch.tensor
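A minimal construction sketch for this transform is shown below; mu_map.pt and the CT folder path are hypothetical placeholders, and the transform is assumed to be handed to a SPECT system matrix (which calls configure) rather than used standalone.

```python
import torch
from pytomography.transforms import SPECTAttenuationTransform

# Option 1: build from an attenuation map of shape [batch_size, Lx, Ly, Lz]
# in cm^-1 at the scan's photon energy (hypothetical file path).
attenuation_map = torch.load('mu_map.pt')
att_transform = SPECTAttenuationTransform(attenuation_map=attenuation_map)

# Option 2: build from a folder of CT DICOM slices belonging to one scan
# (hypothetical folder path).
att_transform_ct = SPECTAttenuationTransform(filepath='path/to/CT_folder')
```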
- class pytomography.transforms.SPECTPSFTransform(psf_meta)[source]#
Bases:
pytomography.transforms.Transform
obj2obj transform used to model the effects of PSF blurring in SPECT. The smoothing kernel is a Gaussian with width \(\sigma\) dependent on the distance of the point to the detector; that information is specified in the SPECTPSFMeta parameter.
- Parameters:
psf_meta (SPECTPSFMeta) – Metadata corresponding to the parameters of PSF blurring
- configure(object_meta, proj_meta)#
Function used to initialize the transform using corresponding object and projection metadata
- Parameters:
object_meta (SPECTObjectMeta) – Object metadata.
proj_meta (SPECTProjMeta) – Projections metadata.
- Return type:
None
- _compute_kernel_size(radius, axis)#
Function used to compute the kernel size used for PSF blurring. In particular, uses the min_sigmas attribute of SPECTPSFMeta to determine what the kernel size should be such that the kernel encompasses at least min_sigmas at all points in the object.
- Returns:
The corresponding kernel size used for PSF blurring.
- Return type:
int
- _get_sigma(radius)#
Uses the PSF metadata to get the blurring \(\sigma\) as a function of distance from the detector.
- Parameters:
radius (float) – The distance from the detector.
- Returns:
An array of length Lx corresponding to blurring at each point along the 1st axis in object space
- Return type:
array
- _apply_psf(object, ang_idx)#
Applies PSF modeling to an object with corresponding angle indices
- Parameters:
object (torch.tensor) – Tensor of shape [batch_size, Lx, Ly, Lz] corresponding to the object rotated at different angles
ang_idx (Sequence[int]) – List of length batch_size corresponding to the angle of each object in the batch
- Returns:
Object with PSF modeling applied
- Return type:
torch.tensor
- forward(object_i, ang_idx)#
Applies the PSF transform \(A:\mathbb{U} \to \mathbb{U}\) for the situation where an object is being detected by a detector at the \(+x\) axis.
- Parameters:
object_i (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being projected along its first axis
ang_idx (int) – The projection indices: used to find the angle in projection space corresponding to each projection in object_i.
- Returns:
Tensor of size [batch_size, Lx, Ly, Lz] such that projection of this tensor along the first axis corresponds to a PSF corrected projection.
- Return type:
torch.tensor
- backward(object_i, ang_idx, norm_constant=None)#
Applies the transpose of the PSF transform \(A^T:\mathbb{U} \to \mathbb{U}\) for the situation where an object is being detected by a detector at the \(+x\) axis. Since the PSF transform is a symmetric matrix, its implementation is the same as the forward method.
- Parameters:
object_i (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] being projected along its first axis
ang_idx (int) – The projection indices: used to find the angle in projection space corresponding to each projection in object_i.
norm_constant (torch.tensor, optional) – A tensor used to normalize the output during back projection. Defaults to None.
- Returns:
Tensor of size [batch_size, Lx, Ly, Lz] such that projection of this tensor along the first axis corresponds to a PSF corrected projection.
- Return type:
torch.tensor
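The sketch below shows one plausible construction, assuming SPECTPSFMeta lives in pytomography.metadata and takes sigma_fit_params describing a linear collimator model \(\sigma(r) = ar + b\); the import path and numbers are illustrative assumptions.

```python
from pytomography.metadata import SPECTPSFMeta  # import path assumed
from pytomography.transforms import SPECTPSFTransform

# Assumed linear collimator model: sigma(r) = 0.03 * r + 0.1 (illustrative values)
psf_meta = SPECTPSFMeta(sigma_fit_params=(0.03, 0.1))
psf_transform = SPECTPSFTransform(psf_meta)
# psf_transform is then included as an obj2obj transform in the SPECT system
# matrix, which calls configure() with the object and projection metadata.
```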
- class pytomography.transforms.CutOffTransform(proj)[source]#
Bases:
pytomography.transforms.Transform
proj2proj transformation used to set pixel values equal to zero at the first and last few z slices. This is often required when reconstructing DICOM data due to the finite field of view of the projection data, where additional axial slices are included on the top and bottom, with zero measured detection events. This transform is included in the system matrix, to model the sharp cutoff at the finite FOV.
- Parameters:
proj (torch.tensor) – Measured projection data.
- forward(proj)#
Forward projection \(B:\mathbb{V} \to \mathbb{V}\) of the cutoff transform.
- Parameters:
proj (torch.Tensor) – Tensor of size [batch_size, Ltheta, Lr, Lz] to which the transform is applied
- Returns:
Original projections, but with certain z-slices equal to zero.
- Return type:
torch.tensor
- backward(proj, norm_constant=None)#
Back projection \(B^T:\mathbb{V} \to \mathbb{V}\) of the cutoff transform. Since this is a diagonal matrix, the implementation is the same as forward projection, but with the optional norm_constant argument.
- Parameters:
proj (torch.Tensor) – Tensor of size [batch_size, Ltheta, Lr, Lz] to which the transform is applied
norm_constant (torch.Tensor | None, optional) – A tensor used to normalize the output during back projection. Defaults to None.
- Returns:
Original projections, but with certain z-slices equal to zero.
- Return type:
torch.tensor
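The sketch below constructs the transform from measured projection data; the shapes are illustrative, and it is assumed that the zeroed axial slices are inferred from the projections passed at construction and that forward can be called directly afterwards.

```python
import torch
from pytomography.transforms import CutOffTransform

# Illustrative measured projection data of shape [batch_size, Ltheta, Lr, Lz];
# the first and last few z-slices are assumed to contain no measured counts.
projections = torch.zeros((1, 64, 128, 128))
projections[:, :, :, 4:-4] = 1.0

cutoff_transform = CutOffTransform(proj=projections)
# Applying the transform zeroes the corresponding z-slices of its input
# (assumption: forward works without a prior configure() call).
trimmed = cutoff_transform.forward(projections)
```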
- class pytomography.transforms.PETAttenuationTransform(CT)[source]#
Bases:
pytomography.transforms.Transform
proj2proj mapping used to model the effects of attenuation in PET.
- Parameters:
CT (torch.tensor) – Tensor of size [batch_size, Lx, Ly, Lz] corresponding to the attenuation coefficient in \({\text{cm}^{-1}}\) at a photon energy of 511keV.
device (str, optional) – Pytorch device used for computation. If None, uses the default device pytomography.device. Defaults to None.
- configure(object_meta, proj_meta)#
Function used to initialize the transform using corresponding object and projection metadata
- Parameters:
object_meta (ObjectMeta) – Object metadata.
proj_meta (ProjMeta) – Projection metadata.
- Return type:
None
- forward(proj)#
Applies forward projection of attenuation modeling \(B:\mathbb{V} \to \mathbb{V}\) to 2D PET projections.
- Parameters:
proj (torch.Tensor) – Tensor of size [batch_size, Ltheta, Lr, Lz] to which the transform is applied
- Returns:
Tensor of size [batch_size, Ltheta, Lr, Lz] corresponding to attenuation-corrected projections.
- Return type:
torch.Tensor
- backward(proj, norm_constant=None)#
Applies back projection of attenuation modeling \(B^T:\mathbb{V} \to \mathbb{V}\) to 2D PET projections. Since the matrix is diagonal, the backward implementation is identical to the forward implementation; the only difference is the optional norm_constant, which is needed if one wants to normalize the back projection.
- Parameters:
proj (torch.Tensor) – Tensor of size [batch_size, Ltheta, Lr, Lz] to which the transform is applied
norm_constant (torch.Tensor | None, optional) – A tensor used to normalize the output during back projection. Defaults to None.
- Returns:
Tensor of size [batch_size, Ltheta, Lr, Lz] corresponding to attenuation-corrected projections.
- Return type:
torch.tensor
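A construction sketch, with an illustrative zero attenuation map standing in for a real 511 keV \(\mu\)-map:

```python
import torch
from pytomography.transforms import PETAttenuationTransform

# Illustrative 511 keV attenuation map of shape [batch_size, Lx, Ly, Lz] in cm^-1
# (a real mu-map would come from a co-registered CT scan).
mu_511 = torch.zeros((1, 128, 128, 96))
att_transform = PETAttenuationTransform(CT=mu_511)
# The transform is configured by the PET system matrix and then applied
# as a proj2proj correction to the 2D PET projections.
```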
- class pytomography.transforms.PETPSFTransform(kerns)[source]#
Bases:
pytomography.transforms.Transform
proj2proj transform used to model the effects of PSF blurring in PET. The smoothing kernel is assumed to be independent of \(\theta\) and \(z\), but is dependent on \(r\).
- Parameters:
kerns (Sequence[callable]) – A sequence of PSF kernels applied to the Lr dimension of the projections with shape [batch_size, Lr, Ltheta, Lz]
- configure(object_meta, proj_meta)#
Function used to initialize the transform using corresponding object and projection metadata
- Parameters:
object_meta (ObjectMeta) – Object metadata.
proj_meta (ProjMeta) – Projection metadata.
- Return type:
None
- construct_matrix()#
Constructs the matrix used to apply PSF blurring.
- forward(proj)#
Applies the forward projection of PSF modeling \(B:\mathbb{V} \to \mathbb{V}\) to a PET proj.
- Parameters:
proj (torch.tensor) – Tensor of size [batch_size, Ltheta, Lr, Lz] corresponding to the projections
- Returns:
Tensor of size [batch_size, Ltheta, Lr, Lz] corresponding to the PSF corrected projections.
- Return type:
torch.tensor
- backward(proj, norm_constant=None)#
Applies the back projection of PSF modeling \(B^T:\mathbb{V} \to \mathbb{V}\) to PET projections.
- Parameters:
proj (torch.tensor) – Tensor of size [batch_size, Ltheta, Lr, Lz] corresponding to the projections
norm_constant (torch.tensor, optional) – A tensor used to normalize the output during back projection. Defaults to None.
- Returns:
Tensor of size [batch_size, Ltheta, Lr, Lz] corresponding to the PSF corrected projections.
- Return type:
torch.tensor
- class pytomography.transforms.KEMTransform(support_objects, support_kernels=None, support_kernels_params=None, distance_kernel=None, distance_kernel_params=None, size=5)#
Bases:
pytomography.transforms.Transform
Object to object transform used to take in a coefficient image \(\alpha\) and return an image estimate \(f = K\alpha\). This transform implements the matrix \(K\).
- Parameters:
support_objects (Sequence[torch.tensor]) – Objects used for support when building each basis function. These may correspond to PET/CT/MRI images, for example.
support_kernels (Sequence[Callable], optional) – A list of functions corresponding to the support kernel of each support object. If None, defaults to \(k(v_i, v_j; \sigma) = \exp\left(-\frac{(v_i-v_j)^2}{2\sigma^2} \right)\) for each support object. Defaults to None.
support_kernels_params (Sequence[Sequence[float]], optional) – A list of lists, where each sublist contains the additional parameters corresponding to each support kernel (the parameters that follow the semicolon in the expression above). As an example, if using the default configuration for support_kernels for two different support objects (say CT and PET), one could give support_kernel_params=[[40],[5]]. If None, defaults to a list of N*[[1]] where N is the number of support objects. Defaults to None.
distance_kernel (Callable, optional) – Kernel used to weight based on voxel-voxel distance. If None, defaults to \(k(x_i, x_j; \sigma) = \exp\left(-\frac{(x_i-x_j)^2}{2\sigma^2} \right)\). Defaults to None.
distance_kernel_params (Sequence[float], optional) – A list of parameters corresponding to additional parameters for the distance_kernel (i.e. the parameters that follow the semicolon in the expression above). If None, then defaults to \(\sigma=1\). Defaults to None.
size (int, optional) – The size of each kernel. Defaults to 5.
- forward(object)#
Forward transform corresponding to \(K\alpha\)
- Parameters:
object (torch.Tensor) – Coefficient image \(\alpha\)
- Returns:
Image \(K\alpha\)
- Return type:
torch.tensor
- backward(object, norm_constant=None)#
Backward transform corresponding to \(K^T\alpha\). Since the matrix is symmetric, the implementation is the same as forward.
- Parameters:
object (torch.Tensor) – Coefficient image \(\alpha\)
norm_constant (torch.Tensor | None, optional) – A tensor used to normalize the output during back projection. Defaults to None.
- Returns:
Image \(K^T\alpha\)
- Return type:
torch.tensor
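Finally, a construction sketch for KEMTransform, assuming a single anatomical support image and the default Gaussian kernels; the intensity-kernel width of 40 is illustrative (matching the example in the parameter description above).

```python
import torch
from pytomography.transforms import KEMTransform

# Illustrative anatomical support image of shape [batch_size, Lx, Ly, Lz]
# (e.g. a CT image) used to build the kernel matrix K.
support_image = torch.rand((1, 64, 64, 64))

kem_transform = KEMTransform(
    support_objects=[support_image],
    support_kernels_params=[[40]],  # sigma of the default intensity kernel
    size=5,                          # neighbourhood size for each basis function
)
# During kernelized reconstruction, forward maps coefficients alpha to the
# image estimate f = K alpha, and backward applies K^T.
```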