pytomography.projectors#
This module contains classes/functionality for operators that map between distinct vector spaces. One (very important) operator of this form is the system matrix \(H:\mathbb{U} \to \mathbb{V}\), which maps from object space \(\mathbb{U}\) to projection (image) space \(\mathbb{V}\).
Package Contents#
Classes#
- SystemMatrix – Abstract class for a general system matrix \(H:\mathbb{U} \to \mathbb{V}\) that maps an object \(f \in \mathbb{U}\) to the projections \(g \in \mathbb{V}\) produced by the imaging system.
- ExtendedSystemMatrix – Combines multiple system matrices into a single extended operator \(H' = \sum_n v_n \otimes B_n H_n A_n\), adding an additional dimension to the projection space.
- SPECTSystemMatrix – System matrix for SPECT imaging. By default, this applies to parallel-hole collimators, but appropriate use of proj2proj_transforms allows converging/diverging collimator configurations to be modeled as well.
- SPECTSystemMatrixMaskedSegments – SPECT system matrix where the object space is a vector of length \(N\) consisting of the mean activities for each mask in masks.
- PETLMSystemMatrix – System matrix for PET list-mode data; forward projection computes the expected counts along all specified LORs.
- KEMSystemMatrix – Given a KEM transform \(K\) and a system matrix \(H\), implements the transform \(HK\) (and backward transform \(K^T H^T\)).
- MotionSystemMatrix – Extended system matrix that pairs per-gate system matrices with motion transforms.
- class pytomography.projectors.SystemMatrix(obj2obj_transforms, proj2proj_transforms, object_meta, proj_meta)[source]#
Abstract class for a general system matrix \(H:\mathbb{U} \to \mathbb{V}\) which takes in an object \(f \in \mathbb{U}\) and maps it to the corresponding projections \(g \in \mathbb{V}\) that would be produced by the imaging system. A system matrix consists of sequences of object-to-object and proj-to-proj transforms that model various characteristics of the imaging system, such as attenuation and blurring. While the class implements the operator \(H:\mathbb{U} \to \mathbb{V}\) through the forward method, it also implements \(H^T:\mathbb{V} \to \mathbb{U}\) through the backward method, which is required by iterative reconstruction algorithms such as OSEM.
- Parameters:
obj2obj_transforms (Sequence[Transform]) – Sequence of object-to-object mappings applied before forward projection.
proj2proj_transforms (Sequence[Transform]) – Sequence of projection-to-projection mappings applied after forward projection.
object_meta (ObjectMeta) – Object metadata.
proj_meta (ProjMeta) – Projection metadata.
- abstract forward(object, **kwargs)[source]#
Implements forward projection \(Hf\) on an object \(f\).
- Parameters:
object (torch.tensor[batch_size, Lx, Ly, Lz]) – The object to be forward projected
angle_subset (list, optional) – Only uses a subset of angles (i.e. only certain values of \(j\) in the formula above) when forward projecting. Useful for ordered-subset reconstructions. Defaults to None, which assumes all angles are used.
- Returns:
Forward projected projections, where Ltheta is determined by self.proj_meta and angle_subset.
- Return type:
torch.tensor[batch_size, Ltheta, Lx, Lz]
- abstract backward(proj, angle_subset=None, return_norm_constant=False)[source]#
Implements back projection \(H^T g\) on a set of projections \(g\).
- Parameters:
proj (torch.Tensor) – Projections which are to be back projected.
angle_subset (list, optional) – Only uses a subset of angles (i.e. only certain values of \(j\) in the formula above) when back projecting. Useful for ordered-subset reconstructions. Defaults to None, which assumes all angles are used.
return_norm_constant (bool) – Whether or not to return \(1/\sum_j H_{ij}\) along with the back projection. Defaults to False.
- Returns:
the object obtained from back projection.
- Return type:
torch.tensor[batch_size, Lr, Lr, Lz]
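To make the roles of forward and backward concrete, below is a minimal sketch (not pytomography's own reconstruction code) of an MLEM-style update written directly against this interface. It assumes system_matrix is a concrete subclass that provides compute_normalization_factor (as the SPECT, PET list-mode and KEM system matrices documented below do) and that measured_proj is a projection tensor compatible with its proj_meta.

```python
import torch

def mlem_sketch(system_matrix, measured_proj, n_iters=10, eps=1e-11):
    # Normalization (sensitivity) factor H^T 1; assumed available on the concrete subclass.
    norm = system_matrix.compute_normalization_factor()
    # Uniform initial estimate in object space (same shape as the sensitivity image).
    f = torch.ones_like(norm)
    for _ in range(n_iters):
        forward_proj = system_matrix.forward(f)                # g_hat = H f
        ratio = measured_proj / (forward_proj + eps)           # elementwise data/model ratio
        f = f / (norm + eps) * system_matrix.backward(ratio)   # multiplicative EM update
    return f
```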
- class pytomography.projectors.ExtendedSystemMatrix(system_matrices, obj2obj_transforms=None, proj2proj_transforms=None)[source]#
Bases: SystemMatrix
Combines multiple system matrices \(H_n\) into a single extended operator \(H' = \sum_n v_n \otimes B_n H_n A_n\), where \(A_n\) and \(B_n\) are optional object-to-object and proj-to-proj transforms. Like its parent SystemMatrix, it implements the forward operator through the forward method and its transpose through the backward method, as required by iterative reconstruction algorithms such as OSEM.
- Parameters:
system_matrices (Sequence[SystemMatrix]) – Sequence of system matrices \(H_n\) to combine.
obj2obj_transforms (Sequence[Transform], optional) – Object-to-object transforms \(A_n\), one per system matrix, applied before each forward projection. Defaults to None.
proj2proj_transforms (Sequence[Transform], optional) – Proj-to-proj transforms \(B_n\), one per system matrix, applied after each forward projection. Defaults to None.
- forward(object, angle_subset=None)[source]#
Forward transform \(H' = \sum_n v_n \otimes B_n H_n A_n\). This adds an additional dimension to the projection space.
- Parameters:
object (torch.Tensor[1,Lx,Ly,Lz]) – Object to be forward projected. Must have a batch size of 1.
angle_subset (Sequence[int], optional) – Only uses a subset of angles (i.e. only certain values of \(j\) in the formula above) when forward projecting. Useful for ordered-subset reconstructions. Defaults to None, which assumes all angles are used.
- Returns:
Forward projection.
- Return type:
torch.Tensor[N_gates,…]
- backward(proj, angle_subset=None)[source]#
Back projection \(H'^T = \sum_n v_n^T \otimes A_n^T H_n^T B_n^T\). This maps an extended projection back to the original object space.
- Parameters:
proj (torch.Tensor[N,...]) – Projection data to be back-projected.
angle_subset (Sequence[int], optional) – Only uses a subset of angles (i.e. only certain values of \(j\) in the formula above) when back projecting. Useful for ordered-subset reconstructions. Defaults to None, which assumes all angles are used.
- Returns:
Back projection.
- Return type:
torch.Tensor[1,Lx,Ly,Lz]
- get_subset_splits(n_subsets)[source]#
Returns a list of subsets (where each subset contains indices corresponding to different angles). For example, if the projections consisted of 6 total angles, then get_subset_splits(2) would return [[0,2,4],[1,3,5]].
- Parameters:
n_subsets (int) – number of subsets used in OSEM
- Returns:
list of index arrays for each subset
- Return type:
list
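As a usage illustration, the sketch below combines two system matrices into a single extended operator. The names H_1, H_2, A_1, A_2 and f are placeholders for system matrices, object-to-object transforms and an object tensor assumed to have been constructed elsewhere; they are not defined on this page.

```python
from pytomography.projectors import ExtendedSystemMatrix

# H' = sum_n v_n ⊗ B_n H_n A_n with two terms; only object-space transforms A_n are supplied here.
H_ext = ExtendedSystemMatrix(
    system_matrices=[H_1, H_2],
    obj2obj_transforms=[A_1, A_2],
)

g_ext = H_ext.forward(f)      # f: [1, Lx, Ly, Lz] -> projections with an extra leading dimension (one entry per H_n)
f_bp = H_ext.backward(g_ext)  # extended projections -> single [1, Lx, Ly, Lz] object
```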
- class pytomography.projectors.SPECTSystemMatrix(obj2obj_transforms, proj2proj_transforms, object_meta, proj_meta, n_parallel=1)[source]#
Bases: pytomography.projectors.system_matrix.SystemMatrix
System matrix for SPECT imaging. By default, this applies to parallel-hole collimators, but appropriate use of proj2proj_transforms allows this system matrix to model converging/diverging collimator configurations as well.
- Parameters:
obj2obj_transforms (Sequence[Transform]) – Sequence of object mappings that occur before forward projection.
proj2proj_transforms (Sequence[Transform]) – Sequence of proj mappings that occur after forward projection.
object_meta (SPECTObjectMeta) – SPECT Object metadata.
proj_meta (SPECTProjMeta) – SPECT projection metadata.
n_parallel (int) – Number of projections to use in parallel when applying transforms. Using more projections in parallel may speed up reconstruction but increases GPU usage. Defaults to 1.
- compute_normalization_factor(subset_idx=None)[source]#
Function used to get normalization factor \(H^T_m 1\) corresponding to projection subset \(m\).
- Parameters:
subset_idx (int | None, optional) – Index of subset. If none, then considers all projections. Defaults to None.
- Returns:
normalization factor \(H^T_m 1\)
- Return type:
torch.Tensor
- set_n_subsets(n_subsets)[source]#
Sets the subsets for this system matrix given n_subsets total subsets.
- Parameters:
n_subsets (int) – number of subsets used in OSEM
- Return type:
list
- get_projection_subset(projections, subset_idx)[source]#
Gets the subset of projections \(g_m\) corresponding to index \(m\).
- Parameters:
projections (torch.tensor) – full projections \(g\)
subset_idx (int) – subset index \(m\)
- Returns:
subsampled projections \(g_m\)
- Return type:
torch.tensor
- get_weighting_subset(subset_idx)[source]#
Computes the relative weighting of a given subset (given that the projection space is reduced). This is used for scaling parameters relative to \(H_m^T 1\) in reconstruction algorithms, such as prior weighting \(\beta\)
- Parameters:
subset_idx (int) – Subset index
- Returns:
Weighting for the subset.
- Return type:
float
- forward(object, subset_idx=None)[source]#
Applies forward projection to object for a SPECT imaging system.
- Parameters:
object (torch.tensor[batch_size, Lx, Ly, Lz]) – The object to be forward projected
subset_idx (int, optional) – Only uses a subset of angles \(g_m\) corresponding to the provided subset index \(m\). If None, then defaults to the full projections \(g\).
- Returns:
forward projection estimate \(g_m=H_mf\)
- Return type:
torch.tensor
- backward(proj, subset_idx=None, return_norm_constant=False)[source]#
Applies back projection to proj for a SPECT imaging system.
- Parameters:
proj (torch.tensor) – projections \(g\) which are to be back projected
subset_idx (int, optional) – Only uses a subset of angles \(g_m\) corresponding to the provided subset index \(m\). If None, then defaults to the full projections \(g\).
return_norm_constant (bool) – Whether or not to return \(H_m^T 1\) along with the back projection. Defaults to False.
- Returns:
the object \(\hat{f} = H_m^T g_m\) obtained via back projection.
- Return type:
torch.tensor
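The following sketch shows a typical construction and a single ordered-subset update. It assumes object_meta/proj_meta are SPECTObjectMeta/SPECTProjMeta instances and att_transform/psf_transform are Transform instances, all built elsewhere (e.g. with pytomography's IO and transform utilities); g is the measured projection tensor and f the current object estimate.

```python
from pytomography.projectors import SPECTSystemMatrix

H = SPECTSystemMatrix(
    obj2obj_transforms=[att_transform, psf_transform],  # object-space modeling (e.g. attenuation, PSF)
    proj2proj_transforms=[],                            # none needed for plain parallel-hole modeling
    object_meta=object_meta,
    proj_meta=proj_meta,
    n_parallel=2,                                       # more parallel projections: faster, but more GPU memory
)

H.set_n_subsets(8)                                      # partition the angles into 8 OSEM subsets
g_0 = H.get_projection_subset(g, 0)                     # measured projections restricted to subset 0
norm_0 = H.compute_normalization_factor(subset_idx=0)   # H_0^T 1
ratio = g_0 / (H.forward(f, subset_idx=0) + 1e-11)      # data / model ratio on subset 0
f = f / (norm_0 + 1e-11) * H.backward(ratio, subset_idx=0)
```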
- class pytomography.projectors.SPECTSystemMatrixMaskedSegments(obj2obj_transforms, proj2proj_transforms, object_meta, proj_meta, masks)[source]#
Bases: SPECTSystemMatrix
SPECT system matrix where the object space is a vector of length \(N\) consisting of the mean activities for each mask in masks. This system matrix can be used in reconstruction algorithms to obtain maximum likelihood estimates of the average value of \(f\) inside each of the masks.
- Parameters:
obj2obj_transforms (Sequence[Transform]) – Sequence of object mappings that occur before forward projection.
proj2proj_transforms (Sequence[Transform]) – Sequence of proj mappings that occur after forward projection.
object_meta (SPECTObjectMeta) – SPECT Object metadata.
proj_meta (SPECTProjMeta) – SPECT proj metadata.
masks (torch.Tensor) – Masks corresponding to each segmented region.
- forward(activities, angle_subset=None)[source]#
Implements forward projection \(HUa\) on a vector of activities \(a\) corresponding to self.masks.
- Parameters:
activities (torch.tensor[batch_size, n_masks]) – Activities in each mask region.
angle_subset (list, optional) – Only uses a subset of angles (i.e. only certain values of \(j\) in the formula above) when forward projecting. Useful for ordered-subset reconstructions. Defaults to None, which assumes all angles are used.
- Returns:
Forward projected projections where Ltheta is specified by self.proj_meta and angle_subset.
- Return type:
torch.tensor[batch_size, Ltheta, Lx, Lz]
- backward(proj, angle_subset=None, prior=None, normalize=False, return_norm_constant=False)[source]#
Implements back projection \(U^T H^T g\) on projections \(g\), returning a vector of activities for each mask region.
- Parameters:
proj (torch.tensor[batch_size, Ltheta, Lr, Lz]) – projections which are to be back projected
angle_subset (list, optional) – Only uses a subset of angles (i.e. only certain values of \(j\) in formula above) when back projecting. Useful for ordered-subset reconstructions. Defaults to None, which assumes all angles are used.
prior (Prior, optional) – If included, modifies the normalizing factor to \(\frac{1}{\sum_j H_{ij} + P_i}\) where \(P_i\) is given by the prior. Used, for example, in MAP OSEM. Defaults to None.
normalize (bool) – Whether or not to divide the result by \(\sum_j H_{ij}\).
return_norm_constant (bool) – Whether or not to return \(1/\sum_j H_{ij}\) along with the back projection. Defaults to False.
- Returns:
the activities in each mask region.
- Return type:
torch.tensor[batch_size, n_masks]
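As an illustration, the sketch below projects a vector of per-region mean activities and back projects measured data into per-region values. It reuses the (assumed) transforms and metadata from the SPECTSystemMatrix example above; masks is assumed to be a tensor with one segmentation mask per region on the object grid, a the activity vector and g the measured projections.

```python
from pytomography.projectors import SPECTSystemMatrixMaskedSegments

H_seg = SPECTSystemMatrixMaskedSegments(
    obj2obj_transforms=[att_transform, psf_transform],
    proj2proj_transforms=[],
    object_meta=object_meta,
    proj_meta=proj_meta,
    masks=masks,                             # one mask per segmented region
)

g_pred = H_seg.forward(a)                    # a: [batch_size, n_masks] mean activities -> projections H U a
a_bp = H_seg.backward(g, normalize=True)     # projections -> normalized per-region values U^T H^T g
```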
- class pytomography.projectors.PETLMSystemMatrix(object_meta, proj_meta, obj2obj_transforms=[], attenuation_map=None, N_splits=1, device=pytomography.device)[source]#
Bases: pytomography.projectors.SystemMatrix
System matrix for PET list-mode data. Forward projection corresponds to computing the expected counts along all specified LORs: in particular, it approximates \(g_i = \int_{\text{LOR}_i} h(r) f(r) dr\), where index \(i\) corresponds to a particular detector pair and \(h(r)\) is a Gaussian function that incorporates time-of-flight information (\(h(r)=1\) for non-time-of-flight). The integral is approximated in the discrete object space using Joseph3D projections. In general, the system matrix implements two different projections: the quantity \(H\), which projects to LORs corresponding to all detected events, and the quantity \(\tilde{H}\), which projects to all valid LORs. The quantity \(H\) is used for standard forward/back projection, while \(\tilde{H}\) is used to compute the sensitivity image.
- Parameters:
object_meta (SPECTObjectMeta) – Metadata of object space, containing information on voxel size and dimensions.
proj_meta (PETLMProjMeta) – PET listmode projection space metadata. This information contains the detector ID pairs of all detected events, as well as a scanner lookup table and time-of-flight metadata. In addition, this metadata contains all information regarding event weights, typically corresponding to the effects of attenuation \(\mu\) and \(\eta\).
obj2obj_transforms (Sequence[Transform]) – Object to object space transforms applied before forward projection and after back projection. These are typically used for PSF modeling in PET imaging.
attenuation_map (torch.tensor[float] | None, optional) – Attenuation map used for attenuation modeling. If provided, all weights will be scaled by detection probabilities derived from this map. Note that this scales on top of any weights provided in proj_meta, so if attenuation is already accounted for there, this is not needed. Defaults to None.
N_splits (int) – Splits up computation of forward/back projection to save GPU memory. Defaults to 1.
device (str) – The device on which forward/back projection tensors are output. This is separate from pytomography.device, which handles internal computations. The reason for having the option of a second device is that the projection space may be very large, and certain GPUs may not have enough memory to store the projections. If device is not the same as pytomography.device, then one must also specify the same device in any reconstruction algorithm used. Defaults to pytomography.device.
- set_n_subsets(n_subsets)[source]#
Returns a list where each element consists of an array of indices corresponding to a partitioned version of the projections.
- Parameters:
n_subsets (int) – Number of subsets to partition the projections into
- Returns:
List of arrays where each array corresponds to the projection indices of a particular subset.
- Return type:
list
- get_projection_subset(projections, subset_idx)[source]#
Obtains subsampled projections \(g_m\) corresponding to subset index \(m\). For LM PET, it's always the case that \(g_m=1\), but this function is still required for subsampling scatter \(s_m\), as is required in certain reconstruction algorithms.
- Parameters:
projections (torch.Tensor) – total projections \(g\)
subset_idx (int) – subset index \(m\)
- Returns:
subsampled projections \(g_m\).
- Return type:
torch.Tensor
- get_weighting_subset(subset_idx)[source]#
Computes the relative weighting of a given subset (given that the projection space is reduced). This is used for scaling parameters relative to \(\tilde{H}_m^T 1\) in reconstruction algorithms, such as prior weighting \(\beta\)
- Parameters:
subset_idx (int) – Subset index
- Returns:
Weighting for the subset.
- Return type:
float
- compute_atteunation_probability_projection(idx)[source]#
Computes probabilities of photons being detected along the LORs corresponding to idx.
- Parameters:
idx (torch.tensor) – Indices of the detector pairs.
- Returns:
The probabilities of photons being detected along the detector pairs.
- Return type:
torch.Tensor
- compute_sens_factor(N_splits=10)[source]#
Computes the normalization factor \(\tilde{H}^T w\) where \(w\) is the weighting specified in the projection metadata that accounts for attenuation/normalization correction.
- Parameters:
N_splits (int, optional) – Optionally splits up computation to save memory on GPU. Defaults to 10.
- compute_normalization_factor(subset_idx=None)[source]#
Function called by reconstruction algorithms to get the sensitivity image \(\tilde{H}_m^T w\).
- Parameters:
subset_idx (int | None, optional) – Subset index \(m\). If none, then considers backprojection over all subsets. Defaults to None.
- Returns:
Normalization factor.
- Return type:
torch.tensor
- forward(object, subset_idx=None)[source]#
Computes forward projection. In the case of list mode PET, this corresponds to the expected number of detected counts along each LOR corresponding to a particular object.
- Parameters:
object (torch.tensor) – Object to be forward projected
subset_idx (int, optional) – Subset index \(m\) of the projection. If None, then assumes projection to the entire projection space. Defaults to None.
- Returns:
Projections corresponding to the expected number of counts along each LOR.
- Return type:
torch.tensor
- backward(proj, subset_idx=None, return_norm_constant=False)[source]#
Computes back projection. This corresponds to tracing a sequence of LORs into object space.
- Parameters:
proj (torch.tensor) – Projections to be back projected
subset_idx (int, optional) – Subset index \(m\) of the projection. If None, then assumes projection to the entire projection space. Defaults to None.
return_norm_constant (bool, optional) – Whether or not to return the normalization constant: useful in reconstruction algorithms that require \(H_m^T 1\). Defaults to False.
- Returns:
Object obtained from back projection.
- Return type:
torch.tensor
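A short usage sketch follows. It assumes object_meta and proj_meta (a PETLMProjMeta containing the detector-ID pairs of all detected events) have been built elsewhere, e.g. with pytomography's listmode IO utilities, that mu_map is an attenuation map on the object grid, and that f is a current object estimate; whether attenuation belongs here or in the proj_meta weights depends on how those weights were prepared.

```python
import torch
from pytomography.projectors import PETLMSystemMatrix

H = PETLMSystemMatrix(
    object_meta,
    proj_meta,
    obj2obj_transforms=[],   # optionally add an object-space PSF transform here
    attenuation_map=mu_map,  # omit if attenuation is already included in the proj_meta weights
    N_splits=4,              # split projection computation into 4 chunks to limit GPU memory
)

sens = H.compute_normalization_factor()   # sensitivity image \tilde{H}^T w over all valid LORs
g_hat = H.forward(f)                      # expected counts along each detected LOR
# In list mode each detected event contributes one count, so the EM data term back projects 1 / g_hat:
f_bp = H.backward(torch.ones_like(g_hat) / (g_hat + 1e-11))
```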
- class pytomography.projectors.KEMSystemMatrix(system_matrix, kem_transform)[source]#
Bases: pytomography.projectors.system_matrix.SystemMatrix
Given a KEM transform \(K\) and a system matrix \(H\), implements the transform \(HK\) (and backward transform \(K^T H^T\))
- Parameters:
system_matrix (SystemMatrix) – System matrix corresponding to a particular imaging system
kem_transform (KEMTransform) – Transform used to go from coefficient image to real image of predicted counts.
- compute_normalization_factor(subset_idx=None)[source]#
Function used to get normalization factor \(K^T H^T_m 1\) corresponding to projection subset \(m\).
- Parameters:
subset_idx (int | None, optional) – Index of subset. If none, then considers all projections. Defaults to None.
- Returns:
normalization factor \(K^T H^T_m 1\)
- Return type:
torch.Tensor
- forward(object, subset_idx=None)[source]#
Forward transform \(HK\)
- Parameters:
object (torch.tensor) – Object to be forward projected
subset_idx (int, optional) – Only uses a subset of angles \(g_m\) corresponding to the provided subset index \(m\). If None, then defaults to the full projections \(g\).
- Returns:
Corresponding projections generated from forward projection
- Return type:
torch.tensor
- backward(proj, subset_idx=None, return_norm_constant=False)[source]#
Backward transform \(K^T H^T\)
- Parameters:
proj (torch.tensor) – Projection data to be back projected
subset_idx (int, optional) – Only uses a subset of angles \(g_m\) corresponding to the provided subset index \(m\). If None, then defaults to the full projections \(g\).
return_norm_constant (bool, optional) – Additionally returns \(K^T H^T 1\) if true; defaults to False.
- Returns:
Corresponding object generated from back projection.
- Return type:
torch.tensor
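For illustration, the sketch below wraps an existing system matrix with a KEM transform; H is any SystemMatrix (e.g. the SPECTSystemMatrix built above) and kem_transform is a KEMTransform assumed to have been constructed elsewhere from anatomical/support images, with alpha a coefficient image and g measured projections.

```python
from pytomography.projectors import KEMSystemMatrix

H_kem = KEMSystemMatrix(system_matrix=H, kem_transform=kem_transform)

# Reconstruction then operates on the coefficient image alpha rather than on f directly:
g_hat = H_kem.forward(alpha)   # H K alpha
alpha_bp = H_kem.backward(g)   # K^T H^T g
# The activity image is recovered from the final coefficients as f = K alpha (via the KEM transform).
```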
- class pytomography.projectors.MotionSystemMatrix(system_matrices, motion_transforms)[source]#
Bases: pytomography.projectors.system_matrix.ExtendedSystemMatrix
System matrix for motion-corrected reconstruction: an ExtendedSystemMatrix that pairs each supplied system matrix with a corresponding motion transform. Like its parent classes, it implements the forward operator through the forward method and its transpose through the backward method, as required by iterative reconstruction algorithms such as OSEM.
- Parameters:
system_matrices (Sequence[SystemMatrix]) – Sequence of system matrices, one per motion state (gate).
motion_transforms (Sequence[Transform]) – Sequence of motion transforms, one per system matrix.
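A brief usage sketch: each of H_gate1, H_gate2, H_gate3 is a system matrix for one motion state and each motion transform maps the reference object to the corresponding state. All of these names, and the object f, are placeholders assumed to have been constructed elsewhere.

```python
from pytomography.projectors import MotionSystemMatrix

H_motion = MotionSystemMatrix(
    system_matrices=[H_gate1, H_gate2, H_gate3],
    motion_transforms=[motion_to_gate1, motion_to_gate2, motion_to_gate3],
)

g_gated = H_motion.forward(f)        # projections with one leading entry per motion state
f_bp = H_motion.backward(g_gated)    # single back projection in the reference frame
```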