Kpytorch API Reference¶
The kpytorch submodule includes a miscellaneous set of classes and methods which rely on pytorch for automatic differentiation.
- class knockpy.kpytorch.deeppink.DeepPinkModel(p, inds, rev_inds, hidden_sizes=[64], y_dist='gaussian')[source]¶

Methods
__call__(*input, **kwargs): Call self as a function.
add_module(name, module): Adds a child module to the current module.
apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self.
buffers([recurse]): Returns an iterator over module buffers.
children(): Returns an iterator over immediate children modules.
cpu(): Moves all model parameters and buffers to the CPU.
cuda([device]): Moves all model parameters and buffers to the GPU.
double(): Casts all floating point parameters and buffers to double datatype.
eval(): Sets the module in evaluation mode.
extra_repr(): Set the extra representation of the module.
float(): Casts all floating point parameters and buffers to float datatype.
forward(features): NOTE: FEATURES CANNOT BE SHUFFLED
half(): Casts all floating point parameters and buffers to half datatype.
load_state_dict(state_dict[, strict]): Copies parameters and buffers from state_dict into this module and its descendants.
modules(): Returns an iterator over all modules in the network.
named_buffers([prefix, recurse]): Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children(): Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix]): Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse]): Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]): Returns an iterator over module parameters.
predict(features): Wraps forward method, for compatibility with sklearn classes.
register_backward_hook(hook): Registers a backward hook on the module.
register_buffer(name, tensor): Adds a persistent buffer to the module.
register_forward_hook(hook): Registers a forward hook on the module.
register_forward_pre_hook(hook): Registers a forward pre-hook on the module.
register_parameter(name, param): Adds a parameter to the module.
requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
state_dict([destination, prefix, keep_vars]): Returns a dictionary containing a whole state of the module.
to(*args, **kwargs): Moves and/or casts the parameters and buffers.
train([mode]): Sets the module in training mode.
type(dst_type): Casts all parameters and buffers to dst_type.
zero_grad(): Sets gradients of all model parameters to zero.
Attributes
Z_regularizer
feature_importances
l1norm
l2norm
normalize_Z_weight
share_memory
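The forward pass and feature_importances follow the DeepPINK recipe: each feature and its knockoff enter the network through a pairwise filter layer, and importances combine each filter weight with the norm of the downstream weights attached to that input. A minimal numpy sketch of that combination (illustrative only; the variable names and the exact norm used here are assumptions, not knockpy's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
p, h = 5, 8

# Pairwise filter weights: z[j] multiplies feature j, z_tilde[j] its knockoff.
# In DeepPINK each (feature, knockoff) pair gets its own filter weights.
z = rng.normal(size=p)
z_tilde = rng.normal(size=p)

# First dense layer after the filter (p inputs -> h hidden units)
W1 = rng.normal(size=(h, p))

# Importance of each input: |filter weight| times the norm of the
# downstream weights attached to that input
feature_imp = np.abs(z) * np.linalg.norm(W1, axis=0)
knockoff_imp = np.abs(z_tilde) * np.linalg.norm(W1, axis=0)

# Knockoff statistic: the gap between a feature's importance and its knockoff's
W_stat = feature_imp - knockoff_imp
print(W_stat.shape)  # (5,)
```

A large positive W_stat[j] suggests feature j carries real signal; features whose importance is indistinguishable from their knockoff's get statistics near zero.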
Gradient-based methods for solving MRC problems. Currently only used for group-knockoffs.
- class knockpy.kpytorch.mrcgrad.MVRLoss(Sigma, groups, init_S=None, invSigma=None, rec_prop=0, smoothing=0.01, min_smoothing=0.0001, method='mvr')[source]¶

A pytorch class to compute S-matrices for (Gaussian) MX knockoffs which minimizes the trace of the feature-knockoff precision matrix (the inverse of the feature-knockoff covariance/Gram matrix, G).
- Parameters
  Sigma: p x p numpy matrix. Must already be sorted by groups.
  groups: p-length numpy array of groups. These must already be sorted and correspond to Sigma.
  init_S: Initialization values for the block-diagonal S matrix. Either a p x p matrix, whose block-diagonal (as specified by groups) gives the initial values for S, or a list of square numpy matrices, with the ith element corresponding to the block of the ith group in S. Default: half of the identity.
  rec_prop: The proportion of knockoffs that will be recycled. (The optimal S matrix depends on the recycling proportion.)
  smoothing: Calculate the loss as sum 1/(eigs + smoothing) as opposed to sum 1/eigs. This is helpful when fitting lasso statistics on extremely degenerate covariance matrices. Over the course of optimization, this smoothing parameter will go to 0.
  method: One of "mvr" or "mmi".
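Both objectives from the parameter list can be written down directly. A small numpy sketch of the unsmoothed and smoothed losses for a toy 2 x 2 Sigma (the diagonal parametrization S = s * I is an assumption for brevity; MVRLoss handles arbitrary group blocks):

```python
import numpy as np

# Toy 2 x 2 covariance matrix and a diagonal S = s * I
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
s = 0.5
S = s * np.eye(2)

# Gram matrix G of the joint feature-knockoff distribution (X, X_knockoff)
G = np.block([[Sigma, Sigma - S], [Sigma - S, Sigma]])

# MVR loss: trace of the precision matrix G^{-1}
mvr_loss = np.trace(np.linalg.inv(G))

# Smoothed variant used early in optimization: sum of 1 / (eigenvalue + smoothing)
eigs = np.linalg.eigvalsh(G)
smoothed_loss = np.sum(1.0 / (eigs + 0.01))

print(round(mvr_loss, 2))  # 6.4 for this Sigma and s
```

For this Sigma, G has eigenvalues {2.5, 0.5, 0.5, 0.5}, so the unsmoothed loss is 1/2.5 + 3/0.5 = 6.4; the smoothing shifts every eigenvalue up slightly, which keeps the loss finite even when G is degenerate.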
Methods
__call__(*input, **kwargs): Call self as a function.
add_module(name, module): Adds a child module to the current module.
apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self.
buffers([recurse]): Returns an iterator over module buffers.
children(): Returns an iterator over immediate children modules.
cpu(): Moves all model parameters and buffers to the CPU.
cuda([device]): Moves all model parameters and buffers to the GPU.
double(): Casts all floating point parameters and buffers to double datatype.
eval(): Sets the module in evaluation mode.
extra_repr(): Set the extra representation of the module.
float(): Casts all floating point parameters and buffers to float datatype.
forward([smoothing]): Calculates trace of inverse Gram feature-knockoff matrix.
half(): Casts all floating point parameters and buffers to half datatype.
load_state_dict(state_dict[, strict]): Copies parameters and buffers from state_dict into this module and its descendants.
modules(): Returns an iterator over all modules in the network.
named_buffers([prefix, recurse]): Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children(): Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix]): Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse]): Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]): Returns an iterator over module parameters.
project(**kwargs): Project by scaling sqrt_S.
pull_S(): Returns the S matrix.
register_backward_hook(hook): Registers a backward hook on the module.
register_buffer(name, tensor): Adds a persistent buffer to the module.
register_forward_hook(hook): Registers a forward hook on the module.
register_forward_pre_hook(hook): Registers a forward pre-hook on the module.
register_parameter(name, param): Adds a parameter to the module.
requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
scale_sqrt_S(tol, num_iter): Scales sqrt_S such that 2 Sigma - S is PSD.
state_dict([destination, prefix, keep_vars]): Returns a dictionary containing a whole state of the module.
to(*args, **kwargs): Moves and/or casts the parameters and buffers.
train([mode]): Sets the module in training mode.
type(dst_type): Casts all parameters and buffers to dst_type.
update_sqrt_S(): Updates sqrt_S using the block parameters.
zero_grad(): Sets gradients of all model parameters to zero.

Attributes
share_memory
- class knockpy.kpytorch.mrcgrad.PSGDSolver(Sigma, groups, losscalc=None, lr=0.01, verbose=False, max_epochs=100, tol=1e-05, line_search_iter=10, convergence_tol=0.1, **kwargs)[source]¶

Projected gradient descent to solve for MRC knockoffs. This will work for non-convex loss objectives as well, although it is a heuristic optimization method.

- Parameters
  Sigma: p x p numpy array, the correlation matrix.
  groups: p-length numpy array specifying groups.
  losscalc: A pytorch class wrapping nn.Module which contains the following methods: .forward(), which calculates the loss based on the internally stored S matrix; .project(), which ensures that both the internally stored S matrix and (2*Sigma - S) are PSD; and .pull_S(), which returns the internally stored S matrix. If None, creates an MVRLoss class.
  lr: Initial learning rate (default 1e-2).
  verbose: If True, reports progress.
  max_epochs: Maximum number of epochs in SGD.
  tol: Minimum eigenvalue allowed for PSD matrices.
  line_search_iter: Number of line searches to do when scaling sqrt_S.
  convergence_tol: After each projection, we calculate improvement = 2/3 * ||prev_opt_S - opt_S||_1 + 1/3 * (previous improvement). When improvement < convergence_tol, we return.
  kwargs: Passed to MVRLoss.
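The solver's loop, a gradient step on the loss followed by a projection that rescales S until 2*Sigma - S is PSD, can be sketched in numpy. This toy version uses a diagonal S and forward-difference gradients in place of autograd; the function names mirror the docstring's description but are not the knockpy implementation:

```python
import numpy as np

def mvr_loss(Sigma, s):
    """Trace of the inverse feature-knockoff Gram matrix for S = diag(s)."""
    S = np.diag(s)
    G = np.block([[Sigma, Sigma - S], [Sigma - S, Sigma]])
    return np.trace(np.linalg.inv(G))

def project(Sigma, s, tol=1e-5, num_iter=10):
    """Shrink s until both S and 2*Sigma - S are PSD (cf. scale_sqrt_S)."""
    s = np.maximum(s, tol)
    for _ in range(num_iter):
        if np.linalg.eigvalsh(2 * Sigma - np.diag(s)).min() >= tol:
            break
        s = 0.5 * s
    return s

Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
s = np.full(2, 0.5)        # init: half of the identity
lr, eps = 0.05, 1e-6

for _ in range(200):
    # Forward-difference gradient; the real solver uses pytorch autograd
    base = mvr_loss(Sigma, s)
    grad = np.array([
        (mvr_loss(Sigma, s + eps * np.eye(2)[j]) - base) / eps
        for j in range(2)
    ])
    s = project(Sigma, s - lr * grad)

print(np.round(s, 3))
```

For this equicorrelated Sigma the iterates converge to a symmetric s of roughly 0.58 per coordinate, strictly between the trivial S = 0 and the boundary where 2*Sigma - S stops being PSD.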
Methods
optimize(): See __init__ for arguments.
cache_S
- knockpy.kpytorch.mrcgrad.block_diag_sparse(*arrs)[source]¶

Given a list of 2D torch tensors, creates a sparse block-diagonal matrix. See https://github.com/pytorch/pytorch/issues/31942
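The index bookkeeping behind a sparse block-diagonal construction can be sketched in pure numpy; a sparse COO tensor is built from the same (row, column, value) triplets. The helper name here is hypothetical, not the knockpy function:

```python
import numpy as np

def block_diag_indices(arrs):
    """COO (row, col, value) triplets for a block-diagonal matrix
    built from a list of square 2D arrays."""
    rows, cols, vals = [], [], []
    offset = 0
    for A in arrs:
        # All positions within the current block, shifted by the running offset
        r, c = np.nonzero(np.ones_like(A))
        rows.append(r + offset)
        cols.append(c + offset)
        vals.append(A[r, c])
        offset += A.shape[0]
    return np.concatenate(rows), np.concatenate(cols), np.concatenate(vals)

blocks = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[5.0]])]
rows, cols, vals = block_diag_indices(blocks)

# Densify to check: a 3 x 3 matrix with the two blocks on the diagonal
dense = np.zeros((3, 3))
dense[rows, cols] = vals
print(dense)
```

Keeping the matrix sparse matters for group knockoffs: S is block-diagonal by construction, so storing only the within-block entries avoids materializing a dense p x p parameter.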
- knockpy.kpytorch.mrcgrad.solve_mrc_psgd(Sigma, groups=None, method='mvr', **kwargs)[source]¶

Wraps the PSGDSolver class.

- Parameters
  Sigma: Covariance matrix.
  groups: Groups for group knockoffs.
  method: MRC loss ("mvr" or "mmi").
  init_kwargs: kwargs to pass to PSGDSolver.
  optimize_kwargs: kwargs to pass to the optimizer method.

- Returns
  opt_S