causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity

Module Contents

Functions

compute_sensitivity_bias

compute_sensitivity_bias_local

combine_nu2

pulltheta_se_ci

compute_bias_aware_ci

format_bias_aware_summary

get_sensitivity_summary

sensitivity_benchmark

Benchmark multi-treatment sensitivity by comparing the fitted long model to a genuinely refit short model with the benchmarking covariates removed.

sensitivity_analysis

Data

__all__

API

causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.__all__

['sensitivity_analysis', 'get_sensitivity_summary', 'sensitivity_benchmark', 'compute_bias_aware_ci'…

causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.compute_sensitivity_bias(sigma2: Union[float, numpy.ndarray], nu2: Union[float, numpy.ndarray], psi_sigma2: numpy.ndarray, psi_nu2: numpy.ndarray) -> Tuple[numpy.ndarray, numpy.ndarray]
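A minimal sketch of what such a bias computation typically looks like, assuming the standard double-ML sensitivity bound in which the worst-case bias is `sqrt(sigma2 * nu2)` and its influence function follows from the delta method. The function and variable names below are illustrative, not taken from the library:

```python
import numpy as np

# Hypothetical sketch: worst-case bias bound sqrt(sigma2 * nu2) and a
# delta-method combination of the two influence functions. This is an
# assumption about the computation, not the library's actual code.
def max_bias_sketch(sigma2, nu2, psi_sigma2, psi_nu2):
    max_bias = np.sqrt(sigma2 * nu2)
    # delta method: d/dx sqrt(xy) = y / (2 sqrt(xy)), applied term-wise
    psi_max_bias = (sigma2 * psi_nu2 + nu2 * psi_sigma2) / (2.0 * max_bias)
    return max_bias, psi_max_bias

max_bias, psi = max_bias_sketch(1.5, 0.04, np.zeros(50), np.zeros(50))
```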
causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.compute_sensitivity_bias_local(sigma2: Union[float, numpy.ndarray], nu2: Union[float, numpy.ndarray], psi_sigma2: numpy.ndarray, psi_nu2: numpy.ndarray) -> Tuple[numpy.ndarray, numpy.ndarray]
causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.combine_nu2(m_alpha: numpy.ndarray, rr: numpy.ndarray, cf_y: float, r2_d: Union[float, numpy.ndarray], rho: Union[float, numpy.ndarray], use_signed_rr: bool = False) -> Tuple[Union[float, numpy.ndarray], numpy.ndarray]
causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.pulltheta_se_ci(effect_estimation: Any, alpha: float) -> Tuple[Union[float, numpy.ndarray], Union[float, numpy.ndarray], Union[Tuple[float, float], numpy.ndarray]]
causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.compute_bias_aware_ci(effect_estimation: Dict[str, Any] | Any, _=None, cf_y: float = 0.0, r2_d: Union[float, numpy.ndarray] = 0.0, rho: Union[float, numpy.ndarray] = 1.0, H0: float = 0.0, alpha: float = 0.05, use_signed_rr: bool = False) -> Dict[str, Any]
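A minimal sketch of the bias-aware interval idea, assuming it widens the usual Wald CI by the worst-case bias on each side. The helper name and the numbers below are illustrative; the library's actual computation may differ:

```python
from statistics import NormalDist

# Hypothetical sketch: a bias-aware CI adds the worst-case bias to the
# usual z * se half-width. Names and values here are illustrative.
def bias_aware_ci_sketch(theta, se, max_bias, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    half_width = z * se + max_bias           # sampling noise + worst-case bias
    return theta - half_width, theta + half_width

lo, hi = bias_aware_ci_sketch(theta=0.5, se=0.1, max_bias=0.08)
```

With `max_bias = 0`, this reduces to the ordinary Wald interval.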
causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.format_bias_aware_summary(res: Dict[str, Any], label: str | None = None) -> str
causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.get_sensitivity_summary(effect_estimation: Dict[str, Any] | Any, _=None, label: Optional[str] = None) -> Optional[str]
causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.sensitivity_benchmark(effect_estimation: Dict[str, Any] | Any, benchmarking_set: List[str], fit_args: Optional[Dict[str, Any]] = None) -> pandas.DataFrame

Benchmark multi-treatment sensitivity by comparing the fitted long model to a genuinely refit short model with the benchmarking covariates removed.

This function intentionally performs the short-model refit because theta_short and delta are defined by that re-estimation step. The residual-based strengths alone are not enough to recover those values.

Parameters

effect_estimation : dict or Any
    Estimate/model container exposing a fitted MultiTreatmentIRM-like model.
benchmarking_set : list[str]
    Confounders to remove from the short model.
fit_args : dict, optional
    Additional keyword arguments passed to MultiTreatmentIRM.estimate(...) on the short model.

Runtime is typically dominated by the short-model fit() rather than the residual-regression calculations below. To reduce wall-clock time without changing semantics, fit the original long model with a suitable n_jobs value; that setting is forwarded into the short refit.

Returns

pandas.DataFrame
    A contrast-indexed DataFrame containing cf_y, r2_y, r2_d, rho, theta_long, theta_short, and delta.
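A minimal sketch of the returned shape, with made-up numbers: one row per treatment contrast, and delta defined as theta_short minus theta_long per the refit description above. The index label "d1_vs_d0" is illustrative:

```python
import pandas as pd

# Illustrative mock of the benchmark output; values are invented.
df = pd.DataFrame(
    {
        "cf_y": [0.03], "r2_y": [0.12], "r2_d": [0.05], "rho": [1.0],
        "theta_long": [0.48], "theta_short": [0.55],
    },
    index=["d1_vs_d0"],
)
# delta is the shift in the point estimate caused by dropping the
# benchmarking covariates from the short model
df["delta"] = df["theta_short"] - df["theta_long"]
```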

causalis.scenarios.multi_unconfoundedness.refutation.unconfoundedness.sensitivity.sensitivity_analysis(effect_estimation: Dict[str, Any] | Any, _=None, cf_y: Optional[float] = None, r2_y: Optional[float] = None, r2_d: Union[float, numpy.ndarray] = 0.0, rho: Union[float, numpy.ndarray] = 1.0, H0: float = 0.0, alpha: float = 0.05, use_signed_rr: bool = False) -> Dict[str, Any]