ise.evaluation package

Submodules

ise.evaluation.metrics module

ise.evaluation.metrics.calculate_ece(predictions, uncertainties, true_values, bins=10)[source]

Calculate the Expected Calibration Error (ECE) for regression model predictions.

Args:
- predictions (numpy.ndarray): Array of means predicted by the model.
- uncertainties (numpy.ndarray): Array of predicted standard deviations (uncertainty estimates).
- true_values (numpy.ndarray): Array of actual values.
- bins (int): Number of bins used to group predictions by their uncertainty.

Returns:
- float: The Expected Calibration Error.
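The docstring leaves the binning scheme implicit. One common construction for regression ECE, shown below as a hypothetical sketch (the actual `calculate_ece` may bin or aggregate differently), groups samples by predicted uncertainty and compares the predicted spread with the observed error in each bin:

```python
import numpy as np

def calculate_ece_sketch(predictions, uncertainties, true_values, bins=10):
    """Bin samples by predicted standard deviation, then compare each bin's
    mean predicted std against the empirical RMSE of its errors; ECE is the
    count-weighted mean absolute gap. Hypothetical re-implementation; the
    actual calculate_ece may differ."""
    predictions = np.asarray(predictions, dtype=float)
    uncertainties = np.asarray(uncertainties, dtype=float)
    true_values = np.asarray(true_values, dtype=float)

    # Equal-width bins over the observed uncertainty range.
    edges = np.linspace(uncertainties.min(), uncertainties.max(), bins + 1)
    idx = np.clip(np.digitize(uncertainties, edges) - 1, 0, bins - 1)

    ece, n = 0.0, len(predictions)
    for b in range(bins):
        mask = idx == b
        if not mask.any():
            continue
        rmse = np.sqrt(np.mean((predictions[mask] - true_values[mask]) ** 2))
        ece += (mask.sum() / n) * abs(uncertainties[mask].mean() - rmse)
    return ece
```

A perfectly calibrated model (predicted std matching empirical RMSE in every bin) yields an ECE of zero under this construction.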

ise.evaluation.metrics.crps(y_true, y_pred, y_std)[source]
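The `crps` entry has no docstring. If `y_pred` and `y_std` parameterize Gaussian predictive distributions, CRPS has a well-known closed form; the sketch below assumes that convention (the actual `ise` implementation may compute it differently):

```python
import numpy as np
from math import erf, pi, sqrt

def crps_gaussian(y_true, y_pred, y_std):
    """Mean CRPS of Gaussian forecasts N(y_pred, y_std**2) against y_true.
    Assumption: Gaussian predictive distributions; not necessarily what
    ise.evaluation.metrics.crps implements."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    y_std = np.asarray(y_std, dtype=float)
    z = (y_true - y_pred) / y_std
    # Standard normal CDF and PDF evaluated at the standardized error z.
    cdf = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in np.ravel(z)]).reshape(z.shape))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    # Closed form: sigma * (z * (2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi))
    return float(np.mean(y_std * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / sqrt(pi))))
```

Lower is better; even a perfect point prediction with nonzero spread incurs a positive CRPS.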
ise.evaluation.metrics.js_divergence(p: ndarray, q: ndarray)[source]

Calculates the Jensen-Shannon Divergence between two distributions.
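As a sketch of what this computes, assuming the inputs are discrete probability vectors (the library may instead estimate densities from samples first):

```python
import numpy as np

def js_divergence_sketch(p, q):
    """Jensen-Shannon divergence between two discrete distributions,
    in nats. Hypothetical re-implementation for illustration."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()      # normalize to valid distributions
    m = 0.5 * (p + q)                    # mixture distribution

    def kl(a, b):
        mask = a > 0                     # 0 * log(0/x) contributes zero
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike KL divergence, JS divergence is symmetric and bounded above by ln 2 (about 0.693), the value attained for distributions with disjoint support.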

ise.evaluation.metrics.kl_divergence(p: ndarray, q: ndarray)[source]

Calculates the Kullback-Leibler Divergence between two distributions.
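A minimal sketch of the standard discrete KL divergence, again assuming probability-vector inputs (hypothetical re-implementation, not the library's code):

```python
import numpy as np

def kl_divergence_sketch(p, q):
    """KL(p || q) for discrete distributions, in nats. Zero-probability
    entries of p contribute nothing; q must be positive wherever p is.
    Hypothetical re-implementation for illustration."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

Note that KL divergence is asymmetric: KL(p || q) and KL(q || p) generally differ.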

ise.evaluation.metrics.kolmogorov_smirnov(x1, x2)[source]
ise.evaluation.metrics.mape(y_true, y_pred)[source]

Calculate Mean Absolute Percentage Error (MAPE).

Args:
- y_true: numpy array or a list of actual numbers
- y_pred: numpy array or a list of predicted numbers

Returns:
- mape: Mean Absolute Percentage Error
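A sketch of the standard MAPE formula, reported here in percent (a common convention; the `ise` version may return a fraction instead):

```python
import numpy as np

def mape_sketch(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent. Undefined when
    y_true contains zeros. Shown for illustration only."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```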

ise.evaluation.metrics.mean_absolute_error(y_true, y_pred)[source]

Calculate Mean Absolute Error (MAE).

Args:
- y_true: numpy array or a list of actual numbers
- y_pred: numpy array or a list of predicted numbers

Returns:
- mae: Mean Absolute Error
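The standard MAE formula, shown as an illustrative sketch:

```python
import numpy as np

def mae_sketch(y_true, y_pred):
    """Mean Absolute Error: average of |y_true - y_pred|."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))
```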

ise.evaluation.metrics.mean_squared_error(y_true, y_pred)[source]

Calculate Mean Squared Error (MSE).

Args:
- y_true: numpy array or a list of actual numbers
- y_pred: numpy array or a list of predicted numbers

Returns:
- mse: Mean Squared Error
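The standard MSE formula, shown as an illustrative sketch:

```python
import numpy as np

def mse_sketch(y_true, y_pred):
    """Mean Squared Error: average of (y_true - y_pred)**2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))
```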

ise.evaluation.metrics.mean_squared_error_sector(sum_sectors_true, sum_sectors_pred)[source]
ise.evaluation.metrics.r2_score(y_true, y_pred)[source]
ise.evaluation.metrics.relative_squared_error(y_true, y_pred)[source]

Calculate Relative Squared Error (RSE).

Args:
- y_true: numpy array or a list of actual numbers
- y_pred: numpy array or a list of predicted numbers

Returns:
- rse: Relative Squared Error
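A sketch of the usual RSE definition, which normalizes the squared error by that of the naive mean predictor (there are minor variants, so the `ise` version may differ slightly):

```python
import numpy as np

def rse_sketch(y_true, y_pred):
    """Relative Squared Error: squared prediction error divided by the
    squared error of always predicting the mean of y_true. Equals 1 - R^2.
    Shown for illustration only."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sum((y_true - y_pred) ** 2)
                 / np.sum((y_true - y_true.mean()) ** 2))
```

An RSE of 0 means perfect predictions; an RSE of 1 means the model does no better than predicting the mean.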

ise.evaluation.metrics.sum_by_sector(array, grid_file)[source]
ise.evaluation.metrics.t_test(x1, x2)[source]
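This entry has no docstring. If it performs a standard two-sample t-test, the Welch (unequal-variance) t statistic is computed as in the sketch below; the actual function presumably delegates to a statistics library routine and likely also returns a p-value:

```python
import numpy as np

def welch_t_statistic(x1, x2):
    """Welch's two-sample t statistic (unequal variances). A sketch only;
    not necessarily what ise.evaluation.metrics.t_test implements."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    # Standard error of the difference in means, with sample variances.
    se = np.sqrt(x1.var(ddof=1) / x1.size + x2.var(ddof=1) / x2.size)
    return float((x1.mean() - x2.mean()) / se)
```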

ise.evaluation.plots module

class ise.evaluation.plots.EvaluationPlotter(save_dir='.')[source]

Bases: object

sector_side_by_side(y_true, y_pred, grid_file, outline_array_true=None, outline_array_pred=None, timestep=None, save_path=None, cmap=<matplotlib.colors.LinearSegmentedColormap object>)[source]
spatial_side_by_side(y_true, y_pred, timestep=None, save_path=None, cmap=<matplotlib.colors.LinearSegmentedColormap object>, video=False)[source]
class ise.evaluation.plots.SectorPlotter(results_dataset, column=None, condition=None, save_directory=None)[source]

Bases: object

plot_callibration(color_by=None, alpha=0.2, column=None, condition=None, save=None)[source]
plot_distributions(year, column=None, condition=None, save=None)[source]
plot_ensemble(uncertainty='quantiles', column=None, condition=None, save=None)[source]
plot_ensemble_mean(uncertainty=False, column=None, condition=None, save=None)[source]
plot_histograms(year, column=None, condition=None, save=None)[source]
plot_test_series(model, data_directory, time_series=True, approx_dist=True, mc_iterations=100, confidence='95', draws='random', k=10, save_directory=None)[source]
class ise.evaluation.plots.UncertaintyBounds(data, confidence='95', quantiles=None)[source]

Bases: object

ise.evaluation.plots.plot_callibration(dataset, column=None, condition=None, color_by=None, alpha=0.2, save=None)[source]
ise.evaluation.plots.plot_distributions(dataset: DataFrame, year: int = 2100, column: str | None = None, condition: str | None = None, save: str | None = None, cache: dict | None = None)[source]

Generates a plot comparing the distributions of the true simulations and the predicted emulation at a given time slice (year).

Parameters:
ise.evaluation.plots.plot_ensemble(dataset: DataFrame, uncertainty: str = 'quantiles', column: str | None = None, condition: str | None = None, save: str | None = None, cache: dict | None = None)[source]

Generates a side-by-side comparison of ensemble results from the true simulations and the predicted emulation, with uncertainty bounds added.

Parameters:
ise.evaluation.plots.plot_ensemble_mean(dataset: DataFrame, uncertainty: str = False, column=None, condition=None, save=None, cache=None)[source]

Generates a plot of the mean sea level contribution from the true simulations and the predicted emulation.

Parameters:
ise.evaluation.plots.plot_histograms(dataset: DataFrame, year: int = 2100, column: str | None = None, condition: str | None = None, save: str | None = None, cache: dict | None = None)[source]

Generates a plot comparing histograms of the true simulations and the predicted emulation at a given time slice (year).

Parameters:
ise.evaluation.plots.plot_test_series(model, data_directory, time_series, approx_dist=True, mc_iterations=100, confidence='95', draws='random', k=10, save_directory=None)[source]

Module contents