Moscore#

class Moscore(score: str | tuple[Callable, Callable] = 'mean', bandwidth: int = 30, threshold_scale: float | None = 2.0, level: float = 0.01, min_detection_interval: int = 1)[source]#

Moving score algorithm for multiple changepoint detection.

A generalized version of the MOSUM (moving sum) algorithm [1] for changepoint detection. It runs a test statistic for a single changepoint at the midpoint in a moving window of length 2 * bandwidth over the data. Efficiently implemented using numba.

Parameters:
score : {“mean”, “mean_var”, “mean_cov”} or tuple[Callable, Callable], default=”mean”

Test statistic to use for changepoint detection.

  • “mean”: The CUSUM statistic for a change in mean (this is equivalent to a likelihood ratio test for a change in the mean of Gaussian data). For multivariate data, the sum of the CUSUM statistics for each dimension is used.

  • “mean_var”: The likelihood ratio test for a change in the mean and/or variance of Gaussian data. For multivariate data, the sum of the likelihood ratio statistics for each dimension is used.

  • “mean_cov”: The likelihood ratio test for a change in the mean and/or covariance matrix of multivariate Gaussian data.

  • If a tuple, it must contain two numba jitted functions (see the sketch after this parameter description):

    1. The first function is the scoring function, which takes four arguments:

      1. precomputed_params: The output of the second function.

      2. starts: Start indices of the intervals to score for a change.

      3. ends: End indices of the intervals to score for a change.

      4. splits: Split indices of the intervals to score for a change.

      For each (start, split, end) triple, the score should be calculated for the data intervals [start, split] and [split + 1, end], meaning that both the starts and ends are inclusive, and the split index belongs to the left interval.

    2. The second function is the initializer, which takes the data matrix as input and returns precomputed quantities that may speed up the score calculations. If not relevant, just return the data matrix.
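
As an illustration of this interface, here is a hedged sketch of a custom mean-difference score. It is not one of the built-in statistics; the function names are arbitrary, and starts, ends and splits are assumed to be integer arrays.

>>> import numpy as np
>>> from numba import njit
>>> from skchange.change_detectors import Moscore
>>> @njit
... def init_mean_score(X):
...     # Precompute zero-padded cumulative column sums for O(1) interval means.
...     n, p = X.shape
...     sums = np.zeros((n + 1, p))
...     for i in range(n):
...         sums[i + 1, :] = sums[i, :] + X[i, :]
...     return sums
>>> @njit
... def mean_diff_score(precomputed_params, starts, ends, splits):
...     # The left interval is [start, split] and the right interval is
...     # [split + 1, end], both inclusive. The score is the absolute
...     # difference between the interval means, summed over the columns.
...     sums = precomputed_params
...     scores = np.zeros(len(starts))
...     for i in range(len(starts)):
...         s, m, e = starts[i], splits[i], ends[i]
...         left_mean = (sums[m + 1] - sums[s]) / (m - s + 1)
...         right_mean = (sums[e + 1] - sums[m + 1]) / (e - m)
...         scores[i] = np.sum(np.abs(right_mean - left_mean))
...     return scores
>>> detector = Moscore(score=(mean_diff_score, init_mean_score))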

bandwidth : int, default=30

The bandwidth is the number of samples on either side of a candidate changepoint. The minimum bandwidth depends on the test statistic. For “mean”, the minimum bandwidth is 1.

threshold_scale : float or None, default=2.0

Scaling factor for the threshold. The threshold is set to threshold_scale * default_threshold, where the default threshold depends on the number of samples, the number of variables, the bandwidth and the level. If None, the threshold is tuned on the data passed to fit.

level : float, default=0.01

If threshold_scale is None, the threshold is set to the (1-level)-quantile of the changepoint scores on the training data. For this to be correct, the training data must contain no changepoints. If threshold_scale is a number, level is used to compute the default threshold before scaling. Both thresholding modes are illustrated in the sketch after the parameter list.

min_detection_interval : int, default=1

Minimum number of consecutive scores above the threshold to be considered a changepoint. Must be between 1 and bandwidth/2.
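
The two thresholding modes can be sketched as follows. This is a hedged example: it assumes generate_alternating_data accepts these arguments, and a single segment is used so that the tuning data contains no changepoints.

>>> from skchange.change_detectors import Moscore
>>> from skchange.datasets.generate import generate_alternating_data
>>> # Scaled default threshold (the default behaviour):
>>> detector = Moscore(score="mean", bandwidth=50, threshold_scale=1.5)
>>> # Threshold tuned to the (1 - level)-quantile of the scores on changepoint-free data:
>>> train_df = generate_alternating_data(n_segments=1, mean=0, segment_length=1000, p=3)
>>> detector = Moscore(threshold_scale=None, level=0.01).fit(train_df)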

Attributes:
is_fitted

Whether fit has been called.

Methods

check_is_fitted()

Check if the estimator has been fitted.

clone()

Obtain a clone of the object with same hyper-parameters.

clone_tags(estimator[, tag_names])

Clone tags from another estimator as dynamic override.

create_test_instance([parameter_set])

Construct Estimator instance if possible.

create_test_instances_and_names([parameter_set])

Create list of all test instances and a list of names for them.

dense_to_sparse(y_dense)

Convert the dense output from the transform method to a sparse format.

fit(X[, y])

Fit detector to training data.

fit_predict(X[, y])

Fit to data, then predict it.

fit_transform(X[, y])

Fit to data, then transform it.

get_class_tag(tag_name[, tag_value_default])

Get a class tag's value.

get_class_tags()

Get class tags from the class and all its parent classes.

get_config()

Get config flags for self.

get_default_threshold(n, p, bandwidth[, level])

Get the default threshold for the Moscore algorithm.

get_fitted_params([deep])

Get fitted parameters.

get_param_defaults()

Get object's parameter defaults.

get_param_names()

Get object's parameter names.

get_params([deep])

Get a dict of parameters values for this object.

get_tag(tag_name[, tag_value_default, ...])

Get tag value from estimator class and dynamic tag overrides.

get_tags()

Get tags from estimator class and dynamic tag overrides.

get_test_params([parameter_set])

Return testing parameter settings for the estimator.

is_composite()

Check if the object is composed of other BaseObjects.

load_from_path(serial)

Load object from file location.

load_from_serial(serial)

Load object from serialized memory container.

predict(X)

Detect events and return the result in a sparse format.

reset()

Reset the object to a clean post-init state.

save([path, serialization_format])

Save serialized self to bytes-like object or to (.zip) file.

score_transform(X)

Return detection scores on the input data.

set_config(**config_dict)

Set config flags to given values.

set_params(**params)

Set the parameters of this object.

set_tags(**tag_dict)

Set dynamic tags to given values.

sparse_to_dense(y_sparse, index[, columns])

Convert the sparse output from the predict method to a dense format.

transform(X)

Detect events and return the result in a dense format.

update(X[, y])

Update model with new data and optional ground truth detections.

update_predict(X[, y])

Update model with new data and detect events in it.

References

[1]

Eichinger, B., & Kirch, C. (2018). A MOSUM procedure for the estimation of multiple random change points.

Examples

>>> from skchange.change_detectors import Moscore
>>> from skchange.datasets.generate import generate_alternating_data
>>> df = generate_alternating_data(
...     n_segments=4, mean=10, segment_length=100000, p=5
... )
>>> detector = Moscore()
>>> detector.fit_predict(df)
0     99999
1    199999
2    299999
Name: changepoint, dtype: int64
static get_default_threshold(n: int, p: int, bandwidth: int, level: float = 0.01) → float[source]#

Get the default threshold for the Moscore algorithm.

It is the asymptotic critical value of the univariate ‘mean’ test statistic, multiplied by p to account for the multivariate case.

Parameters:
n : int

Sample size.

p : int

Number of variables.

bandwidth : int

Bandwidth of the Moscore algorithm.

level : float, optional (default=0.01)

Significance level for the test statistic.

Returns:
threshold : float

Threshold for the Moscore algorithm.
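
For illustration, a sketch of how the default threshold combines with threshold_scale, as described in the parameter documentation above (the numbers are arbitrary):

>>> from skchange.change_detectors import Moscore
>>> default = Moscore.get_default_threshold(n=400000, p=5, bandwidth=30, level=0.01)
>>> threshold = 2.0 * default  # the threshold used with the default threshold_scale=2.0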

classmethod get_test_params(parameter_set='default')[source]#

Return testing parameter settings for the estimator.

Parameters:
parameter_set : str, default="default"

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set. There are currently no reserved values for annotators.

Returns:
params : dict or list of dict, default = {}

Parameters to create testing instances of the class. Each dict contains parameters to construct an “interesting” test instance, i.e., MyClass(**params) or MyClass(**params[i]) creates a valid test instance. create_test_instance uses the first (or only) dictionary in params.

check_is_fitted()[source]#

Check if the estimator has been fitted.

Raises:
NotFittedError

If the estimator has not been fitted yet.

clone()[source]#

Obtain a clone of the object with same hyper-parameters.

A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.

Raises:
RuntimeError if the clone is non-conforming, due to faulty __init__.

Notes

If successful, equal in value to type(self)(**self.get_params(deep=False)).

clone_tags(estimator, tag_names=None)[source]#

Clone tags from another estimator as dynamic override.

Parameters:
estimator : estimator inheriting from BaseEstimator
tag_names : str or list of str, default = None

Names of tags to clone. If None then all tags in estimator are used as tag_names.

Returns:
Self

Reference to self.

Notes

Changes object state by setting tag values in tag_set from estimator as dynamic tags in self.

classmethod create_test_instance(parameter_set='default')[source]#

Construct Estimator instance if possible.

Parameters:
parameter_set : str, default="default"

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
instance : instance of the class with default parameters

Notes

get_test_params can return a dict or a list of dict. This function takes the first or single dict that get_test_params returns, and constructs the object with that.

classmethod create_test_instances_and_names(parameter_set='default')[source]#

Create list of all test instances and a list of names for them.

Parameters:
parameter_set : str, default="default"

Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.

Returns:
objs : list of instances of cls

i-th instance is cls(**cls.get_test_params()[i])

names : list of str, same length as objs

i-th element is the name of the i-th instance of obj in tests. The naming convention is {cls.__name__}-{i} if there is more than one instance, otherwise {cls.__name__}.

static dense_to_sparse(y_dense: Series) → Series[source]#

Convert the dense output from the transform method to a sparse format.

Parameters:
y_dense : pd.Series

The dense output from a changepoint detector’s transform method.

Returns:
pd.Series of changepoint locations. Changepoints are defined as the last element of a segment.
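
A hedged sketch relating the dense (transform) and sparse (predict) formats, assuming generate_alternating_data accepts these arguments and that the dense output is a pd.Series of segment labels:

>>> from skchange.change_detectors import Moscore
>>> from skchange.datasets.generate import generate_alternating_data
>>> df = generate_alternating_data(n_segments=2, mean=10, segment_length=200, p=1)
>>> detector = Moscore(bandwidth=30)
>>> y_dense = detector.fit_transform(df)         # one label per time point
>>> y_sparse = Moscore.dense_to_sparse(y_dense)  # changepoint locations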

fit(X, y=None)[source]#

Fit detector to training data.

Fit trains the detector on the input data, for example by tuning a detection threshold or other hyperparameters. Detection of events does not happen here, but in the predict or transform methods, after the detector has been fit.

Parameters:
X : pd.Series, pd.DataFrame or np.ndarray

Training data to fit model to (time series).

y : pd.Series, pd.DataFrame or np.ndarray, optional

Ground truth detections for training if detector is supervised.

Returns:
self

Reference to self.

Notes

Creates fitted model that updates attributes ending in “_”. Sets _is_fitted flag to True.

fit_predict(X, y=None)[source]#

Fit to data, then predict it.

Fits model to X and y with given detector parameters and returns the detected events.

Parameters:
X : pd.DataFrame, pd.Series or np.ndarray

Training data to fit model with and detect events in (time series).

y : pd.Series, pd.DataFrame or np.ndarray, optional

Ground truth detections for training if detector is supervised.

Returns:
y : pd.Series or pd.DataFrame

Each element or row corresponds to a detected event. Exact format depends on the detector type.

fit_transform(X, y=None)[source]#

Fit to data, then transform it.

Fits model to X and y with given detector parameters and returns the detected events in a dense format.

Parameters:
X : pd.DataFrame, pd.Series or np.ndarray

Training data to fit model with and detect events in (time series).

y : pd.Series or np.ndarray, optional (default=None)

Target values of data to be detected.

Returns:
y : pd.Series or pd.DataFrame

Detections for sequence X. The returned detections will be in the dense format, meaning that each element in X will be annotated according to the detection results in some meaningful way depending on the detector type.

classmethod get_class_tag(tag_name, tag_value_default=None)[source]#

Get a class tag’s value.

Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.

Parameters:
tag_name : str

Name of tag value.

tag_value_default : any

Default/fallback value if tag is not found.

Returns:
tag_value

Value of the tag_name tag in self. If not found, returns tag_value_default.

classmethod get_class_tags()[source]#

Get class tags from the class and all its parent classes.

Retrieves tag: value pairs from _tags class attribute. Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.

Returns:
collected_tags : dict

Dictionary of class tag name: tag value pairs. Collected from _tags class attribute via nested inheritance.

get_config()[source]#

Get config flags for self.

Returns:
config_dict : dict

Dictionary of config name : config value pairs. Collected from the _config class attribute via nested inheritance and then any overrides and new configs from the _config_dynamic object attribute.

get_fitted_params(deep=True)[source]#

Get fitted parameters.

State required:

Requires state to be “fitted”.

Parameters:
deep : bool, default=True

Whether to return fitted parameters of components.

  • If True, will return a dict of parameter name : value for this object, including fitted parameters of fittable components (= BaseEstimator-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include fitted parameters of components.

Returns:
fitted_params : dict with str-valued keys

Dictionary of fitted parameters. The paramname : paramvalue key-value pairs include:

  • always: all fitted parameters of this object, as obtained via get_param_names; values are the fitted parameter values of this object

  • if deep=True, also key-value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname], and all parameters of componentname appear as paramname with their values

  • if deep=True, also arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.

classmethod get_param_defaults()[source]#

Get object’s parameter defaults.

Returns:
default_dict: dict[str, Any]

Keys are all parameters of cls that have a default defined in __init__; values are the defaults, as defined in __init__.

classmethod get_param_names()[source]#

Get object’s parameter names.

Returns:
param_names: list[str]

Alphabetically sorted list of parameter names of cls.

get_params(deep=True)[source]#

Get a dict of parameters values for this object.

Parameters:
deep : bool, default=True

Whether to return parameters of components.

  • If True, will return a dict of parameter name : value for this object, including parameters of components (= BaseObject-valued parameters).

  • If False, will return a dict of parameter name : value for this object, but not include parameters of components.

Returns:
params : dict with str-valued keys

Dictionary of parameters. The paramname : paramvalue key-value pairs include:

  • always: all parameters of this object, as obtained via get_param_names; values are the parameter values of this object, always identical to the values passed at construction

  • if deep=True, also key-value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname], and all parameters of componentname appear as paramname with their values

  • if deep=True, also arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.
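
A short sketch for Moscore, which has no components, so deep makes no difference and the keys are just the constructor arguments:

>>> from skchange.change_detectors import Moscore
>>> detector = Moscore(bandwidth=60)
>>> params = detector.get_params()
>>> sorted(params)
['bandwidth', 'level', 'min_detection_interval', 'score', 'threshold_scale']
>>> params["bandwidth"]
60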

get_tag(tag_name, tag_value_default=None, raise_error=True)[source]#

Get tag value from estimator class and dynamic tag overrides.

Parameters:
tag_name : str

Name of tag to be retrieved

tag_value_default : any type, optional; default=None

Default/fallback value if tag is not found

raise_error : bool

whether a ValueError is raised when the tag is not found

Returns:
tag_value : Any

Value of the tag_name tag in self. If not found, returns an error if raise_error is True, otherwise it returns tag_value_default.

Raises:
ValueError if raise_error is True, i.e., if tag_name is not in self.get_tags().keys()

get_tags()[source]#

Get tags from estimator class and dynamic tag overrides.

Returns:
collected_tags : dict

Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.

is_composite()[source]#

Check if the object is composed of other BaseObjects.

A composite object is an object which contains objects, as parameters. Called on an instance, since this may differ by instance.

Returns:
composite: bool

Whether an object has any parameters whose values are BaseObjects.

property is_fitted[source]#

Whether fit has been called.

classmethod load_from_path(serial)[source]#

Load object from file location.

Parameters:
serial : result of ZipFile(path).open("object")

Returns:
deserialized self resulting in output at path, of cls.save(path)

classmethod load_from_serial(serial)[source]#

Load object from serialized memory container.

Parameters:
serial : 1st element of output of cls.save(None)

Returns:
deserialized self resulting in output serial, of cls.save(None)

predict(X)[source]#

Detect events and return the result in a sparse format.

Parameters:
X : pd.Series, pd.DataFrame or np.ndarray

Data to detect events in (time series).

Returns:
y : pd.Series or pd.DataFrame

Each element or row corresponds to a detected event. Exact format depends on the detector type.

reset()[source]#

Reset the object to a clean post-init state.

reset runs __init__ with the current values of the hyper-parameters (the result of get_params). This removes any object attributes, except:

  • hyper-parameters = arguments of __init__

  • object attributes containing double-underscores, i.e., the string “__”

Class and object methods, and class attributes are also unaffected.

Returns:
self

Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.

Notes

Equivalent to sklearn.clone but overwrites self. After self.reset() call, self is equal in value to type(self)(**self.get_params(deep=False))

save(path=None, serialization_format='pickle')[source]#

Save serialized self to bytes-like object or to (.zip) file.

Behaviour: if path is None, returns an in-memory serialized self; if path is a file location, stores self at that location as a zip file.

Saved files are zip files with the following contents: _metadata contains the class of self, i.e., type(self); _obj contains the serialized self. This class uses the default serialization (pickle).

Parameters:
path : None or file location (str or Path)

If None, self is saved to an in-memory object; if a file location, self is saved to that file location. For example:

path="estimator" makes a zip file estimator.zip in the cwd; path="/home/stored/estimator" stores a zip file estimator.zip in /home/stored/.

serialization_format: str, default = “pickle”

Module to use for serialization. The available options are “pickle” and “cloudpickle”. Note that non-default formats might require installation of other soft dependencies.

Returns:
if path is None - in-memory serialized self
if path is file location - ZipFile with reference to the file
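
A hedged sketch of the two saving modes; deserialization goes through the load_from_serial and load_from_path class methods documented above.

>>> from skchange.change_detectors import Moscore
>>> detector = Moscore()
>>> serial = detector.save()                       # path=None: in-memory serialized self
>>> zipfile = detector.save(path="moscore_model")  # writes moscore_model.zip in the cwd
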
score_transform(X)[source]#

Return detection scores on the input data.

Parameters:
X : pd.Series, pd.DataFrame or np.ndarray

Data to score (time series).

Returns:
y : pd.Series or pd.DataFrame

Scores for sequence X. Exact format depends on the concrete detector.
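
A hedged sketch of inspecting the raw moving-window scores, e.g. to choose threshold_scale by eye (assuming generate_alternating_data accepts these arguments):

>>> from skchange.change_detectors import Moscore
>>> from skchange.datasets.generate import generate_alternating_data
>>> df = generate_alternating_data(n_segments=2, mean=10, segment_length=200, p=1)
>>> detector = Moscore(bandwidth=30).fit(df)
>>> scores = detector.score_transform(df)  # one score per time point in df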

set_config(**config_dict)[source]#

Set config flags to given values.

Parameters:
config_dict : dict

Dictionary of config name : config value pairs. Valid configs, values, and their meanings are listed below:

display : str, “diagram” (default), or “text”

how jupyter kernels display instances of self

  • “diagram” = html box diagram representation

  • “text” = string printout

print_changed_only : bool, default=True

whether printing of self lists only the parameters that differ from defaults (True), or all parameter names and values (False). Does not nest, i.e., only affects self and not component estimators.

warnings : str, “on” (default), or “off”

whether to raise warnings, affects warnings from sktime only

  • “on” = will raise warnings from sktime

  • “off” = will not raise warnings from sktime

backend:parallel : str, optional, default=”None”

backend to use for parallelization when broadcasting/vectorizing, one of

  • “None”: executes the loop sequentially, as a simple list comprehension

  • “loky”, “multiprocessing” and “threading”: uses joblib.Parallel

  • “joblib”: custom and 3rd party joblib backends, e.g., spark

  • “dask”: uses dask, requires dask package in environment

backend:parallel:params : dict, optional, default={} (no parameters passed)

additional parameters passed to the parallelization backend as config. Valid keys depend on the value of backend:parallel:

  • “None”: no additional parameters, backend_params is ignored

  • “loky”, “multiprocessing” and “threading”: the default joblib backends. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs, with the exception of backend, which is directly controlled by backend:parallel. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “joblib”: custom and 3rd-party joblib backends, e.g., spark. Any valid keys for joblib.Parallel can be passed here, e.g., n_jobs; backend must be passed as a key of backend_params in this case. If n_jobs is not passed, it will default to -1; other parameters will default to joblib defaults.

  • “dask”: any valid keys for dask.compute can be passed, e.g., scheduler

Returns:
self : reference to self.

Notes

Changes object state, copies configs in config_dict to self._config_dynamic.
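
A minimal sketch using the config flags listed above:

>>> from skchange.change_detectors import Moscore
>>> detector = Moscore()
>>> detector = detector.set_config(display="text", warnings="off")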

set_params(**params)[source]#

Set the parameters of this object.

The method works on simple estimators as well as on composite objects. Parameter key strings <component>__<parameter> can be used for composites, i.e., objects that contain other objects, to access <parameter> in the component <component>. The string <parameter>, without <component>__, can also be used if this makes the reference unambiguous, e.g., there are no two parameters of components with the name <parameter>.

Parameters:
**params : dict

BaseObject parameters, keys must be <component>__<parameter> strings. __ suffixes can alias full strings, if unique among get_params keys.

Returns:
self : reference to self (after parameters have been set)
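
A sketch for a simple, non-composite estimator such as Moscore, where the parameter keys are just the constructor arguments:

>>> from skchange.change_detectors import Moscore
>>> detector = Moscore()
>>> detector = detector.set_params(bandwidth=100, threshold_scale=None)
>>> detector.get_params()["bandwidth"]
100
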
set_tags(**tag_dict)[source]#

Set dynamic tags to given values.

Parameters:
**tag_dict : dict

Dictionary of tag name: tag value pairs.

Returns:
Self

Reference to self.

Notes

Changes object state by setting tag values in tag_dict as dynamic tags in self.

static sparse_to_dense(y_sparse: Series, index: Index, columns: Index | None = None) → Series[source]#

Convert the sparse output from the predict method to a dense format.

Parameters:
y_sparse : pd.Series

The sparse output from a changepoint detector’s predict method.

index : array-like

Indices that are to be annotated according to y_sparse.

columns: array-like

Not used. Only for API compatibility.

Returns:
pd.Series with integer labels 0, …, K for each segment between two changepoints.

transform(X)[source]#

Detect events and return the result in a dense format.

Parameters:
X : pd.Series, pd.DataFrame or np.ndarray

Data to detect events in (time series).

Returns:
y : pd.Series or pd.DataFrame

Detections for sequence X. The returned detections will be in the dense format, meaning that each element in X will be annotated according to the detection results in some meaningful way depending on the detector type.

update(X, y=None)[source]#

Update model with new data and optional ground truth detections.

Parameters:
X : pd.Series, pd.DataFrame or np.ndarray

Training data to update model with (time series).

y : pd.Series, pd.DataFrame or np.ndarray, optional

Ground truth detections for training if detector is supervised.

Returns:
self

Reference to self.

Notes

Updates fitted model that updates attributes ending in “_”.

update_predict(X, y=None)[source]#

Update model with new data and detect events in it.

Parameters:
X : pd.Series, pd.DataFrame or np.ndarray

Training data to update model with and detect events in (time series).

y : pd.Series, pd.DataFrame or np.ndarray, optional

Ground truth detections for training if detector is supervised.

Returns:
y : pd.Series or pd.DataFrame

Each element or row corresponds to a detected event. Exact format depends on the detector type.

Notes

Updates fitted model that updates attributes ending in “_”.
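
A hedged sketch of a streaming workflow with update_predict. The synthetic data and the batch split are purely illustrative, and whether the change at the batch boundary is detected depends on how the detector accumulates the updated data.

>>> import numpy as np
>>> import pandas as pd
>>> from skchange.change_detectors import Moscore
>>> rng = np.random.default_rng(0)
>>> historical = pd.DataFrame(rng.normal(size=(500, 2)))
>>> new_batch = pd.DataFrame(
...     rng.normal(loc=5.0, size=(200, 2)), index=pd.RangeIndex(500, 700)
... )
>>> detector = Moscore(bandwidth=50).fit(historical)
>>> detections = detector.update_predict(new_batch)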