Metadata-Version: 2.4
Name: timesead
Version: 0.0.2
Summary: Customized TimeSeAD library by Hui
Keywords: time series anomaly detection,anomaly detection,time series,benchmark
Author: Dennis Wagner, Tobias Michels, Florian C.F. Schulz, Arjun Nair, Hoang Huy Thai
Author-email: Dennis Wagner <dwagner@cs.uni-kl.de>, Tobias Michels <tmichels@cs.uni-kl.de>, Florian C.F. Schulz <florian.cf.schulz@tu-berlin.de>, Arjun Nair <naira@rptu.de>, Hoang Huy Thai <thaihuy836@gmail.com>
License-Expression: MIT
Requires-Dist: matplotlib>=3.3
Requires-Dist: numpy<2.4.0
Requires-Dist: scikit-learn>=0.24.2
Requires-Dist: tqdm>=4.59
Requires-Dist: pandas>=1.2
Requires-Dist: torch>=2.6
Requires-Dist: torchvision>=0.2.2
Requires-Dist: torch-geometric
Requires-Dist: opt-einsum
Requires-Dist: hmmlearn>=0.3.3
Requires-Dist: pyreadr
Requires-Dist: pyod
Requires-Dist: einops
Requires-Dist: pytest>=9.0.2
Requires-Dist: hydra-core>=1.3.2
Requires-Dist: omegaconf>=2.3
Requires-Dist: mlflow>=3.10.1
Requires-Dist: debugpy>=1.8.20
Requires-Dist: sphinx>=6.1 ; extra == 'docs'
Requires-Dist: sphinx-autoapi>=2.1 ; extra == 'docs'
Requires-Dist: myst-parser>=1.0 ; extra == 'docs'
Requires-Dist: sphinx-rtd-theme ; extra == 'docs'
Requires-Dist: pymongo>=3.12 ; extra == 'experiments'
Requires-Dist: sacred>=0.8 ; extra == 'experiments'
Requires-Python: >=3.11
Project-URL: Homepage, https://github.com/srb-cv/TimeSeAD
Project-URL: Repository, https://github.com/srb-cv/TimeSeAD
Provides-Extra: docs
Provides-Extra: experiments
Provides-Extra: extensions
Description-Content-Type: text/markdown

# TimeSeAD - Library for Benchmarking Multivariate Time Series Anomaly Detection

TimeSeAD is a library for developing and evaluating time series anomaly detection methods, with a focus on multivariate 
data. It includes several datasets, methods, and evaluation tools, and was initially developed in the context of a 
paper analyzing the evaluation of deep-learning-based methods for multivariate time series anomaly detection:

> Developing new methods for detecting anomalies in time series is of great practical significance, but progress is 
> hindered by the difficulty of assessing the benefit of new methods, for the following reasons. (1) Public benchmarks 
> are flawed (e.g., due to potentially erroneous anomaly labels), (2) there is no widely accepted standard evaluation 
> metric, and (3) evaluation protocols are mostly inconsistent. In this work, we address all three issues: (1) We 
> critically analyze several of the most widely-used multivariate datasets, identify a number of significant issues, and 
> select the best candidates for evaluation. (2) We introduce a new evaluation metric for time-series anomaly detection, 
> which—in contrast to previous metrics—is recall consistent and takes temporal correlations into account. (3) We analyze 
> and overhaul existing evaluation protocols and provide the largest benchmark of deep multivariate time-series anomaly 
> detection methods to date. We focus on deep-learning based methods and multivariate data, a common setting in modern 
> anomaly detection. We provide all implementations and analysis tools in a new comprehensive library for Time Series 
> Anomaly Detection, called TimeSeAD.

The paper can be found [here](https://openreview.net/forum?id=iMmsCI0JsS).

## Getting started

For installation and usage guides please refer to the [documentation](https://timesead.readthedocs.io/en/latest).

## Citation and Contact

If you use our work, please consider citing the paper:
```
@article{
    wagner2023timesead,
    title={TimeSe{AD}: Benchmarking Deep Multivariate Time-Series Anomaly Detection},
    author={Wagner, Dennis and Michels, Tobias and Schulz, Florian CF and Nair, Arjun and Rudolph, Maja and Kloft, Marius},
    journal={Transactions on Machine Learning Research},
    year={2023},
    url={https://openreview.net/forum?id=iMmsCI0JsS}
}
```

To get in touch you can reach us via [email](mailto:wagnerd@rhrk.uni-kl.de,tmichels@cs.uni-kl.de,naira@rptu.de).



## New Experiments

The `timesead_experiments` package runs experiments using Sacred. As an alternative, `experiments_hydra` provides training configured through Hydra and experiment tracking through MLflow.
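As a rough sketch, a Hydra-style training config for such a setup might look like the following. Note that all key and value names here are illustrative assumptions, not the actual schema used by `experiments_hydra`:

```yaml
# Hypothetical Hydra training config -- key names are illustrative only,
# not the actual schema used by experiments_hydra.
defaults:
  - dataset: swat        # example dataset group (assumption)
  - model: lstm_ae       # example model group (assumption)

trainer:
  epochs: 50
  batch_size: 64
  lr: 1.0e-3

mlflow:
  experiment_name: timesead_demo
  tracking_uri: file:./mlruns   # log runs to a local directory
```

With Hydra, individual values can then be overridden on the command line, e.g. `python train.py trainer.epochs=100` (script name assumed), while MLflow records parameters and metrics for each run.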
