
2 Time Series Machine Learning Libraries


3 main points
✔️ Two time-series machine learning libraries have been announced in quick succession
✔️ Merlion provides a time series anomaly detection and prediction library
✔️ Darts is an attempt to democratize and integrate modern machine learning forecasting approaches under a common, user-friendly API

Merlion: A Machine Learning Library for Time Series
written by Aadyot Bhatnagar, Paul Kassianik, Chenghao Liu, Tian Lan, Wenzhuo Yang, Rowan Cassius, Doyen Sahoo, Devansh Arpit, Sri Subramanian, Gerald Woo, Amrita Saha, Arun Kumar Jagota, Gokulakrishnan Gopalakrishnan, Manpreet Singh, K C Krithika, Sukumar Maddineni, Daeki Cho, Bo Zong, Yingbo Zhou, Caiming Xiong, Silvio Savarese, Steven Hoi, Huan Wang
(Submitted on 20 Sep 2021)
Comments: Published on arXiv.

Subjects: Machine Learning (cs.LG)

code: https://github.com/salesforce/Merlion

Darts: User-Friendly Modern Machine Learning for Time Series
written by Julien Herzen, Francesco Lässig, Samuele Giuliano Piazzetta, Thomas Neuer, Léo Tafti, Guillaume Raille, Tomas Van Pottelbergh, Marek Pasieka, Andrzej Skrodzki, Nicolas Huguenin, Maxime Dumonal, Jan Kościsz, Dennis Bader, Frédérick Gusset, Mounir Benheddi, Camila Williamson, Michal Kosinski, Matej Petrik, Gaël Grosch
(Submitted on 7 Oct 2021 (v1), last revised 8 Oct 2021 (this version, v2))
Comments: Published on arXiv.

Subjects:  Machine Learning (cs.LG); Computation (stat.CO)

code: https://github.com/unit8co/darts

The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

Two time-series machine learning libraries have been announced in quick succession, so I will introduce them both together.

The first is Merlion, from a group at Salesforce.

Time series are ubiquitous in monitoring the behavior of complex systems in real-world applications such as IT operations management, manufacturing, and cyber security. They can represent key metrics of computing resources, business indicators, or feedback from marketing campaigns on social networking sites. In all of these applications, it is important to accurately predict the trends and values of key metrics and to quickly and accurately detect anomalies in those metrics. In fact, in the software industry, anomaly detection that notifies operators promptly is one of the key machine learning techniques for automating the identification of problems and incidents to improve the availability of IT systems.

Although several tools have been proposed to account for the different potential applications of time series analysis, there are still several issues with today's industry workflows for time series analysis. These include inconsistent interfaces between data sets and models, inconsistent metrics between academic papers and industrial applications, and a relative lack of support for practical features such as post-processing, AutoML, and model combination. These issues make it difficult to benchmark across multiple datasets and settings, across a variety of models, and to make data-driven decisions about the best model for the target task.

Merlion, the Python library for time series intelligence presented here, provides an end-to-end machine learning framework that includes loading and transforming data, building and training models, post-processing model output, and evaluating model performance. It supports a variety of time series learning tasks, including forecasting and anomaly detection for both univariate and multivariate time series. Key features of Merlion include:

  • A standardized and easily extensible framework for data loading, preprocessing and benchmarking a wide range of time series forecasting and anomaly detection tasks
  • A library of diverse models for both anomaly detection and prediction, integrated under a shared interface. The models include classical statistical methods, decision tree ensembles, and deep learning methods. Advanced users can freely combine and configure these models as needed
  • Abstraction of the DefaultDetector and DefaultForecaster models for efficient and robust performance and to provide a starting point for new users
  • AutoML for automated hyperparameter tuning and model selection
  • Practical, industry-inspired post-processing rules for anomaly detectors that make anomaly scores easier to interpret while reducing false-positive rates.
  • Easy-to-use ensembles that combine the outputs of multiple models for more robust performance
  • A flexible evaluation pipeline that simulates live model deployment and retraining in a production environment to evaluate performance in both prediction and anomaly detection
  • Native support for visualizing model predictions

Table 1 compares Merlion's features against those of other libraries.

The Merlion code can be found on GitHub, and documentation for the API is available at the following sites:

https://github.com/salesforce/Merlion

https://opensource.salesforce.com/Merlion/v1.0.1/index.html

Typical usage of the library is as follows:

from merlion.models.defaults import DefaultDetectorConfig, DefaultDetector

# Initialize an anomaly detector with robust default settings
model = DefaultDetector(DefaultDetectorConfig())
# train_data and test_data are Merlion TimeSeries objects
model.train(train_data=train_data)
# Returns post-processed (calibrated and thresholded) anomaly labels
test_pred = model.get_anomaly_label(time_series=test_data)

Architecture and Design Principles

Merlion's modular architecture consists of five layers. The data layer loads raw data, transforms it into Merlion's TimeSeries data structure, and performs the necessary pre-processing. The modeling layer supports a wide range of models for forecasting and anomaly detection, including AutoML for automated hyperparameter tuning. The post-processing layer provides practical solutions for improving interpretability and reducing the false-positive rate of anomaly detection models. The remaining layers, ensembles and evaluation, are described in the sections below. Fig. 1 illustrates the relationship between these modules.

Data Layer

Merlion's core data structure is TimeSeries, which represents a general multivariate time series T as a collection of univariate series UnivariateTimeSeries U(i). This formulation reflects the reality that individual univariates may be sampled at different rates and may contain missing data at different timestamps. After initializing a TimeSeries from the raw data, the merlion.transform module provides pre-processing operations that can be applied before passing the TimeSeries to a model. These include resampling, normalization, moving averages, and temporal differencing.
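
As a minimal sketch of this workflow (assuming Merlion's TimeSeries.from_pd constructor and the TemporalResample transform in merlion.transform.resample; names and arguments may differ slightly between versions, and metrics.csv is a hypothetical file):

import pandas as pd
from merlion.transform.resample import TemporalResample
from merlion.utils import TimeSeries

# Raw data: a DataFrame indexed by timestamp, one column per univariate
df = pd.read_csv("metrics.csv", index_col=0, parse_dates=True)
ts = TimeSeries.from_pd(df)

# Pre-processing: resample all univariates to a common 5-minute granularity
resample = TemporalResample(granularity="5min")
ts_resampled = resample(ts)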

Model Layer

Since no single model will work well for all time series and all use cases, it is important to give users the flexibility to choose from a wide range of heterogeneous models. Merlion implements a variety of models for both forecasting and anomaly detection. To make all these choices transparent to the user, all Merlion models are integrated under two generic APIs, one for forecasting and one for anomaly detection. Every model is initialized with a config object that contains implementation-specific hyperparameters, and supports a model.train(time_series) method. Given a generic multivariate time series, a forecaster is trained to predict the values of a single target univariate. You can then obtain the model's forecast for a set of future timestamps by calling model.forecast(time_stamps).

Similarly, you can obtain an anomaly detector's sequence of anomaly scores for a time series simply by calling model.get_anomaly_score(time_series). Forecast-based anomaly detectors provide both model.forecast(time_stamps) and model.get_anomaly_score(time_series).
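
For forecasting, the corresponding default model is used in the same way. A sketch, assuming the DefaultForecaster abstraction mentioned earlier and that forecast() returns the prediction together with its standard error, as in Merlion's documented API:

from merlion.models.defaults import DefaultForecasterConfig, DefaultForecaster

model = DefaultForecaster(DefaultForecasterConfig())
model.train(train_data=train_data)
# Forecast values at the test timestamps; stderr may be None for some models
forecast, stderr = model.forecast(time_stamps=test_data.time_stamps)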

For models that require additional computation, Merlion provides the Layer interface, which is the basis of its AutoML functionality. Layers implement additional logic on top of existing model definitions that does not properly fit into the model code itself, such as seasonality detection or hyperparameter tuning. A Layer has three methods: generate_theta to generate candidate hyperparameters θ, evaluate_theta to evaluate the quality of each θ, and set_theta to apply the selected θ to the underlying model. A companion class, ForecasterAutoMLBase, implements forecast and train methods that leverage the Layer methods to complete the forecasting model. Finally, all models support conditioning their forecasts on historical data time_series_prev that differs from the data used for training. These conditional forecasts can be obtained by calling model.forecast(time_stamps, time_series_prev) or model.get_anomaly_score(time_series, time_series_prev).
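
To illustrate the shape of this interface, here is a purely hypothetical Layer-style skeleton for seasonality detection; the three method names come from the description above, but the class body is schematic, not actual Merlion code:

class SeasonalityLayer:  # hypothetical sketch, not a real Merlion class
    def generate_theta(self, train_data):
        # Propose candidate hyperparameters, e.g. candidate seasonal periods
        return [1, 7, 24, 168]

    def evaluate_theta(self, thetas, train_data):
        # Rank candidates by some quality measure; autocorrelation() here
        # stands in for any scoring function
        return max(thetas, key=lambda m: autocorrelation(train_data, lag=m))

    def set_theta(self, model, theta):
        # Apply the selected hyperparameters to the underlying model
        model.config.seasonal_periods = theta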

Post-processing Layer


All anomaly detectors have a post_rule that applies meaningful post-processing, including calibration and thresholding rules, to the output of model.get_anomaly_score(time_series). The post-processed anomaly scores can be obtained directly by calling model.get_anomaly_label(time_series).

Ensemble and Model Selection

An ensemble is structured as a model that combines several underlying models. A base EnsembleBase class abstracts the process of obtaining forecasts Y1, ..., Ym from m underlying models on a single time series T and combining them into a single output. Supported combinations include traditional averaging ensembles as well as model selection based on evaluation metrics such as sMAPE.
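
A rough sketch of building such an ensemble (assuming the ForecasterEnsemble class and Mean combiner in Merlion's ensemble modules, together with its ARIMA and ETS forecasters; exact constructor signatures may differ between versions):

from merlion.models.ensemble.combine import Mean
from merlion.models.ensemble.forecast import ForecasterEnsemble, ForecasterEnsembleConfig
from merlion.models.forecast.arima import Arima, ArimaConfig
from merlion.models.forecast.ets import ETS, ETSConfig

# Average the forecasts of an ARIMA model and an ETS model at each timestamp
models = [Arima(ArimaConfig()), ETS(ETSConfig())]
ensemble = ForecasterEnsemble(
    config=ForecasterEnsembleConfig(combiner=Mean()), models=models
)
ensemble.train(train_data)
forecast, stderr = ensemble.forecast(time_stamps=test_data.time_stamps)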

Evaluation Pipeline

When a time series model is deployed live in production, training and inference are typically not performed in batch on the complete time series. Rather, the model is retrained at regular intervals, and inference is performed in streaming mode where possible. To simulate this setting more realistically, Merlion provides an EvaluatorBase class that implements the following evaluation loop (a schematic code sketch follows the list):

  1. Train an initial model on recent historical training data
  2. Periodically (e.g., once a day), retrain the model on the most recent data. This can cover the entire history or a more limited window (e.g., 4 weeks).
  3. Obtain the model's predictions (forecasts or anomaly scores) for the time series values that arrive before the next retraining. The user can customize whether this happens in batch, in streaming mode, or at some intermediate cadence.
  4. Compare the model's predictions against the ground truth and report quantitative metrics
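
A schematic version of this loop in plain Python (illustration only, not Merlion's actual EvaluatorBase API; model stands for any object with train() and forecast() methods, and the DataFrames are indexed by timestamp):

import pandas as pd

def evaluate_live(model, train_df, test_df, retrain_every="1D"):
    model.train(train_df)                               # 1. initial training
    history, preds = train_df, []
    for _, window in test_df.groupby(pd.Grouper(freq=retrain_every)):
        preds.append(model.forecast(window.index))      # 3. predict upcoming values
        history = pd.concat([history, window])          # full (or bounded) history
        model.train(history)                            # 2. periodic retraining
    return preds                                        # 4. score against ground truth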

Merlion also provides a wide range of evaluation metrics for both forecasting and anomaly detection, implemented as the enumerations ForecastMetric and TSADMetric, respectively. Finally, the scripts benchmark_forecast.py and benchmark_anomaly.py let users apply this logic to easily evaluate model performance on the datasets contained in the ts_datasets module.

Time-Series Forecasting

Merlion contains several models for univariate time series forecasting. These include classical statistical methods such as ARIMA, SARIMA, and ETS (error, trend, seasonality), more recent algorithms such as Prophet, MSES (Cassius et al., 2021), an earlier algorithm created by the author group, and a deep autoregressive LSTM. The multivariate forecasting models are based on autoregressive and decision-tree-ensemble algorithms. For the autoregressive approach, Merlion employs a vector autoregression (VAR) model that captures the relationships between multiple series as they change over time. For the decision tree ensembles, it considers random forests and gradient boosting as base models, and allows them to generate forecasts for an arbitrary prediction horizon, like traditional models such as VAR. Furthermore, all multivariate forecasting models share a common API with the univariate ones, so the same interface covers both univariate and multivariate forecasting tasks.

AutoML


The AutoML module for time series forecasting models differs slightly from AutoML for traditional machine learning models, because it covers not only conventional hyperparameter optimization but also the detection of certain properties of the time series itself. For example, SARIMA's hyperparameters include the autoregressive order, degree of differencing, moving-average order, their seasonal counterparts, and the seasonality itself.

We further reduce the training time of the AutoML module in the following way: we first obtain a list of candidate models that achieve good performance with relatively few optimization iterations, then retrain each of these candidates until the models converge, and finally select the best model by AIC.
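
As a sketch of how this looks to the user (assuming the AutoSarima wrapper in merlion.models.automl; using the default configuration here is an assumption, and real usage may require enabling specific auto-tuning flags):

from merlion.models.automl.autosarima import AutoSarima, AutoSarimaConfig

# The AutoML layer detects seasonality and searches the (p, d, q)(P, D, Q)m orders
model = AutoSarima(AutoSarimaConfig())
model.train(train_data)
forecast, stderr = model.forecast(time_stamps=test_data.time_stamps)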

Ensembles

Merlion's forecaster ensembles allow the user to transparently combine models in two ways. First, it supports traditional ensembles that report the mean or median value predicted by all models at each timestamp. Second, it supports automatic model selection: the training data is split into training and validation sets, each model is trained on the training set, and its predictions on the validation set are retrieved. The quality of these predictions is then evaluated with a user-specified metric and, after retraining on the full training data, the model that achieved the best performance is returned.

There are many ways to evaluate the accuracy of a prediction model. Merlion's ForecastMetric provides MAE, RMSE, sMAPE, MARRE, and other metrics.

Time-Series Anomaly Detection

Merlion contains several models dedicated to univariate time series anomaly detection, which fall into two groups: forecast-based and statistical. Merlion's forecasters are easily adapted to anomaly detection because they predict a specific univariate of a general time series; the anomaly score is the residual between the predicted and actual value, optionally normalized by the underlying forecaster's predicted standard error. Among univariate statistical methods, Merlion provides Spectral Residual and two simple baselines, WindStats and ZMS. In addition, it offers both statistical methods and deep learning models capable of handling both univariate and multivariate anomaly detection. The statistical methods include Isolation Forest and Random Cut Forest; the deep learning models include autoencoders, the deep autoencoding Gaussian mixture model (DAGMM), LSTM encoder-decoders, and variational autoencoders.

Merlion supports two key post-processing steps for anomaly detectors: calibration and thresholding. Calibration is important for improving the interpretability of a model's scores, while thresholding converts a sequence of continuous anomaly scores into discrete labels and reduces the false-positive rate.

All of Merlion's anomaly detectors return an anomaly score s_t that is positively correlated with the severity of the anomaly. However, the scale and distribution of these scores vary widely. For example, Isolation Forest returns an anomaly score s_t ∈ [0, 1], Spectral Residual returns an unnormalized saliency map, and DAGMM returns a negative log-probability.

Calibration

To use a model successfully, a user must be able to interpret the anomaly scores it returns; without this, many models are immediately unusable by anyone unfamiliar with their particular implementation. Calibration fills this gap by making all anomaly scores interpretable as z-scores, i.e., values drawn from a standard normal distribution. This simple post-processing step dramatically improves the interpretability of the anomaly scores returned by individual models.
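
Because calibrated scores behave like z-scores, they have an immediate probabilistic reading. A generic illustration (plain SciPy, not Merlion code):

from scipy.stats import norm

# A calibrated anomaly score of 4 is as rare as a 4-sigma event
score = 4.0
tail_prob = 2 * (1 - norm.cdf(score))  # two-sided tail probability
print(f"P(|z| >= {score}) = {tail_prob:.2e}")  # about 6.3e-05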

Thresholding

The most common way to determine whether an individual timestamp t is anomalous is to compare the anomaly score s_t with a threshold τ. However, in many real-world systems, a human is alerted each time an anomaly is detected. A high false-positive rate increases the load on users who must investigate each alert, and may result in a system that users do not trust. One way to avoid this problem is to add automated checks that must pass before a human is alerted. Such steps can significantly improve precision without adversely affecting recall, and Merlion implements all of these features in the user-configurable AggregateAlarms post-processing rule.
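
A sketch of configuring such a rule (assuming the AggregateAlarms class in merlion.post_process.threshold; the parameter names follow Merlion's documentation but may vary between versions):

from merlion.models.defaults import DefaultDetector, DefaultDetectorConfig
from merlion.post_process.threshold import AggregateAlarms

# Alert only when the calibrated score exceeds 4 (a 4-sigma event), and
# suppress further alerts for the next 2 hours to limit false positives
threshold = AggregateAlarms(alm_threshold=4.0, alm_suppress_minutes=120)
model = DefaultDetector(DefaultDetectorConfig(threshold=threshold))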

Ensembles

Because time series and their anomalies are so diverse, no single model can be expected to be optimal for all use cases. As a general rule, a heterogeneous ensemble of models is likely to generalize better than any individual model within it. Since the anomaly scores of all Merlion models can be interpreted as z-scores, an ensemble of anomaly detectors can be constructed by simply reporting the mean calibrated anomaly score returned by the individual models and then applying a threshold. Empirically, such ensembles reliably achieve the strongest or most competitive performance across multiple open-source and internal datasets for both univariate (Table 10) and multivariate (Table 13) anomaly detection.

Evaluation Metrics

A key challenge in designing appropriate evaluation metrics for time series anomaly detection is that anomalies are almost always time windows rather than discrete points. Thus, while it is easy to compute pointwise (PW) precision, recall, and F1 scores for a predicted anomaly label sequence against the ground-truth labels, these metrics do not reflect the quantities a human operator actually cares about.

The point-adjusted (PA) metrics address this problem. If any point in a ground-truth anomaly window is flagged as anomalous, all points in that window are treated as true positives; if the window is not flagged at all, all of its points are counted as false negatives. Anomalies predicted outside any anomaly window are treated as false positives. Precision, recall, and F1 can then be computed from these adjusted true/false positive/negative counts. The drawback of the PA metrics is that they are biased toward rewarding models for detecting long anomalies more than short ones.

An alternative is the revised point-adjusted (RPA) metrics: if any point in a ground-truth anomaly window is flagged as anomalous, a single true positive is registered; if no point in the window is flagged, a single false negative is recorded. Any anomaly predicted outside an anomaly window is treated as a false positive.
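
Since the RPA counting rule is simple, it can be written out directly. A self-contained sketch:

def rpa_counts(anomaly_windows, predicted_times):
    """Revised point-adjusted counts.
    anomaly_windows: list of (start, end) ground-truth anomaly intervals.
    predicted_times: timestamps the detector flagged as anomalous."""
    tp = fn = 0
    inside = set()
    for start, end in anomaly_windows:
        hits = {t for t in predicted_times if start <= t <= end}
        if hits:
            tp += 1  # any hit inside the window counts exactly once
        else:
            fn += 1  # a completely missed window is one false negative
        inside |= hits
    # every flagged point outside all windows is an individual false positive
    fp = sum(1 for t in predicted_times if t not in inside)
    return tp, fn, fp

Precision, recall, and F1 then follow from these counts as usual.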

Experiments

Since Merlion is a library, it offers all of the methods described so far; the performance comparison presented here is therefore a useful reference when choosing among them.

Below, we show benchmark results generated using Merlion with popular baseline models across several time-series datasets.

Univariate Forecasting

We primarily evaluate the models on the M4 benchmark, a well-regarded time series forecasting competition. The benchmark contains 100,000 time series from a variety of domains, including financial, industry, and demographic forecasting, with sampling frequencies ranging from hourly to yearly; Table 2 summarizes the dataset. In addition, we evaluate three internal datasets of cloud KPIs, described in Table 3. To reduce the impact of outliers, we report both the mean and median MAPE for each method.

We compare ARIMA, Prophet (Taylor and Letham, 2017), ETS (error, trend, seasonality), and MSES, implemented using the merlion.models.automl module.

Tables 4 and 5 show the performance of each model on the public and internal datasets, respectively; Table 6 shows the average improvement achieved using the autoML module.

Multivariate Forecasting

We collect public and internal datasets (Table 7), resampling some of them at a specified granularity. For each time series, we train the model on the training split and predict the first univariate as the target sequence. We do not retrain the model; instead, we use the evaluation pipeline to incrementally obtain forecasts for the test split with a rolling window, predicting the time series values for the next 3 timestamps while conditioning the forecast on the previous 21 timestamps. We obtain these 3-step forecasts for all timestamps of the test split and evaluate their quality using sMAPE where possible, and RMSE otherwise.

The multivariate forecasting models evaluated here are based on autoregressive and decision-tree-ensemble algorithms: we compare VAR, the GB Forecaster based on gradient boosting, and the RF Forecaster based on random forests.

Table 8 shows the performance of each model. The GB Forecaster achieves the best results on three of the four datasets, while the VAR model is competitive on only one. For this reason, we consider the GB Forecaster a good "default" model for new users and early exploration.

Univariate Anomaly Detection

We report results for four public datasets and an internal dataset (Table 9). For the internal dataset and the Numenta Anomaly Benchmark, we choose a single (calibrated) detection threshold for all time series and all algorithms. For the AIOps challenge, we use the labeled anomalies in each time series' training split to select the detection threshold that optimizes F1. For the UC Riverside Time Series Anomaly Archive, we select the detection threshold that optimizes F1 on the test split, because the datasets are very diverse (so no single threshold applies to all time series). Table 9 summarizes these datasets and evaluation choices. We use the evaluation pipeline to evaluate each model: after training an initial model on a time series' training split, we retrain the model daily or hourly, without supervision, on the complete data observed up to that point (without adjusting the calibrators or thresholds), and incrementally obtain the predicted anomaly scores for the full test split. This simulates a live deployment scenario. We also consider batch prediction, in which the initially trained model predicts anomaly scores for the entire test split in a single step, without retraining. Note that the UCR dataset does not include timestamps; we treat it as if it were sampled once per minute, although this is an imperfect assumption. For efficiency reasons, we consider only batch prediction and "daily" retraining for this dataset.

We evaluate two classes of models: forecast-based anomaly detectors and statistical methods. For the forecast-based methods, we use ARIMA, AutoETS, and AutoProphet. For the statistical methods, we use Isolation Forest, Random Cut Forest, and Spectral Residual, along with the two simple baselines WindStats and ZMS. We also consider an ensemble of AutoETS, RRCF, and ZMS.

Table 10 shows the revised point-adjusted F1 scores achieved by each model on each dataset. The ensemble of AutoETS, RRCF, and ZMS achieves the best performance on two of the four datasets and the second-best on the others.

Table 11 examines the impact of the retraining schedule on each model, averaged across all datasets. Daily and hourly retraining significantly improves the anomaly detection performance of the forecast-based models and the proposed ensemble.

Multivariate Anomaly Detection

The evaluation settings for multivariate anomaly detection are almost identical to the univariate ones; Table 12 shows the multivariate time series anomaly detection datasets for this experiment. Note that anomaly detection on all of these datasets is treated as a completely unsupervised learning task. Unlike the univariate setting, for efficiency reasons we consider only batch prediction and weekly retraining.

We consider two classes of models: statistical methods and deep learning models. For the statistical methods, we evaluate Isolation Forest and Random Cut Forest. For the deep learning approaches, we evaluate an autoencoder, the Deep Autoencoding Gaussian Mixture Model (DAGMM), an LSTM encoder-decoder, and a variational autoencoder. We also consider an ensemble of Random Cut Forest and the variational autoencoder.

Table 13 shows the revised point-adjusted F1 scores achieved by each model. Although there is no clear winner across all datasets, the proposed ensemble consistently achieves a small gap in F1 relative to the best model on each dataset. The LSTM encoder-decoder achieves a similar average gap, but with greater variability. We therefore consider the ensemble a good "default" model for new users or early exploration, as it ensures reasonable performance across multiple datasets. Finally, as in the univariate case, Table 14 shows that the impact of retraining is ambiguous for all of the multivariate statistical models considered (first block) and the deep learning models (second block).

Summary (Merlion)

We introduced Merlion, an open-source machine learning library for time series, designed to address many of the pain points in today's industry workflows for time series anomaly detection and forecasting. It provides an easily extensible interface and implementation unified across a wide range of models and datasets, an AutoML module that consistently improves the performance of multiple forecasting models, post-processing rules for anomaly detectors that improve interpretability and reduce false-positive rates, and transparent ensembles that reliably achieve superior performance on multiple benchmark datasets. These capabilities are coupled with a flexible pipeline for quantitative evaluation of model performance and a visualization module for more qualitative analysis. Further development and improvement of Merlion are actively underway, according to the company. Planned future work includes adding support for more models, including the latest deep learning models and online learning algorithms, developing a streaming platform to facilitate model deployment in real production environments, and implementing advanced capabilities for multivariate time series analysis.

Darts

Darts: User-Friendly Modern Machine Learning for Time Series
written by Julien Herzen, Francesco Lässig, Samuele Giuliano Piazzetta, Thomas Neuer, Léo Tafti, Guillaume Raille, Tomas Van Pottelbergh, Marek Pasieka, Andrzej Skrodzki, Nicolas Huguenin, Maxime Dumonal, Jan Kościsz, Dennis Bader, Frédérick Gusset, Mounir Benheddi, Camila Williamson, Michal Kosinski, Matej Petrik, Gaël Grosch
(Submitted on 7 Oct 2021 (v1), last revised 8 Oct 2021 (this version, v2))
Comments: Published on arXiv.

Subjects:  Machine Learning (cs.LG); Computation (stat.CO)

code: https://github.com/unit8co/darts

 

Another time-series library, Darts, has been proposed by Unit8 of Switzerland. It offers a new, relatively high-level API that integrates classical and ML-based forecasting models.

Darts has its own TimeSeries data container type, which represents a single time series. TimeSeries is immutable and guarantees that the data represents a well-formed time series with the correct shape, type, and sorted time index. A TimeSeries can be indexed using either a pandas DatetimeIndex or an Int64Index, and it wraps a three-dimensional xarray DataArray whose dimensions are (time, component, sample), where component represents the dimensions of a multivariate series and sample represents the samples of a stochastic time series.

The TimeSeries class provides several methods for converting to and from other common types, such as pandas DataFrames and NumPy arrays. It also supports mathematical operations, indexing, splitting, time differencing, interpolation, mapping functions, embedding timestamps, plotting, computing marginal quantiles, and other useful operations. To guarantee immutability, TimeSeries maintains its own copy of the data, relying heavily on NumPy views to access it efficiently without copying (e.g., when training models). The main advantage of a dedicated type that provides such guarantees is that all Darts models can consume and produce TimeSeries, which yields a consistent API; for example, it is easy to feed the output of one model into another.
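
A minimal sketch of constructing a TimeSeries and converting it back (the CSV file and column names are hypothetical; from_dataframe is part of the TimeSeries API):

import pandas as pd
from darts import TimeSeries

df = pd.read_csv("sales.csv")  # hypothetical file with "date" and "sales" columns
series = TimeSeries.from_dataframe(df, time_col="date", value_cols="sales")

# Conversions to common types are built in
back_to_df = series.pd_dataframe()
as_numpy = series.values()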

Forecasting API

All models in Darts support the same basic fit(series: TimeSeries) -> None and predict(n: int) -> TimeSeries interface for training on a single series and forecasting n time steps after its end. Most models also provide richer functionality; for example, many can be trained on a sequence of time series (using calls such as fit([series1, series2, ...])). Models can have different internal mechanisms (sequence-to-sequence, fixed-length, recurrent, autoregressive, etc.), and this unified API makes it possible to seamlessly compare, backtest, and ensemble different models without knowing their inner workings. At the time of writing, the models implemented in Darts include (V)ARIMA, exponential smoothing, AutoARIMA, Theta, Prophet, FFT-based forecasting, DeepAR-style RNN models, N-BEATS, TCN, and general regression models that can wrap any external tabular regression model (e.g., scikit-learn models). The list is constantly expanding, and external contributions and reference implementations of new models are welcome.
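
In practice, the shared interface looks like this (a sketch using the exponential smoothing model and the AirPassengers dataset bundled in darts.datasets):

from darts.datasets import AirPassengersDataset
from darts.models import ExponentialSmoothing

series = AirPassengersDataset().load()
train, val = series[:-36], series[-36:]

model = ExponentialSmoothing()
model.fit(train)                 # same fit()/predict() API as every other model
forecast = model.predict(n=36)   # 36 steps past the end of the training series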

Meta-Learning

An important part of Darts is its support for meta-learning, i.e., the ability to train a single model on potentially many individual time series. The darts.utils.data module contains various implementations of time-series datasets, which specify how to slice series (and potential covariates) into training samples. Darts chooses a model-specific default slicing logic, which can also be user-defined in a custom way if desired. All neural networks are implemented using PyTorch, with support for GPU training and inference, and rely on a custom sequence implementation that loads data lazily, so large datasets that do not fit in memory can still be used.

Past/future covariates

Some models support covariate series as a way to feed external data that may help forecast the target series. Darts distinguishes future covariates, which are known into the future (e.g., weather forecasts), from past covariates, which are known only up to the present. Models accept past_covariates and/or future_covariates arguments, which makes it explicit whether future values are needed at inference time and reduces the risk of mistakes. Covariate series need not be aligned with the target, as the slicing logic aligns them based on their respective time axes.
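
A sketch of supplying a future covariate (here, a month indicator built with darts' datetime_attribute_timeseries utility; the choice of regression model and its lag arguments are illustrative assumptions):

from darts.datasets import AirPassengersDataset
from darts.models import LinearRegressionModel
from darts.utils.timeseries_generation import datetime_attribute_timeseries

series = AirPassengersDataset().load()
# Month-of-year is known arbitrarily far into the future, so it can serve
# as a future covariate; extend it 12 steps past the end of the series
month = datetime_attribute_timeseries(series, attribute="month", add_length=12)

model = LinearRegressionModel(lags=12, lags_future_covariates=[0])
model.fit(series, future_covariates=month)
forecast = model.predict(n=12, future_covariates=month)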

Probabilistic Forecasts

Some models in Darts (including almost all deep learning models) support probabilistic forecasting. Joint distributions over components and time are represented by storing Monte Carlo samples directly in the TimeSeries object. This representation is very flexible, as it is independent of any parametric form and can capture arbitrary joint distributions. The computational cost of sampling is usually negligible, because samples can be computed efficiently in a vectorized way using batching. Probabilistic deep learning models can fit any likelihood form, as long as the negative log-likelihood loss is differentiable. At the time of writing, Darts provides 16 ready-to-use distributions (both continuous and discrete, univariate and multivariate). Finally, it offers a way to encode prior beliefs about the output distribution by specifying a time-independent prior over the distribution's parameters.

Other Features

Darts includes many additional features, such as transformers and pipelines for data preprocessing, backtesting (all models provide a backtest() method), grid search for model selection, an extensive set of metrics, a dynamic time warping module, and ensemble models (with the option of training the ensembling itself using a regression model). Darts also includes filtering models such as Kalman filters and Gaussian processes, which provide probabilistic modeling of time series. Finally, the darts.datasets module contains a variety of public datasets that can be conveniently loaded as TimeSeries.

Usage Example

The paper shows how to fit a single TCN model with default hyperparameters on two distinct (and entirely unrelated) series and forecast one of them; the network outputs the parameters of a Laplace distribution. The snippet covers the complete forecasting pipeline, from loading and preprocessing the data to plotting the forecast with arbitrary quantiles (shown in the paper's accompanying figure).
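
The snippet itself is not reproduced in this extract, but based on that description, the pipeline might look roughly as follows (a reconstruction, not the paper's exact code; it assumes the AirPassengers and monthly milk production datasets bundled with Darts as the two unrelated series):

from darts.datasets import AirPassengersDataset, MonthlyMilkDataset
from darts.dataprocessing.transformers import Scaler
from darts.models import TCNModel
from darts.utils.likelihood_models import LaplaceLikelihood

# Load and normalize two completely unrelated series
air = Scaler().fit_transform(AirPassengersDataset().load())
milk = Scaler().fit_transform(MonthlyMilkDataset().load())

# A single TCN trained jointly on both series; the network outputs the
# parameters of a Laplace distribution
model = TCNModel(input_chunk_length=24, output_chunk_length=12,
                 n_epochs=200, likelihood=LaplaceLikelihood())
model.fit([air, milk])

# Probabilistic forecast for one of the series, plotted with chosen quantiles
pred = model.predict(n=36, series=air, num_samples=500)
air.plot()
pred.plot(low_quantile=0.01, high_quantile=0.99)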

Summary (Darts)

Darts is an attempt to democratize modern machine learning forecasting approaches and integrate them, along with classical approaches, under a common, user-friendly API. The library is still under active development; future work includes extending the API to anomaly detection and time series classification models, supporting irregularly spaced data (e.g., point processes), and providing a collection of models pre-trained on large datasets, similar to what exists in the computer vision and NLP domains.

Summary

Two libraries have been released that address the many practical considerations involved in time-series data analysis, which is important in many data-driven settings. Their generous feature sets, which support the day-to-day work of data analysis, should help deliver stable performance.

友安 昌幸 (Masayuki Tomoyasu)
JDLA G Certificate 2020#2, E Certificate 2021#1; Japan Society of Data Scientists, DS Certificate; Japan Society for Innovation Fusion, DX Certification Expert; CEO, Amiko Consulting LLC
