abacusai.api_class.model

Module Contents

Classes

TrainingConfig

Helper class that provides a standard way to create an ABC using inheritance.

ForecastingTrainingConfig

Training config for the FORECASTING problem type

_TrainingConfigFactory

Helper class that provides a standard way to create an ABC using inheritance.

class abacusai.api_class.model.TrainingConfig

Bases: abacusai.api_class.abstract.ApiClass

Helper class that provides a standard way to create an ABC using inheritance.

_upper_snake_case_keys: bool
_support_kwargs: bool
kwargs: dict
problem_type: abacusai.api_class.enums.ProblemType
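
TrainingConfig itself is an abstract base; concrete subclasses such as ForecastingTrainingConfig below set the problem_type field for you. A minimal sketch of that contract (the to_dict() call is an assumption about the helpers inherited from ApiClass):

    # Minimal sketch, assuming the package layout shown on this page; to_dict()
    # is assumed to come from the ApiClass base class.
    from abacusai.api_class.enums import ProblemType
    from abacusai.api_class.model import ForecastingTrainingConfig

    config = ForecastingTrainingConfig(prediction_length=14)

    # Concrete subclasses are expected to tag themselves with their problem type.
    assert config.problem_type == ProblemType.FORECASTING
    print(config.to_dict())  # plain-dict view of the config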
class abacusai.api_class.model.ForecastingTrainingConfig

Bases: TrainingConfig

Training config for the FORECASTING problem type.

Parameters:

problem_type (ProblemType): FORECASTING
prediction_length (int): How many timesteps in the future to predict.
objective (ForecastingObjective): Ranking scheme used to select the final best model.
sort_objective (ForecastingObjective): Ranking scheme used to sort models on the metrics page.
forecast_frequency (ForecastingFrequency): Forecast frequency.
probability_quantiles (list[float]): Prediction quantiles.
no_validation_set (bool): Do not generate a validation set; the test set will be used instead.
force_prediction_length (int): Force the length of the test window to be the same as the prediction length.
filter_items (bool): Filter items with small history and volume.
enable_cold_start (bool): Enable cold start forecasting by training/predicting for zero-history items.
enable_multiple_backtests (bool): Whether to enable multiple backtesting or not.
total_backtesting_windows (int): Total backtesting windows to use for the training.
backtest_window_step_size (int): Use this step size to shift backtesting windows for model training.
full_data_retraining (bool): Train models separately with all the data.
type_of_split (ForecastingDataSplitType): Type of data splitting into train/test.
test_by_item (bool): Partition train/test data by item rather than time if true.
test_start (datetime): Limit training data to dates before the given test start.
test_split (int): Percent of the dataset to use as test data. Supported values range from 5% to 20% of the dataset.
loss_function (ForecastingLossFunction): Loss function for training the neural network.
underprediction_weight (float): Weight for underpredictions.
disable_networks_without_analytic_quantiles (bool): Disable neural networks whose quantile functions do not have analytic expressions (e.g., mixture models).
initial_learning_rate (float): Initial learning rate.
l2_regularization_factor (float): L2 regularization factor.
dropout_rate (int): Dropout percentage rate.
recurrent_layers (int): Number of recurrent layers to stack in the network.
recurrent_units (int): Number of units in each recurrent layer.
convolutional_layers (int): Number of convolutional layers to stack on top of the recurrent layers in the network.
convolution_filters (int): Number of filters in each convolution.
local_scaling_mode (ForecastingLocalScaling): Options to make NN inputs stationary in high dynamic range datasets.
zero_predictor (bool): Include a subnetwork to classify points where the target equals zero.
skip_missing (bool): Make the RNN ignore missing entries rather than processing them.
batch_size (ForecastingBatchSize): Batch size.
batch_renormalization (bool): Enable batch renormalization between layers.
history_length (int): While training, how much history to consider.
prediction_step_size (int): Number of future periods to include in the objective for each training sample.
training_point_overlap (float): Amount of overlap to allow between training samples.
max_scale_context (int): Maximum context to use for local scaling.
quantiles_extension_method (ForecastingQuanitlesExtensionMethod): Quantile extension method.
number_of_samples (int): Number of samples for ancestral simulation.
symmetrize_quantiles (bool): Force symmetric quantiles (as in a Gaussian distribution).
use_log_transforms (bool): Apply logarithmic transformations to input data.
smooth_history (float): Smooth (low-pass filter) the timeseries.
prediction_offset (int): Offset for prediction.
skip_local_scale_target (bool): Skip using per training/prediction window target scaling.
timeseries_weight_column (str): If set, the values in this column from the timeseries data are used to assign time-dependent item weights during training and evaluation.
item_attributes_weight_column (str): If set, the values in this column from the item attributes data are used to assign weights to items during training and evaluation.
use_timeseries_weights_in_objective (bool): If True, include weights from the column set as "TIMESERIES WEIGHT COLUMN" in the objective functions.
use_item_weights_in_objective (bool): If True, include weights from the column set as "ITEM ATTRIBUTES WEIGHT COLUMN" in the objective functions.
skip_timeseries_weight_scaling (bool): If True, avoid normalizing the weights.
timeseries_loss_weight_column (str): Use the values in this column to weight the loss while training.
use_item_id (bool): Include a feature to indicate the item being forecast.
use_all_item_totals (bool): Include the total target across items as an input.
handle_zeros_as_missing (bool): If True, handle zero values in demand as missing data.
datetime_holiday_calendars (list[HolidayCalendars]): Holiday calendars to augment training with.
fill_missing_values (list[dict]): Strategy for filling in missing values.
enable_clustering (bool): Enable clustering in forecasting.
data_split_feature_group_table_name (str): Table name of the feature group to which training data with the fold column is exported.
custom_loss_functions (list[str]): Registered custom losses available for selection.
custom_metrics (list[str]): Registered custom metrics available for selection.

problem_type: abacusai.api_class.enums.ProblemType
prediction_length: int
objective: abacusai.api_class.enums.ForecastingObjective
sort_objective: abacusai.api_class.enums.ForecastingObjective
forecast_frequency: abacusai.api_class.enums.ForecastingFrequency
probability_quantiles: List[float]
no_validation_set: bool
force_prediction_length: bool
filter_items: bool
enable_cold_start: bool
enable_multiple_backtests: bool
total_backtesting_windows: int
backtest_window_step_size: int
full_data_retraining: bool
type_of_split: abacusai.api_class.enums.ForecastingDataSplitType
test_by_item: bool
test_start: datetime.datetime
test_split: int
loss_function: abacusai.api_class.enums.ForecastingLossFunction
underprediction_weight: float
disable_networks_without_analytic_quantiles: bool
initial_learning_rate: float
l2_regularization_factor: float
dropout_rate: int
recurrent_layers: int
recurrent_units: int
convolutional_layers: int
convolution_filters: int
local_scaling_mode: abacusai.api_class.enums.ForecastingLocalScaling
zero_predictor: bool
skip_missing: bool
batch_size: abacusai.api_class.enums.BatchSize
batch_renormalization: bool
history_length: int
prediction_step_size: int
training_point_overlap: float
max_scale_context: int
quantiles_extension_method: abacusai.api_class.enums.ForecastingQuanitlesExtensionMethod
number_of_samples: int
symmetrize_quantiles: bool
use_log_transforms: bool
smooth_history: float
prediction_offset: int
skip_local_scale_target: bool
timeseries_weight_column: str
item_attributes_weight_column: str
use_timeseries_weights_in_objective: bool
use_item_weights_in_objective: bool
skip_timeseries_weight_scaling: bool
timeseries_loss_weight_column: str
use_item_id: bool
use_all_item_totals: bool
handle_zeros_as_missing: bool
datetime_holiday_calendars: List[abacusai.api_class.enums.HolidayCalendars]
fill_missing_values: List[dict]
enable_clustering: bool
data_split_feature_group_table_name: str
custom_loss_functions: List[str]
custom_metrics: List[str]
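
A hedged construction sketch for ForecastingTrainingConfig using a handful of the fields documented above; the values are illustrative, any field left unset keeps the library default, and to_dict() is assumed to be inherited from ApiClass:

    from abacusai.api_class.model import ForecastingTrainingConfig

    # Illustrative values only; every field is optional and falls back to the
    # library default when omitted.
    forecasting_config = ForecastingTrainingConfig(
        prediction_length=14,                   # forecast 14 timesteps ahead
        probability_quantiles=[0.1, 0.5, 0.9],  # prediction quantiles to emit
        test_split=10,                          # hold out 10% of the data for testing
        enable_cold_start=True,                 # train/predict for zero-history items
    )

    # Plain-dict form of the config, convenient for inspection or serialization
    # (to_dict() assumed from the ApiClass base).
    print(forecasting_config.to_dict())

Fields typed as enums (objective, forecast_frequency, loss_function, local_scaling_mode, and so on) expect members of the corresponding classes in abacusai.api_class.enums.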
class abacusai.api_class.model._TrainingConfigFactory

Bases: abacusai.api_class.abstract._ApiClassFactory

Helper class that provides a standard way to create an ABC using inheritance.

config_abstract_class
config_class_key = 'problem_type'
config_class_map
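
config_class_key and config_class_map drive dispatch: the value stored under 'problem_type' selects which TrainingConfig subclass to instantiate. The snippet below is a standalone sketch of that pattern, not the library's internal code; _resolve_config and the contents of the map are assumptions made for the example:

    from abacusai.api_class.enums import ProblemType
    from abacusai.api_class.model import ForecastingTrainingConfig, TrainingConfig

    # Hypothetical dispatch map mirroring config_class_key/config_class_map.
    _CONFIG_CLASS_MAP = {
        ProblemType.FORECASTING: ForecastingTrainingConfig,
    }

    def _resolve_config(raw: dict) -> TrainingConfig:
        # Illustrative helper: assumes ProblemType values are the upper-case
        # strings used in API payloads.
        problem_type = ProblemType(raw.pop('problem_type'))
        return _CONFIG_CLASS_MAP[problem_type](**raw)

    config = _resolve_config({'problem_type': 'FORECASTING', 'prediction_length': 7})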