Notebook
If you are using a Jupyter notebook, the notebook functions may come in handy to evaluate models and view results more easily.
results_vis
visualizes the forecast results from many different Forecaster objects, leveraging Jupyter widgets.
- param f_dict
dictionary of Forecaster objects.
- type f_dict
dict
- param plot_type
one of {"forecast","test"}, default "forecast"; the type of results to visualize.
- type plot_type
str
- param print_attr
optional; the attributes from history to print; passed to the print_attr parameter when plot_type = 'forecast'; ignored when plot_type = 'test'.
- type print_attr
list
- param include_train
optional; whether to include the complete training set in the plot, or how many training-set observations to include; passed to the include_train parameter when plot_type = 'test'; ignored when plot_type = 'forecast'.
- type include_train
bool or int
- returns
None
Example:
from scalecast.Forecaster import Forecaster
from scalecast import GridGenerator
from scalecast.notebook import tune_test_forecast, results_vis
import pandas_datareader as pdr # pip install pandas-datareader
f_dict = {}
models = ('mlr','elasticnet','mlp')
GridGenerator.get_example_grid() # writes the grids.py file to your working directory
for sym in ('UNRATE','GDP'):
    df = pdr.get_data_fred(sym, start = '2000-01-01')
    f = Forecaster(y=df[sym], current_dates=df.index)
    f.generate_future_dates(12) # forecast 12 periods into the future
    f.set_test_length(12) # test models on 12 periods
    f.set_validation_length(4) # validate on the previous 4 periods
    f.add_time_trend()
    f.add_seasonal_regressors('quarter', raw=False, dummy=True)
    tune_test_forecast(f, models) # adds a progress bar that is nice for notebooks
    f_dict[sym] = f
results_vis(f_dict) # toggle through results with Jupyter widgets
tune_test_forecast
tunes, tests, and forecasts a series of models with a progress bar through tqdm.
- param forecaster
the Forecaster object to run the models through.
- type forecaster
Forecaster
- param models
each element must match an element in scalecast.Forecaster._estimators_ (except "combo", which cannot be tuned).
- type models
list-like
- param dynamic_tuning
default False; whether to dynamically tune the forecast (meaning AR terms will be propagated with predicted values); setting this to False means faster performance but gives a weaker indication of how well the forecast will perform multiple periods out; when False, metrics effectively become an average of one-step forecasts.
- type dynamic_tuning
bool
- param dynamic_testing
default True; whether to dynamically test the forecast (meaning AR terms will be propagated with predicted values); setting this to False means faster performance but gives a weaker indication of how well the forecast will perform multiple periods out; when False, test-set metrics effectively become an average of one-step forecasts.
- type dynamic_testing
bool
- param summary_stats
default False; whether to save summary stats for the models that offer them.
- type summary_stats
bool
- param feature_importance
default False; whether to save permutation feature importance information for the models that offer it.
- type feature_importance
bool
- returns
None
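The dynamic-versus-one-step distinction behind dynamic_tuning and dynamic_testing can be sketched without scalecast at all. The toy AR(1) model below is purely illustrative (none of these names come from the library): dynamic evaluation feeds each AR term the model's own previous prediction, so errors compound, while one-step evaluation restarts each step from the true previous value.

```python
# Toy AR(1) model: prediction = 0.9 * previous value (illustrative only, not scalecast code).
coef = 0.9
series = [10.0, 9.5, 9.3, 9.0]  # last training value followed by 3 test-set values

# dynamic evaluation: each AR term is the model's own previous prediction
dynamic_preds = []
prev = series[0]
for _ in range(3):
    prev = coef * prev
    dynamic_preds.append(prev)

# one-step evaluation: each AR term is the true previous value
one_step_preds = [coef * y for y in series[:3]]

print([round(p, 2) for p in dynamic_preds])   # dynamic forecasts drift further from the series
print([round(p, 2) for p in one_step_preds])  # one-step forecasts stay anchored to the actuals
```

This is why setting dynamic_testing=False makes test-set metrics look like an average of one-step forecasts: each prediction is anchored to an observed value rather than to the accumulating forecast.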
from scalecast.notebook import tune_test_forecast
models = ('arima','mlr','mlp')
tune_test_forecast(f, models) # f is an already-prepared Forecaster object; displays a progress bar through tqdm