{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", " \n", "## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n", "\n", "\n", "Author: [Dmitriy Sergeyev](https://github.com/DmitrySerg), Data Scientist @ Zeptolab, lecturer in the Center of Mathematical Finance in MSU. Translated by: @borowis. This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#
9. Time series analysis in Python
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hi there!\n", "\n", "We continue our open machine learning course with a new article on time series.\n", "\n", "Let's take a look at how to work with time series in Python: what methods and models we can use for prediction, what double and triple exponential smoothing is, what to do if stationarity is not your favorite thing, how to build SARIMA and stay alive, how to make predictions using xgboost... In addition, all of this will be applied to (harsh) real world examples." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Article outline:\n", "1. [Introduction](#Introduction)\n", " - [Forecast quality metrics](#Forecast-quality-metrics)\n", "2. [Move, smoothe, evaluate](#Move,-smoothe,-evaluate)\n", " - Rolling window estimations\n", " - Exponential smoothing, Holt-Winters model\n", " - Time-series cross validation, parameters selection\n", "3. [Econometric approach](#Econometric-approach)\n", " - Stationarity, unit root\n", " - Getting rid of non-stationarity\n", " - SARIMA intuition and model building\n", "4. [Linear (and not quite) models for time series](#Linear-(and-not-quite)-models-for-time-series)\n", " - [Feature extraction](#Feature-extraction)\n", " - [Time series lags](#Time-series-lags)\n", " - [Target encoding](#Target-encoding)\n", " - [Regularization and feature selection](#Regularization-and-feature-selection)\n", " - [Boosting](#Boosting)\n", "5. [Conclusion](#Conclusion)\n", "6. [Demo assignment](#Demo-assignment)\n", "7. [Useful resources](#Useful-resources)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In my day-to-day job, I encounter time-series related tasks almost every day. The most frequent questions asked are the following: what will happen with our metrics in the next day/week/month/etc., how many users will install our app, how much time will they spend online, how many actions will users complete, and so on. We can approach these prediction tasks using different methods depending on the required quality of the prediction, length of the forecast period, and, of course, the time within which we have to choose features and tune parameters to achieve desired results." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction\n", "\n", "We begin with a simple [definition](https://en.wikipedia.org/wiki/Time_series) of time series:\n", "> *Time series* is a series of data points indexed (or listed or graphed) in time order.\n", "\n", "Therefore, the data is organized by relatively deterministic timestamps, and may, compared to random sample data, contain additional information that we can extract." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's import some libraries. First, we will need the [statsmodels](http://statsmodels.sourceforge.net/stable/) library, which has many statistical modeling functions, including time series. For R afficionados who had to move to Python, `statsmodels` will definitely look more familiar since it supports model definitions like 'Wage ~ Age + Education'." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [] }, "outputs": [], "source": [ "import matplotlib.pyplot as plt # plots\n", "import numpy as np # vectors and matrices\n", "import pandas as pd # tables and data manipulations\n", "import seaborn as sns # more plots\n", "\n", "sns.set()\n", "\n", "import warnings \n", "from itertools import product # some useful functions\n", "\n", "import scipy.stats as scs\n", "import statsmodels.api as sm\n", "import statsmodels.formula.api as smf # statistics and econometrics\n", "import statsmodels.tsa.api as smt\n", "from dateutil.relativedelta import \\\n", " relativedelta # working with dates with style\n", "from scipy.optimize import minimize # for function minimization\n", "from tqdm.notebook import tqdm\n", "\n", "warnings.filterwarnings(\"ignore\") # `do not disturbe` mode\n", "\n", "%matplotlib inline\n", "%config InlineBackend.figure_format = 'retina'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As an example, let's look at real mobile game data. Specifically, we will look into ads watched per hour and in-game currency spend per day:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ads = pd.read_csv(\"../../data/ads.csv\", index_col=[\"Time\"], parse_dates=[\"Time\"])\n", "currency = pd.read_csv(\n", " \"../../data/currency.csv\", index_col=[\"Time\"], parse_dates=[\"Time\"]\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(12, 6))\n", "plt.plot(ads.Ads)\n", "plt.title(\"Ads watched (hourly data)\")\n", "plt.grid(True)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(15, 7))\n", "plt.plot(currency.GEMS_GEMS_SPENT)\n", "plt.title(\"In-game currency spent (daily data)\")\n", "plt.grid(True)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Forecast quality metrics\n", "\n", "Before we begin forecasting, let's understand how to measure the quality of our predictions and take a look at the most commonly used metrics." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- [R squared](http://scikit-learn.org/stable/modules/model_evaluation.html#r2-score-the-coefficient-of-determination): coefficient of determination (in econometrics, this can be interpreted as the percentage of variance explained by the model), $(-\\infty, 1]$\n", "\n", "$R^2 = 1 - \\frac{SS_{res}}{SS_{tot}}$ \n", "\n", "```python\n", "sklearn.metrics.r2_score\n", "```\n", "---\n", "- [Mean Absolute Error](http://scikit-learn.org/stable/modules/model_evaluation.html#mean-absolute-error): this is an interpretable metric because it has the same unit of measurment as the initial series, $[0, +\\infty)$\n", "\n", "$MAE = \\frac{\\sum\\limits_{i=1}^{n} |y_i - \\hat{y}_i|}{n}$ \n", "\n", "```python\n", "sklearn.metrics.mean_absolute_error\n", "```\n", "---\n", "- [Median Absolute Error](http://scikit-learn.org/stable/modules/model_evaluation.html#median-absolute-error): again, an interpretable metric that is particularly interesting because it is robust to outliers, $[0, +\\infty)$\n", "\n", "$MedAE = median(|y_1 - \\hat{y}_1|, ... 
, |y_n - \\hat{y}_n|)$\n", "\n", "```python\n", "sklearn.metrics.median_absolute_error\n", "```\n", "---\n", "- [Mean Squared Error](http://scikit-learn.org/stable/modules/model_evaluation.html#mean-squared-error): the most commonly used metric that gives a higher penalty to large errors and vice versa, $[0, +\\infty)$\n", "\n", "$MSE = \\frac{1}{n}\\sum\\limits_{i=1}^{n} (y_i - \\hat{y}_i)^2$\n", "\n", "```python\n", "sklearn.metrics.mean_squared_error\n", "```\n", "---\n", "- [Mean Squared Logarithmic Error](http://scikit-learn.org/stable/modules/model_evaluation.html#mean-squared-logarithmic-error): practically, this is the same as MSE, but we take the logarithm of the series. As a result, we give more weight to small mistakes as well. This is usually used when the data has exponential trends, $[0, +\\infty)$\n", "\n", "$MSLE = \\frac{1}{n}\\sum\\limits_{i=1}^{n} (log(1+y_i) - log(1+\\hat{y}_i))^2$\n", "\n", "```python\n", "sklearn.metrics.mean_squared_log_error\n", "```\n", "---\n", "- Mean Absolute Percentage Error: this is the same as MAE but is computed as a percentage, which is very convenient when you want to explain the quality of the model to management, $[0, +\\infty)$\n", "\n", "$MAPE = \\frac{100}{n}\\sum\\limits_{i=1}^{n} \\frac{|y_i - \\hat{y}_i|}{y_i}$ \n", "\n", "```python\n", "def mean_absolute_percentage_error(y_true, y_pred): \n", " return np.mean(np.abs((y_true - y_pred) / y_true)) * 100\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Importing everything from above\n", "\n", "from sklearn.metrics import (mean_absolute_error, mean_squared_error,\n", " mean_squared_log_error, median_absolute_error,\n", " r2_score)\n", "\n", "\n", "def mean_absolute_percentage_error(y_true, y_pred):\n", " return np.mean(np.abs((y_true - y_pred) / y_true)) * 100" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we know how to measure the quality of the forecasts, let's see what metrics we can use and how to translate the results for the boss. After that, one small detail remains - building the model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Move, smoothe, evaluate" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start with a naive hypothesis: \"tomorrow will be the same as today\". However, instead of a model like $\\hat{y}_{t} = y_{t-1}$ (which is actually a great baseline for any time series prediction problems and sometimes is impossible to beat), we will assume that the future value of our variable depends on the average of its $k$ previous values. Therefore, we will use the **moving average**.\n", "\n", "$\\hat{y}_{t} = \\frac{1}{k} \\displaystyle\\sum^{k}_{n=1} y_{t-n}$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [] }, "outputs": [], "source": [ "def moving_average(series, n):\n", " \"\"\"\n", " Calculate average of last n observations\n", " \"\"\"\n", " return np.average(series[-n:])\n", "\n", "\n", "moving_average(ads, 24) # prediction for the last observed day (past 24 hours)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unfortunately, we cannot make predictions far in the future - in order to get the value for the next step, we need the previous values to be actually observed. But moving average has another use case - smoothing the original time series to identify trends. 
Pandas has an implementation available with [`DataFrame.rolling(window).mean()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html). The wider the window, the smoother the trend. In the case of very noisy data, which is often encountered in finance, this procedure can help detect common patterns." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ] }, "outputs": [], "source": [ "def plotMovingAverage(\n", " series, window, plot_intervals=False, scale=1.96, plot_anomalies=False\n", "):\n", "\n", " \"\"\"\n", " series - dataframe with timeseries\n", " window - rolling window size \n", " plot_intervals - show confidence intervals\n", " plot_anomalies - show anomalies \n", "\n", " \"\"\"\n", " rolling_mean = series.rolling(window=window).mean()\n", "\n", " plt.figure(figsize=(15, 5))\n", " plt.title(\"Moving average\\n window size = {}\".format(window))\n", " plt.plot(rolling_mean, \"g\", label=\"Rolling mean trend\")\n", "\n", " # Plot confidence intervals for smoothed values\n", " if plot_intervals:\n", " mae = mean_absolute_error(series[window:], rolling_mean[window:])\n", " deviation = np.std(series[window:] - rolling_mean[window:])\n", " lower_bond = rolling_mean - (mae + scale * deviation)\n", " upper_bond = rolling_mean + (mae + scale * deviation)\n", " plt.plot(upper_bond, \"r--\", label=\"Upper Bond / Lower Bond\")\n", " plt.plot(lower_bond, \"r--\")\n", "\n", " # Having the intervals, find abnormal values\n", " if plot_anomalies:\n", " anomalies = pd.DataFrame(index=series.index, columns=series.columns)\n", " anomalies[series < lower_bond] = series[series < lower_bond]\n", " anomalies[series > upper_bond] = series[series > upper_bond]\n", " plt.plot(anomalies, \"ro\", markersize=10)\n", "\n", " plt.plot(series[window:], label=\"Actual values\")\n", " plt.legend(loc=\"upper left\")\n", " plt.grid(True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's smooth by the previous 4 hours." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotMovingAverage(ads, 4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try smoothing by the previous 12 hours." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotMovingAverage(ads, 12)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now smoothing with the previous 24 hours, we get the daily trend." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotMovingAverage(ads, 24)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we applied daily smoothing on hourly data, we could clearly see the dynamics of ads watched. During the weekends, the values are higher (more time to play on the weekends) while fewer ads are watched on weekdays." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also plot confidence intervals for our smoothed values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotMovingAverage(ads, 4, plot_intervals=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's create a simple anomaly detection system with the help of moving average. Unfortunately, in this particular dataset, everything is more or less normal, so we will intentionally make one of the values abnormal in our dataframe `ads_anomaly`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ads_anomaly = ads.copy()\n", "ads_anomaly.iloc[-20] = ads_anomaly.iloc[-20] * 0.2 # say we have 80% drop of ads" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see if this simple method can catch the anomaly." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotMovingAverage(ads_anomaly, 4, plot_intervals=True, plot_anomalies=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Neat! What about the second series?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotMovingAverage(\n", " currency, 7, plot_intervals=True, plot_anomalies=True\n", ") # weekly smoothing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Oh no, this was not as great! Here, we can see the downside of our simple approach – it did not capture the monthly seasonality in our data and marked almost all 30-day peaks as anomalies. If you want to avoid false positives, it is best to consider more complex models. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Weighted average** is a simple modification to the moving average. The weights sum up to `1` with larger weights assigned to more recent observations.\n", "\n", "\n", "$\\hat{y}_{t} = \\displaystyle\\sum^{k}_{n=1} \\omega_n y_{t+1-n}$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [] }, "outputs": [], "source": [ "def weighted_average(series, weights):\n", " \"\"\"\n", " Calculate weighted average on the series.\n", " Assuming weights are sorted in descending order\n", " (larger weights are assigned to more recent observations).\n", " \"\"\"\n", " result = 0.0\n", " for n in range(len(weights)):\n", " result += series.iloc[-n - 1] * weights[n]\n", " return float(result)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "weighted_average(ads, [0.6, 0.3, 0.1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# just checking\n", "0.6 * ads.iloc[-1] + 0.3 * ads.iloc[-2] + 0.1 * ads.iloc[-3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exponential smoothing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's see what happens if, instead of weighting the last $k$ values of the time series, we start weighting all available observations while exponentially decreasing the weights as we move further back in time. There exists a formula for **[exponential smoothing](https://en.wikipedia.org/wiki/Exponential_smoothing)** that will help us with this:\n", "\n", "$$\\hat{y}_{t} = \\alpha \\cdot y_t + (1-\\alpha) \\cdot \\hat y_{t-1} $$\n", "\n", "Here the model value is a weighted average between the current true value and the previous model values. The $\\alpha$ weight is called a smoothing factor. It defines how quickly we will \"forget\" the last available true observation. The smaller $\\alpha$ is, the more influence the previous observations have and the smoother the series is.\n", "\n", "Exponentiality is hidden in the recursiveness of the function – we multiply by $(1-\\alpha)$ each time, which already contains a multiplication by $(1-\\alpha)$ of previous model values." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [] }, "outputs": [], "source": [ "def exponential_smoothing(series, alpha):\n", " \"\"\"\n", " series - dataset with timestamps\n", " alpha - float [0.0, 1.0], smoothing parameter\n", " \"\"\"\n", " result = [series[0]] # first value is same as series\n", " for n in range(1, len(series)):\n", " result.append(alpha * series[n] + (1 - alpha) * result[n - 1])\n", " return result" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ] }, "outputs": [], "source": [ "def plotExponentialSmoothing(series, alphas):\n", " \"\"\"\n", " Plots exponential smoothing with different alphas\n", " \n", " series - dataset with timestamps\n", " alphas - list of floats, smoothing parameters\n", " \n", " \"\"\"\n", " with plt.style.context(\"seaborn-white\"):\n", " plt.figure(figsize=(15, 7))\n", " for alpha in alphas:\n", " plt.plot(\n", " exponential_smoothing(series, alpha), label=\"Alpha {}\".format(alpha)\n", " )\n", " plt.plot(series.values, \"c\", label=\"Actual\")\n", " plt.legend(loc=\"best\")\n", " plt.axis(\"tight\")\n", " plt.title(\"Exponential Smoothing\")\n", " plt.grid(True);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotExponentialSmoothing(ads.Ads, [0.3, 0.05])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotExponentialSmoothing(currency.GEMS_GEMS_SPENT, [0.3, 0.05])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Double exponential smoothing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Up to now, the methods that we've discussed have been for a single future point prediction (with some nice smoothing). That is cool, but it is also not enough. Let's extend exponential smoothing so that we can predict two future points (of course, we will also include more smoothing).\n", "\n", "Series decomposition will help us -- we obtain two components: intercept (i.e. level) $\\ell$ and slope (i.e. trend) $b$. We have learnt to predict intercept (or expected series value) with our previous methods; now, we will apply the same exponential smoothing to the trend by assuming that the future direction of the time series changes depends on the previous weighted changes. As a result, we get the following set of functions:\n", "\n", "$$\\ell_x = \\alpha y_x + (1-\\alpha)(\\ell_{x-1} + b_{x-1})$$\n", "\n", "$$b_x = \\beta(\\ell_x - \\ell_{x-1}) + (1-\\beta)b_{x-1}$$\n", "\n", "$$\\hat{y}_{x+1} = \\ell_x + b_x$$\n", "\n", "The first one describes the intercept, which, as before, depends on the current value of the series. The second term is now split into previous values of the level and of the trend. The second function describes the trend, which depends on the level changes at the current step and on the previous value of the trend. In this case, the $\\beta$ coefficient is a weight for exponential smoothing. The final prediction is the sum of the model values of the intercept and trend." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0, 20 ] }, "outputs": [], "source": [ "def double_exponential_smoothing(series, alpha, beta):\n", " \"\"\"\n", " series - dataset with timeseries\n", " alpha - float [0.0, 1.0], smoothing parameter for level\n", " beta - float [0.0, 1.0], smoothing parameter for trend\n", " \"\"\"\n", " # first value is same as series\n", " result = [series[0]]\n", " for n in range(1, len(series) + 1):\n", " if n == 1:\n", " level, trend = series[0], series[1] - series[0]\n", " if n >= len(series): # forecasting\n", " value = result[-1]\n", " else:\n", " value = series[n]\n", " last_level, level = level, alpha * value + (1 - alpha) * (level + trend)\n", " trend = beta * (level - last_level) + (1 - beta) * trend\n", " result.append(level + trend)\n", " return result\n", "\n", "\n", "def plotDoubleExponentialSmoothing(series, alphas, betas):\n", " \"\"\"\n", " Plots double exponential smoothing with different alphas and betas\n", " \n", " series - dataset with timestamps\n", " alphas - list of floats, smoothing parameters for level\n", " betas - list of floats, smoothing parameters for trend\n", " \"\"\"\n", "\n", " with plt.style.context(\"seaborn-white\"):\n", " plt.figure(figsize=(20, 8))\n", " for alpha in alphas:\n", " for beta in betas:\n", " plt.plot(\n", " double_exponential_smoothing(series, alpha, beta),\n", " label=\"Alpha {}, beta {}\".format(alpha, beta),\n", " )\n", " plt.plot(series.values, label=\"Actual\")\n", " plt.legend(loc=\"best\")\n", " plt.axis(\"tight\")\n", " plt.title(\"Double Exponential Smoothing\")\n", " plt.grid(True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotDoubleExponentialSmoothing(ads.Ads, alphas=[0.9, 0.02], betas=[0.9, 0.02])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotDoubleExponentialSmoothing(\n", " currency.GEMS_GEMS_SPENT, alphas=[0.9, 0.02], betas=[0.9, 0.02]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have to tune two parameters: $\\alpha$ and $\\beta$. The former is responsible for the series smoothing around the trend, the latter for the smoothing of the trend itself. The larger the values, the more weight the most recent observations will have and the less smoothed the model series will be. Certain combinations of the parameters may produce strange results, especially if set manually. We'll look into choosing parameters automatically in a bit; before that, let's discuss triple exponential smoothing." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Triple exponential smoothing a.k.a. Holt-Winters\n", "\n", "We've looked at exponential smoothing and double exponential smoothing. This time, we're going into _triple_ exponential smoothing.\n", "\n", "As you could have guessed, the idea is to add a third component - seasonality. This means that we should not use this method if our time series is not expected to have seasonality. Seasonal components in the model will explain repeated variations around intercept and trend, and it will be specified by the length of the season, in other words by the period after which the variations repeat. 
For each observation in the season, there is a separate component; for example, if the length of the season is 7 days (a weekly seasonality), we will have 7 seasonal components, one for each day of the week.\n", "\n", "With this, let's write out a new system of equations:\n", "\n", "$$\\ell_x = \\alpha(y_x - s_{x-L}) + (1-\\alpha)(\\ell_{x-1} + b_{x-1})$$\n", "\n", "$$b_x = \\beta(\\ell_x - \\ell_{x-1}) + (1-\\beta)b_{x-1}$$\n", "\n", "$$s_x = \\gamma(y_x - \\ell_x) + (1-\\gamma)s_{x-L}$$\n", "\n", "$$\\hat{y}_{x+m} = \\ell_x + mb_x + s_{x-L+1+(m-1)modL}$$\n", "\n", "The intercept now depends on the current value of the series minus any corresponding seasonal component. Trend remains unchanged, and the seasonal component depends on the current value of the series minus the intercept and on the previous value of the component. Take into account that the component is smoothed through all the available seasons; for example, if we have a Monday component, then it will only be averaged with other Mondays. You can read more on how averaging works and how the initial approximation of the trend and seasonal components is done [here](http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc435.htm). Now that we have the seasonal component, we can predict not just one or two steps ahead but an arbitrary $m$ future steps ahead, which is very encouraging.\n", "\n", "Below is the code for a triple exponential smoothing model, which is also known by the last names of its creators, Charles Holt and his student Peter Winters. Additionally, the Brutlag method was included in the model to produce confidence intervals:\n", "\n", "$$\\hat y_{max_x}=\\ell_{x−1}+b_{x−1}+s_{x−T}+m⋅d_{t−T}$$\n", "\n", "$$\\hat y_{min_x}=\\ell_{x−1}+b_{x−1}+s_{x−T}-m⋅d_{t−T}$$\n", "\n", "$$d_t=\\gamma∣y_t−\\hat y_t∣+(1−\\gamma)d_{t−T},$$\n", "\n", "where $T$ is the length of the season, $d$ is the predicted deviation. Other parameters were taken from triple exponential smoothing. You can read more about the method and its applicability to anomaly detection in time series [here](http://fedcsis.org/proceedings/2012/pliks/118.pdf)." 
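, "\n", "\n", "Side note: if you only need point forecasts without the Brutlag confidence bands, `statsmodels` provides a built-in Holt-Winters implementation. Here is a hedged sketch on the hourly `ads` data with a 24-hour season; the class we write below remains the workhorse for the rest of this section:\n", "\n", "```python\n", "from statsmodels.tsa.holtwinters import ExponentialSmoothing\n", "\n", "hw = ExponentialSmoothing(\n", "    ads.Ads, trend=\"add\", seasonal=\"add\", seasonal_periods=24\n", ").fit()  # alpha, beta and gamma are optimized automatically\n", "hw.forecast(50).plot(figsize=(15, 7));\n", "```"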
] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ] }, "outputs": [], "source": [ "class HoltWinters:\n", "\n", " \"\"\"\n", " Holt-Winters model with the anomalies detection using Brutlag method\n", " \n", " # series - initial time series\n", " # slen - length of a season\n", " # alpha, beta, gamma - Holt-Winters model coefficients\n", " # n_preds - predictions horizon\n", " # scaling_factor - sets the width of the confidence interval by Brutlag (usually takes values from 2 to 3)\n", " \n", " \"\"\"\n", "\n", " def __init__(self, series, slen, alpha, beta, gamma, n_preds, scaling_factor=1.96):\n", " self.series = series\n", " self.slen = slen\n", " self.alpha = alpha\n", " self.beta = beta\n", " self.gamma = gamma\n", " self.n_preds = n_preds\n", " self.scaling_factor = scaling_factor\n", "\n", " def initial_trend(self):\n", " sum = 0.0\n", " for i in range(self.slen):\n", " sum += float(self.series[i + self.slen] - self.series[i]) / self.slen\n", " return sum / self.slen\n", "\n", " def initial_seasonal_components(self):\n", " seasonals = {}\n", " season_averages = []\n", " n_seasons = int(len(self.series) / self.slen)\n", " # let's calculate season averages\n", " for j in range(n_seasons):\n", " season_averages.append(\n", " sum(self.series[self.slen * j : self.slen * j + self.slen])\n", " / float(self.slen)\n", " )\n", " # let's calculate initial values\n", " for i in range(self.slen):\n", " sum_of_vals_over_avg = 0.0\n", " for j in range(n_seasons):\n", " sum_of_vals_over_avg += (\n", " self.series[self.slen * j + i] - season_averages[j]\n", " )\n", " seasonals[i] = sum_of_vals_over_avg / n_seasons\n", " return seasonals\n", "\n", " def triple_exponential_smoothing(self):\n", " self.result = []\n", " self.Smooth = []\n", " self.Season = []\n", " self.Trend = []\n", " self.PredictedDeviation = []\n", " self.UpperBond = []\n", " self.LowerBond = []\n", "\n", " seasonals = self.initial_seasonal_components()\n", "\n", " for i in range(len(self.series) + self.n_preds):\n", " if i == 0: # components initialization\n", " smooth = self.series[0]\n", " trend = self.initial_trend()\n", " self.result.append(self.series[0])\n", " self.Smooth.append(smooth)\n", " self.Trend.append(trend)\n", " self.Season.append(seasonals[i % self.slen])\n", "\n", " self.PredictedDeviation.append(0)\n", "\n", " self.UpperBond.append(\n", " self.result[0] + self.scaling_factor * self.PredictedDeviation[0]\n", " )\n", "\n", " self.LowerBond.append(\n", " self.result[0] - self.scaling_factor * self.PredictedDeviation[0]\n", " )\n", " continue\n", "\n", " if i >= len(self.series): # predicting\n", " m = i - len(self.series) + 1\n", " self.result.append((smooth + m * trend) + seasonals[i % self.slen])\n", "\n", " # when predicting we increase uncertainty on each step\n", " self.PredictedDeviation.append(self.PredictedDeviation[-1] * 1.01)\n", "\n", " else:\n", " val = self.series[i]\n", " last_smooth, smooth = (\n", " smooth,\n", " self.alpha * (val - seasonals[i % self.slen])\n", " + (1 - self.alpha) * (smooth + trend),\n", " )\n", " trend = self.beta * (smooth - last_smooth) + (1 - self.beta) * trend\n", " seasonals[i % self.slen] = (\n", " self.gamma * (val - smooth)\n", " + (1 - self.gamma) * seasonals[i % self.slen]\n", " )\n", " self.result.append(smooth + trend + seasonals[i % self.slen])\n", "\n", " # Deviation is calculated according to Brutlag algorithm.\n", " self.PredictedDeviation.append(\n", " self.gamma * np.abs(self.series[i] - self.result[i])\n", " + (1 - 
self.gamma) * self.PredictedDeviation[-1]\n", " )\n", "\n", " self.UpperBond.append(\n", " self.result[-1] + self.scaling_factor * self.PredictedDeviation[-1]\n", " )\n", "\n", " self.LowerBond.append(\n", " self.result[-1] - self.scaling_factor * self.PredictedDeviation[-1]\n", " )\n", "\n", " self.Smooth.append(smooth)\n", " self.Trend.append(trend)\n", " self.Season.append(seasonals[i % self.slen])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Time series cross validation\n", "\n", "Before we start building a model, let's first discuss how to estimate model parameters automatically.\n", "\n", "There is nothing unusual here; as always, we have to choose a loss function suitable for the task that will tell us how closely the model approximates the data. Then, using cross-validation, we will evaluate our chosen loss function for the given model parameters, calculate the gradient, adjust the model parameters, and so on, eventually descending to the global minimum.\n", "\n", "You may be asking how to do cross-validation for time series because time series have this temporal structure and one cannot randomly mix values in a fold while preserving this structure. With randomization, all time dependencies between observations will be lost. This is why we will have to use a more tricky approach in optimizing the model parameters. I don't know if there's an official name to this, but on [CrossValidated](https://stats.stackexchange.com/questions/14099/using-k-fold-cross-validation-for-time-series-model-selection), where one can find all answers but the Answer to the Ultimate Question of Life, the Universe, and Everything, the proposed name for this method is \"cross-validation on a rolling basis\".\n", "\n", "The idea is rather simple -- we train our model on a small segment of the time series from the beginning until some $t$, make predictions for the next $t+n$ steps, and calculate an error. Then, we expand our training sample to $t+n$ value, make predictions from $t+n$ until $t+2*n$, and continue moving our test segment of the time series until we hit the last available observation. As a result, we have as many folds as $n$ will fit between the initial training sample and the last observation.\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, knowing how to set up cross-validation, we can find the optimal parameters for the Holt-Winters model. Recall that we have daily seasonality in ads, hence the `slen=24` parameter." 
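, "\n", "\n", "To make the \"rolling basis\" idea concrete, here is a tiny toy illustration of how `TimeSeriesSplit` from `sklearn` (used in the function below) generates the folds; the 10-point array exists purely for the demo:\n", "\n", "```python\n", "import numpy as np\n", "from sklearn.model_selection import TimeSeriesSplit\n", "\n", "toy = np.arange(10)\n", "for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(toy):\n", "    # the training window keeps growing; the test fold always comes strictly after it\n", "    print(\"train:\", train_idx, \"test:\", test_idx)\n", "```"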
] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 2 ] }, "outputs": [], "source": [ "from sklearn.model_selection import \\\n", " TimeSeriesSplit # you have everything done for you\n", "\n", "\n", "def timeseriesCVscore(params, series, loss_function=mean_squared_error, slen=24):\n", " \"\"\"\n", " Returns error on CV \n", " \n", " params - vector of parameters for optimization\n", " series - dataset with timeseries\n", " slen - season length for Holt-Winters model\n", " \"\"\"\n", " # errors array\n", " errors = []\n", "\n", " values = series.values\n", " alpha, beta, gamma = params\n", "\n", " # set the number of folds for cross-validation\n", " tscv = TimeSeriesSplit(n_splits=3)\n", "\n", " # iterating over folds, train model on each, forecast and calculate error\n", " for train, test in tscv.split(values):\n", "\n", " model = HoltWinters(\n", " series=values[train],\n", " slen=slen,\n", " alpha=alpha,\n", " beta=beta,\n", " gamma=gamma,\n", " n_preds=len(test),\n", " )\n", " model.triple_exponential_smoothing()\n", "\n", " predictions = model.result[-len(test) :]\n", " actual = values[test]\n", " error = loss_function(predictions, actual)\n", " errors.append(error)\n", "\n", " return np.mean(np.array(errors))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the Holt-Winters model, as well as in the other models of exponential smoothing, there's a constraint on how large the smoothing parameters can be, each of them ranging from 0 to 1. Therefore, in order to minimize our loss function, we have to choose an algorithm that supports constraints on model parameters. In our case, we will use the truncated Newton conjugate gradient." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "data = ads.Ads[:-20] # leave some data for testing\n", "\n", "# initializing model parameters alpha, beta and gamma\n", "x = [0, 0, 0]\n", "\n", "# Minimizing the loss function\n", "opt = minimize(\n", " timeseriesCVscore,\n", " x0=x,\n", " args=(data, mean_squared_log_error),\n", " method=\"TNC\",\n", " bounds=((0, 1), (0, 1), (0, 1)),\n", ")\n", "\n", "# Take optimal values...\n", "alpha_final, beta_final, gamma_final = opt.x\n", "print(alpha_final, beta_final, gamma_final)\n", "\n", "# ...and train the model with them, forecasting for the next 50 hours\n", "model = HoltWinters(\n", " data,\n", " slen=24,\n", " alpha=alpha_final,\n", " beta=beta_final,\n", " gamma=gamma_final,\n", " n_preds=50,\n", " scaling_factor=3,\n", ")\n", "model.triple_exponential_smoothing()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's add some code to render plots." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ] }, "outputs": [], "source": [ "def plotHoltWinters(series, plot_intervals=False, plot_anomalies=False):\n", " \"\"\"\n", " series - dataset with timeseries\n", " plot_intervals - show confidence intervals\n", " plot_anomalies - show anomalies \n", " \"\"\"\n", "\n", " plt.figure(figsize=(20, 10))\n", " plt.plot(model.result, label=\"Model\")\n", " plt.plot(series.values, label=\"Actual\")\n", " error = mean_absolute_percentage_error(series.values, model.result[: len(series)])\n", " plt.title(\"Mean Absolute Percentage Error: {0:.2f}%\".format(error))\n", "\n", " if plot_anomalies:\n", " anomalies = np.array([np.NaN] * len(series))\n", " anomalies[series.values < model.LowerBond[: len(series)]] = series.values[\n", " series.values < model.LowerBond[: len(series)]\n", " ]\n", " anomalies[series.values > model.UpperBond[: len(series)]] = series.values[\n", " series.values > model.UpperBond[: len(series)]\n", " ]\n", " plt.plot(anomalies, \"o\", markersize=10, label=\"Anomalies\")\n", "\n", " if plot_intervals:\n", " plt.plot(model.UpperBond, \"r--\", alpha=0.5, label=\"Up/Low confidence\")\n", " plt.plot(model.LowerBond, \"r--\", alpha=0.5)\n", " plt.fill_between(\n", " x=range(0, len(model.result)),\n", " y1=model.UpperBond,\n", " y2=model.LowerBond,\n", " alpha=0.2,\n", " color=\"grey\",\n", " )\n", "\n", " plt.vlines(\n", " len(series),\n", " ymin=min(model.LowerBond),\n", " ymax=max(model.UpperBond),\n", " linestyles=\"dashed\",\n", " )\n", " plt.axvspan(len(series) - 20, len(model.result), alpha=0.3, color=\"lightgrey\")\n", " plt.grid(True)\n", " plt.axis(\"tight\")\n", " plt.legend(loc=\"best\", fontsize=13);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotHoltWinters(ads.Ads)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotHoltWinters(ads.Ads, plot_intervals=True, plot_anomalies=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Judging by the plots, our model was able to successfully approximate the initial time series, capturing the daily seasonality, overall downwards trend, and even some anomalies. If you look at the model deviations, you can clearly see that the model reacts quite sharply to changes in the structure of the series but then quickly returns the deviation to the normal values, essentially \"forgetting\" the past. This feature of the model allows us to quickly build anomaly detection systems, even for noisy series data, without spending too much time and money on preparing the data and training the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(25, 5))\n", "plt.plot(model.PredictedDeviation)\n", "plt.grid(True)\n", "plt.axis(\"tight\")\n", "plt.title(\"Brutlag's predicted deviation\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll apply the same algorithm for the second series which, as you may recall, has trend and a 30-day seasonality." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "data = currency.GEMS_GEMS_SPENT[:-50]\n", "slen = 30 # 30-day seasonality\n", "\n", "x = [0, 0, 0]\n", "\n", "opt = minimize(\n", " timeseriesCVscore,\n", " x0=x,\n", " args=(data, mean_absolute_percentage_error, slen),\n", " method=\"TNC\",\n", " bounds=((0, 1), (0, 1), (0, 1)),\n", ")\n", "\n", "alpha_final, beta_final, gamma_final = opt.x\n", "print(alpha_final, beta_final, gamma_final)\n", "\n", "model = HoltWinters(\n", " data,\n", " slen=slen,\n", " alpha=alpha_final,\n", " beta=beta_final,\n", " gamma=gamma_final,\n", " n_preds=100,\n", " scaling_factor=3,\n", ")\n", "model.triple_exponential_smoothing()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotHoltWinters(currency.GEMS_GEMS_SPENT)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks good! The model caught both upwards trend and seasonal spikes and fits the data quite nicely." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotHoltWinters(currency.GEMS_GEMS_SPENT, plot_intervals=True, plot_anomalies=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(20, 5))\n", "plt.plot(model.PredictedDeviation)\n", "plt.grid(True)\n", "plt.axis(\"tight\")\n", "plt.title(\"Brutlag's predicted deviation\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Econometric approach" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Stationarity\n", "\n", "Before we start modeling, we should mention such an important property of time series: [**stationarity**](https://en.wikipedia.org/wiki/Stationary_process).\n", "\n", "If a process is stationary, that means it does not change its statistical properties over time, namely its mean and variance. (The constancy of variance is called [homoscedasticity](https://en.wikipedia.org/wiki/Homoscedasticity))The covariance function does not depend on time; it should only depend on the distance between observations. You can see this visually on the images in the post by [Sean Abu](http://www.seanabu.com/2016/03/22/time-series-seasonal-ARIMA-model-in-python/):\n", "\n", "- The red graph below is not stationary because the mean increases over time.\n", "\n", "\n", "\n", "- We were unlucky with the variance and see the varying spread of values over time\n", "\n", "\n", "\n", "- Finally, the covariance of the i th term and the (i + m) th term should not be a function of time. In the following graph, you will notice that the spread becomes closer as time increases. Hence, the covariance is not constant with time in the right chart.\n", "\n", "\n", "\n", "So why is stationarity so important? Because it is easy to make predictions on a stationary series since we can assume that the future statistical properties will not be different from those currently observed. Most of the time-series models, in one way or the other, try to predict those properties (mean or variance, for example). Furture predictions would be wrong if the original series were not stationary. Unfortunately, most of the time series that we see outside of textbooks are non-stationary, but we can (and should) change this.\n", "\n", "So, in order to combat non-stationarity, we have to know our enemy, so to speak. Let's see how we can detect it. 
We will look at white noise and random walks to learn how to get from one to another for free." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "White noise chart:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "white_noise = np.random.normal(size=1000)\n", "with plt.style.context(\"bmh\"):\n", " plt.figure(figsize=(15, 5))\n", " plt.plot(white_noise)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The process generated by the standard normal distribution is stationary and oscillates around 0 with with deviation of 1. Now, based on this process, we will generate a new one where each subsequent value will depend on the previous one: $x_t = \\rho x_{t-1} + e_t$ " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the code to render the plots." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ] }, "outputs": [], "source": [ "def plotProcess(n_samples=1000, rho=0):\n", " x = w = np.random.normal(size=n_samples)\n", " for t in range(n_samples):\n", " x[t] = rho * x[t - 1] + w[t]\n", "\n", " with plt.style.context(\"bmh\"):\n", " plt.figure(figsize=(10, 3))\n", " plt.plot(x)\n", " plt.title(\n", " \"Rho {}\\n Dickey-Fuller p-value: {}\".format(\n", " rho, round(sm.tsa.stattools.adfuller(x)[1], 3)\n", " )\n", " )\n", "\n", "\n", "for rho in [0, 0.6, 0.9, 1]:\n", " plotProcess(rho=rho)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the first plot, you can see the same stationary white noise as before. On the second plot with $\\rho$ increased to 0.6, wider cycles appeared, but it still appears stationary overall. The third plot deviates even more from the 0 mean but still oscillates about the mean. Finally, with $\\rho=1$, we have a random walk process i.e. a non-stationary time series.\n", "\n", "This happens because, after reaching the critical value, the series $x_t = \\rho x_{t-1} + e_t$ does not return to its mean value. If we subtract $x_{t-1}$ from both sides, we will get $x_t - x_{t-1} = (\\rho - 1) x_{t-1} + e_t$, where the expression on the left is referred to as the first difference. If $\\rho=1$, then the first difference gives us stationary white noise $e_t$. This is the main idea behind the [Dickey-Fuller test](https://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test) for stationarity of time series (testing the presence of a unit root). If we can get a stationary series from a non-stationary series using the first difference, we call those series integrated of order 1. The null hypothesis of the test is that the time series is non-stationary, which was rejected on the first three plots and finally accepted on the last one. We have to say that the first difference is not always enough to get a stationary series as the process might be integrated of order d, d > 1 (and have multiple unit roots). In such cases, the augmented Dickey-Fuller test is used, which checks multiple lags at once.\n", "\n", "We can fight non-stationarity using different approaches: various order differences, trend and seasonality removal, smoothing, and transformations like Box-Cox or logarithmic." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Getting rid of non-stationarity and building SARIMA" ] }, { "cell_type": "markdown", "metadata": { "code_folding": [] }, "source": [ "Let's build an ARIMA model by walking through all the ~~circles of hell~~ stages of making a series stationary." 
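, "\n", "\n", "(A quick aside on the \"transformations like Box-Cox or logarithmic\" mentioned above: for a strictly positive series, a sketch of stabilizing the variance and then differencing could look like the snippet below. Nothing later in this notebook depends on it.)\n", "\n", "```python\n", "import numpy as np\n", "import scipy.stats as scs\n", "\n", "# Box-Cox requires strictly positive values; it returns the transformed series and lambda\n", "gems_boxcox, lmbda = scs.boxcox(currency.GEMS_GEMS_SPENT)\n", "# first difference of the transformed series\n", "gems_stationary = np.diff(gems_boxcox)\n", "```"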
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the code to render plots." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [] }, "outputs": [], "source": [ "def tsplot(y, lags=None, figsize=(12, 7), style=\"bmh\"):\n", " \"\"\"\n", " Plot time series, its ACF and PACF, calculate Dickey–Fuller test\n", " \n", " y - timeseries\n", " lags - how many lags to include in ACF, PACF calculation\n", " \"\"\"\n", " if not isinstance(y, pd.Series):\n", " y = pd.Series(y)\n", "\n", " with plt.style.context(style):\n", " fig = plt.figure(figsize=figsize)\n", " layout = (2, 2)\n", " ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)\n", " acf_ax = plt.subplot2grid(layout, (1, 0))\n", " pacf_ax = plt.subplot2grid(layout, (1, 1))\n", "\n", " ts_ax.plot(y)\n", " p_value = sm.tsa.stattools.adfuller(y)[1]\n", " ts_ax.set_title(\n", " \"Time Series Analysis Plots\\n Dickey-Fuller: p={0:.5f}\".format(p_value)\n", " )\n", " smt.graphics.plot_acf(y, lags=lags, ax=acf_ax)\n", " smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax)\n", " plt.tight_layout()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tsplot(ads.Ads, lags=60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_this outlier on partial autocorrelation plot looks like a statsmodels bug, partial autocorrelation shall be <= 1 like any correlation._\n", "\n", "Surprisingly, the initial series are stationary; the Dickey-Fuller test rejected the null hypothesis that a unit root is present. Actually, we can see this on the plot itself – we do not have a visible trend, so the mean is constant and the variance is pretty much stable. The only thing left is seasonality, which we have to deal with prior to modeling. To do so, let's take the \"seasonal difference\", which means a simple subtraction of the series from itself with a lag that equals the seasonal period." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ads_diff = ads.Ads - ads.Ads.shift(24)\n", "tsplot(ads_diff[24:], lags=60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is now much better with the visible seasonality gone. However, the autocorrelation function still has too many significant lags. To remove them, we'll take first differences, subtracting the series from itself with lag 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ads_diff = ads_diff - ads_diff.shift(1)\n", "tsplot(ads_diff[24 + 1 :], lags=60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perfect! Our series now looks like something undescribable, oscillating around zero. The Dickey-Fuller test indicates that it is stationary, and the number of significant peaks in ACF has dropped. We can finally start modeling!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ARIMA-family Crash-Course\n", "\n", "We will explain this model by building up letter by letter. $SARIMA(p, d, q)(P, D, Q, s)$, Seasonal Autoregression Moving Average model:\n", "\n", "- $AR(p)$ - autoregression model i.e. regression of the time series onto itself. The basic assumption is that the current series values depend on its previous values with some lag (or several lags). The maximum lag in the model is referred to as $p$. 
To determine the initial $p$, you need to look at the PACF plot and find the biggest significant lag after which **most** other lags become insignificant.\n", "- $MA(q)$ - moving average model. Without going into too much detail, this models the error of the time series, again with the assumption that the current error depends on the previous with some lag, which is referred to as $q$. The initial value can be found on the ACF plot with the same logic as before. \n", "\n", "Let's combine our first 4 letters:\n", "\n", "$AR(p) + MA(q) = ARMA(p, q)$\n", "\n", "What we have here is the Autoregressive–moving-average model! If the series is stationary, it can be approximated with these 4 letters. Let's continue.\n", "\n", "- $I(d)$ - order of integration. This is simply the number of nonseasonal differences needed to make the series stationary. In our case, it's just 1 because we used first differences. \n", "\n", "Adding this letter to the four gives us the $ARIMA$ model which can handle non-stationary data with the help of nonseasonal differences. Great, one more letter to go!\n", "\n", "- $S(s)$ - this is responsible for seasonality and equals the season period length of the series\n", "\n", "With this, we have three parameters: $(P, D, Q)$\n", "\n", "- $P$ - order of autoregression for the seasonal component of the model, which can be derived from PACF. But you need to look at the number of significant lags, which are the multiples of the season period length. For example, if the period equals 24 and we see the 24-th and 48-th lags are significant in the PACF, that means the initial $P$ should be 2.\n", "\n", "- $Q$ - similar logic using the ACF plot instead.\n", "\n", "- $D$ - order of seasonal integration. This can be equal to 1 or 0, depending on whether seasonal differeces were applied or not." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we know how to set the initial parameters, let's have a look at the final plot once again and set the parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tsplot(ads_diff[24 + 1 :], lags=60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- $p$ is most probably 4 since it is the last significant lag on the PACF, after which, most others are not significant. \n", "- $d$ equals 1 because we had first differences\n", "- $q$ should be somewhere around 4 as well as seen on the ACF\n", "- $P$ might be 2, since 24-th and 48-th lags are somewhat significant on the PACF\n", "- $D$ again equals 1 because we performed seasonal differentiation\n", "- $Q$ is probably 1. The 24-th lag on ACF is significant while the 48-th is not." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's test various models and see which one is better." 
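, "\n", "\n", "Before running the grid search, here is how a single candidate built from the guesses above translates into a `statsmodels` call, just to make the mapping of letters to arguments explicit. The search below selects the final orders:\n", "\n", "```python\n", "candidate = sm.tsa.statespace.SARIMAX(\n", "    ads.Ads,\n", "    order=(4, 1, 4),  # (p, d, q)\n", "    seasonal_order=(2, 1, 1, 24),  # (P, D, Q, s)\n", ").fit(disp=-1)\n", "print(candidate.aic)\n", "```"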
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# setting initial values and some bounds for them\n", "ps = range(2, 5)\n", "d = 1\n", "qs = range(2, 5)\n", "Ps = range(0, 2)\n", "D = 1\n", "Qs = range(0, 2)\n", "s = 24 # season length is still 24\n", "\n", "# creating list with all the possible combinations of parameters\n", "parameters = product(ps, qs, Ps, Qs)\n", "parameters_list = list(parameters)\n", "len(parameters_list)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def optimizeSARIMA(parameters_list, d, D, s):\n", " \"\"\"\n", " Return dataframe with parameters and corresponding AIC\n", " \n", " parameters_list - list with (p, q, P, Q) tuples\n", " d - integration order in ARIMA model\n", " D - seasonal integration order \n", " s - length of season\n", " \"\"\"\n", "\n", " results = []\n", " best_aic = float(\"inf\")\n", "\n", " for param in tqdm(parameters_list):\n", " # we need try-except because on some combinations model fails to converge\n", " try:\n", " model = sm.tsa.statespace.SARIMAX(\n", " ads.Ads,\n", " order=(param[0], d, param[1]),\n", " seasonal_order=(param[2], D, param[3], s),\n", " ).fit(disp=-1)\n", " except:\n", " continue\n", " aic = model.aic\n", " # saving best model, AIC and parameters\n", " if aic < best_aic:\n", " best_model = model\n", " best_aic = aic\n", " best_param = param\n", " results.append([param, model.aic])\n", "\n", " result_table = pd.DataFrame(results)\n", " result_table.columns = [\"parameters\", \"aic\"]\n", " # sorting in ascending order, the lower AIC is - the better\n", " result_table = result_table.sort_values(by=\"aic\", ascending=True).reset_index(\n", " drop=True\n", " )\n", "\n", " return result_table" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "result_table = optimizeSARIMA(parameters_list, d, D, s)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "result_table.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# set the parameters that give the lowest AIC\n", "p, q, P, Q = result_table.parameters[0]\n", "\n", "best_model = sm.tsa.statespace.SARIMAX(\n", " ads.Ads, order=(p, d, q), seasonal_order=(P, D, Q, s)\n", ").fit(disp=-1)\n", "print(best_model.summary())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's inspect the residuals of the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tsplot(best_model.resid[24 + 1 :], lags=60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is clear that the residuals are stationary, and there are no apparent autocorrelations. Let's make predictions using our model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ] }, "outputs": [], "source": [ "def plotSARIMA(series, model, n_steps):\n", " \"\"\"\n", " Plots model vs predicted values\n", " \n", " series - dataset with timeseries\n", " model - fitted SARIMA model\n", " n_steps - number of steps to predict in the future\n", " \n", " \"\"\"\n", " # adding model values\n", " data = series.copy()\n", " data.columns = [\"actual\"]\n", " data[\"arima_model\"] = model.fittedvalues\n", " # making a shift on s+d steps, because these values were unobserved by the model\n", " # due to the differentiating\n", " data[\"arima_model\"][: s + d] = np.NaN\n", "\n", " # forecasting on n_steps forward\n", " forecast = model.predict(start=data.shape[0], end=data.shape[0] + n_steps)\n", " forecast = data.arima_model.append(forecast)\n", " # calculate error, again having shifted on s+d steps from the beginning\n", " error = mean_absolute_percentage_error(\n", " data[\"actual\"][s + d :], data[\"arima_model\"][s + d :]\n", " )\n", "\n", " plt.figure(figsize=(15, 7))\n", " plt.title(\"Mean Absolute Percentage Error: {0:.2f}%\".format(error))\n", " plt.plot(forecast, color=\"r\", label=\"model\")\n", " plt.axvspan(data.index[-1], forecast.index[-1], alpha=0.5, color=\"lightgrey\")\n", " plt.plot(data.actual, label=\"actual\")\n", " plt.legend()\n", " plt.grid(True);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotSARIMA(ads, best_model, 50)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the end, we got very adequate predictions. Our model was wrong by 4.01% on average, which is very, very good. However, the overall costs of preparing data, making the series stationary, and selecting parameters might not be worth this accuracy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Linear (and not quite) models for time series\n", "\n", "Often, in my job, I have to build models with [*fast, good, cheap*](http://fastgood.cheap) as my only guiding principle. That means that some of these models will never be considered \"production ready\" as they demand too much time for data preparation (as in SARIMA) or require frequent re-training on new data (again, SARIMA) or are difficult to tune (good example - SARIMA). Therefore, it's very often much easier to select a few features from the existing time series and build a simple linear regression model or, say, a random forest. It is good and cheap.\n", "\n", "This approach is not backed by theory and breaks several assumptions (e.g. Gauss-Markov theorem, especially for errors being uncorrelated), but it is very useful in practice and is often used in machine learning competitions.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature extraction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model needs features, and all we have is a 1-dimentional time series. What features can we extract? \n", "* Time series lags\n", "* Window statistics:\n", " - Max/min value of series in a window\n", " - Average/median value in a window\n", " - Window variance\n", " - etc.\n", "* Date and time features:\n", " - Minute of an hour, hour of a day, day of the week, and so on\n", " - Is this day a holiday? Maybe there is a special event? 
Represent that as a boolean feature\n", "* Target encoding\n", "* Forecasts from other models (note that we can lose prediction speed this way)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's run through some of the methods and see what we can extract from our ads time series data.\n", "\n", "## Time series lags\n", "\n", "Shifting the series $n$ steps back, we get a feature column where the current value of the time series is aligned with its value at time $t-n$. If we make a 1-step lag shift and train a model on that feature, the model will be able to forecast 1 step ahead, having observed the current state of the series. Increasing the lag to, say, 6 will allow the model to make predictions 6 steps ahead; however, it will rely on data observed 6 steps back. If something fundamentally changes the series during that unobserved period, the model will not catch these changes and will return forecasts with a large error. Therefore, during the initial lag selection, one has to find a balance between the prediction quality and the length of the forecasting horizon." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Creating a copy of the initial dataframe to make various transformations\n", "data = pd.DataFrame(ads.Ads.copy())\n", "data.columns = [\"y\"]" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Adding the lag of the target variable from 6 steps back up to 24\n", "for i in range(6, 25):\n", "    data[\"lag_{}\".format(i)] = data.y.shift(i)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# take a look at the new dataframe\n", "data.tail(7)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Great, we have generated a dataset here. Why don't we now train a model? 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LinearRegression\n", "from sklearn.model_selection import cross_val_score\n", "\n", "# for time-series cross-validation set 5 folds\n", "tscv = TimeSeriesSplit(n_splits=5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def timeseries_train_test_split(X, y, test_size):\n", " \"\"\"\n", " Perform train-test split with respect to time series structure\n", " \"\"\"\n", "\n", " # get the index after which test set starts\n", " test_index = int(len(X) * (1 - test_size))\n", "\n", " X_train = X.iloc[:test_index]\n", " y_train = y.iloc[:test_index]\n", " X_test = X.iloc[test_index:]\n", " y_test = y.iloc[test_index:]\n", "\n", " return X_train, X_test, y_train, y_test" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = data.dropna().y\n", "X = data.dropna().drop([\"y\"], axis=1)\n", "\n", "# reserve 30% of data for testing\n", "X_train, X_test, y_train, y_test = timeseries_train_test_split(X, y, test_size=0.3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# machine learning in two lines\n", "lr = LinearRegression()\n", "lr.fit(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0, 38 ] }, "outputs": [], "source": [ "def plotModelResults(\n", " model, X_train=X_train, X_test=X_test, plot_intervals=False, plot_anomalies=False\n", "):\n", " \"\"\"\n", " Plots modelled vs fact values, prediction intervals and anomalies\n", " \n", " \"\"\"\n", "\n", " prediction = model.predict(X_test)\n", "\n", " plt.figure(figsize=(15, 7))\n", " plt.plot(prediction, \"g\", label=\"prediction\", linewidth=2.0)\n", " plt.plot(y_test.values, label=\"actual\", linewidth=2.0)\n", "\n", " if plot_intervals:\n", " cv = cross_val_score(\n", " model, X_train, y_train, cv=tscv, scoring=\"neg_mean_absolute_error\"\n", " )\n", " mae = cv.mean() * (-1)\n", " deviation = cv.std()\n", "\n", " scale = 1.96\n", " lower = prediction - (mae + scale * deviation)\n", " upper = prediction + (mae + scale * deviation)\n", "\n", " plt.plot(lower, \"r--\", label=\"upper bond / lower bond\", alpha=0.5)\n", " plt.plot(upper, \"r--\", alpha=0.5)\n", "\n", " if plot_anomalies:\n", " anomalies = np.array([np.NaN] * len(y_test))\n", " anomalies[y_test < lower] = y_test[y_test < lower]\n", " anomalies[y_test > upper] = y_test[y_test > upper]\n", " plt.plot(anomalies, \"o\", markersize=10, label=\"Anomalies\")\n", "\n", " error = mean_absolute_percentage_error(prediction, y_test)\n", " plt.title(\"Mean absolute percentage error {0:.2f}%\".format(error))\n", " plt.legend(loc=\"best\")\n", " plt.tight_layout()\n", " plt.grid(True)\n", "\n", "\n", "def plotCoefficients(model):\n", " \"\"\"\n", " Plots sorted coefficient values of the model\n", " \"\"\"\n", "\n", " coefs = pd.DataFrame(model.coef_, X_train.columns)\n", " coefs.columns = [\"coef\"]\n", " coefs[\"abs\"] = coefs.coef.apply(np.abs)\n", " coefs = coefs.sort_values(by=\"abs\", ascending=False).drop([\"abs\"], axis=1)\n", "\n", " plt.figure(figsize=(15, 7))\n", " coefs.coef.plot(kind=\"bar\")\n", " plt.grid(True, axis=\"y\")\n", " plt.hlines(y=0, xmin=0, xmax=len(coefs), linestyles=\"dashed\");" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotModelResults(lr, plot_intervals=True)\n", "plotCoefficients(lr)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Simple lags and linear regression gave us predictions that are not that far off from SARIMA in terms of quality. There are many unnecessary features, so we'll do feature selection in a little while. For now, let's continue engineering!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll add hour, day of week, and a boolean for `is_weekend`. To do so, we need to transform the current dataframe index into the `datetime` format and extract `hour` and `weekday`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.index = pd.to_datetime(data.index)\n", "data[\"hour\"] = data.index.hour\n", "data[\"weekday\"] = data.index.weekday\n", "data[\"is_weekend\"] = data.weekday.isin([5, 6]) * 1\n", "data.tail()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can visualize the resulting features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(16, 5))\n", "plt.title(\"Encoded features\")\n", "data.hour.plot()\n", "data.weekday.plot()\n", "data.is_weekend.plot()\n", "plt.grid(True);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we now have different scales in our variables, thousands for the lag features and tens for categorical, we need to transform them into same scale for exploring feature importance and, later, regularization. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import StandardScaler\n", "\n", "scaler = StandardScaler()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = data.dropna().y\n", "X = data.dropna().drop([\"y\"], axis=1)\n", "\n", "X_train, X_test, y_train, y_test = timeseries_train_test_split(X, y, test_size=0.3)\n", "\n", "X_train_scaled = scaler.fit_transform(X_train)\n", "X_test_scaled = scaler.transform(X_test)\n", "\n", "lr = LinearRegression()\n", "lr.fit(X_train_scaled, y_train)\n", "\n", "plotModelResults(lr, X_train=X_train_scaled, X_test=X_test_scaled, plot_intervals=True)\n", "plotCoefficients(lr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The test error goes down a little bit. Judging by the coefficients plot, we can say that `weekday` and `is_weekend` are useful features." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Target encoding\n", "I'd like to add another variant for encoding categorical variables: encoding by mean value. If it is undesirable to explode a dataset by using many dummy variables that can lead to the loss of information and if they cannot be used as real values because of the conflicts like \"0 hours < 23 hours\", then it's possible to encode a variable with slightly more interpretable values. The natural idea is to encode with the mean value of the target variable. In our example, every day of the week and every hour of the day can be encoded by the corresponding average number of ads watched during that day or hour. It's very important to make sure that the mean value is calculated over the training set only (or over the current cross-validation fold only) so that the model is not aware of the future." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def code_mean(data, cat_feature, real_feature):\n", " \"\"\"\n", " Returns a dictionary where keys are unique categories of the cat_feature,\n", " and values are means over real_feature\n", " \"\"\"\n", " return dict(data.groupby(cat_feature)[real_feature].mean())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the averages by hour." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "average_hour = code_mean(data, \"hour\", \"y\")\n", "plt.figure(figsize=(7, 5))\n", "plt.title(\"Hour averages\")\n", "pd.DataFrame.from_dict(average_hour, orient=\"index\")[0].plot()\n", "plt.grid(True);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's put all the transformations together in a single function ." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ] }, "outputs": [], "source": [ "def prepareData(series, lag_start, lag_end, test_size, target_encoding=False):\n", " \"\"\"\n", " series: pd.DataFrame\n", " dataframe with timeseries\n", "\n", " lag_start: int\n", " initial step back in time to slice target variable \n", " example - lag_start = 1 means that the model \n", " will see yesterday's values to predict today\n", "\n", " lag_end: int\n", " final step back in time to slice target variable\n", " example - lag_end = 4 means that the model \n", " will see up to 4 days back in time to predict today\n", "\n", " test_size: float\n", " size of the test dataset after train/test split as percentage of dataset\n", "\n", " target_encoding: boolean\n", " if True - add target averages to the dataset\n", " \n", " \"\"\"\n", "\n", " # copy of the initial dataset\n", " data = pd.DataFrame(series.copy())\n", " data.columns = [\"y\"]\n", "\n", " # lags of series\n", " for i in range(lag_start, lag_end):\n", " data[\"lag_{}\".format(i)] = data.y.shift(i)\n", "\n", " # datetime features\n", " data.index = pd.to_datetime(data.index)\n", " data[\"hour\"] = data.index.hour\n", " data[\"weekday\"] = data.index.weekday\n", " data[\"is_weekend\"] = data.weekday.isin([5, 6]) * 1\n", "\n", " if target_encoding:\n", " # calculate averages on train set only\n", " test_index = int(len(data.dropna()) * (1 - test_size))\n", " data[\"weekday_average\"] = list(\n", " map(code_mean(data[:test_index], \"weekday\", \"y\").get, data.weekday)\n", " )\n", " data[\"hour_average\"] = list(\n", " map(code_mean(data[:test_index], \"hour\", \"y\").get, data.hour)\n", " )\n", "\n", " # drop encoded variables\n", " data.drop([\"hour\", \"weekday\"], axis=1, inplace=True)\n", "\n", " # train-test split\n", " y = data.dropna().y\n", " X = data.dropna().drop([\"y\"], axis=1)\n", " X_train, X_test, y_train, y_test = timeseries_train_test_split(\n", " X, y, test_size=test_size\n", " )\n", "\n", " return X_train, X_test, y_train, y_test" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_test, y_train, y_test = prepareData(\n", " ads.Ads, lag_start=6, lag_end=25, test_size=0.3, target_encoding=True\n", ")\n", "\n", "X_train_scaled = scaler.fit_transform(X_train)\n", "X_test_scaled = scaler.transform(X_test)\n", "\n", "lr = LinearRegression()\n", "lr.fit(X_train_scaled, y_train)\n", "\n", "plotModelResults(\n", " lr,\n", " X_train=X_train_scaled,\n", " X_test=X_test_scaled,\n", " plot_intervals=True,\n", " plot_anomalies=True,\n", ")\n", "plotCoefficients(lr)" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see some **overfitting**! `Hour_average` was so great in the training dataset that the model decided to concentrate all of its forces on it. As a result, the quality of prediction dropped. This problem can be solved in a variety of ways; for example, we can calculate the target encoding not for the whole train set, but for some window instead. That way, encodings from the last observed window will most likely better describe the current series state. Alternatively, we can just drop it manually since we are sure that it makes things only worse in this case. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_test, y_train, y_test = prepareData(\n", " ads.Ads, lag_start=6, lag_end=25, test_size=0.3, target_encoding=False\n", ")\n", "\n", "X_train_scaled = scaler.fit_transform(X_train)\n", "X_test_scaled = scaler.transform(X_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Regularization and feature selection \n", "\n", "As we already know, not all features are equally healthy -- some may lead to overfitting while others should be removed. Besides manual inspection, we can apply regularization. Two of the most popular regression models with regularization are Ridge and Lasso regressions. They both add some more constrains to our loss function. \n", "\n", "In the case of Ridge regression, those constraints are the sum of squares of the coefficients multiplied by the regularization coefficient. The bigger the coefficient a feature has, the bigger our loss will be. Hence, we will try to optimize the model while keeping the coefficients fairly low. \n", "\n", "As a result of this $L2$ regularization, we will have higher bias and lower variance, so the model will generalize better (at least that's what we hope will happen).\n", "\n", "The second regression model, Lasso regression, adds to the loss function, not squares, but absolute values of the coefficients. As a result, during the optimization process, coefficients of unimportant features may become zeroes, which allows for automated feature selection. This regularization type is called $L1$. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's make sure that we have features to drop and that the data has highly correlated features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(10, 8))\n", "sns.heatmap(X_train.corr());" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LassoCV, RidgeCV\n", "\n", "ridge = RidgeCV(cv=tscv)\n", "ridge.fit(X_train_scaled, y_train)\n", "\n", "plotModelResults(\n", " ridge,\n", " X_train=X_train_scaled,\n", " X_test=X_test_scaled,\n", " plot_intervals=True,\n", " plot_anomalies=True,\n", ")\n", "plotCoefficients(ridge)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can clearly see some coefficients are getting closer and closer to zero (though they never actually reach it) as their importance in the model drops." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lasso = LassoCV(cv=tscv)\n", "lasso.fit(X_train_scaled, y_train)\n", "\n", "plotModelResults(\n", " lasso,\n", " X_train=X_train_scaled,\n", " X_test=X_test_scaled,\n", " plot_intervals=True,\n", " plot_anomalies=True,\n", ")\n", "plotCoefficients(lasso)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lasso regression turned out to be more conservative; it removed 23-rd lag from the most important features and dropped 5 features completely, which only made the quality of prediction better." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Boosting \n", "Why shouldn't we try XGBoost now?\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 3 ] }, "outputs": [], "source": [ "from xgboost import XGBRegressor\n", "\n", "xgb = XGBRegressor(verbosity=0)\n", "xgb.fit(X_train_scaled, y_train);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plotModelResults(\n", " xgb,\n", " X_train=X_train_scaled,\n", " X_test=X_test_scaled,\n", " plot_intervals=True,\n", " plot_anomalies=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have a winner! This is the smallest error on the test set among all the models we've tried so far. \n", "\n", "But, this victory is decieving, and it might not be the brightest idea to fit `xgboost` as soon as you get your hands on time series data. Generally, tree-based models handle trends in data poorly when compared with linear models. In that case, you would have to detrend your series first or use some tricks to make the magic happen. Ideally, you can make the series stationary and then use XGBoost. For example, you can forecast trend separately with a linear model and then add predictions from `xgboost` to get a final forecast." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Conclusion\n", "\n", "We discussed different time series analysis and prediction methods. Unfortunately, or maybe luckily, there is no one way to solve these kind of problems. Methods developed in the 1960s (and some even in the beginning of the 21st century) are still popular, along with LSTMs and RNNs (not covered in this article). This is partially related to the fact that the prediction task, like any other data-related task, requires creativity in so many aspects and definitely requires research. In spite of the large number of formal quality metrics and approaches to parameters estimation, it is often necessary to try something different for each time series. Last but not least, the balance between quality and cost is important. As a good example, the SARIMA model can produce spectacular results after tuning but can require many hours of ~~tambourine dancing~~ time series manipulation while a simple linear regression model can be built in 10 minutes and can achieve more or less comparable results." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Useful resources\n", "\n", "* The same notebook as an interactive web-based [Kaggle Kernel](https://www.kaggle.com/kashnitsky/topic-9-part-1-time-series-analysis-in-python)\n", "* \"LSTM (Long Short Term Memory) Networks for predicting Time Series\" - a tutorial by Max Sergei Bulaev within mlcourse.ai (full list of tutorials is [here](https://mlcourse.ai/tutorials))\n", "* Main course [site](https://mlcourse.ai), [course repo](https://github.com/Yorko/mlcourse.ai), and YouTube [channel](https://www.youtube.com/watch?v=QKTuw4PNOsU&list=PLVlY_7IJCMJeRfZ68eVfEcu-UcN9BbwiX)\n", "* Course materials as a [Kaggle Dataset](https://www.kaggle.com/kashnitsky/mlcourse)\n", "* Medium [\"story\"](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-9-time-series-analysis-in-python-a270cb05e0b3?source=collection_home---6------2---------------------) based on this notebook\n", "* If you read Russian: an [article](https://habr.com/ru/company/ods/blog/327242/) on Habr.com with ~ the same material. And a [lecture](https://youtu.be/_9lBwXnbOd8) on YouTube\n", "* [Online textbook](https://people.duke.edu/~rnau/411home.htm) for the advanced statistical forecasting course at Duke University - covers various smoothing techniques in detail along with linear and ARIMA models\n", "* [Comparison of ARIMA and Random Forest time series models for prediction of avian influenza H5N1 outbreaks](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-15-276) - one of a few cases where using random forest for time series forecasting is actively defended\n", "* [Time Series Analysis (TSA) in Python - Linear Models to GARCH](http://www.blackarbs.com/blog/time-series-analysis-in-python-linear-models-to-garch/11/1/2016) - applying the ARIMA models family to the task of modeling financial indicators (by Brian Christopher)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }