{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Time-related feature engineering\n\nThis notebook introduces different strategies to leverage time-related features\nfor a bike sharing demand regression task that is highly dependent on business\ncycles (days, weeks, months) and yearly season cycles.\n\nIn the process, we introduce how to perform periodic feature engineering using\nthe :class:`sklearn.preprocessing.SplineTransformer` class and its\n`extrapolation=\"periodic\"` option.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data exploration on the Bike Sharing Demand dataset\n\nWe start by loading the data from the OpenML repository.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import fetch_openml\n\nbike_sharing = fetch_openml(\"Bike_Sharing_Demand\", version=2, as_frame=True)\ndf = bike_sharing.frame" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get a quick understanding of the periodic patterns of the data, let us\nhave a look at the average demand per hour during a week.\n\nNote that the week starts on a Sunday, during the weekend. We can clearly\ndistinguish the commute patterns in the morning and evenings of the work days\nand the leisure use of the bikes on the weekends with a more spread peak\ndemand around the middle of the days:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(12, 4))\naverage_week_demand = df.groupby([\"weekday\", \"hour\"])[\"count\"].mean()\naverage_week_demand.plot(ax=ax)\n_ = ax.set(\n title=\"Average hourly bike demand during the week\",\n xticks=[i * 24 for i in range(7)],\n xticklabels=[\"Sun\", \"Mon\", \"Tue\", \"Wed\", \"Thu\", \"Fri\", \"Sat\"],\n xlabel=\"Time of the week\",\n ylabel=\"Number of bike rentals\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The target of the prediction problem is the absolute count of bike rentals on\na hourly basis:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "df[\"count\"].max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us rescale the target variable (number of hourly bike rentals) to predict\na relative demand so that the mean absolute error is more easily interpreted\nas a fraction of the maximum demand.\n\n

<div class=\"alert alert-info\"><h4>Note</h4><p>The fit method of the models used in this notebook all minimize the\n mean squared error to estimate the conditional mean.\n The absolute error, however, would estimate the conditional median.\n\n Nevertheless, when reporting performance measures on the test set in\n the discussion, we choose to focus on the mean absolute error instead\n of the (root) mean squared error because it is more intuitive to\n interpret. Note, however, that in this study the best models for one\n metric are also the best ones in terms of the other metric.</p></div>
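\n\nAs a small illustration of the note above (a hypothetical variant that is not used in the rest of this notebook), the gradient boosting model introduced later could directly target the conditional median by switching its loss to the absolute error:\n\n```python\nfrom sklearn.ensemble import HistGradientBoostingRegressor\n\n# Minimizing the absolute error estimates the conditional median instead of\n# the conditional mean (illustrative variant, not evaluated in this notebook).\nmedian_gbrt = HistGradientBoostingRegressor(loss=\"absolute_error\", random_state=42)\n```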

\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "y = df[\"count\"] / df[\"count\"].max()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(12, 4))\ny.hist(bins=30, ax=ax)\n_ = ax.set(\n xlabel=\"Fraction of rented fleet demand\",\n ylabel=\"Number of hours\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The input feature data frame is a time annotated hourly log of variables\ndescribing the weather conditions. It includes both numerical and categorical\nvariables. Note that the time information has already been expanded into\nseveral complementary columns.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X = df.drop(\"count\", axis=\"columns\")\nX" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

<div class=\"alert alert-info\"><h4>Note</h4><p>If the time information was only present as a date or datetime column, we\n could have expanded it into hour-in-the-day, day-in-the-week,\n day-in-the-month, month-in-the-year using pandas:\n https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-date-components</p></div>
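\n\nFor instance, assuming a hypothetical raw dataframe with a single `\"datetime\"` column (which is not how this dataset is shipped), such an expansion could look like:\n\n```python\nimport pandas as pd\n\n# Hypothetical raw data with a single datetime column.\ndf_raw = pd.DataFrame(\n    {\"datetime\": pd.date_range(\"2011-01-01\", periods=48, freq=\"h\")}\n)\n\n# Expand the datetime into complementary time components.\ndf_raw[\"hour\"] = df_raw[\"datetime\"].dt.hour\ndf_raw[\"weekday\"] = df_raw[\"datetime\"].dt.dayofweek\ndf_raw[\"day\"] = df_raw[\"datetime\"].dt.day\ndf_raw[\"month\"] = df_raw[\"datetime\"].dt.month\n```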

\n\nWe now introspect the distribution of the categorical variables, starting\nwith `\"weather\"`:\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X[\"weather\"].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since there are only 3 `\"heavy_rain\"` events, we cannot use this category to\ntrain machine learning models with cross validation. Instead, we simplify the\nrepresentation by collapsing those into the `\"rain\"` category.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X[\"weather\"] = (\n X[\"weather\"]\n .astype(object)\n .replace(to_replace=\"heavy_rain\", value=\"rain\")\n .astype(\"category\")\n)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X[\"weather\"].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As expected, the `\"season\"` variable is well balanced:\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X[\"season\"].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Time-based cross-validation\n\nSince the dataset is a time-ordered event log (hourly demand), we will use a\ntime-sensitive cross-validation splitter to evaluate our demand forecasting\nmodel as realistically as possible. We use a gap of 2 days between the train\nand test side of the splits. We also limit the training set size to make the\nperformance of the CV folds more stable.\n\n1000 test datapoints should be enough to quantify the performance of the\nmodel. This represents a bit less than a month and a half of contiguous test\ndata:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.model_selection import TimeSeriesSplit\n\nts_cv = TimeSeriesSplit(\n n_splits=5,\n gap=48,\n max_train_size=10000,\n test_size=1000,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us manually inspect the various splits to check that the\n`TimeSeriesSplit` works as we expect, starting with the first split:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "all_splits = list(ts_cv.split(X, y))\ntrain_0, test_0 = all_splits[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X.iloc[test_0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X.iloc[train_0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now inspect the last split:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "train_4, test_4 = all_splits[4]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X.iloc[test_4]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X.iloc[train_4]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All is well. 
We are now ready to do some predictive modeling!\n\n## Gradient Boosting\n\nGradient Boosting Regression with decision trees is often flexible enough to\nefficiently handle heterogeneous tabular data with a mix of categorical and\nnumerical features as long as the number of samples is large enough.\n\nHere, we use the modern\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor` with native support\nfor categorical features. Therefore, we only need to set\n`categorical_features=\"from_dtype\"` such that features with categorical dtype\nare considered categorical features. For reference, we extract the categorical\nfeatures from the dataframe based on the dtype. The internal trees use a dedicated\ntree splitting rule for these features.\n\nThe numerical variables need no preprocessing and, for the sake of simplicity,\nwe only try the default hyper-parameters for this model:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.compose import ColumnTransformer\nfrom sklearn.ensemble import HistGradientBoostingRegressor\nfrom sklearn.model_selection import cross_validate\nfrom sklearn.pipeline import make_pipeline\n\ngbrt = HistGradientBoostingRegressor(categorical_features=\"from_dtype\", random_state=42)\ncategorical_columns = X.columns[X.dtypes == \"category\"]\nprint(\"Categorical features:\", categorical_columns.tolist())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's evaluate our gradient boosting model with the mean absolute error of the\nrelative demand averaged across our 5 time-based cross-validation splits:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\n\ndef evaluate(model, X, y, cv, model_prop=None, model_step=None):\n cv_results = cross_validate(\n model,\n X,\n y,\n cv=cv,\n scoring=[\"neg_mean_absolute_error\", \"neg_root_mean_squared_error\"],\n return_estimator=model_prop is not None,\n )\n if model_prop is not None:\n if model_step is not None:\n values = [\n getattr(m[model_step], model_prop) for m in cv_results[\"estimator\"]\n ]\n else:\n values = [getattr(m, model_prop) for m in cv_results[\"estimator\"]]\n print(f\"Mean model.{model_prop} = {np.mean(values)}\")\n mae = -cv_results[\"test_neg_mean_absolute_error\"]\n rmse = -cv_results[\"test_neg_root_mean_squared_error\"]\n print(\n f\"Mean Absolute Error: {mae.mean():.3f} +/- {mae.std():.3f}\\n\"\n f\"Root Mean Squared Error: {rmse.mean():.3f} +/- {rmse.std():.3f}\"\n )\n\n\nevaluate(gbrt, X, y, cv=ts_cv, model_prop=\"n_iter_\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that we set `max_iter` large enough such that early stopping took place.\n\nThis model has an average error around 4 to 5% of the maximum demand. This is\nquite good for a first trial without any hyper-parameter tuning! We just had\nto make the categorical variables explicit. Note that the time related\nfeatures are passed as is, i.e. without processing them. 
But this is not much\nof a problem for tree-based models as they can learn a non-monotonic\nrelationship between ordinal input features and the target.\n\nThis is not the case for linear regression models as we will see in the\nfollowing.\n\n## Naive linear regression\n\nAs usual for linear models, categorical variables need to be one-hot encoded.\nFor consistency, we scale the numerical features to the same 0-1 range using\n:class:`~sklearn.preprocessing.MinMaxScaler`, although in this case it does not\nimpact the results much because they are already on comparable scales:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.linear_model import RidgeCV\nfrom sklearn.preprocessing import MinMaxScaler, OneHotEncoder\n\none_hot_encoder = OneHotEncoder(handle_unknown=\"ignore\", sparse_output=False)\nalphas = np.logspace(-6, 6, 25)\nnaive_linear_pipeline = make_pipeline(\n ColumnTransformer(\n transformers=[\n (\"categorical\", one_hot_encoder, categorical_columns),\n ],\n remainder=MinMaxScaler(),\n ),\n RidgeCV(alphas=alphas),\n)\n\n\nevaluate(\n naive_linear_pipeline, X, y, cv=ts_cv, model_prop=\"alpha_\", model_step=\"ridgecv\"\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is affirmative to see that the selected `alpha_` is in our specified\nrange.\n\nThe performance is not good: the average error is around 14% of the maximum\ndemand. This is more than three times higher than the average error of the\ngradient boosting model. We can suspect that the naive original encoding\n(merely min-max scaled) of the periodic time-related features might prevent\nthe linear regression model to properly leverage the time information: linear\nregression does not automatically model non-monotonic relationships between\nthe input features and the target. 
Non-linear terms have to be engineered in\nthe input.\n\nFor example, the raw numerical encoding of the `\"hour\"` feature prevents the\nlinear model from recognizing that an increase of hour in the morning from 6\nto 8 should have a strong positive impact on the number of bike rentals while\nan increase of similar magnitude in the evening from 18 to 20 should have a\nstrong negative impact on the predicted number of bike rentals.\n\n## Time-steps as categories\n\nSince the time features are encoded in a discrete manner using integers (24\nunique values in the \"hours\" feature), we could decide to treat those as\ncategorical variables using a one-hot encoding and thereby ignore any\nassumption implied by the ordering of the hour values.\n\nUsing one-hot encoding for the time features gives the linear model a lot\nmore flexibility as we introduce one additional feature per discrete time\nlevel.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "one_hot_linear_pipeline = make_pipeline(\n ColumnTransformer(\n transformers=[\n (\"categorical\", one_hot_encoder, categorical_columns),\n (\"one_hot_time\", one_hot_encoder, [\"hour\", \"weekday\", \"month\"]),\n ],\n remainder=MinMaxScaler(),\n ),\n RidgeCV(alphas=alphas),\n)\n\nevaluate(one_hot_linear_pipeline, X, y, cv=ts_cv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The average error rate of this model is 10% which is much better than using\nthe original (ordinal) encoding of the time feature, confirming our intuition\nthat the linear regression model benefits from the added flexibility to not\ntreat time progression in a monotonic manner.\n\nHowever, this introduces a very large number of new features. If the time of\nthe day was represented in minutes since the start of the day instead of\nhours, one-hot encoding would have introduced 1440 features instead of 24.\nThis could cause some significant overfitting. To avoid this we could use\n:func:`sklearn.preprocessing.KBinsDiscretizer` instead to re-bin the number\nof levels of fine-grained ordinal or numerical variables while still\nbenefitting from the non-monotonic expressivity advantages of one-hot\nencoding.\n\nFinally, we also observe that one-hot encoding completely ignores the\nordering of the hour levels while this could be an interesting inductive bias\nto preserve to some level. 
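\n\nTo make the :class:`~sklearn.preprocessing.KBinsDiscretizer` suggestion above concrete, here is a minimal, hypothetical sketch (using a synthetic \"minute of the day\" feature that does not exist in this dataset) of how 1440 fine-grained levels could be re-binned into 24 one-hot encoded bins:\n\n```python\nimport numpy as np\n\nfrom sklearn.preprocessing import KBinsDiscretizer\n\n# Synthetic fine-grained time feature: minute of the day (0 to 1439).\nminute_of_day = np.arange(24 * 60).reshape(-1, 1)\n\n# Re-bin into 24 equal-width bins and one-hot encode the result.\nbinned_hours = KBinsDiscretizer(\n    n_bins=24, encode=\"onehot-dense\", strategy=\"uniform\"\n).fit_transform(minute_of_day)\nbinned_hours.shape  # (1440, 24): 24 columns instead of 1440\n```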
In the following we try to explore smooth,\nnon-monotonic encoding that locally preserves the relative ordering of time\nfeatures.\n\n## Trigonometric features\n\nAs a first attempt, we can try to encode each of those periodic features\nusing a sine and cosine transformation with the matching period.\n\nEach ordinal time feature is transformed into 2 features that together encode\nequivalent information in a non-monotonic way, and more importantly without\nany jump between the first and the last value of the periodic range.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.preprocessing import FunctionTransformer\n\n\ndef sin_transformer(period):\n return FunctionTransformer(lambda x: np.sin(x / period * 2 * np.pi))\n\n\ndef cos_transformer(period):\n return FunctionTransformer(lambda x: np.cos(x / period * 2 * np.pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us visualize the effect of this feature expansion on some synthetic hour\ndata with a bit of extrapolation beyond hour=23:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import pandas as pd\n\nhour_df = pd.DataFrame(\n np.arange(26).reshape(-1, 1),\n columns=[\"hour\"],\n)\nhour_df[\"hour_sin\"] = sin_transformer(24).fit_transform(hour_df)[\"hour\"]\nhour_df[\"hour_cos\"] = cos_transformer(24).fit_transform(hour_df)[\"hour\"]\nhour_df.plot(x=\"hour\")\n_ = plt.title(\"Trigonometric encoding for the 'hour' feature\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use a 2D scatter plot with the hours encoded as colors to better see\nhow this representation maps the 24 hours of the day to a 2D space, akin to\nsome sort of a 24 hour version of an analog clock. Note that the \"25th\" hour\nis mapped back to the 1st hour because of the periodic nature of the\nsine/cosine representation.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(7, 5))\nsp = ax.scatter(hour_df[\"hour_sin\"], hour_df[\"hour_cos\"], c=hour_df[\"hour\"])\nax.set(\n xlabel=\"sin(hour)\",\n ylabel=\"cos(hour)\",\n)\n_ = fig.colorbar(sp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now build a feature extraction pipeline using this strategy:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cyclic_cossin_transformer = ColumnTransformer(\n transformers=[\n (\"categorical\", one_hot_encoder, categorical_columns),\n (\"month_sin\", sin_transformer(12), [\"month\"]),\n (\"month_cos\", cos_transformer(12), [\"month\"]),\n (\"weekday_sin\", sin_transformer(7), [\"weekday\"]),\n (\"weekday_cos\", cos_transformer(7), [\"weekday\"]),\n (\"hour_sin\", sin_transformer(24), [\"hour\"]),\n (\"hour_cos\", cos_transformer(24), [\"hour\"]),\n ],\n remainder=MinMaxScaler(),\n)\ncyclic_cossin_linear_pipeline = make_pipeline(\n cyclic_cossin_transformer,\n RidgeCV(alphas=alphas),\n)\nevaluate(cyclic_cossin_linear_pipeline, X, y, cv=ts_cv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The performance of our linear regression model with this simple feature\nengineering is a bit better than using the original ordinal time features but\nworse than using the one-hot encoded time features. 
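\n\nOne hypothetical way to give this trigonometric encoding more capacity (anticipating the \"higher harmonics\" idea mentioned in the qualitative analysis below, and not evaluated in this notebook) would be to add sine/cosine pairs for harmonics of the 24-hour period, reusing the helpers and the encoder defined above:\n\n```python\n# k-th harmonic of the 24-hour period: sin/cos(2 * pi * k * hour / 24).\nhour_harmonics = [\n    (f\"hour_sin_{k}\", sin_transformer(24 / k), [\"hour\"]) for k in (1, 2, 3)\n] + [(f\"hour_cos_{k}\", cos_transformer(24 / k), [\"hour\"]) for k in (1, 2, 3)]\n\ncyclic_harmonics_transformer = ColumnTransformer(\n    transformers=[(\"categorical\", one_hot_encoder, categorical_columns)]\n    + hour_harmonics,\n    remainder=MinMaxScaler(),\n)\n```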
We will further analyze\npossible reasons for this disappointing outcome at the end of this notebook.\n\n## Periodic spline features\n\nWe can try an alternative encoding of the periodic time-related features\nusing spline transformations with a large enough number of splines, and as a\nresult a larger number of expanded features compared to the sine/cosine\ntransformation:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.preprocessing import SplineTransformer\n\n\ndef periodic_spline_transformer(period, n_splines=None, degree=3):\n if n_splines is None:\n n_splines = period\n n_knots = n_splines + 1 # periodic and include_bias is True\n return SplineTransformer(\n degree=degree,\n n_knots=n_knots,\n knots=np.linspace(0, period, n_knots).reshape(n_knots, 1),\n extrapolation=\"periodic\",\n include_bias=True,\n )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, let us visualize the effect of this feature expansion on some\nsynthetic hour data with a bit of extrapolation beyond hour=23:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "hour_df = pd.DataFrame(\n np.linspace(0, 26, 1000).reshape(-1, 1),\n columns=[\"hour\"],\n)\nsplines = periodic_spline_transformer(24, n_splines=12).fit_transform(hour_df)\nsplines_df = pd.DataFrame(\n splines,\n columns=[f\"spline_{i}\" for i in range(splines.shape[1])],\n)\npd.concat([hour_df, splines_df], axis=\"columns\").plot(x=\"hour\", cmap=plt.cm.tab20b)\n_ = plt.title(\"Periodic spline-based encoding for the 'hour' feature\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thanks to the use of the `extrapolation=\"periodic\"` parameter, we observe\nthat the feature encoding stays smooth when extrapolating beyond midnight.\n\nWe can now build a predictive pipeline using this alternative periodic\nfeature engineering strategy.\n\nIt is possible to use fewer splines than discrete levels for those ordinal\nvalues. 
This makes spline-based encoding more efficient than one-hot encoding\nwhile preserving most of the expressivity:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cyclic_spline_transformer = ColumnTransformer(\n transformers=[\n (\"categorical\", one_hot_encoder, categorical_columns),\n (\"cyclic_month\", periodic_spline_transformer(12, n_splines=6), [\"month\"]),\n (\"cyclic_weekday\", periodic_spline_transformer(7, n_splines=3), [\"weekday\"]),\n (\"cyclic_hour\", periodic_spline_transformer(24, n_splines=12), [\"hour\"]),\n ],\n remainder=MinMaxScaler(),\n)\ncyclic_spline_linear_pipeline = make_pipeline(\n cyclic_spline_transformer,\n RidgeCV(alphas=alphas),\n)\nevaluate(cyclic_spline_linear_pipeline, X, y, cv=ts_cv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Spline features make it possible for the linear model to successfully\nleverage the periodic time-related features and reduce the error from ~14% to\n~10% of the maximum demand, which is similar to what we observed with the\none-hot encoded features.\n\n## Qualitative analysis of the impact of features on linear model predictions\n\nHere, we want to visualize the impact of the feature engineering choices on\nthe time related shape of the predictions.\n\nTo do so we consider an arbitrary time-based split to compare the predictions\non a range of held out data points.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "naive_linear_pipeline.fit(X.iloc[train_0], y.iloc[train_0])\nnaive_linear_predictions = naive_linear_pipeline.predict(X.iloc[test_0])\n\none_hot_linear_pipeline.fit(X.iloc[train_0], y.iloc[train_0])\none_hot_linear_predictions = one_hot_linear_pipeline.predict(X.iloc[test_0])\n\ncyclic_cossin_linear_pipeline.fit(X.iloc[train_0], y.iloc[train_0])\ncyclic_cossin_linear_predictions = cyclic_cossin_linear_pipeline.predict(X.iloc[test_0])\n\ncyclic_spline_linear_pipeline.fit(X.iloc[train_0], y.iloc[train_0])\ncyclic_spline_linear_predictions = cyclic_spline_linear_pipeline.predict(X.iloc[test_0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We visualize those predictions by zooming on the last 96 hours (4 days) of\nthe test set to get some qualitative insights:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "last_hours = slice(-96, None)\nfig, ax = plt.subplots(figsize=(12, 4))\nfig.suptitle(\"Predictions by linear models\")\nax.plot(\n y.iloc[test_0].values[last_hours],\n \"x-\",\n alpha=0.2,\n label=\"Actual demand\",\n color=\"black\",\n)\nax.plot(naive_linear_predictions[last_hours], \"x-\", label=\"Ordinal time features\")\nax.plot(\n cyclic_cossin_linear_predictions[last_hours],\n \"x-\",\n label=\"Trigonometric time features\",\n)\nax.plot(\n cyclic_spline_linear_predictions[last_hours],\n \"x-\",\n label=\"Spline-based time features\",\n)\nax.plot(\n one_hot_linear_predictions[last_hours],\n \"x-\",\n label=\"One-hot time features\",\n)\n_ = ax.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can draw the following conclusions from the above plot:\n\n- The **raw ordinal time-related features** are problematic because they do\n not capture the natural periodicity: we observe a big jump in the\n predictions at the end of each day when the hour features goes from 23 back\n to 0. 
We can expect similar artifacts at the end of each week or each year.\n\n- As expected, the **trigonometric features** (sine and cosine) do not have\n these discontinuities at midnight, but the linear regression model fails to\n leverage those features to properly model intra-day variations.\n Using trigonometric features for higher harmonics or additional\n trigonometric features for the natural period with different phases could\n potentially fix this problem.\n\n- the **periodic spline-based features** fix those two problems at once: they\n give more expressivity to the linear model by making it possible to focus\n on specific hours thanks to the use of 12 splines. Furthermore the\n `extrapolation=\"periodic\"` option enforces a smooth representation between\n `hour=23` and `hour=0`.\n\n- The **one-hot encoded features** behave similarly to the periodic\n spline-based features but are more spiky: for instance they can better\n model the morning peak during the week days since this peak lasts shorter\n than an hour. However, we will see in the following that what can be an\n advantage for linear models is not necessarily one for more expressive\n models.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also compare the number of features extracted by each feature\nengineering pipeline:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "naive_linear_pipeline[:-1].transform(X).shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "one_hot_linear_pipeline[:-1].transform(X).shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cyclic_cossin_linear_pipeline[:-1].transform(X).shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cyclic_spline_linear_pipeline[:-1].transform(X).shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This confirms that the one-hot encoding and the spline encoding strategies\ncreate a lot more features for the time representation than the alternatives,\nwhich in turn gives the downstream linear model more flexibility (degrees of\nfreedom) to avoid underfitting.\n\nFinally, we observe that none of the linear models can approximate the true\nbike rentals demand, especially for the peaks that can be very sharp at rush\nhours during the working days but much flatter during the week-ends: the most\naccurate linear models based on splines or one-hot encoding tend to forecast\npeaks of commuting-related bike rentals even on the week-ends and\nunder-estimate the commuting-related events during the working days.\n\nThese systematic prediction errors reveal a form of under-fitting and can be\nexplained by the lack of interactions terms between features, e.g.\n\"workingday\" and features derived from \"hours\". This issue will be addressed\nin the following section.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Modeling pairwise interactions with splines and polynomial features\n\nLinear models do not automatically capture interaction effects between input\nfeatures. 
It does not help that some features are marginally non-linear as is\nthe case with features constructed by `SplineTransformer` (or one-hot\nencoding or binning).\n\nHowever, it is possible to use the `PolynomialFeatures` class on coarse\ngrained spline encoded hours to model the \"workingday\"/\"hours\" interaction\nexplicitly without introducing too many new variables:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.pipeline import FeatureUnion\nfrom sklearn.preprocessing import PolynomialFeatures\n\nhour_workday_interaction = make_pipeline(\n ColumnTransformer(\n [\n (\"cyclic_hour\", periodic_spline_transformer(24, n_splines=8), [\"hour\"]),\n (\"workingday\", FunctionTransformer(lambda x: x == \"True\"), [\"workingday\"]),\n ]\n ),\n PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Those features are then combined with the ones already computed in the\nprevious spline-base pipeline. We can observe a nice performance improvement\nby modeling this pairwise interaction explicitly:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cyclic_spline_interactions_pipeline = make_pipeline(\n FeatureUnion(\n [\n (\"marginal\", cyclic_spline_transformer),\n (\"interactions\", hour_workday_interaction),\n ]\n ),\n RidgeCV(alphas=alphas),\n)\nevaluate(cyclic_spline_interactions_pipeline, X, y, cv=ts_cv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Modeling non-linear feature interactions with kernels\n\nThe previous analysis highlighted the need to model the interactions between\n`\"workingday\"` and `\"hours\"`. Another example of a such a non-linear\ninteraction that we would like to model could be the impact of the rain that\nmight not be the same during the working days and the week-ends and holidays\nfor instance.\n\nTo model all such interactions, we could either use a polynomial expansion on\nall marginal features at once, after their spline-based expansion. However,\nthis would create a quadratic number of features which can cause overfitting\nand computational tractability issues.\n\nAlternatively, we can use the Nystr\u00f6m method to compute an approximate\npolynomial kernel expansion. Let us try the latter:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.kernel_approximation import Nystroem\n\ncyclic_spline_poly_pipeline = make_pipeline(\n cyclic_spline_transformer,\n Nystroem(kernel=\"poly\", degree=2, n_components=300, random_state=0),\n RidgeCV(alphas=alphas),\n)\nevaluate(cyclic_spline_poly_pipeline, X, y, cv=ts_cv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We observe that this model can almost rival the performance of the gradient\nboosted trees with an average error around 5% of the maximum demand.\n\nNote that while the final step of this pipeline is a linear regression model,\nthe intermediate steps such as the spline feature extraction and the Nystr\u00f6m\nkernel approximation are highly non-linear. 
As a result the compound pipeline\nis much more expressive than a simple linear regression model with raw features.\n\nFor the sake of completeness, we also evaluate the combination of one-hot\nencoding and kernel approximation:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "one_hot_poly_pipeline = make_pipeline(\n ColumnTransformer(\n transformers=[\n (\"categorical\", one_hot_encoder, categorical_columns),\n (\"one_hot_time\", one_hot_encoder, [\"hour\", \"weekday\", \"month\"]),\n ],\n remainder=\"passthrough\",\n ),\n Nystroem(kernel=\"poly\", degree=2, n_components=300, random_state=0),\n RidgeCV(alphas=alphas),\n)\nevaluate(one_hot_poly_pipeline, X, y, cv=ts_cv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While one-hot encoded features were competitive with spline-based features\nwhen using linear models, this is no longer the case when using a low-rank\napproximation of a non-linear kernel: this can be explained by the fact that\nspline features are smoother and allow the kernel approximation to find a\nmore expressive decision function.\n\nLet us now have a qualitative look at the predictions of the kernel models\nand of the gradient boosted trees that should be able to better model\nnon-linear interactions between features:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "gbrt.fit(X.iloc[train_0], y.iloc[train_0])\ngbrt_predictions = gbrt.predict(X.iloc[test_0])\n\none_hot_poly_pipeline.fit(X.iloc[train_0], y.iloc[train_0])\none_hot_poly_predictions = one_hot_poly_pipeline.predict(X.iloc[test_0])\n\ncyclic_spline_poly_pipeline.fit(X.iloc[train_0], y.iloc[train_0])\ncyclic_spline_poly_predictions = cyclic_spline_poly_pipeline.predict(X.iloc[test_0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again we zoom on the last 4 days of the test set:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "last_hours = slice(-96, None)\nfig, ax = plt.subplots(figsize=(12, 4))\nfig.suptitle(\"Predictions by non-linear regression models\")\nax.plot(\n y.iloc[test_0].values[last_hours],\n \"x-\",\n alpha=0.2,\n label=\"Actual demand\",\n color=\"black\",\n)\nax.plot(\n gbrt_predictions[last_hours],\n \"x-\",\n label=\"Gradient Boosted Trees\",\n)\nax.plot(\n one_hot_poly_predictions[last_hours],\n \"x-\",\n label=\"One-hot + polynomial kernel\",\n)\nax.plot(\n cyclic_spline_poly_predictions[last_hours],\n \"x-\",\n label=\"Splines + polynomial kernel\",\n)\n_ = ax.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, note that trees can naturally model non-linear feature interactions\nsince, by default, decision trees are allowed to grow beyond a depth of 2\nlevels.\n\nHere, we can observe that the combinations of spline features and non-linear\nkernels works quite well and can almost rival the accuracy of the gradient\nboosting regression trees.\n\nOn the contrary, one-hot encoded time features do not perform that well with\nthe low rank kernel model. In particular, they significantly over-estimate\nthe low demand hours more than the competing models.\n\nWe also observe that none of the models can successfully predict some of the\npeak rentals at the rush hours during the working days. It is possible that\naccess to additional features would be required to further improve the\naccuracy of the predictions. 
For instance, it could be useful to have access\nto the geographical repartition of the fleet at any point in time or the\nfraction of bikes that are immobilized because they need servicing.\n\nLet us finally get a more quantitative look at the prediction errors of those\nthree models using the true vs predicted demand scatter plots:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.metrics import PredictionErrorDisplay\n\nfig, axes = plt.subplots(nrows=2, ncols=3, figsize=(13, 7), sharex=True, sharey=\"row\")\nfig.suptitle(\"Non-linear regression models\", y=1.0)\npredictions = [\n one_hot_poly_predictions,\n cyclic_spline_poly_predictions,\n gbrt_predictions,\n]\nlabels = [\n \"One hot +\\npolynomial kernel\",\n \"Splines +\\npolynomial kernel\",\n \"Gradient Boosted\\nTrees\",\n]\nplot_kinds = [\"actual_vs_predicted\", \"residual_vs_predicted\"]\nfor axis_idx, kind in enumerate(plot_kinds):\n for ax, pred, label in zip(axes[axis_idx], predictions, labels):\n disp = PredictionErrorDisplay.from_predictions(\n y_true=y.iloc[test_0],\n y_pred=pred,\n kind=kind,\n scatter_kwargs={\"alpha\": 0.3},\n ax=ax,\n )\n ax.set_xticks(np.linspace(0, 1, num=5))\n if axis_idx == 0:\n ax.set_yticks(np.linspace(0, 1, num=5))\n ax.legend(\n [\"Best model\", label],\n loc=\"upper center\",\n bbox_to_anchor=(0.5, 1.3),\n ncol=2,\n )\n ax.set_aspect(\"equal\", adjustable=\"box\")\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This visualization confirms the conclusions we draw on the previous plot.\n\nAll models under-estimate the high demand events (working day rush hours),\nbut gradient boosting a bit less so. The low demand events are well predicted\non average by gradient boosting while the one-hot polynomial regression\npipeline seems to systematically over-estimate demand in that regime. Overall\nthe predictions of the gradient boosted trees are closer to the diagonal than\nfor the kernel models.\n\n## Concluding remarks\n\nWe note that we could have obtained slightly better results for kernel models\nby using more components (higher rank kernel approximation) at the cost of\nlonger fit and prediction durations. For large values of `n_components`, the\nperformance of the one-hot encoded features would even match the spline\nfeatures.\n\nThe `Nystroem` + `RidgeCV` regressor could also have been replaced by\n:class:`~sklearn.neural_network.MLPRegressor` with one or two hidden layers\nand we would have obtained quite similar results.\n\nThe dataset we used in this case study is sampled on a hourly basis. However\ncyclic spline-based features could model time-within-day or time-within-week\nvery efficiently with finer-grained time resolutions (for instance with\nmeasurements taken every minute instead of every hours) without introducing\nmore features. One-hot encoding time representations would not offer this\nflexibility.\n\nFinally, in this notebook we used `RidgeCV` because it is very efficient from\na computational point of view. However, it models the target variable as a\nGaussian random variable with constant variance. For positive regression\nproblems, it is likely that using a Poisson or Gamma distribution would make\nmore sense. 
This could be achieved by using\n`GridSearchCV(TweedieRegressor(power=2), param_grid={\"alpha\": alphas})`\ninstead of `RidgeCV`.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }