{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

# Demand forecasting with BigQuery and TensorFlow

\n", "\n", "In this notebook, we will develop a machine learning model to predict the demand for taxi cabs in New York.\n", "\n", "To develop the model, we will need to get historical data of taxicab usage. This data exists in BigQuery. Let's start by looking at the schema." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "import google.datalab.bigquery as bq\n", "import pandas as pd\n", "import numpy as np\n", "import shutil" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%bq tables describe --name bigquery-public-data.new_york.tlc_yellow_trips_2015" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

## Analyzing taxicab demand

\n", "\n", "Let's pull the number of trips for each day in the 2015 dataset using Standard SQL." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "\n", "
daynumber
301
178
308
336
150
\n", "
(rows: 5, time: 2.1s, 1GB processed, job: job_1wwNxbANH1IvI01gjbGXlkZ_lBcX)
\n", " \n", " \n", " " ], "text/plain": [ "QueryResultsTable job_1wwNxbANH1IvI01gjbGXlkZ_lBcX" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%bq query\n", "SELECT \n", " EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber\n", "FROM bigquery-public-data.new_york.tlc_yellow_trips_2015 \n", "LIMIT 5" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

### Modular queries and Pandas dataframe

\n", "\n", "Let's use the total number of trips as our proxy for taxicab demand (other reasonable alternatives are total trip_distance or total fare_amount). It is possible to predict multiple variables using Tensorflow, but for simplicity, we will stick to just predicting the number of trips.\n", "\n", "We will give our query a name 'taxiquery' and have it use an input variable '\$YEAR'. We can then invoke the 'taxiquery' by giving it a YEAR. The to_dataframe() converts the BigQuery result into a Pandas dataframe." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "%bq query -n taxiquery\n", "WITH trips AS (\n", " SELECT EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber \n", " FROM bigquery-public-data.new_york.tlc_yellow_trips_*\n", " where _TABLE_SUFFIX = @YEAR\n", ")\n", "SELECT daynumber, COUNT(1) AS numtrips FROM trips\n", "GROUP BY daynumber ORDER BY daynumber" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
daynumbernumtrips
01382014
12345296
23406769
34328848
45363454
\n", "
" ], "text/plain": [ " daynumber numtrips\n", "0 1 382014\n", "1 2 345296\n", "2 3 406769\n", "3 4 328848\n", "4 5 363454" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "query_parameters = [\n", " {\n", " 'name': 'YEAR',\n", " 'parameterType': {'type': 'STRING'},\n", " 'parameterValue': {'value': 2015}\n", " }\n", "]\n", "trips = taxiquery.execute(query_params=query_parameters).result().to_dataframe()\n", "trips[:5]" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

### Benchmark

\n", "\n", "Often, a reasonable estimate of something is its historical average. We can therefore benchmark our machine learning model against the historical average." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Just using average=400309.558904 has RMSE of 51613.6516905\n" ] } ], "source": [ "avg = np.mean(trips['numtrips'])\n", "print 'Just using average={0} has RMSE of {1}'.format(avg, np.sqrt(np.mean((trips['numtrips'] - avg)**2)))" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "The mean here is about 400,000 and the root-mean-square-error (RMSE) in this case is about 52,000. In other words, if we were to estimate that there are 400,000 taxi trips on any given day, that estimate is will be off on average by about 52,000 in either direction.\n", " \n", "Let's see if we can do better than this -- our goal is to make predictions of taxicab demand whose RMSE is lower than 52,000.\n", "\n", "What kinds of things affect people's use of taxicabs?" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

## Weather data

\n", "\n", "We suspect that weather influences how often people use a taxi. Perhaps someone who'd normally walk to work would take a taxi if it is very cold or rainy.\n", "\n", "One of the advantages of using a global data warehouse like BigQuery is that you get to mash up unrelated datasets quite easily." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "\n", "
usafwbannamecountrystatecalllatlonelevbeginend
72503014732LA GUARDIA AIRPORTUSNYKLGA40.779-73.88+0003.41973010120180618
\n", "
(rows: 1, time: 1.3s, 2MB processed, job: job_jZ5uuQzxmiVMZvxxPOOyKnZldCcE)
\n", " \n", " \n", " " ], "text/plain": [ "QueryResultsTable job_jZ5uuQzxmiVMZvxxPOOyKnZldCcE" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%bq query\n", "SELECT * FROM bigquery-public-data.noaa_gsod.stations\n", "WHERE state = 'NY' AND wban != '99999' AND name LIKE '%LA GUARDIA%'" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

### Variables

\n", "\n", "Let's pull out the minimum and maximum daily temperature (in Fahrenheit) as well as the amount of rain (in inches) for La Guardia airport." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "%bq query -n wxquery\n", "SELECT EXTRACT (DAYOFYEAR FROM CAST(CONCAT(@YEAR,'-',mo,'-',da) AS TIMESTAMP)) AS daynumber,\n", " MIN(EXTRACT (DAYOFWEEK FROM CAST(CONCAT(@YEAR,'-',mo,'-',da) AS TIMESTAMP))) dayofweek,\n", " MIN(min) mintemp, MAX(max) maxtemp, MAX(IF(prcp=99.99,0,prcp)) rain\n", "FROM bigquery-public-data.noaa_gsod.gsod*\n", "WHERE stn='725030' AND _TABLE_SUFFIX = @YEAR\n", "GROUP BY 1 ORDER BY daynumber DESC" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
daynumberdayofweekmintempmaxtemprain
0365546.048.20.17
1364434.048.00.13
2363333.846.90.37
3362239.062.10.02
4361146.062.60.14
\n", "
" ], "text/plain": [ " daynumber dayofweek mintemp maxtemp rain\n", "0 365 5 46.0 48.2 0.17\n", "1 364 4 34.0 48.0 0.13\n", "2 363 3 33.8 46.9 0.37\n", "3 362 2 39.0 62.1 0.02\n", "4 361 1 46.0 62.6 0.14" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "query_parameters = [\n", " {\n", " 'name': 'YEAR',\n", " 'parameterType': {'type': 'STRING'},\n", " 'parameterValue': {'value': 2015}\n", " }\n", "]\n", "weather = wxquery.execute(query_params=query_parameters).result().to_dataframe()\n", "weather[:5]" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

### Merge datasets

\n", "\n", "Let's use Pandas to merge (combine) the taxi cab and weather datasets day-by-day." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
daynumberdayofweekmintempmaxtemprainnumtrips
0365546.048.20.17339939
1364434.048.00.13319649
2363333.846.90.37311730
3362239.062.10.02301398
4361146.062.60.14268841
\n", "
" ], "text/plain": [ " daynumber dayofweek mintemp maxtemp rain numtrips\n", "0 365 5 46.0 48.2 0.17 339939\n", "1 364 4 34.0 48.0 0.13 319649\n", "2 363 3 33.8 46.9 0.37 311730\n", "3 362 2 39.0 62.1 0.02 301398\n", "4 361 1 46.0 62.6 0.14 268841" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.merge(weather, trips, on='daynumber')\n", "data[:5]" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

### Adding 2014 and 2016 data

\n", "\n", "Let's add in 2014 and 2016 data to the Pandas dataframe. Note how useful it was for us to modularize our queries around the YEAR." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
daynumberdayofweekmintempmaxtemprainnumtrips
count1096.0000001096.0000001096.0000001096.0000001096.0000001096.000000
mean183.1669714.00547448.19507366.1518250.117272403642.694343
std105.5109272.00044918.03122818.4840650.32083663767.524397
min1.0000001.0000001.00000021.0000000.00000078133.000000
25%92.0000002.00000035.10000051.9500000.000000363809.000000
50%183.0000004.00000048.90000068.0000000.000000402184.500000
75%274.2500006.00000064.40000082.9000000.050000447099.000000
max366.0000007.00000082.00000099.0000004.880000574530.000000
\n", "

## Machine Learning with TensorFlow

\n", "\n", "We'll use 80% of our dataset for training and 20% of the data for testing the model we have trained. Let's shuffle the rows of the Pandas dataframe so that this division is random. The predictor (or input) columns will be every column in the database other than the number-of-trips (which is our target, or what we want to predict).\n", "\n", "The machine learning models that we will use -- linear regression and neural networks -- both require that the input variables are numeric in nature.\n", "\n", "The day of the week, however, is a categorical variable (i.e. Tuesday is not really greater than Monday). So, we should create separate columns for whether it is a Monday (with values 0 or 1), Tuesday, etc.\n", "\n", "Against that, we do have limited data (remember: the more columns you use as input features, the more rows you need to have in your training dataset), and it appears that there is a clear linear trend by day of the week. So, we will opt for simplicity here and use the data as-is. Try uncommenting the code that creates separate columns for the days of the week and re-run the notebook if you are curious about the impact of this simplification." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/usr/local/envs/py2env/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.\n", " from ._conv import register_converters as _register_converters\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
dayofweekmintempmaxtemprain
9232.043.00.00
279637.957.00.52
163571.191.00.00
225655.978.10.00
218655.091.90.00
\n", "
" ], "text/plain": [ " dayofweek mintemp maxtemp rain\n", "9 2 32.0 43.0 0.00\n", "279 6 37.9 57.0 0.52\n", "163 5 71.1 91.0 0.00\n", "225 6 55.9 78.1 0.00\n", "218 6 55.0 91.9 0.00" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import tensorflow as tf\n", "shuffled = data2.sample(frac=1, random_state=13)\n", "# It would be a good idea, if we had more data, to treat the days as categorical variables\n", "# with the small amount of data, we have though, the model tends to overfit\n", "#predictors = shuffled.iloc[:,2:5]\n", "#for day in xrange(1,8):\n", "# matching = shuffled['dayofweek'] == day\n", "# key = 'day_' + str(day)\n", "# predictors[key] = pd.Series(matching, index=predictors.index, dtype=float)\n", "predictors = shuffled.iloc[:,1:5]\n", "predictors[:5]" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
daynumberdayofweekmintempmaxtemprainnumtrips
9356232.043.00.00382112
27986637.957.00.52465493
163203571.191.00.00363728
225141655.978.10.00414711
218148655.091.90.00364951
\n", "
" ], "text/plain": [ " daynumber dayofweek mintemp maxtemp rain numtrips\n", "9 356 2 32.0 43.0 0.00 382112\n", "279 86 6 37.9 57.0 0.52 465493\n", "163 203 5 71.1 91.0 0.00 363728\n", "225 141 6 55.9 78.1 0.00 414711\n", "218 148 6 55.0 91.9 0.00 364951" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "shuffled[:5]" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/plain": [ "9 382112\n", "279 465493\n", "163 363728\n", "225 414711\n", "218 364951\n", "Name: numtrips, dtype: int64" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "targets = shuffled.iloc[:,5]\n", "targets[:5]" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Let's update our benchmark based on the 80-20 split and the larger dataset." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Just using average=402667.682648 has RMSE of 62394.1123208\n" ] } ], "source": [ "trainsize = int(len(shuffled['numtrips']) * 0.8)\n", "avg = np.mean(shuffled['numtrips'][:trainsize])\n", "rmse = np.sqrt(np.mean((targets[trainsize:] - avg)**2))\n", "print 'Just using average={0} has RMSE of {1}'.format(avg, rmse)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "

## Linear regression with tf.contrib.learn

\n", "\n", "We scale the number of taxicab rides by 400,000 so that the model can keep its predicted values in the [0-1] range. The optimization goes a lot faster when the weights are small numbers. We save the weights into ./trained_model_linear and display the root mean square error on the test dataset." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From :9: infer_real_valued_columns_from_input (from tensorflow.contrib.learn.python.learn.estimators.estimator) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please specify feature columns explicitly.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py:142: setup_train_data_feeder (from tensorflow.contrib.learn.python.learn.learn_io.data_feeder) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please use tensorflow/transform or tf.data.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/data_feeder.py:96: extract_dask_data (from tensorflow.contrib.learn.python.learn.learn_io.dask_io) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please feed input to tf.data to support dask.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/data_feeder.py:100: extract_pandas_data (from tensorflow.contrib.learn.python.learn.learn_io.pandas_io) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please access pandas data directly.\n", "WARNING:tensorflow:From 
/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/data_feeder.py:159: __init__ (from tensorflow.contrib.learn.python.learn.learn_io.data_feeder) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please use tensorflow/transform or tf.data.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_io/data_feeder.py:340: check_array (from tensorflow.contrib.learn.python.learn.learn_io.data_feeder) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please convert numpy dtypes explicitly.\n", "WARNING:tensorflow:float64 is not supported by many models, consider casting to float32.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py:182: infer_real_valued_columns_from_input_fn (from tensorflow.contrib.learn.python.learn.estimators.estimator) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please specify feature columns explicitly.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/linear.py:738: regression_head (from tensorflow.contrib.learn.python.learn.estimators.head) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please switch to tf.contrib.estimator.*_head.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py:1179: __init__ (from tensorflow.contrib.learn.python.learn.estimators.estimator) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please replace uses of any Estimator from tf.contrib.learn with an Estimator from tf.estimator.*\n", "WARNING:tensorflow:From 
/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py:427: __init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.\n", "starting to train ... this will take a while ... use verbosity=INFO to get more verbose output\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py:678: __new__ (from tensorflow.contrib.learn.python.learn.estimators.model_fn) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "When switching to tf.estimator.Estimator, use tf.estimator.EstimatorSpec. You can use the estimator_spec method to create an equivalent one.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py:497: calling predict (from tensorflow.contrib.learn.python.learn.estimators.linear) with outputs=None is deprecated and will be removed after 2017-03-01.\n", "Instructions for updating:\n", "Please switch to predict_scores, or set outputs argument.\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/linear.py:843: calling predict (from tensorflow.contrib.learn.python.learn.estimators.estimator) with x is deprecated and will be removed after 2016-12-01.\n", "Instructions for updating:\n", "Estimator is decoupled from Scikit Learn interface by moving into\n", "separate class SKCompat. Arguments x, y and batch_size are only\n", "available in the SKCompat class, Estimator will only accept input_fn.\n", "Example conversion:\n", " est = Estimator(...) 
-> est = SKCompat(Estimator(...))\n", "WARNING:tensorflow:float64 is not supported by many models, consider casting to float32.\n", "LinearRegression has RMSE of 56643.9536391\n" ] } ], "source": [ "SCALE_NUM_TRIPS = 600000.0\n", "trainsize = int(len(shuffled['numtrips']) * 0.8)\n", "testsize = len(shuffled['numtrips']) - trainsize\n", "npredictors = len(predictors.columns)\n", "noutputs = 1\n", "tf.logging.set_verbosity(tf.logging.WARN) # change to INFO to get output every 100 steps ...\n", "shutil.rmtree('./trained_model_linear', ignore_errors=True) # so that we don't load weights from previous runs\n", "estimator = tf.contrib.learn.LinearRegressor(model_dir='./trained_model_linear',\n", " feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(predictors.values))\n", "\n", "print \"starting to train ... this will take a while ... use verbosity=INFO to get more verbose output\"\n", "def input_fn(features, targets):\n", " return tf.constant(features.values), tf.constant(targets.values.reshape(len(targets), noutputs)/SCALE_NUM_TRIPS)\n", "estimator.fit(input_fn=lambda: input_fn(predictors[:trainsize], targets[:trainsize]), steps=10000)\n", "\n", "pred = np.multiply(list(estimator.predict(predictors[trainsize:].values)), SCALE_NUM_TRIPS )\n", "rmse = np.sqrt(np.mean(np.power((targets[trainsize:].values - pred), 2)))\n", "print 'LinearRegression has RMSE of {0}'.format(rmse)\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "The fact that the RMSE here (57K) is lower than the benchmark (62K) indicates that we are doing about 10% better with the machine learning model than we would be if we were to just use the historical average (our benchmark)." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "
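The SCALE_NUM_TRIPS trick is purely a reparameterization: targets are divided by the scale before training and predictions are multiplied by it afterwards, so the round trip is exact (sample values below are illustrative):

```python
import numpy as np

SCALE_NUM_TRIPS = 600000.0
targets = np.array([339939.0, 319649.0, 311730.0])
scaled = targets / SCALE_NUM_TRIPS     # what the model trains on: values roughly in [0, 1]
recovered = scaled * SCALE_NUM_TRIPS   # what multiplying predictions by the scale undoes
print(recovered.tolist())
```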

## Neural network with tf.contrib.learn

\n", "\n", "Let's make a more complex model with a few hidden nodes." ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:float64 is not supported by many models, consider casting to float32.\n", "starting to train ... this will take a while ... use verbosity=INFO to get more verbose output\n", "WARNING:tensorflow:From /usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py:497: calling predict (from tensorflow.contrib.learn.python.learn.estimators.dnn) with outputs=None is deprecated and will be removed after 2017-03-01.\n", "Instructions for updating:\n", "Please switch to predict_scores, or set outputs argument.\n", "WARNING:tensorflow:float64 is not supported by many models, consider casting to float32.\n", "Neural Network Regression has RMSE of 61920.4869713\n" ] } ], "source": [ "SCALE_NUM_TRIPS = 600000.0\n", "trainsize = int(len(shuffled['numtrips']) * 0.8)\n", "testsize = len(shuffled['numtrips']) - trainsize\n", "npredictors = len(predictors.columns)\n", "noutputs = 1\n", "tf.logging.set_verbosity(tf.logging.WARN) # change to INFO to get output every 100 steps ...\n", "shutil.rmtree('./trained_model', ignore_errors=True) # so that we don't load weights from previous runs\n", "estimator = tf.contrib.learn.DNNRegressor(model_dir='./trained_model',\n", " hidden_units=[5, 5], \n", " feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(predictors.values))\n", "\n", "print \"starting to train ... this will take a while ... 
use verbosity=INFO to get more verbose output\"\n", "def input_fn(features, targets):\n", " return tf.constant(features.values), tf.constant(targets.values.reshape(len(targets), noutputs)/SCALE_NUM_TRIPS)\n", "estimator.fit(input_fn=lambda: input_fn(predictors[:trainsize], targets[:trainsize]), steps=10000)\n", "\n", "pred = np.multiply(list(estimator.predict(predictors[trainsize:].values)), SCALE_NUM_TRIPS )\n", "rmse = np.sqrt(np.mean((targets[trainsize:].values - pred)**2))\n", "print 'Neural Network Regression has RMSE of {0}'.format(rmse)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Using a neural network gave similar performance to the linear model when I ran it -- perhaps because there isn't enough data for the NN to do much better. (NN training is a non-convex optimization, and you will get different results each time you run the above code)." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "