{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Statistical Learning\n", "\n", "Excercises from **Chapter 2** of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani.\n", "\n", "I've elected to use Python instead of R." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conceptual" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Q1 \n", "For each of parts (a) through (d), indicate whether we would generally expect the performance of a flexible statistical learning method to be better or worse than an inflexible method. Justify your answer.\n", "\n", "a) The sample size n is extremely large, and the number of predictors p is small. \n", ">**Flexible**: we have enough observations to avoid overfitting, so assuming some there are non-linear relationships in our data a more flexible model should provide an improved fit. \n", "\n", "b) The number of predictors p is extremely large, and the number of observations n is small. \n", ">**Inflexible**: we *don't* have enough observations to avoid overfitting \n", "\n", "c) The relationship between the predictors and response is highly non-linear. \n", ">**Flexible**: a high variance model affords a better fit to non-linear relationships \n", "\n", "d) The variance of the error terms, i.e. σ2 = Var(ε), is extremely high. \n", ">**Inflexible**: a high bias model avoids overfitting to the noise in our dataset " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Q2 \n", "Explain whether each scenario is a classification or regression problem, and indicate whether we are most interested in inference or prediction. Finally, provide n and p.\n", "\n", "(a) We collect a set of data on the top 500 firms in the US. For each firm we record profit, number of employees, industry and the CEO salary. We are interested in understanding which factors affect CEO salary. 
\n", "\n", ">regression, inference, n=500, p=4 \n", "\n", "(b) We are considering launching a new product and wish to know whether it will be a success or a failure. We collect data on 20 similar products that were previously launched. For each product we have recorded whether it was a success or failure, price charged for the product, marketing budget, competition price, and ten other variables. \n", "\n", ">classification, prediction, n=20, p=14\n", "\n", "(c) We are interested in predicting the % change in the USD/Euro exchange rate in relation to the weekly changes in the world stock markets. Hence we collect weekly data for all of 2012. For each week we record the % change in the USD/Euro, the % change in the US market, the % change in the British market, and the % change in the German market. \n", "\n", ">regression, prediction, n=52, p=4" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Q3\n", "\n", "We now revisit the bias-variance decomposition.\n", "\n", "(a) Provide a sketch of typical (squared) bias, variance, training error, test error, and Bayes (or irreducible) error curves, on a single plot, as we go from less flexible statistical learning methods towards more flexible approaches. The x-axis should represent the amount of flexibility in the method, and the y-axis should represent the values for each curve. There should be five curves. 
Make sure to label each one.\n", "\n", "![IMG_1908.jpg](./images/2_3.jpg)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(b) Explain why each of the five curves has the shape displayed in part (a).\n", "\n", "- **Bayes error**: the irreducible error, which is constant irrespective of model flexibility\n", "- **Variance**: the variance of a model increases with flexibility, as the model picks up variation between training sets, resulting in more variation in f(X)\n", "- **Bias**: bias tends to decrease with flexibility, as the model can fit more complex relationships\n", "- **Test error**: tends to decrease at first, as reduced bias allows the model to better fit non-linear relationships, but then increases as an increasingly flexible model begins to fit the noise in the dataset (overfitting)\n", "- **Training error**: decreases monotonically with increased flexibility, as the model 'flexes' towards individual datapoints in the training set" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Q4 \n", "You will now think of some real-life applications for statistical learning.\n", "\n", "(a) Describe three real-life applications in which classification might be useful. Describe the response, as well as the predictors. Is the goal of each application inference or prediction? Explain your answer. \n", "\n", " 1. Is this tumor malignant or benign?\n", " - response: boolean (is malignant)\n", " - predictors (naive examples): tumor size, white blood cell count, change in adrenal mass, position in body\n", " - goal: prediction\n", " 2. What animals are in this image?\n", " - response: 'Cat', 'Dog', 'Fish'\n", " - predictors: image pixel values\n", " - goal: prediction\n", " 3. 
What test bench metrics are most indicative of a faulty Printed Circuit Board (PCB)?\n", " - response: boolean (is faulty)\n", " - predictors: current draw, voltage drop, output noise, operating temp.\n", " - goal: inference\n", "\n", "\n", "(b) Describe three real-life applications in which regression might be useful. Describe the response, as well as the predictors. Is the goal of each application inference or prediction? Explain your answer. \n", "\n", " 1. How much is this house worth?\n", " - response: SalePrice\n", " - predictors: LivingArea, BathroomCount, GarageCount, Neighbourhood, CrimeRate\n", " - goal: prediction\n", " 2. What attributes most affect the market cap. of a company?\n", " - response: MarketCap\n", " - predictors: Sector, Employees, FounderIsCEO, Age, TotalInvestment, Profitability, RONA\n", " - goal: inference\n", " 3. How long is this dairy cow likely to live?\n", " - response: years\n", " - predictors: past medical conditions, current weight, milk yield\n", " - goal: prediction\n", "\n", "(c) Describe three real-life applications in which cluster analysis might be useful. \n", "\n", " 1. This dataset contains observations of 3 different species of flower. Estimate which observations belong to the same species.\n", " - response: a, b, c (species class)\n", " - predictors: sepal length, petal length, number of petals\n", " - goal: prediction\n", " 2. Which attributes of the flowers in the dataset described above are most predictive of species?\n", " - response: a, b, c (species class)\n", " - predictors: sepal length, petal length, number of petals\n", " - goal: inference\n", " 3. Group these audio recordings of birdsong by species.\n", " - response: (species classes)\n", " - predictors: audio sample values\n", " - goal: prediction\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Q5 \n", "What are the advantages and disadvantages of a very flexible (versus a less flexible) approach for regression or classification? 
Under what circumstances might a more flexible approach be preferred to a less flexible approach? When might a less flexible approach be preferred?\n", "\n", "Less flexible\n", "\n", "+ (+) gives better results with few observations\n", "+ (+) simpler inference: the effect of each feature can be more easily understood\n", "+ (+) fewer parameters, faster optimisation\n", "- (-) performs poorly if observations contain highly non-linear relationships\n", "\n", "More flexible\n", "+ (+) gives a better fit if observations contain non-linear relationships\n", "- (-) can overfit the data, providing poor predictions for new observations\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Q6 \n", "Describe the differences between a parametric and a non-parametric statistical learning approach. What are the advantages of a parametric approach to regression or classification (as opposed to a non-parametric approach)? What are its disadvantages?\n", "\n", "\n", ">A parametric approach simplifies the problem of estimating the best fit to the training data f(X) by making some assumptions about the functional form of f(X); this reduces the problem to estimating the parameters of the model. A non-parametric approach makes no such assumptions, so f(X) can take any arbitrary shape.\n", "\n", ">The advantage of the parametric approach is that it simplifies the problem of estimating f(X), because it is easier to estimate parameters than an arbitrary function. The disadvantage is that the assumed form of f(X) could limit the accuracy with which the model can fit the training data. If too many parameters are used, in an attempt to increase the model's flexibility, then overfitting can occur – meaning that the model begins to fit noise in the training data that is not representative of unseen observations." 
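] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The flexibility trade-offs discussed in Q3, Q5 and Q6 can be illustrated with a small simulation – a sketch of my own, not part of the book's exercises – fitting polynomials of increasing degree to noisy data drawn from an assumed non-linear function (a sine wave; the seed, noise level and degrees are arbitrary choices). Training error falls monotonically with degree, while test error typically traces the U-shape from Q3 as the most flexible fit begins to chase the noise." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: bias-variance trade-off on synthetic data (all values assumed)\n", "import numpy as np\n", "\n", "rng = np.random.RandomState(0)\n", "x_train = rng.uniform(-1, 1, 30)\n", "x_test = rng.uniform(-1, 1, 30)\n", "\n", "def f(x):\n", "    return np.sin(3 * x)  # the 'true' non-linear relationship\n", "\n", "y_train = f(x_train) + rng.normal(scale=0.3, size=30)\n", "y_test = f(x_test) + rng.normal(scale=0.3, size=30)\n", "\n", "train_err, test_err = {}, {}\n", "for degree in (1, 3, 10):  # increasing flexibility\n", "    coefs = np.polyfit(x_train, y_train, degree)\n", "    train_err[degree] = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)\n", "    test_err[degree] = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)\n", "    print(degree, round(train_err[degree], 3), round(test_err[degree], 3))"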
] }, { "cell_type": "code", "execution_count": 148, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "import warnings\n", "warnings.filterwarnings('ignore')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Q7 \n", "The table below provides a training data set containing six observations, three predictors, and one qualitative response variable." ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ObsX1X2X3Y
01030Red
12200Red
23013Red
34012Green
45-101Green
561-11Red
\n", "
" ], "text/plain": [ " Obs X1 X2 X3 Y\n", "0 1 0 3 0 Red\n", "1 2 2 0 0 Red\n", "2 3 0 1 3 Red\n", "3 4 0 1 2 Green\n", "4 5 -1 0 1 Green\n", "5 6 1 -1 1 Red" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.DataFrame({'Obs': [1, 2, 3, 4, 5, 6],\n", " 'X1': [0, 2, 0, 0, -1, 1],\n", " 'X2': [3, 0, 1, 1, 0, -1],\n", " 'X3': [0, 0, 3, 2, 1, 1],\n", " 'Y': ['Red', 'Red', 'Red', 'Green', 'Green', 'Red']})\n", "\n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose we wish to use this data set to make a prediction for Y when X1 = X2 = X3 = 0 using K-nearest neighbors." ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ObsX1X2X3YEuclideanDist
01030Red3.000000
12200Red2.000000
23013Red3.162278
34012Green2.236068
45-101Green1.414214
561-11Red1.732051
\n", "
" ], "text/plain": [ " Obs X1 X2 X3 Y EuclideanDist\n", "0 1 0 3 0 Red 3.000000\n", "1 2 2 0 0 Red 2.000000\n", "2 3 0 1 3 Red 3.162278\n", "3 4 0 1 2 Green 2.236068\n", "4 5 -1 0 1 Green 1.414214\n", "5 6 1 -1 1 Red 1.732051" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# (a) Compute the Euclidean distance between each observation and the test point,X1 =X2 =X3 =0. \n", "\n", "def euclidian_dist(x):\n", " \"\"\"Compute the row-wise euclidean distance\n", " from the origin\"\"\"\n", " return (np.sum(x**2, axis=1))**0.5\n", "\n", "euc_dist = pd.DataFrame({'EuclideanDist': euclidian_dist(df[['X1', 'X2', 'X3']])})\n", "df_euc = pd.concat([df, euc_dist], axis=1)\n", "df_euc" ] }, { "cell_type": "code", "execution_count": 78, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ObsX1X2X3YEuclideanDist
45-101Green1.414214
\n", "
" ], "text/plain": [ " Obs X1 X2 X3 Y EuclideanDist\n", "4 5 -1 0 1 Green 1.414214" ] }, "execution_count": 78, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# (b) What is our prediction with K = 1? Why? \n", "\n", "K = 1\n", "df_euc.nsmallest(K, 'EuclideanDist')\n", "\n", "# Our prediction is Y=Green because that is the response value of the \n", "# first nearest neighbour to the point X1 = X2 = X3 = 0" ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ObsX1X2X3YEuclideanDist
45-101Green1.414214
561-11Red1.732051
12200Red2.000000
\n", "
" ], "text/plain": [ " Obs X1 X2 X3 Y EuclideanDist\n", "4 5 -1 0 1 Green 1.414214\n", "5 6 1 -1 1 Red 1.732051\n", "1 2 2 0 0 Red 2.000000" ] }, "execution_count": 80, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# (c) What is our prediction with K = 3? Why? \n", "\n", "K = 3\n", "df_euc.nsmallest(K, 'EuclideanDist')\n", "\n", "# Red, because majority of the 3 nearest neighbours are Red." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(d) If the Bayes decision boundary in this problem is highly non-linear, then would we expect the best value for K to be large or small? Why? \n", "\n", "Small. A smaller value of K results in a more flexible classification model because the prediciton is based upon a smaller subset of all observations in the dataset." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }