{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Selecting features for modeling\n", "> This chapter goes over a few different techniques for selecting the most important features from your dataset. You'll learn how to drop redundant features, work with text vectors, and reduce the number of features in your dataset using principal component analysis (PCA). This is the Summary of lecture \"Preprocessing for Machine Learning in Python\", via datacamp.\n", "\n", "- toc: true \n", "- badges: true\n", "- comments: true\n", "- author: Chanseok Kang\n", "- categories: [Python, Datacamp, Machine_Learning]\n", "- image: " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature selection\n", "- Selecting features to be used for modeling\n", "- Doesn't create new features\n", "- Improve model's performance" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Identifying areas for feature selection\n", "Take an exploratory look at the post-feature engineering `hiking` dataset. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Prop_IDNameLocationPark_NameLengthDifficultyOther_DetailsAccessibleLimited_Accesslatlon
0B057Salt Marsh Nature TrailEnter behind the Salt Marsh Nature Center, loc...Marine Park0.8 milesNone<p>The first half of this mile-long trail foll...YNNaNNaN
1B073LullwaterEnter Park at Lincoln Road and Ocean Avenue en...Prospect Park1.0 mileEasyExplore the Lullwater to see how nature thrive...NNNaNNaN
2B073MidwoodEnter Park at Lincoln Road and Ocean Avenue en...Prospect Park0.75 milesEasyStep back in time with a walk through Brooklyn...NNNaNNaN
3B073PeninsulaEnter Park at Lincoln Road and Ocean Avenue en...Prospect Park0.5 milesEasyDiscover how the Peninsula has changed over th...NNNaNNaN
4B073WaterfallEnter Park at Lincoln Road and Ocean Avenue en...Prospect Park0.5 milesEasyTrace the source of the Lake on the Waterfall ...NNNaNNaN
\n", "
" ], "text/plain": [ " Prop_ID Name \\\n", "0 B057 Salt Marsh Nature Trail \n", "1 B073 Lullwater \n", "2 B073 Midwood \n", "3 B073 Peninsula \n", "4 B073 Waterfall \n", "\n", " Location Park_Name \\\n", "0 Enter behind the Salt Marsh Nature Center, loc... Marine Park \n", "1 Enter Park at Lincoln Road and Ocean Avenue en... Prospect Park \n", "2 Enter Park at Lincoln Road and Ocean Avenue en... Prospect Park \n", "3 Enter Park at Lincoln Road and Ocean Avenue en... Prospect Park \n", "4 Enter Park at Lincoln Road and Ocean Avenue en... Prospect Park \n", "\n", " Length Difficulty Other_Details \\\n", "0 0.8 miles None

The first half of this mile-long trail foll... \n", "1 1.0 mile Easy Explore the Lullwater to see how nature thrive... \n", "2 0.75 miles Easy Step back in time with a walk through Brooklyn... \n", "3 0.5 miles Easy Discover how the Peninsula has changed over th... \n", "4 0.5 miles Easy Trace the source of the Lake on the Waterfall ... \n", "\n", " Accessible Limited_Access lat lon \n", "0 Y N NaN NaN \n", "1 N N NaN NaN \n", "2 N N NaN NaN \n", "3 N N NaN NaN \n", "4 N N NaN NaN " ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hiking = pd.read_json('./dataset/hiking.json')\n", "hiking.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Removing redundant features\n", "- Remove noisy features\n", "- Remove correlated features\n", " - Statistically correlated: features move together directionally\n", " - Linear models assume feature independence\n", " - Pearson correlation coefficient\n", "- Remove duplicated features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Selecting relevant features\n", "Now let's identify the redundant columns in the `volunteer` dataset and perform feature selection on the dataset to return a DataFrame of the relevant features.\n", "\n", "For example, if you explore the `volunteer` dataset in the console, you'll see three features which are related to location: `locality`, `region`, and `postalcode`. They contain repeated information, so it would make sense to keep only one of the features.\n", "\n", "There are also features that have gone through the feature engineering process: columns like `Education` and `Emergency Preparedness` are a product of encoding the categorical variable `category_desc`, so `category_desc` itself is redundant now.\n", "\n", "Take a moment to examine the features of volunteer in the console, and try to identify the redundant features." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "

\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
vol_requeststitlehitscategory_desclocalityregionpostalcodecreated_datevol_requests_lognormcreated_monthEducationEmergency PreparednessEnvironmentHealthHelping Neighbors in NeedStrengthening Communities
02Web designer22Strengthening Communities5 22nd St\\nNew York, NY 10010\\n(40.74053152272...NY10010.02011-01-140.6931471000001
120Urban Adventures - Ice Skating at Lasker Rink62Strengthening CommunitiesNaNNY10026.02011-01-192.9957321000001
2500Fight global hunger and support women farmers ...14Strengthening CommunitiesNaNNY2114.02011-01-216.2146081000001
315Stop 'N' Swap31EnvironmentNaNNY10455.02011-01-282.7080501001000
415Queens Stop 'N' Swap135EnvironmentNaNNY11372.02011-01-282.7080501001000
\n", "
" ], "text/plain": [ " vol_requests title hits \\\n", "0 2 Web designer 22 \n", "1 20 Urban Adventures - Ice Skating at Lasker Rink 62 \n", "2 500 Fight global hunger and support women farmers ... 14 \n", "3 15 Stop 'N' Swap 31 \n", "4 15 Queens Stop 'N' Swap 135 \n", "\n", " category_desc \\\n", "0 Strengthening Communities \n", "1 Strengthening Communities \n", "2 Strengthening Communities \n", "3 Environment \n", "4 Environment \n", "\n", " locality region postalcode \\\n", "0 5 22nd St\\nNew York, NY 10010\\n(40.74053152272... NY 10010.0 \n", "1 NaN NY 10026.0 \n", "2 NaN NY 2114.0 \n", "3 NaN NY 10455.0 \n", "4 NaN NY 11372.0 \n", "\n", " created_date vol_requests_lognorm created_month Education \\\n", "0 2011-01-14 0.693147 1 0 \n", "1 2011-01-19 2.995732 1 0 \n", "2 2011-01-21 6.214608 1 0 \n", "3 2011-01-28 2.708050 1 0 \n", "4 2011-01-28 2.708050 1 0 \n", "\n", " Emergency Preparedness Environment Health Helping Neighbors in Need \\\n", "0 0 0 0 0 \n", "1 0 0 0 0 \n", "2 0 0 0 0 \n", "3 0 1 0 0 \n", "4 0 1 0 0 \n", "\n", " Strengthening Communities \n", "0 1 \n", "1 1 \n", "2 1 \n", "3 0 \n", "4 0 " ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "volunteer = pd.read_csv('./dataset/volunteer_sample.csv')\n", "volunteer.dropna(subset=['category_desc'], axis=0, inplace=True)\n", "volunteer.head()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Index(['vol_requests', 'title', 'hits', 'category_desc', 'locality', 'region',\n", " 'postalcode', 'created_date', 'vol_requests_lognorm', 'created_month',\n", " 'Education', 'Emergency Preparedness', 'Environment', 'Health',\n", " 'Helping Neighbors in Need', 'Strengthening Communities'],\n", " dtype='object')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "volunteer.columns" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
titlehitspostalcodevol_requests_lognormcreated_monthEducationEmergency PreparednessEnvironmentHealthHelping Neighbors in NeedStrengthening Communities
0Web designer2210010.00.6931471000001
1Urban Adventures - Ice Skating at Lasker Rink6210026.02.9957321000001
2Fight global hunger and support women farmers ...142114.06.2146081000001
3Stop 'N' Swap3110455.02.7080501001000
4Queens Stop 'N' Swap13511372.02.7080501001000
\n", "
" ], "text/plain": [ " title hits postalcode \\\n", "0 Web designer 22 10010.0 \n", "1 Urban Adventures - Ice Skating at Lasker Rink 62 10026.0 \n", "2 Fight global hunger and support women farmers ... 14 2114.0 \n", "3 Stop 'N' Swap 31 10455.0 \n", "4 Queens Stop 'N' Swap 135 11372.0 \n", "\n", " vol_requests_lognorm created_month Education Emergency Preparedness \\\n", "0 0.693147 1 0 0 \n", "1 2.995732 1 0 0 \n", "2 6.214608 1 0 0 \n", "3 2.708050 1 0 0 \n", "4 2.708050 1 0 0 \n", "\n", " Environment Health Helping Neighbors in Need Strengthening Communities \n", "0 0 0 0 1 \n", "1 0 0 0 1 \n", "2 0 0 0 1 \n", "3 1 0 0 0 \n", "4 1 0 0 0 " ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Create a list of redundant column names to drop\n", "to_drop = [\"locality\", \"region\", \"category_desc\", \"created_date\", \"vol_requests\"]\n", "\n", "# Drop those columns from the dataset\n", "volunteer_subset = volunteer.drop(to_drop, axis=1)\n", "\n", "# Print out the head of the new dataset\n", "volunteer_subset.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Checking for correlated features\n", "Let's take a look at the `wine` dataset again, which is made up of continuous, numerical features. Run Pearson's correlation coefficient on the dataset to determine which columns are good candidates for eliminating. Then, remove those columns from the DataFrame.\n", "\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
FlavanoidsTotal phenolsMalic acidOD280/OD315 of diluted winesHue
03.062.801.713.921.04
12.762.651.783.401.05
23.242.802.363.171.03
33.493.851.953.450.86
42.692.802.592.931.04
\n", "
" ], "text/plain": [ " Flavanoids Total phenols Malic acid OD280/OD315 of diluted wines Hue\n", "0 3.06 2.80 1.71 3.92 1.04\n", "1 2.76 2.65 1.78 3.40 1.05\n", "2 3.24 2.80 2.36 3.17 1.03\n", "3 3.49 3.85 1.95 3.45 0.86\n", "4 2.69 2.80 2.59 2.93 1.04" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wine = pd.read_csv('./dataset/wine_sample.csv')\n", "wine.head()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Flavanoids Total phenols Malic acid \\\n", "Flavanoids 1.000000 0.864564 -0.411007 \n", "Total phenols 0.864564 1.000000 -0.335167 \n", "Malic acid -0.411007 -0.335167 1.000000 \n", "OD280/OD315 of diluted wines 0.787194 0.699949 -0.368710 \n", "Hue 0.543479 0.433681 -0.561296 \n", "\n", " OD280/OD315 of diluted wines Hue \n", "Flavanoids 0.787194 0.543479 \n", "Total phenols 0.699949 0.433681 \n", "Malic acid -0.368710 -0.561296 \n", "OD280/OD315 of diluted wines 1.000000 0.565468 \n", "Hue 0.565468 1.000000 \n" ] } ], "source": [ "# Print out the column correlations of the wine dataset\n", "print(wine.corr())" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "# Take a minute to find the column where the correlation value is greater than 0.75 at least twice\n", "to_drop = \"Flavanoids\"\n", "\n", "# Drop that column from the DataFrame\n", "wine = wine.drop(to_drop, axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Selecting features using text vectors\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exploring text vectors, part 1\n", "Let's expand on the text vector exploration method we just learned about, using the `volunteer` dataset's title tf/idf vectors. In this first part of text vector exploration, we're going to add to that function we learned about in the slides. We'll return a list of numbers with the function. In the next exercise, we'll write another function to collect the top words across all documents, extract them, and then use that list to filter down our `text_tfidf` vector." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "vocab_csv = pd.read_csv('./dataset/vocab_volunteer.csv', index_col=0).to_dict()\n", "vocab = vocab_csv['0']" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "volunteer = volunteer[['category_desc', 'title']]\n", "volunteer = volunteer.dropna(subset=['category_desc'], axis=0)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "from sklearn.feature_extraction.text import TfidfVectorizer\n", "\n", "# Take the title text\n", "title_text = volunteer['title']\n", "\n", "# Create the vectorizer method\n", "tfidf_vec = TfidfVectorizer()\n", "\n", "# Transform the text into tf-idf vectors\n", "text_tfidf = tfidf_vec.fit_transform(title_text)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[189, 942, 466]\n" ] } ], "source": [ "# Add in the rest of the parameters\n", "def return_weights(vocab, original_vocab, vector, vector_index, top_n):\n", " zipped = dict(zip(vector[vector_index].indices, vector[vector_index].data))\n", " \n", " # Let's transform that zipped dict into a series\n", " zipped_series = pd.Series({vocab[i]:zipped[i] for i in vector[vector_index].indices})\n", " \n", " # Let's sort the series to pull out the top n weighted words\n", " zipped_index = zipped_series.sort_values(ascending=False)[:top_n].index\n", " return [original_vocab[i] for i in zipped_index]\n", "\n", "# Print out the weighted words\n", "print(return_weights(vocab, tfidf_vec.vocabulary_, text_tfidf, vector_index=8, top_n=3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exploring text vectors, part 2\n", "Using the function we wrote in the previous exercise, we're going to extract the top words from each document in the text vector, return a list of the word indices, and use that list to filter the text vector down to those top words." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "def words_to_filter(vocab, original_vocab, vector, top_n):\n", " filter_list = []\n", " for i in range(0, vector.shape[0]):\n", " # here we'll call the function from the previous exercise, \n", " # and extend the list we're creating\n", " filtered = return_weights(vocab, original_vocab, vector, i, top_n)\n", " filter_list.extend(filtered)\n", " # Return the list in a set, so we don't get duplicate word indices\n", " return set(filter_list)\n", "\n", "# Call the function to get the list of word indices\n", "filtered_words = words_to_filter(vocab, tfidf_vec.vocabulary_, text_tfidf, top_n=3)\n", "\n", "# By converting filtered_words back to a list, \n", "# we can use it to filter the columns in the text vector\n", "filtered_text = text_tfidf[:, list(filtered_words)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training Naive Bayes with feature selection\n", "Let's re-run the Naive Bayes text classification model we ran at the end of chapter 3, with our selection choices from the previous exercise, on the `volunteer` dataset's `title` and `category_desc` columns.\n", "\n" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.5483870967741935\n" ] } ], "source": [ "from sklearn.model_selection import train_test_split\n", "from sklearn.naive_bayes import GaussianNB\n", "\n", "nb = GaussianNB()\n", "y = volunteer['category_desc']\n", "\n", "# Split the dataset according to the class distribution of category_desc,\n", "# using the filtered_text vector\n", "X_train, X_test, y_train, y_test = train_test_split(filtered_text.toarray(), y, stratify=y)\n", "\n", "# Fit the model to the training data\n", "nb.fit(X_train, y_train)\n", "\n", "# Print out the model's accuracy\n", "print(nb.score(X_test, y_test))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that our accuracy score wasn't that different from the score at the end of chapter 3. That's okay; the `title` field is a very small text field, appropriate for demonstrating how filtering vectors works." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dimensionality reduction\n", "- Unsupervised learning method\n", "- Combine/decomposes a feature space\n", "- Feature extraction\n", "- Principal component analysis\n", " - Linear transformation to uncorrelated space\n", " - Captures as much variance as possible in each component\n", "- PCA caveats\n", " - Difficult to interpret components\n", " - End of preprocessing journey" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using PCA\n", "Let's apply PCA to the `wine` dataset, to see if we can get an increase in our model's accuracy." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
TypeAlcoholMalic acidAshAlcalinity of ashMagnesiumTotal phenolsFlavanoidsNonflavanoid phenolsProanthocyaninsColor intensityHueOD280/OD315 of diluted winesProline
0114.231.712.4315.61272.803.060.282.295.641.043.921065
1113.201.782.1411.21002.652.760.261.284.381.053.401050
2113.162.362.6718.61012.803.240.302.815.681.033.171185
3114.371.952.5016.81133.853.490.242.187.800.863.451480
4113.242.592.8721.01182.802.690.391.824.321.042.93735
\n", "
" ], "text/plain": [ " Type Alcohol Malic acid Ash Alcalinity of ash Magnesium \\\n", "0 1 14.23 1.71 2.43 15.6 127 \n", "1 1 13.20 1.78 2.14 11.2 100 \n", "2 1 13.16 2.36 2.67 18.6 101 \n", "3 1 14.37 1.95 2.50 16.8 113 \n", "4 1 13.24 2.59 2.87 21.0 118 \n", "\n", " Total phenols Flavanoids Nonflavanoid phenols Proanthocyanins \\\n", "0 2.80 3.06 0.28 2.29 \n", "1 2.65 2.76 0.26 1.28 \n", "2 2.80 3.24 0.30 2.81 \n", "3 3.85 3.49 0.24 2.18 \n", "4 2.80 2.69 0.39 1.82 \n", "\n", " Color intensity Hue OD280/OD315 of diluted wines Proline \n", "0 5.64 1.04 3.92 1065 \n", "1 4.38 1.05 3.40 1050 \n", "2 5.68 1.03 3.17 1185 \n", "3 7.80 0.86 3.45 1480 \n", "4 4.32 1.04 2.93 735 " ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wine = pd.read_csv('./dataset/wine_types.csv')\n", "wine.head()" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[9.98091230e-01 1.73591562e-03 9.49589576e-05 5.02173562e-05\n", " 1.23636847e-05 8.46213034e-06 2.80681456e-06 1.52308053e-06\n", " 1.12783044e-06 7.21415811e-07 3.78060267e-07 2.12013755e-07\n", " 8.25392788e-08]\n" ] } ], "source": [ "from sklearn.decomposition import PCA\n", "\n", "# Set up PCA and the X vector for dimensionality reduction\n", "pca = PCA()\n", "wine_X = wine.drop('Type', axis=1)\n", "\n", "# Apply PCA to the wine dataset X vector\n", "transformed_X = pca.fit_transform(wine_X)\n", "\n", "# Look at the percentage of variance explained by the different components\n", "print(pca.explained_variance_ratio_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training a model with PCA\n", "Now that we have run PCA on the `wine` dataset, let's try training a model with it." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.7555555555555555\n" ] } ], "source": [ "from sklearn.neighbors import KNeighborsClassifier\n", "\n", "y = wine['Type']\n", "\n", "knn = KNeighborsClassifier()\n", "\n", "# Split the transformed X and the y labels into training and test sets\n", "X_wine_train, X_wine_test, y_wine_train, y_wine_test = train_test_split(transformed_X, y)\n", "\n", "# Fit knn to the training data\n", "knn.fit(X_wine_train, y_wine_train)\n", "\n", "# Score knn on the test data and print it out\n", "print(knn.score(X_wine_test, y_wine_test))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }