{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "##
Open Machine Learning Course mlcourse.ai. English session #2\n", "\n", "###
Author: Valentin Kovalev (@ValentineKvLove)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##
Individual data analysis project
\n", "###
Predict rain tomorrow in Australia
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Research plan**\n", " - Feature and data explanation\n", " - Primary data analysis\n", " - Primary visual data analysis\n", " - Insights and found dependencies\n", " - Metrics selection\n", " - Model selection\n", " - Data preprocessing\n", " - Cross-validation and adjustment of model hyperparameters\n", " - Creation of new features and description of this process\n", " - Plotting training and validation curves\n", " - Prediction for test or hold-out samples\n", " - Conclusions\n", " \n", "Read more here." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "\n", "from sklearn.preprocessing import StandardScaler\n", "from sklearn.model_selection import train_test_split\n", "from scipy import stats\n", "\n", "import operator\n", "\n", "import warnings\n", "warnings.filterwarnings('ignore')\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###
1. Feature and data explanation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this dataset there are two targets we can try to predict: **RainTomorrow** or **RISK_MM**.
\n", "\n", "**RainTomorrow** means: Did it rain the next day? Yes or No.
\n", "\n", "**Risk-MM** means: The amount of rain the next day.
\n", "\n", "In this project we'll try to predict **whether or not it will rain tomorrow** by training a binary classification model on target **RainTomorrow**.
\n", "
\n", "Challenges (some spoilers) in this dataset:
\n", "- Unbalanced target RainTomorrow
\n", "- Missing data (even in the target)
\n", "- Numeric and categorical variables

\n", "\n", "### Content
\n", "The content is daily weather observations from numerous Australian weather stations.
\n", "There is target we can try to predict: **RainTomorrow**.
\n", "**RainTomorrow** means: Did it rain the next day? Yes or No.
\n", "### Source & Acknowledgements
\n", "Original dataset was obtained from https://www.kaggle.com/jsphyg/weather-dataset-rattle-package
\n", "Observations were drawn from numerous weather stations. The daily observations are available from http://www.bom.gov.au/climate/data. Copyright Commonwealth of Australia 2010, Bureau of Meteorology.
\n", "
\n", "Definitions adapted from http://www.bom.gov.au/climate/dwo/IDCJDW0000.shtml
\n", "
\n", "Package home page: http://rattle.togaware.com.\n", "
Data source: http://www.bom.gov.au/climate/dwo/ and http://www.bom.gov.au/climate/data.
\n", "
\n", "And to see some great examples of how to use this data: https://togaware.com/onepager/
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Feature descriptions in dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Date**
The date of observation.
Type: Datetime

\n", "**Location**
The common name of the location of the weather station.
Type: String

\n", "**MinTemp**
The minimum temperature in degrees Celsius.
Type: Numeric

\n", "**MaxTemp**
The maximum temperature in degrees Celsius.
Type: Numeric

\n", "**Rainfall**
The amount of rainfall recorded for the day in mm.
Type: Numeric

\n", "**Evaporation**
The so-called Class A pan evaporation (mm) in the 24 hours to 9am.
Type: Numeric

\n", "**Sunshine**
The number of hours of bright sunshine in the day.
Type: Numeric

\n", "**WindGustDir**
The direction of the strongest wind gust in the 24 hours to midnight.
Type: String

\n", "**WindGustSpeed**
The speed (km/h) of the strongest wind gust in the 24 hours to midnight.
Type: Numeric

\n", "**WindDir9am**
Direction of the wind at 9am.
Type: String

\n", "**WindDir3pm**
Direction of the wind at 3pm.
Type: String

\n", "**WindSpeed9am**
Wind speed (km/hr) averaged over 10 minutes prior to 9am.
Type: Numeric

\n", "**WindSpeed3pm**
Wind speed (km/hr) averaged over 10 minutes prior to 3pm.
Type: Numeric

\n", "**Humidity9am**
Humidity (percent) at 9am.
Type: Numeric

\n", "**Humidity3pm**
Humidity (percent) at 3pm.
Type: Numeric

\n", "**Pressure9am**
Atmospheric pressure (hPa) reduced to mean sea level at 9am.
Type: Numeric

\n", "**Pressure3pm**
Atmospheric pressure (hPa) reduced to mean sea level at 3pm.
Type: Numeric

\n", "**Cloud9am**
Fraction of sky obscured by cloud at 9am. This is measured in \"oktas\", which are a unit of eighths. It records how many eighths of the sky are obscured by cloud. A 0 measure indicates a completely clear sky, whilst an 8 indicates that it is completely overcast.
Type: Numeric

\n", "**Cloud3pm**
Fraction of sky obscured by cloud (in \"oktas\": eighths) at 3pm. See Cloud9am for a description of the values.
Type: Numeric

\n", "**Temp9am**
Temperature (degrees C) at 9am.
Type: Numeric

\n", "**Temp3pm**
Temperature (degrees C) at 3pm.
Type: Numeric

\n", "**RainToday**
1 if precipitation (mm) in the 24 hours to 9am exceeds 1mm, otherwise 0.
Type: Boolean

\n", "**RISK_MM**
The amount of rain. A kind of measure of the \"risk\".
Type: Numeric

\n", "**RainTomorrow**
The target variable. Did it rain the next day?
Type: Boolean

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###
2. Primary data analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, import the libraries we will work with." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "from sklearn import preprocessing\n", "import matplotlib.pyplot as plt\n", "\n", "import os\n", "\n", "# Data files are expected in the \"input/\" directory." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Declare the expected data types of all columns in the dataset (for reference; columns with missing values are kept as floats)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dtype = {'Date': str,\n", "         'Location': str,\n", "         'MinTemp': np.float16,\n", "         'MaxTemp': np.float16,\n", "         'Rainfall': np.float16,\n", "         'Evaporation': np.float16,\n", "         'Sunshine': np.float16,\n", "         'WindGustDir': str,\n", "         'WindGustSpeed': np.float16,\n", "         'WindDir9am': str,\n", "         'WindDir3pm': str,\n", "         'WindSpeed9am': np.float16,\n", "         'WindSpeed3pm': np.float16,\n", "         'Humidity9am': np.float16,\n", "         'Humidity3pm': np.float16,\n", "         'Pressure9am': np.float16,\n", "         'Pressure3pm': np.float16,\n", "         'Cloud9am': np.float16,\n", "         'Cloud3pm': np.float16,\n", "         'Temp9am': np.float16,\n", "         'Temp3pm': np.float16,\n", "         'RainToday': str,\n", "         'RISK_MM': np.float16,\n", "         'RainTomorrow': str}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Read the data into memory and create a Pandas DataFrame object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all = pd.read_csv(\"input/weatherAUS.csv\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check the number of rows and columns and print the column names." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(df_all.shape)\n", "print(df_all.columns)\n", "df_all.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Drop the 'RISK_MM' column, because we want to solve a classification task, and print the first 5 rows of our new DataFrame object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = df_all.drop(columns=['RISK_MM'])\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Transpose the frame to see all features at once." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.head().T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Examine the data types of all features and the total dataframe size in memory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the distribution characteristics of the categorical features. First, we cast the categorical features **Date, Location, WindGustDir, WindDir9am, WindDir3pm, RainToday, RainTomorrow** to the string type so that the describe() method processes them correctly."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "int_features = ['Date', 'Location', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 'RainToday', 'RainTomorrow']\n", "\n", "for feat in int_features:\n", " df[feat] = df[feat].astype('str')\n", " \n", "df.describe(exclude=['float64']).T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The sample includes 49 settlement.
\n", "There are no anomalous values in the date, Location and WindDirs.
\n", "WindDir (all) - 16 values for directions, and one is for windless.
\n", "RainToday - 'Yes'/'No'/NaN - can represented as bool-type.
\n", "RainTomorrow - 'Yes'/'No' - can represented as bool-type.
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us highlight the float features and look at the characteristics of their distribution." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.describe(include=['float64']).T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Можно сделать предварительные выводы по признакам:
\n", "- **MinTemp, MaxTemp, Temp9am, Temp3pm** (The minimum/maximum temperature at this day, or temperature at 9am/3pm in degrees celsius) - values probably do not include emissions due to errors of measurement or their number is few; it is possible to do verification using geo-services.\n", "- **Rainfall** (the amount of rainfall recorded for the day in mm) - want to believe that there are no measurement errors, otherwise we have questions to the data layout; more than 50% of the data contain information about the absence of precipitation, and 75% - about the absence or minor precipitation.\n", "- **Evaporation** (The so-called Class A pan evaporation (mm) in the 24 hours to 9am) - looks like that there are no measurement errors, at more days this indicator is > 0, got many (>60%) skipped values.\n", "- **Sunshine** (The number of hours of bright sunshine in the day) - got many skipped values.\n", "- **WindGustSpeed, WindSpeed9am, WindSpeed3pm** (Wind speed) - got NaN values at some positions.\n", "- **Humidity9am, Humidity3pm** (Humidity (percent) at 9am/3pm) - values got some NaN values.\n", "- **Pressure9am, Pressure3pm** (Atmospheric pressure (hpa) reduced to mean sea level at 9am/3pm) - about 10% of NaNs values.\n", "- **Cloud9am, Cloud3pm** (Fraction of sky obscured by cloud at 9am/3pm. This is measured in \"oktas\", which are a unit of eigths. It records how many eigths of the sky are obscured by cloud. A 0 measure indicates completely clear sky whilst an 8 indicates that it is completely overcast) - got many NaN values (about 40%), we can try to drop it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see there are some columns with null values.
\n", "Let's find out which of the columns have maximum null values" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.count().sort_values()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see the first four columns have less than 60% data, we can ignore these four columns. It can be, because the observer was not able to check these features. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Consider the correlation of real features, excluding the target. We map the distribution and top 20 correlating pairs of features by two methods: Pearson and Spearman (estimating the strength of linear and monotonic relationships, respectively)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "numeric_columns = [col for col in df if (df[col].dtype == 'float64')]\n", "\n", "pearson_corr = df[numeric_columns].corr(method='pearson')\n", "spearman_corr = df[numeric_columns].corr(method='spearman')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "no_target_unique_pearson_corr = dict()\n", "no_target_unique_spearman_corr = dict()\n", "\n", "for (row, r_name) in enumerate(pearson_corr.index):\n", " for (col, c_name) in enumerate(pearson_corr.columns[row+1:]):\n", " if('RainTomorrow' in [r_name, c_name]):\n", " continue\n", " no_target_unique_pearson_corr['{}-{}'.format(r_name, c_name)] = abs(pearson_corr.iloc[row, col])\n", " \n", "for (row, r_name) in enumerate(spearman_corr.index):\n", " for (col, c_name) in enumerate(spearman_corr.columns[row+1:]):\n", " if('RainTomorrow' in [r_name, c_name]):\n", " continue\n", " no_target_unique_spearman_corr['{}-{}'.format(r_name, c_name)] = abs(spearman_corr.iloc[row, col])\n", "\n", "pearson_corr_df = pd.DataFrame(sorted(no_target_unique_pearson_corr.items(), key=operator.itemgetter(1), reverse=True))\n", "pearson_corr_df.columns = ['', 'pearson corr']\n", "\n", "spearman_corr_df = pd.DataFrame(sorted(no_target_unique_spearman_corr.items(), key=operator.itemgetter(1), reverse=True))\n", "spearman_corr_df.columns = ['', 'spearman corr']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "corr_df = pd.concat([pearson_corr_df, spearman_corr_df], axis=1)\n", "corr_df.hist()\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "corr_df = pd.concat([pearson_corr_df.head(20), spearman_corr_df.head(20)], axis=1)\n", "corr_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that both methods on the top results give very similar indicators. It can be assumed that there is a correlation between these features, and the relationship is linear. But dependence is not due to the physical nature of the signs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check balance between positive and negative classes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df['RainTomorrow'].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see, that classes unbalanced." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###
3. Primary visual data analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot the balance between positive and negative classes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_sb = sns.countplot(df['RainTomorrow'], label='Total')\n", "# value_counts() sorts by frequency, so the more frequent 'No' class comes first\n", "NotRain, Rain = df['RainTomorrow'].value_counts()\n", "print('Rain: ', Rain)\n", "print('Not Rain: ', NotRain)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we saw before, the classes are unbalanced." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at a heatmap of the Pearson correlations we computed before." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.subplots(figsize=(11, 9))\n", "sns.heatmap(pearson_corr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the temperature, pressure, wind speed and humidity features at 9am and 3pm are strongly correlated.
\n", "And we can see, that rainfall and all wind features, and wind and cloud features correlated too." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###
4. Insights and found dependencies" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Summarising the previous results, we can see that:\n", "- Temperature, pressure, wind speed and humidity measured on the same day at 9am and 3pm are highly correlated. This is expected and suggests that these quantities are fairly stable within a day.
\n", "- Rainfall and wind features correlated, because, existance of wind talks, that rain is near, and if we seen rain at this day, we got big chance to face with wind.
\n", "- Cloud and wind features are correlated, because, its normal nature law." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###
5. Metrics selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the classes are very unbalanced. The usual choice of metric for classification tasks with a strong class imbalance is **ROC-AUC**. Unlike simpler metrics (for example, accuracy), ROC-AUC takes into account both the TPR (True Positive Rate) and the FPR (False Positive Rate), so it is not sensitive to class imbalance. This metric also allows us to assess the quality of the classification based on the predicted class probabilities, without committing to any particular classification threshold." ] },
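{ "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal sketch (on toy labels and scores, not on our dataset), this is how ROC-AUC would be computed with scikit-learn; the toy arrays below are purely illustrative." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative only: toy labels and predicted probabilities of the positive class\n", "from sklearn.metrics import roc_auc_score\n", "toy_true = [0, 0, 1, 1, 0, 1]\n", "toy_proba = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]\n", "print(roc_auc_score(toy_true, toy_proba))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 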
6. Model selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For our task we will consider the following model:\n", "- **LogisticRegression**. A simple linear model is a good baseline in almost any classification problem. Its linear coefficients also allow us to assess the importance of each feature in the dataset, and with the help of L1 regularization it is possible to get rid of linearly dependent features (see the sketch below)." ] },
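{ "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch of such a model (only an illustration, not the tuned model used later): the 'liblinear' solver supports the L1 penalty, and C is the inverse regularization strength. The fit call is commented out because the feature matrix X and target y are prepared later in this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch only: logistic regression with an L1 penalty (liblinear supports 'l1' and 'l2')\n", "from sklearn.linear_model import LogisticRegression\n", "clf_l1_sketch = LogisticRegression(penalty='l1', C=1.0, solver='liblinear')\n", "# After fitting (clf_l1_sketch.fit(X, y)), coefficients equal to zero would mark\n", "# features effectively dropped by the L1 penalty:\n", "# print((clf_l1_sketch.coef_ == 0).sum(), 'coefficients set to zero')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 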
7. Data preprocessing" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.count().sort_values()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, the first four columns have less than 60% of their values present, so we can drop these four columns.
\n", "and don't need the location column because we are going to find if it will rain in Australia(not location specific)
\n", "We are going to drop the date column too." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = df.drop(columns=['Sunshine','Evaporation','Cloud3pm','Cloud9am','Location','Date'],axis=1)\n", "df.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us get rid of all null values in df as baseline. Students are invited to fill NaN values with mean (temperature features) or zero (wind features)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = df.dropna(how='any')\n", "df.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Its time to remove the outliers in our data - we are using Z-score to detect and remove the outliers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy import stats\n", "z = np.abs(stats.zscore(df._get_numeric_data()))\n", "print(z)\n", "df= df[(z < 3).all(axis=1)]\n", "print(df.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets deal with the categorical cloumns.
\n", "Simply change yes/no to 1/0 for RainToday and RainTomorrow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df['RainToday'].replace({'No': 0, 'Yes': 1},inplace = True)\n", "df['RainTomorrow'].replace({'No': 0, 'Yes': 1},inplace = True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See unique values and convert them to int using pd.getDummies()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "categorical_columns = ['WindGustDir', 'WindDir3pm', 'WindDir9am']\n", "for col in categorical_columns:\n", " print(np.unique(df[col]))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Transform the categorical columns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.get_dummies(df, columns=categorical_columns)\n", "df.iloc[4:9]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next step is to standardize our data - using MinMaxScaler" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import preprocessing\n", "scaler = preprocessing.MinMaxScaler()\n", "scaler.fit(df)\n", "df = pd.DataFrame(scaler.transform(df), index=df.index, columns=df.columns)\n", "df.iloc[4:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Feature Selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we are done with the pre-processing part, let's see which are the important features for RainTomorrow!
\n", "Using SelectKBest to get the top features!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.feature_selection import SelectKBest, chi2\n", "X = df.loc[:,df.columns!='RainTomorrow']\n", "y = df['RainTomorrow']\n", "selector = SelectKBest(chi2, k=10)\n", "selector.fit(X, y)\n", "X_new = selector.transform(X)\n", "print(X.columns[selector.get_support(indices=True)]) #top 3 columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's get hold of the important features as assign them as X" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = df[['Rainfall', 'WindGustSpeed', 'Humidity9am', 'Humidity3pm',\n", " 'Pressure9am', 'Pressure3pm', 'RainToday', 'WindDir9am_E',\n", " 'WindDir9am_N', 'WindDir9am_NNW']]\n", "X = np.array(df) # let's use only one feature Humidity3pm\n", "#y = np.array(y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's time to divide data to training and hold-out sets. We will use StratifiedKFold, this cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import StratifiedKFold" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Optimal value for splits by time and quality is 5." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "skf = StratifiedKFold(n_splits = 5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###
8. Cross-validation and adjustment of model hyperparameters" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before tuning the model hyperparameters, let's recall what they mean.\n", "**Remember** that we use the parameter C as our regularization parameter, where C = 1/λ.
\n", "Lambda (λ) controls the trade-off between allowing the model to increase it's complexity as much as it wants with trying to keep it simple. For example, if λ is very low or 0, the model will have enough power to increase it's complexity (overfit) by assigning big values to the weights for each parameter. If, in the other hand, we increase the value of λ, the model will tend to underfit, as the model will become too simple.
\n", "Parameter C will work the other way around. For small values of C, we increase the regularization strength which will create simple models which underfit the data. For big values of C, we low the power of regularization which imples the model is allowed to increase it's complexity, and therefore, overfit the data.
\n", "Parameter 'penalty' is for type of regularization (l1, l2 or elastic_net)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegression\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.metrics import roc_auc_score\n", "from sklearn.model_selection import GridSearchCV\n", "\n", "param_grid = {'C': [0.01, 0.1, 1, 10, 100, 1000, 10000, 100000], 'penalty': ['l1', 'l2']}\n", "gridsearch = GridSearchCV(clf_logreg, param_grid, cv=skf, scoring='roc_auc')\n", "gridsearch.fit(X, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, our qulity don't got big changes on each cv_split." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###
9. Plotting training and validation curves and prediction for test or hold-out samples" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's plot the mean cross-validation score of the grid search for each value of C." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# cv_results_ entries alternate penalty 'l1'/'l2' for each value of C\n", "score_l1 = gridsearch.cv_results_['mean_test_score'][::2]\n", "score_l2 = gridsearch.cv_results_['mean_test_score'][1::2]\n", "param = list(param_grid['C'])\n", "\n", "plt.figure(figsize=(15, 5))\n", "plt.title('Coarse grid of parameters')\n", "plt.plot(range(len(score_l1)), score_l1, label='l1-regularization')\n", "plt.plot(range(len(score_l2)), score_l2, label='l2-regularization')\n", "plt.xticks(range(len(score_l1)), param)\n", "plt.xlabel('C')\n", "plt.ylabel('score')\n", "plt.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The behaviour under strong regularization suggests that our top-10 features are highly correlated, and we could probably use only 3-5 of them." ] },
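{ "cell_type": "markdown", "metadata": {}, "source": [ "As a final check promised in the section title, here is a simple hold-out evaluation sketch: we refit the best estimator on a stratified train split and score ROC-AUC on the held-out part. The 70/30 split and the random_state value are arbitrary choices for illustration." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hold-out sketch: refit the best model from the grid search and score it on unseen rows\n", "X_train, X_holdout, y_train, y_holdout = train_test_split(\n", "    X, y, test_size=0.3, stratify=y, random_state=17)\n", "best_logreg = gridsearch.best_estimator_\n", "best_logreg.fit(X_train, y_train)\n", "holdout_pred = best_logreg.predict_proba(X_holdout)[:, 1]\n", "print(roc_auc_score(y_holdout, holdout_pred))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 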
10. Conclusions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, we have many correlated features, and one option is to drop some of them from the input dataframe. It is a little disappointing that the dataset contains so many correlated features and so few independent ones, which makes it hard to generate useful new features.
\n", "But we can see, that we gotta some killer features, because, just only them gives us 0.85 auc-roc score, and it is not bad baseline for this task.
\n", "Thank you for listening!
" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }