{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Predicting Credit Card Approvals\n", "> A Summary of project \"Predicting Credit Card Approvals\", via datacamp\n", "\n", "- toc: true \n", "- badges: true\n", "- comments: true\n", "- author: Chanseok Kang\n", "- categories: [Python, Datacamp, Machine_Learning]\n", "- image: images/credit_card.jpg" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "3" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 1. Credit card applications\n", "

Commercial banks receive a lot of credit card applications. Many of them get rejected for reasons like high loan balances, low income levels, or too many inquiries on an individual's credit report. Manually analyzing these applications is mundane, error-prone, and time-consuming (and time is money!). Luckily, this task can be automated with the power of machine learning, and pretty much every commercial bank does so nowadays. In this notebook, we will build an automatic credit card approval predictor using machine learning techniques, just like real banks do.

\n", "

\"Credit

\n", "

We'll use the Credit Card Approval dataset from the UCI Machine Learning Repository. The structure of this notebook is as follows:

- First, we load and view the dataset. Since this data is confidential, the contributor of the dataset has anonymized the feature names.
- Then, we preprocess the dataset: handling missing values, converting non-numeric data to numeric, splitting the data into train and test sets, and scaling the feature values.
- Finally, we fit a logistic regression model to predict approvals, evaluate it, and tune its hyperparameters with a grid search.

" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "dc": { "key": "3" }, "tags": [ "sample_code" ] }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123456789101112131415
0b30.830.000ugwv1.25tt1fg002020+
1a58.674.460ugqh3.04tt6fg00043560+
2a24.500.500ugqh1.50tf0fg00280824+
3b27.831.540ugwv3.75tt5tg001003+
4b20.175.625ugwv1.71tf0fs001200+
\n", "
" ], "text/plain": [ " 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15\n", "0 b 30.83 0.000 u g w v 1.25 t t 1 f g 00202 0 +\n", "1 a 58.67 4.460 u g q h 3.04 t t 6 f g 00043 560 +\n", "2 a 24.50 0.500 u g q h 1.50 t f 0 f g 00280 824 +\n", "3 b 27.83 1.540 u g w v 3.75 t t 5 t g 00100 3 +\n", "4 b 20.17 5.625 u g w v 1.71 t f 0 f s 00120 0 +" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import pandas\n", "import pandas as pd\n", "\n", "# Load dataset\n", "cc_apps = pd.read_csv('./dataset/cc_approvals.data', header=None)\n", "\n", "# Inspect data\n", "cc_apps.head()" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "10" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 2. Inspecting the applications\n", "

The output may appear a bit confusing at first sight, but let's try to figure out the most important features of a credit card application. The features of this dataset have been anonymized to protect the applicants' privacy, but a blog post analyzing this dataset gives us a pretty good overview of the probable features: Gender, Age, Debt, Married, BankCustomer, EducationLevel, Ethnicity, YearsEmployed, PriorDefault, Employed, CreditScore, DriversLicense, Citizen, ZipCode, Income, and finally ApprovalStatus. This gives us a pretty good starting point, and we can map these features to the columns in the output.
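For orientation, we can attach these probable names to the anonymized column labels. This is a minimal sketch; the notebook itself keeps the numeric 0-15 labels, and the name-to-column mapping is an assumption based on the blog's overview:

```python
# Probable feature names in column order (assumed, not part of the dataset)
probable_names = ['Gender', 'Age', 'Debt', 'Married', 'BankCustomer', 'EducationLevel',
                  'Ethnicity', 'YearsEmployed', 'PriorDefault', 'Employed', 'CreditScore',
                  'DriversLicense', 'Citizen', 'ZipCode', 'Income', 'ApprovalStatus']

# Map each numeric column label to its probable name for easy lookup
column_map = dict(zip(cc_apps.columns, probable_names))
print(column_map[10], column_map[15])  # CreditScore ApprovalStatus
```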

\n", "

As we can see from our first glance at the data, the dataset has a mixture of numerical and non-numerical features. This can be fixed with some preprocessing, but before we do that, let's learn a bit more about the dataset to see whether there are other issues that need fixing.

" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "dc": { "key": "10" }, "tags": [ "sample_code" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 2 7 10 14\n", "count 690.000000 690.000000 690.00000 690.000000\n", "mean 4.758725 2.223406 2.40000 1017.385507\n", "std 4.978163 3.346513 4.86294 5210.102598\n", "min 0.000000 0.000000 0.00000 0.000000\n", "25% 1.000000 0.165000 0.00000 0.000000\n", "50% 2.750000 1.000000 0.00000 5.000000\n", "75% 7.207500 2.625000 3.00000 395.500000\n", "max 28.000000 28.500000 67.00000 100000.000000\n", "\n", "\n", "\n", "RangeIndex: 690 entries, 0 to 689\n", "Data columns (total 16 columns):\n", " # Column Non-Null Count Dtype \n", "--- ------ -------------- ----- \n", " 0 0 690 non-null object \n", " 1 1 690 non-null object \n", " 2 2 690 non-null float64\n", " 3 3 690 non-null object \n", " 4 4 690 non-null object \n", " 5 5 690 non-null object \n", " 6 6 690 non-null object \n", " 7 7 690 non-null float64\n", " 8 8 690 non-null object \n", " 9 9 690 non-null object \n", " 10 10 690 non-null int64 \n", " 11 11 690 non-null object \n", " 12 12 690 non-null object \n", " 13 13 690 non-null object \n", " 14 14 690 non-null int64 \n", " 15 15 690 non-null object \n", "dtypes: float64(2), int64(2), object(12)\n", "memory usage: 86.4+ KB\n", "None\n", "\n", "\n", " 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15\n", "616 b 22.67 0.750 u g i v 1.585 f t 1 t g 00400 9 -\n", "617 b 32.25 14.000 y p ff ff 0.000 f t 2 f g 00160 1 -\n", "618 b 29.58 4.750 u g m v 2.000 f t 1 t g 00460 68 -\n", "619 b 18.42 10.415 y p aa v 0.125 t f 0 f g 00120 375 -\n", "620 b 22.17 2.250 u g i v 0.125 f f 0 f g 00160 10 -\n", ".. .. ... ... .. .. .. .. ... .. .. .. .. .. ... ... ..\n", "685 b 21.08 10.085 y p e h 1.250 f f 0 f g 00260 0 -\n", "686 a 22.67 0.750 u g c v 2.000 f t 2 t g 00200 394 -\n", "687 a 25.25 13.500 y p ff ff 2.000 f t 1 t g 00200 1 -\n", "688 b 17.92 0.205 u g aa v 0.040 f f 0 f g 00280 750 -\n", "689 b 35.00 3.375 u g c h 8.290 f f 0 t g 00000 0 -\n", "\n", "[74 rows x 16 columns]\n" ] } ], "source": [ "# Print summary statistics\n", "cc_apps_description = cc_apps.describe()\n", "print(cc_apps_description)\n", "\n", "print(\"\\n\")\n", "\n", "# Print DataFrame information\n", "cc_apps_info = cc_apps.info()\n", "print(cc_apps_info)\n", "\n", "print(\"\\n\")\n", "\n", "# Inspect missing values in the dataset\n", "print(cc_apps.tail(74))" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "17" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 3. Handling the missing values (part i)\n", "

We've uncovered some issues that will affect the performance of our machine learning model(s) if they go unchanged:

- The dataset contains both numeric and non-numeric data. Specifically, columns 2, 7, 10, and 14 hold numeric values, while all the other columns hold non-numeric values.
- The dataset contains values from several different ranges: some features span 0 to 28, some 0 to 67, and Income goes up to 100000, as the summary statistics above show.
- Finally, the dataset has missing values, which are labeled with a question mark ('?'), as we'll see in the next cell's output.

Now, let's temporarily replace these missing value question marks with NaN.
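As an aside, pandas can also mark these placeholders as missing at load time; a small alternative sketch using read_csv's na_values parameter:

```python
import pandas as pd

# Alternative: treat '?' as NaN while reading the file, skipping the replace step
cc_apps_alt = pd.read_csv('./dataset/cc_approvals.data', header=None, na_values='?')
print(cc_apps_alt.isna().values.sum())  # reports the missing-value count directly
```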

" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "dc": { "key": "17" }, "tags": [ "sample_code" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15\n", "673 ? 29.50 2.000 y p e h 2.000 f f 0 f g 00256 17 -\n", "674 a 37.33 2.500 u g i h 0.210 f f 0 f g 00260 246 -\n", "675 a 41.58 1.040 u g aa v 0.665 f f 0 f g 00240 237 -\n", "676 a 30.58 10.665 u g q h 0.085 f t 12 t g 00129 3 -\n", "677 b 19.42 7.250 u g m v 0.040 f t 1 f g 00100 1 -\n", "678 a 17.92 10.210 u g ff ff 0.000 f f 0 f g 00000 50 -\n", "679 a 20.08 1.250 u g c v 0.000 f f 0 f g 00000 0 -\n", "680 b 19.50 0.290 u g k v 0.290 f f 0 f g 00280 364 -\n", "681 b 27.83 1.000 y p d h 3.000 f f 0 f g 00176 537 -\n", "682 b 17.08 3.290 u g i v 0.335 f f 0 t g 00140 2 -\n", "683 b 36.42 0.750 y p d v 0.585 f f 0 f g 00240 3 -\n", "684 b 40.58 3.290 u g m v 3.500 f f 0 t s 00400 0 -\n", "685 b 21.08 10.085 y p e h 1.250 f f 0 f g 00260 0 -\n", "686 a 22.67 0.750 u g c v 2.000 f t 2 t g 00200 394 -\n", "687 a 25.25 13.500 y p ff ff 2.000 f t 1 t g 00200 1 -\n", "688 b 17.92 0.205 u g aa v 0.040 f f 0 f g 00280 750 -\n", "689 b 35.00 3.375 u g c h 8.290 f f 0 t g 00000 0 -\n", " 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15\n", "673 NaN 29.50 2.000 y p e h 2.000 f f 0 f g 00256 17 -\n", "674 a 37.33 2.500 u g i h 0.210 f f 0 f g 00260 246 -\n", "675 a 41.58 1.040 u g aa v 0.665 f f 0 f g 00240 237 -\n", "676 a 30.58 10.665 u g q h 0.085 f t 12 t g 00129 3 -\n", "677 b 19.42 7.250 u g m v 0.040 f t 1 f g 00100 1 -\n", "678 a 17.92 10.210 u g ff ff 0.000 f f 0 f g 00000 50 -\n", "679 a 20.08 1.250 u g c v 0.000 f f 0 f g 00000 0 -\n", "680 b 19.50 0.290 u g k v 0.290 f f 0 f g 00280 364 -\n", "681 b 27.83 1.000 y p d h 3.000 f f 0 f g 00176 537 -\n", "682 b 17.08 3.290 u g i v 0.335 f f 0 t g 00140 2 -\n", "683 b 36.42 0.750 y p d v 0.585 f f 0 f g 00240 3 -\n", "684 b 40.58 3.290 u g m v 3.500 f f 0 t s 00400 0 -\n", "685 b 21.08 10.085 y p e h 1.250 f f 0 f g 00260 0 -\n", "686 a 22.67 0.750 u g c v 2.000 f t 2 t g 00200 394 -\n", "687 a 25.25 13.500 y p ff ff 2.000 f t 1 t g 00200 1 -\n", "688 b 17.92 0.205 u g aa v 0.040 f f 0 f g 00280 750 -\n", "689 b 35.00 3.375 u g c h 8.290 f f 0 t g 00000 0 -\n" ] } ], "source": [ "# Import numpy\n", "import numpy as np\n", "\n", "# Inspect missing values in the dataset\n", "print(cc_apps.tail(17))\n", "\n", "# Replace the '?'s with NaN\n", "cc_apps = cc_apps.replace('?', np.nan)\n", "\n", "# Inspect the missing values again\n", "print(cc_apps.tail(17))" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "24" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 4. Handling the missing values (part ii)\n", "

We replaced all the question marks with NaNs, which sets us up for the missing-value treatment we are about to perform.

\n", "

An important question arises here: why are we giving so much importance to missing values? Can't they just be ignored? Ignoring missing values can heavily affect the performance of a machine learning model: the model would miss out on information about the dataset that may be useful for its training, and many models, such as Linear Discriminant Analysis (LDA), cannot handle missing values implicitly.

\n", "

So, to avoid this problem, we are going to impute the missing values in the numeric columns with a strategy called mean imputation.
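The same idea can also be expressed with scikit-learn; a minimal sketch using SimpleImputer, restricted to the numeric columns (the object-typed columns are handled in the next step):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Mean-impute only the numeric columns; non-numeric columns need a different strategy
num_cols = cc_apps.select_dtypes(include=np.number).columns
imputed_numeric = SimpleImputer(strategy='mean').fit_transform(cc_apps[num_cols])
```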

" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "dc": { "key": "24" }, "tags": [ "sample_code" ] }, "outputs": [ { "data": { "text/plain": [ "67" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Impute the missing values with mean imputation\n", "cc_apps.fillna(cc_apps.mean(), inplace=True)\n", "\n", "# Count the number of NaNs in the dataset to verify\n", "cc_apps.isnull().values.sum()" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "31" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 5. Handling the missing values (part iii)\n", "

We have successfully taken care of the missing values present in the numeric columns. There are still some missing values to be imputed in columns 0, 1, 3, 4, 5, 6, and 13. All of these columns contain non-numeric data, which is why the mean imputation strategy would not work here; they need a different treatment.
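A quick check (not one of the original cells) of which columns still contain NaNs:

```python
# Remaining NaNs per column; only the object-typed columns
# (0, 1, 3, 4, 5, 6 and 13) should show nonzero counts at this point
print(cc_apps.isna().sum())
```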

\n", "

We are going to impute these missing values with the most frequent value present in each respective column. This is good practice for imputing missing values in categorical data in general.

" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "dc": { "key": "31" }, "tags": [ "sample_code" ] }, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Iterate over each column of cc_apps\n", "for col in cc_apps.columns:\n", " # Check if the column is of object type\n", " if cc_apps[col].dtypes == 'object':\n", " # Impute with the most frequent value\n", " cc_apps = cc_apps.fillna(cc_apps[col].value_counts().index[0])\n", "\n", "# Count the number of NaNs in the dataset and print the counts to verify\n", "cc_apps.isnull().values.sum()" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "38" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 6. Preprocessing the data (part i)\n", "

The missing values are now successfully handled.

\n", "

There is still some minor but essential data preprocessing needed before we proceed towards building our machine learning model. We are going to divide these remaining preprocessing steps into three main tasks:

\n", "
    \n", "
1. Convert the non-numeric data into numeric.
2. Split the data into train and test sets.
3. Scale the feature values to a uniform range.
\n", "

First, we will convert all the non-numeric values into numeric ones. We do this not only because it results in faster computation but also because many machine learning models (like XGBoost, and especially the ones developed using scikit-learn) require the data to be in a strictly numeric format. We will do this by using a technique called label encoding.
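As a quick illustration of what label encoding does to a single column, here is a toy sketch (not the notebook's data):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# Categories are mapped to integers in sorted order: 'a' -> 0, 'b' -> 1, 't' -> 2
print(le.fit_transform(['b', 'a', 't', 'b']))  # [1 0 2 1]
```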

" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "dc": { "key": "38" }, "tags": [ "sample_code" ] }, "outputs": [], "source": [ "# Import LabelEncoder\n", "from sklearn.preprocessing import LabelEncoder\n", "\n", "# Instantiate LabelEncoder\n", "le = LabelEncoder()\n", "\n", "# Iterate over all the values of each column and extract their dtypes\n", "for col in cc_apps.columns:\n", " # Compare if the dtype is object\n", " if cc_apps[col].dtype=='object':\n", " # Use LabelEncoder to do the numeric transformation\n", " cc_apps[col]=le.fit_transform(cc_apps[col])" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "45" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 7. Splitting the dataset into train and test sets\n", "

We have successfully converted all the non-numeric values to numeric ones.

\n", "

Now, we will split our data into a train set and a test set to prepare it for the two different phases of machine learning modeling: training and testing. Ideally, no information from the test data should be used to scale the training data or to direct the training process of a machine learning model. Hence, we first split the data and then apply the scaling.

\n", "

Also, features like DriversLicense and ZipCode are not as important as the other features in the dataset for predicting credit card approvals. We should drop them to design our machine learning model with the best set of features. In Data Science literature, this is often referred to as feature selection.

" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "dc": { "key": "45" }, "tags": [ "sample_code" ] }, "outputs": [], "source": [ "# Import train_test_split\n", "from sklearn.model_selection import train_test_split\n", "\n", "# Drop the features 11 and 13 and convert the DataFrame to a NumPy array\n", "cc_apps = cc_apps.drop([11, 13], axis=1)\n", "cc_apps = cc_apps.values\n", "\n", "# Segregate features and labels into separate variables\n", "X,y = cc_apps[:,0:13] , cc_apps[:,13]\n", "\n", "# Split into train and test sets\n", "X_train, X_test, y_train, y_test = train_test_split(X,\n", " y,\n", " test_size=0.33,\n", " random_state=42)" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "52" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 8. Preprocessing the data (part ii)\n", "

The data is now split into two separate sets: train and test. We are left with only one final preprocessing step, scaling, before we can fit a machine learning model to the data.

\n", "

Now, let's try to understand what these scaled values mean in the real world, using CreditScore as an example. The credit score of a person is their creditworthiness based on their credit history. The higher this number, the more financially trustworthy a person is considered to be. So, a CreditScore of 1 is the highest possible value once we rescale all the values to the range of 0 to 1.
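Under the hood, min-max scaling maps each feature value x to (x - min) / (max - min). A tiny sketch with made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([[0.0], [5.0], [20.0]])  # a toy credit-score-like column
manual = (x - x.min()) / (x.max() - x.min())  # [[0.], [0.25], [1.]]
print(np.allclose(manual, MinMaxScaler().fit_transform(x)))  # True
```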

" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "dc": { "key": "52" }, "tags": [ "sample_code" ] }, "outputs": [], "source": [ "# Import MinMaxScaler\n", "from sklearn.preprocessing import MinMaxScaler\n", "\n", "# Instantiate MinMaxScaler and use it to rescale X_train and X_test\n", "scaler = MinMaxScaler(feature_range=(0, 1))\n", "rescaledX_train = scaler.fit_transform(X_train)\n", "rescaledX_test = scaler.fit_transform(X_test)" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "59" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 9. Fitting a logistic regression model to the train set\n", "

Essentially, predicting if a credit card application will be approved or not is a classification task. According to UCI, our dataset contains more instances that correspond to \"Denied\" status than instances corresponding to \"Approved\" status. Specifically, out of 690 instances, there are 383 (55.5%) applications that got denied and 307 (44.5%) applications that got approved.

\n", "

This gives us a benchmark: a trivial model that always predicts "Denied" would already be about 55.5% accurate, so a good machine learning model should comfortably beat that while predicting the status of both kinds of applications.
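That benchmark is simple arithmetic on the class counts above:

```python
# A classifier that always predicts 'Denied' (the majority class) would score:
print(383 / 690)  # ~0.555, the accuracy floor a useful model should beat
```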

\n", "

Which model should we pick? A question to ask is: are the features that affect the credit card approval decision process correlated with each other? Although we can measure correlation, that is outside the scope of this notebook, so we'll rely on our intuition that they indeed are correlated for now. Because of this correlation, we'll take advantage of the fact that generalized linear models perform well in these cases. Let's start our machine learning modeling with a Logistic Regression model (a generalized linear model).

" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "dc": { "key": "59" }, "tags": [ "sample_code" ] }, "outputs": [ { "data": { "text/plain": [ "LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n", " intercept_scaling=1, l1_ratio=None, max_iter=100,\n", " multi_class='auto', n_jobs=None, penalty='l2',\n", " random_state=None, solver='lbfgs', tol=0.0001, verbose=0,\n", " warm_start=False)" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import LogisticRegression\n", "from sklearn.linear_model import LogisticRegression\n", "\n", "# Instantiate a LogisticRegression classifier with default parameter values\n", "logreg = LogisticRegression()\n", "\n", "# Fit logreg to the train set\n", "logreg.fit(rescaledX_train, y_train)" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "66" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 10. Making predictions and evaluating performance\n", "

But how well does our model perform?

\n", "

We will now evaluate our model on the test set with respect to classification accuracy. But we will also take a look at the model's confusion matrix. When predicting credit card applications, it is equally important to see whether our model can flag applications that originally got denied as denied. If the model does not perform well in this respect, it might end up approving applications that should have been denied. The confusion matrix helps us view the model's performance from these angles.

" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "dc": { "key": "66" }, "tags": [ "sample_code" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Accuracy of logistic regression classifier: 0.8377192982456141\n", "[[92 11]\n", " [26 99]]\n" ] } ], "source": [ "# Import confusion_matrix\n", "from sklearn.metrics import confusion_matrix\n", "\n", "# Use logreg to predict instances from the test set and store it\n", "y_pred = logreg.predict(rescaledX_test)\n", "\n", "# Get the accuracy score of logreg model and print it\n", "print(\"Accuracy of logistic regression classifier: \", logreg.score(rescaledX_test, y_test))\n", "\n", "# Print the confusion matrix of the logreg model\n", "print(confusion_matrix(y_test, y_pred))" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "73" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 11. Grid searching and making the model perform better\n", "

Our model was pretty good! It was able to yield an accuracy score of almost 84%.

\n", "

In the confusion matrix, the first row corresponds to applications that were originally approved ('+', which LabelEncoder maps to 0) and the second row to applications that were originally denied. The first element of the first row is the number of approved applications the model predicted correctly, and the last element of the second row is the number of denied applications the model predicted correctly.
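For reference, scikit-learn's confusion_matrix puts true labels on rows and predicted labels on columns, in sorted label order; a minimal toy sketch:

```python
from sklearn.metrics import confusion_matrix

# Rows are true labels, columns are predicted labels, in sorted label order
print(confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1]))
# [[1 1]   one class-0 instance predicted as 0 (correct), one as 1
#  [0 2]]  no class-1 instance predicted as 0, two as 1 (correct)
```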

\n", "

Let's see if we can do better. We can perform a grid search of the model parameters to improve the model's ability to predict credit card approvals.

\n", "

scikit-learn's implementation of logistic regression exposes a number of hyperparameters, but we will grid search over the following two:

- tol
- max_iter

\n", "" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "dc": { "key": "73" }, "tags": [ "sample_code" ] }, "outputs": [], "source": [ "# Import GridSearchCV\n", "from sklearn.model_selection import GridSearchCV\n", "\n", "# Define the grid of values for tol and max_iter\n", "tol = [0.01, 0.001, 0.0001]\n", "max_iter = [100, 150, 200]\n", "\n", "# Create a dictionary where tol and max_iter are keys and the lists of their values are corresponding values\n", "param_grid = dict(tol=tol, max_iter=max_iter)" ] }, { "cell_type": "markdown", "metadata": { "dc": { "key": "80" }, "deletable": false, "editable": false, "run_control": { "frozen": true }, "tags": [ "context" ] }, "source": [ "## 12. Finding the best performing model\n", "

We have defined the grid of hyperparameter values and converted them into a single dictionary format which GridSearchCV() expects as one of its parameters. Now, we will begin the grid search to see which values perform best.

\n", "

We will instantiate GridSearchCV() with our earlier logreg model and all the data we have. Instead of passing the train and test sets separately, we will supply X (in its scaled version) and y. We will also instruct GridSearchCV() to perform a cross-validation of five folds.

\n", "

We'll end the notebook by storing the best-achieved score and the respective best parameters.

\n", "

While building this credit card predictor, we tackled some of the most widely known preprocessing steps such as scaling, label encoding, and missing value imputation. We finished with some machine learning to predict whether a person's application for a credit card would get approved, given some information about that person.

" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "dc": { "key": "80" }, "tags": [ "sample_code" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best: 0.850725 using {'max_iter': 100, 'tol': 0.01}\n" ] } ], "source": [ "# Instantiate GridSearchCV with the required parameters\n", "grid_model = GridSearchCV(estimator=logreg, param_grid=param_grid, cv=5)\n", "\n", "# Use scaler to rescale X and assign it to rescaledX\n", "rescaledX = scaler.fit_transform(X)\n", "\n", "# Fit data to grid_model\n", "grid_model_result = grid_model.fit(rescaledX, y)\n", "\n", "# Summarize results\n", "best_score, best_params = grid_model_result.best_score_, grid_model_result.best_params_\n", "print(\"Best: %f using %s\" % (best_score, best_params))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 2 }