{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", " \n", "## [mlcourse.ai](https://mlcourse.ai) - Open Machine Learning Course\n", "\n", "\n", "Author: Vitaly Radchenko. All content is distributed under the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#
Assignment #5 (demo)
\n", "##
Logistic Regression and Random Forest in the credit scoring problem
\n", "\n", "**Same assignment as a [Kaggle Kernel](https://www.kaggle.com/kashnitsky/a5-demo-logit-and-rf-for-credit-scoring) + [solution](https://www.kaggle.com/kashnitsky/a5-demo-logit-and-rf-for-credit-scoring-sol).**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this assignment, you will build models and answer questions using data on credit scoring.\n", "\n", "Please write your code in the cells with the \"Your code here\" placeholder. Then, answer the questions in the [form](https://docs.google.com/forms/d/1gKt0DA4So8ohKAHZNCk58ezvg7K_tik26d9QND7WC6M/edit).\n", "\n", "Let's start with a warm-up exercise." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question 1.** There are 5 jurors in a courtroom. Each of them can correctly identify the guilt of the defendant with 70% probability, independent of one another. What is the probability that the jurors will jointly reach the correct verdict if the final decision is by majority vote?\n", "\n", "1. 70.00%\n", "2. 83.20%\n", "3. 83.70%\n", "4. 87.50%" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great! Let's move on to machine learning.\n", "\n", "## Credit scoring problem setup\n", "\n", "#### Problem\n", "\n", "Predict whether the customer will repay their credit within 90 days. This is a binary classification problem; we will assign customers into good or bad categories based on our prediction.\n", "\n", "#### Data description\n", "\n", "| Feature | Variable Type | Value Type | Description |\n", "|:--------|:--------------|:-----------|:------------|\n", "| age | Input Feature | integer | Customer age |\n", "| DebtRatio | Input Feature | real | Total monthly loan payments (loan, alimony, etc.) / Total monthly income percentage |\n", "| NumberOfTime30-59DaysPastDueNotWorse | Input Feature | integer | The number of cases when client has overdue 30-59 days (not worse) on other loans during the last 2 years |\n", "| NumberOfTimes90DaysLate | Input Feature | integer | Number of cases when customer had 90+dpd overdue on other credits |\n", "| NumberOfTime60-89DaysPastDueNotWorse | Input Feature | integer | Number of cased when customer has 60-89dpd (not worse) during the last 2 years |\n", "| NumberOfDependents | Input Feature | integer | The number of customer dependents |\n", "| SeriousDlqin2yrs | Target Variable | binary:
0 or 1 | Customer hasn't paid the loan debt within 90 days |\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's set up our environment:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Disable warnings in Anaconda\n", "import warnings\n", "\n", "warnings.filterwarnings(\"ignore\")\n", "\n", "import numpy as np\n", "import pandas as pd\n", "\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "\n", "sns.set()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import rcParams\n", "\n", "rcParams[\"figure.figsize\"] = 11, 8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's write the function that will replace *NaN* values with the median for each column." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def fill_nan(table):\n", " for col in table.columns:\n", " table[col] = table[col].fillna(table[col].median())\n", " return table" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, read the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pd.read_csv(\"../../data/credit_scoring_sample.csv\", sep=\";\")\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Look at the variable types:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.dtypes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check the class balance:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ax = data[\"SeriousDlqin2yrs\"].hist(orientation=\"horizontal\", color=\"red\")\n", "ax.set_xlabel(\"number_of_observations\")\n", "ax.set_ylabel(\"unique_value\")\n", "ax.set_title(\"Target distribution\")\n", "\n", "print(\"Distribution of the target:\")\n", "data[\"SeriousDlqin2yrs\"].value_counts() / data.shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Separate the input variable names by excluding the target:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "independent_columns_names = [x for x in data if x != \"SeriousDlqin2yrs\"]\n", "independent_columns_names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Apply the function to replace *NaN* values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "table = fill_nan(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Separate the target variable and input features:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X = table[independent_columns_names]\n", "y = table[\"SeriousDlqin2yrs\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bootstrapping" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question 2.** Make an interval estimate of the average age for the customers who delayed repayment at the 90% confidence level. Use the example from the article as reference, if needed. Also, use `np.random.seed(0)` as before. What is the resulting interval estimate?\n", "\n", "1. 52.59 – 52.86\n", "2. 45.71 – 46.13\n", "3. 45.68 – 46.17\n", "4. 
52.56 – 52.88" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Logistic regression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's import what we need for logistic regression:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegression\n", "from sklearn.model_selection import GridSearchCV, StratifiedKFold" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we will create a `LogisticRegression` model and use `class_weight='balanced'` to compensate for our imbalanced classes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lr = LogisticRegression(random_state=5, class_weight=\"balanced\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try to find the best value of the regularization parameter `C` for logistic regression (in scikit-learn, `C` is the inverse of regularization strength, so smaller values mean stronger regularization). Then, we will have an optimal model that is not overfit and is a good predictor of the target variable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "parameters = {\"C\": (0.0001, 0.001, 0.01, 0.1, 1, 10)}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to find the optimal value of `C`, let's apply stratified 5-fold validation and look at the *ROC AUC* against different values of the parameter `C`. Use the `StratifiedKFold` class for this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One of the important metrics of model quality is the *Area Under the ROC Curve (ROC AUC)*. It varies from 0 to 1: a random classifier scores around 0.5, and the closer ROC AUC is to 1, the better the quality of the classification model." ] },
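{ "cell_type": "markdown", "metadata": {}, "source": [ "To get a feel for the metric, here is a minimal, self-contained sketch of `sklearn.metrics.roc_auc_score` on made-up labels and scores (toy values, not our credit data):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Toy illustration of ROC AUC, unrelated to the assignment data\n", "from sklearn.metrics import roc_auc_score\n", "\n", "# hypothetical true labels and predicted scores for five customers\n", "toy_true = [0, 0, 1, 1, 1]\n", "toy_scores = [0.1, 0.4, 0.35, 0.8, 0.9]\n", "\n", "# a perfect ranking would give 1.0, random guessing about 0.5\n", "roc_auc_score(toy_true, toy_scores)" ] },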
{ "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "**Question 3.** Perform a *Grid Search* with the scoring metric \"roc_auc\" for the parameter `C`. Which value of the parameter `C` is optimal?\n", "\n", "1. 0.0001\n", "2. 0.001\n", "3. 0.01\n", "4. 0.1\n", "5. 1\n", "6. 10" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question 4.** Can we consider the best model stable? The model is *stable* if the standard deviation on validation is less than 0.5%. Save the *ROC AUC* value of the best model; it will be useful for the following tasks.\n", "\n", "1. Yes\n", "2. No" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature importance\n", "\n", "**Question 5.** *Feature importance* is defined by the absolute value of its corresponding coefficient. First, you need to scale all of the feature values (e.g., standardize them) so that comparing the coefficients is valid. What is the most important feature for the best logistic regression model?\n", "\n", "1. age\n", "2. NumberOfTime30-59DaysPastDueNotWorse\n", "3. DebtRatio\n", "4. NumberOfTimes90DaysLate\n", "5. NumberOfTime60-89DaysPastDueNotWorse\n", "6. MonthlyIncome\n", "7. NumberOfDependents" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question 6.** Calculate how much `DebtRatio` affects our prediction using the [softmax function](https://en.wikipedia.org/wiki/Softmax_function). What is its value?\n", "\n", "1. 0.38\n", "2. -0.02\n", "3. 0.11\n", "4. 0.24" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question 7.** Let's see how we can interpret the impact of our features. For this, refit the logistic regression on the original (unscaled) feature values. Next, modify the customer's age by adding 20 years, keeping the other features unchanged. How many times will the odds that the customer will not repay their debt increase? You can find an example of the theoretical calculation [here](https://www.unm.edu/~schrader/biostat/bio2/Spr06/lec11.pdf).\n", "\n", "1. -0.01\n", "2. 0.70\n", "3. 8.32\n", "4. 0.66" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Random Forest" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Import the Random Forest classifier:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import RandomForestClassifier" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Initialize a Random Forest with 100 trees and balanced target classes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rf = RandomForestClassifier(\n", " n_estimators=100, n_jobs=-1, random_state=42, class_weight=\"balanced\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will search for the best parameters among the following values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "parameters = {\n", " \"max_features\": [1, 2, 4],\n", " \"min_samples_leaf\": [3, 5, 7, 9],\n", " \"max_depth\": [5, 10, 15],\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Also, we will use stratified k-fold validation again. You should still have the `skf` variable defined." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question 8.** How much higher is the *ROC AUC* of the best random forest model than that of the best logistic regression on validation?\n", "\n", "1. 4%\n", "2. 3%\n", "3. 2%\n", "4. 1%" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question 9.** What feature has the weakest impact in the Random Forest model?\n", "\n", "1. age\n", "2. NumberOfTime30-59DaysPastDueNotWorse\n", "3. DebtRatio\n", "4. NumberOfTimes90DaysLate\n", "5. NumberOfTime60-89DaysPastDueNotWorse\n", "6. MonthlyIncome\n", "7. NumberOfDependents" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question 10.** What is the most significant advantage of using *Logistic Regression* versus *Random Forest* for this problem?\n", "\n", "1. Less time spent on model fitting;\n", "2. Fewer variables to iterate over;\n", "3. Feature interpretability;\n", "4. Linear properties of the algorithm." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bagging" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Import modules and set up the parameters for bagging:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import BaggingClassifier\n", "from sklearn.model_selection import RandomizedSearchCV, cross_val_score\n", "\n", "parameters = {\n", " \"max_features\": [2, 3, 4],\n", " \"max_samples\": [0.5, 0.7, 0.9],\n", " \"base_estimator__C\": [0.0001, 0.001, 0.01, 1, 10, 100],\n", "}" ] },
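{ "cell_type": "markdown", "metadata": {}, "source": [ "A note on the grid above: keys like `base_estimator__C` use scikit-learn's double-underscore convention to address a parameter of the base estimator inside the ensemble. As a minimal sketch (construction only; the randomized search over `parameters` is the task below), a bagging ensemble of logistic regressions could be created like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch only: a bagging ensemble of logistic regressions; the variable name\n", "# is just for illustration, and the search over `parameters` is left to you.\n", "# `base_estimator` matches the `base_estimator__C` key in the grid above.\n", "bag_of_logits = BaggingClassifier(\n", "    base_estimator=LogisticRegression(class_weight=\"balanced\"),\n", "    n_estimators=100,\n", "    random_state=42,\n", ")" ] },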
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Question 11.** Fit a bagging classifier with `random_state=42`. For the base classifiers, use 100 logistic regression models and use `RandomizedSearchCV` instead of `GridSearchCV`. It will take a lot of time to iterate over all 54 variants, so set the maximum number of iterations for `RandomizedSearchCV` to 20. Don't forget to set the parameters `cv` and `random_state=1`. What is the best *ROC AUC* you achieve?\n", "\n", "1. 80.75%\n", "2. 80.12%\n", "3. 79.62%\n", "4. 76.50%" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "**Question 12.** Give an interpretation of the best parameters for bagging. Why are these values of `max_features` and `max_samples` the best?\n", "\n", "1. For bagging, it's important to use as few features as possible;\n", "2. Bagging works better on small samples;\n", "3. Less correlation between single models;\n", "4. The higher the number of features, the lower the loss of information." ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 1 }