{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Baseline for the challenge DCRCL\n", "### Author - Pulkit Gera" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ayushshivani/aicrowd_educational_baselines/blob/master/DCRCL_baseline.ipynb)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "!pip install numpy\n", "!pip install pandas\n", "!pip install sklearn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import necessary packages\n" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "from sklearn.model_selection import train_test_split \n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn import metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download data\n", "The first step is to download out train test data. We will be training a classifier on the train data and make predictions on test data. We submit our predictions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_dcrcl/data/public/test.csv\n", "!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_dcrcl/data/public/train.csv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Data\n", "We use pandas library to load our data. Pandas loads them into dataframes which helps us analyze our data easily. Learn more about it [here](https://www.tutorialspoint.com/python_pandas/index.htm)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "train_data = pd.read_csv('train.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analyse Data" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
LIMIT_BALSEXEDUCATIONMARRIAGEAGEPAY_0PAY_2PAY_3PAY_4PAY_5...BILL_AMT4BILL_AMT5BILL_AMT6PAY_AMT1PAY_AMT2PAY_AMT3PAY_AMT4PAY_AMT5PAY_AMT6default payment next month
0300002213800000...2281025772263601650170014003355114600
117000014128000-1-1...1176004902140005695117600490260000
234000011238000-1-1...168019209151500077851699192091511870000
31400002222900020...6586164848649363000860062500250025000
41300002214222200...126792103497969916400045353900430037001
\n", "

5 rows × 24 columns

\n", "
" ], "text/plain": [ " LIMIT_BAL SEX EDUCATION MARRIAGE AGE PAY_0 PAY_2 PAY_3 PAY_4 \\\n", "0 30000 2 2 1 38 0 0 0 0 \n", "1 170000 1 4 1 28 0 0 0 -1 \n", "2 340000 1 1 2 38 0 0 0 -1 \n", "3 140000 2 2 2 29 0 0 0 2 \n", "4 130000 2 2 1 42 2 2 2 0 \n", "\n", " PAY_5 ... BILL_AMT4 BILL_AMT5 BILL_AMT6 \\\n", "0 0 ... 22810 25772 26360 \n", "1 -1 ... 11760 0 4902 \n", "2 -1 ... 1680 1920 9151 \n", "3 0 ... 65861 64848 64936 \n", "4 0 ... 126792 103497 96991 \n", "\n", " PAY_AMT1 PAY_AMT2 PAY_AMT3 PAY_AMT4 PAY_AMT5 PAY_AMT6 \\\n", "0 1650 1700 1400 3355 1146 0 \n", "1 14000 5695 11760 0 4902 6000 \n", "2 5000 7785 1699 1920 9151 187000 \n", "3 3000 8600 6 2500 2500 2500 \n", "4 6400 0 4535 3900 4300 3700 \n", "\n", " default payment next month \n", "0 0 \n", "1 0 \n", "2 0 \n", "3 0 \n", "4 1 \n", "\n", "[5 rows x 24 columns]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we use the `describe` function to get an understanding of the data. It shows us the distribution for all the columns. You can use more functions like `info()` to get useful info." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
LIMIT_BALSEXEDUCATIONMARRIAGEAGEPAY_0PAY_2PAY_3PAY_4PAY_5...BILL_AMT4BILL_AMT5BILL_AMT6PAY_AMT1PAY_AMT2PAY_AMT3PAY_AMT4PAY_AMT5PAY_AMT6default payment next month
count25500.00000025500.00000025500.00000025500.00000025500.00000025500.00000025500.00000025500.00000025500.00000025500.000000...25500.00000025500.00000025500.00000025500.0000002.550000e+0425500.00000025500.00000025500.00000025500.00000025500.000000
mean167436.4580391.6046671.8528241.55196135.503333-0.016275-0.131882-0.166706-0.218667-0.264157...43139.22494140252.92058838846.4155295690.8013735.986709e+035246.6052944829.7900784810.2967065187.0165490.220902
std129837.1186390.4889320.7918030.5227549.2350481.1268131.1967101.1928831.1683751.132166...64214.50863660789.10139359397.44360417070.7333482.402498e+0418117.23673816021.33664515505.87349817568.4505570.414863
min10000.0000001.0000000.0000000.00000021.000000-2.000000-2.000000-2.000000-2.000000-2.000000...-170000.000000-81334.000000-209051.0000000.0000000.000000e+000.0000000.0000000.0000000.0000000.000000
25%50000.0000001.0000001.0000001.00000028.000000-1.000000-1.000000-1.000000-1.000000-1.000000...2360.0000001779.2500001280.0000001000.0000008.635000e+02390.000000292.750000256.750000113.7500000.000000
50%140000.0000002.0000002.0000002.00000034.0000000.0000000.0000000.0000000.0000000.000000...19033.00000018085.00000017129.0000002100.0000002.010000e+031800.0000001500.0000001500.0000001500.0000000.000000
75%240000.0000002.0000002.0000002.00000042.0000000.0000000.0000000.0000000.0000000.000000...54084.75000050080.75000049110.5000005006.0000005.000000e+034507.0000004001.2500004024.0000004000.0000000.000000
max1000000.0000002.0000006.0000003.00000079.0000008.0000008.0000008.0000008.0000008.000000...891586.000000927171.000000961664.000000873552.0000001.684259e+06896040.000000621000.000000426529.000000527143.0000001.000000
\n", "

8 rows × 24 columns

\n", "
" ], "text/plain": [ " LIMIT_BAL SEX EDUCATION MARRIAGE AGE \\\n", "count 25500.000000 25500.000000 25500.000000 25500.000000 25500.000000 \n", "mean 167436.458039 1.604667 1.852824 1.551961 35.503333 \n", "std 129837.118639 0.488932 0.791803 0.522754 9.235048 \n", "min 10000.000000 1.000000 0.000000 0.000000 21.000000 \n", "25% 50000.000000 1.000000 1.000000 1.000000 28.000000 \n", "50% 140000.000000 2.000000 2.000000 2.000000 34.000000 \n", "75% 240000.000000 2.000000 2.000000 2.000000 42.000000 \n", "max 1000000.000000 2.000000 6.000000 3.000000 79.000000 \n", "\n", " PAY_0 PAY_2 PAY_3 PAY_4 PAY_5 \\\n", "count 25500.000000 25500.000000 25500.000000 25500.000000 25500.000000 \n", "mean -0.016275 -0.131882 -0.166706 -0.218667 -0.264157 \n", "std 1.126813 1.196710 1.192883 1.168375 1.132166 \n", "min -2.000000 -2.000000 -2.000000 -2.000000 -2.000000 \n", "25% -1.000000 -1.000000 -1.000000 -1.000000 -1.000000 \n", "50% 0.000000 0.000000 0.000000 0.000000 0.000000 \n", "75% 0.000000 0.000000 0.000000 0.000000 0.000000 \n", "max 8.000000 8.000000 8.000000 8.000000 8.000000 \n", "\n", " ... BILL_AMT4 BILL_AMT5 \\\n", "count ... 25500.000000 25500.000000 \n", "mean ... 43139.224941 40252.920588 \n", "std ... 64214.508636 60789.101393 \n", "min ... -170000.000000 -81334.000000 \n", "25% ... 2360.000000 1779.250000 \n", "50% ... 19033.000000 18085.000000 \n", "75% ... 54084.750000 50080.750000 \n", "max ... 891586.000000 927171.000000 \n", "\n", " BILL_AMT6 PAY_AMT1 PAY_AMT2 PAY_AMT3 \\\n", "count 25500.000000 25500.000000 2.550000e+04 25500.000000 \n", "mean 38846.415529 5690.801373 5.986709e+03 5246.605294 \n", "std 59397.443604 17070.733348 2.402498e+04 18117.236738 \n", "min -209051.000000 0.000000 0.000000e+00 0.000000 \n", "25% 1280.000000 1000.000000 8.635000e+02 390.000000 \n", "50% 17129.000000 2100.000000 2.010000e+03 1800.000000 \n", "75% 49110.500000 5006.000000 5.000000e+03 4507.000000 \n", "max 961664.000000 873552.000000 1.684259e+06 896040.000000 \n", "\n", " PAY_AMT4 PAY_AMT5 PAY_AMT6 default payment next month \n", "count 25500.000000 25500.000000 25500.000000 25500.000000 \n", "mean 4829.790078 4810.296706 5187.016549 0.220902 \n", "std 16021.336645 15505.873498 17568.450557 0.414863 \n", "min 0.000000 0.000000 0.000000 0.000000 \n", "25% 292.750000 256.750000 113.750000 0.000000 \n", "50% 1500.000000 1500.000000 1500.000000 0.000000 \n", "75% 4001.250000 4024.000000 4000.000000 0.000000 \n", "max 621000.000000 426529.000000 527143.000000 1.000000 \n", "\n", "[8 rows x 24 columns]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_data.describe()\n", "#train_data.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Split Data into Train and Validation\n", "Now we want to see how well our classifier is performing, but we dont have the test data labels with us to check. What do we do ? So we split our dataset into train and validation. The idea is that we test our classifier on validation set in order to get an idea of how well our classifier works. This way we can also ensure that we dont [overfit](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/) on the train dataset. 
There are many ways to do validation like [k-fold](https://machinelearningmastery.com/k-fold-cross-validation/), [leave one out](https://en.wikipedia.org/wiki/Cross-validation_(statistics)), etc." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = train_data.drop('default payment next month', axis=1)\n", "y = train_data['default payment next month']\n", "# Validation testing\n", "X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we have selected the size of the validation data to be 20% of the total data. You can change it and see what effect it has on the validation scores. To learn more about the train_test_split function [click here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define the Classifier and Train\n", "Now we come to the juicy part. We have prepared our data, and now we train a classifier. The classifier learns a function that maps the inputs to the corresponding outputs. There are a ton of classifiers to choose from, such as [Logistic Regression](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc), [SVM](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47), [Random Forests](https://en.wikipedia.org/wiki/Random_forest), [Decision Trees](https://towardsdatascience.com/decision-trees-in-machine-learning-641b9c4e8052), etc. \n", "Tip: A good model doesn't depend solely on the classifier but also on the features (columns) you choose. So make sure to play with your data and keep only what's important. " ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/gera/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n", "  FutureWarning)\n" ] }, { "data": { "text/plain": [ "LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n", "                   intercept_scaling=1, l1_ratio=None, max_iter=100,\n", "                   multi_class='warn', n_jobs=None, penalty='l2',\n", "                   random_state=None, solver='warn', tol=0.0001, verbose=0,\n", "                   warm_start=False)" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "classifier = LogisticRegression()\n", "classifier.fit(X_train,y_train)\n", "\n", "# from sklearn.svm import SVC\n", "# clf = SVC(gamma='auto')\n", "# clf.fit(X_train, y_train)\n", "\n", "# from sklearn import tree\n", "# clf = tree.DecisionTreeClassifier()\n", "# clf = clf.fit(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have used [Logistic Regression](https://en.wikipedia.org/wiki/Logistic_regression) as the classifier here, with its default parameters. Setting a few parameters explicitly can improve performance; to see the full list of parameters visit [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html). \n", "Also given (commented out) are SVM and Decision Tree examples. 
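For example, here is a minimal sketch of setting a few of those parameters explicitly and scoring the model with 5-fold cross-validation (the k-fold idea mentioned earlier). The particular choices (`solver='lbfgs'`, `max_iter=500`, `class_weight='balanced'`, F1 scoring) are illustrative assumptions, not tuned settings:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative parameter choices: an explicit solver silences the FutureWarning,
# a larger max_iter helps the solver converge, and class_weight='balanced'
# compensates for the skewed target column.
clf = LogisticRegression(solver='lbfgs', max_iter=500, class_weight='balanced')

# 5-fold cross-validation on the training split from earlier, scored with F1.
scores = cross_val_score(clf, X_train, y_train, cv=5, scoring='f1')
print('mean cross-validated F1:', scores.mean())
```

If the cross-validated score looks reasonable, you can fit `clf` on the full training split and use it in place of the `classifier` trained above.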
Check out SVM's parameters [here](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) and Decision Tree's [here](https://scikit-learn.org/stable/modules/tree.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We got a warning! Don't worry: it just tells us that the default solver will change in a future scikit-learn release and asks us to specify one explicitly. Depending on the solver, you may also see a convergence warning because the default number of iterations (set in the classifier in the cell above) is quite low. Specify a solver and increase the number of iterations to see whether the warnings vanish, but remember that more iterations also increase the running time. (Hint: max_iter=500)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Predict on Validation\n", "Now we use our trained classifier to predict on the validation set and evaluate our model." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": true }, "outputs": [], "source": [ "y_pred = classifier.predict(X_val)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ActualPredicted
691300
1112400
2510010
276400
2321600
1726900
307300
818400
259500
548300
650800
1177600
530600
1884600
1985400
246300
530400
2373900
2042700
2026300
957800
1416400
510700
516000
845000
\n", "
" ], "text/plain": [ " Actual Predicted\n", "6913 0 0\n", "11124 0 0\n", "25100 1 0\n", "2764 0 0\n", "23216 0 0\n", "17269 0 0\n", "3073 0 0\n", "8184 0 0\n", "2595 0 0\n", "5483 0 0\n", "6508 0 0\n", "11776 0 0\n", "5306 0 0\n", "18846 0 0\n", "19854 0 0\n", "2463 0 0\n", "5304 0 0\n", "23739 0 0\n", "20427 0 0\n", "20263 0 0\n", "9578 0 0\n", "14164 0 0\n", "5107 0 0\n", "5160 0 0\n", "8450 0 0" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.DataFrame({'Actual': y_val, 'Predicted': y_pred})\n", "df1 = df.head(25)\n", "df1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluate the Performance\n", "We use the same metrics as that will be used for the test set. \n", "[F1 score](https://en.wikipedia.org/wiki/F1_score) and [ROC AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) are the metrics for this challenge" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "F1 score Error: 0.0\n", "ROC AUC Error: 0.49975062344139654\n" ] } ], "source": [ "print('F1 score Score:', metrics.f1_score(y_val, y_pred)) \n", "print('ROC AUC Score:', metrics.roc_auc_score(y_val, y_pred)) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Test Set\n", "Load the test data now" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "test_data = pd.read_csv('test.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Predict Test Set\n", "Time for the moment of truth! Predict on test set and time to make the submission." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "y_test = classifier.predict(test_data)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": true }, "outputs": [], "source": [ "df = pd.DataFrame(y_test,columns=['default payment next month'])\n", "df.to_csv('submission.csv',index=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## To download the generated csv in collab run the below command" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from google.colab import files\n", "files.download('submission.csv') " ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "To participate in the challenge click [here](https://www.aicrowd.com/challenges/dcrcl-default-of-credit-card-clients/)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 2 }