{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Baseline for the challenge DOTAW\n", "### Author - Pulkit Gera" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ayushshivani/aicrowd_educational_baselines/blob/master/DOTAW_baseline.ipynb)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "!pip install numpy\n", "!pip install pandas\n", "!pip install sklearn" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "from sklearn.model_selection import train_test_split \n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn import metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download data\n", "The first step is to download out train test data. We will be training a classifier on the train data and make predictions on test data. We submit our predictions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_dotaw/data/public/test.zip\n", "!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/aicrowd_educational_dotaw/data/public/train.zip\n", "!unzip train.zip\n", "!unzip test.zip" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Data\n", "We use pandas library to load our data. Pandas loads them into dataframes which helps us analyze our data easily. Learn more about it [here](https://www.tutorialspoint.com/python_pandas/index.htm)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "train_data = pd.read_csv('train.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analyse Data" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
winnercluster_idgame_modegame_typehero_0hero_1hero_2hero_3hero_4hero_5...hero_103hero_104hero_105hero_106hero_107hero_108hero_109hero_110hero_111hero_112
0-122322000000...0000000000
111522200010-1...0000000000
211312200010-1...0000000000
3115422000000...-1000000000
4-11712300000-1...0000000000
\n", "

5 rows × 117 columns

\n", "
" ], "text/plain": [ " winner cluster_id game_mode game_type hero_0 hero_1 hero_2 hero_3 \\\n", "0 -1 223 2 2 0 0 0 0 \n", "1 1 152 2 2 0 0 0 1 \n", "2 1 131 2 2 0 0 0 1 \n", "3 1 154 2 2 0 0 0 0 \n", "4 -1 171 2 3 0 0 0 0 \n", "\n", " hero_4 hero_5 ... hero_103 hero_104 hero_105 hero_106 hero_107 \\\n", "0 0 0 ... 0 0 0 0 0 \n", "1 0 -1 ... 0 0 0 0 0 \n", "2 0 -1 ... 0 0 0 0 0 \n", "3 0 0 ... -1 0 0 0 0 \n", "4 0 -1 ... 0 0 0 0 0 \n", "\n", " hero_108 hero_109 hero_110 hero_111 hero_112 \n", "0 0 0 0 0 0 \n", "1 0 0 0 0 0 \n", "2 0 0 0 0 0 \n", "3 0 0 0 0 0 \n", "4 0 0 0 0 0 \n", "\n", "[5 rows x 117 columns]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we use the `describe` function to get an understanding of the data. It shows us the distribution for all the columns. You can use more functions like `info()` to get useful info." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
winnercluster_idgame_modegame_typehero_0hero_1hero_2hero_3hero_4hero_5...hero_103hero_104hero_105hero_106hero_107hero_108hero_109hero_110hero_111hero_112
count92650.00000092650.00000092650.00000092650.00000092650.00000092650.00000092650.00000092650.00000092650.00000092650.000000...92650.00000092650.00000092650.00000092650.00000092650.092650.00000092650.00000092650.00000092650.00000092650.000000
mean0.053038175.8641453.3175722.384587-0.001630-0.0009710.000691-0.000799-0.0020080.003173...-0.001371-0.0009500.0008850.0005940.00.0010250.000648-0.000227-0.0000430.000896
std0.99859835.6582142.6330700.4868330.4020040.4676720.1650520.3553930.3293480.483950...0.5350240.2061120.2839850.1559400.00.2207030.2041660.1687070.1898680.139033
min-1.000000111.0000001.0000001.000000-1.000000-1.000000-1.000000-1.000000-1.000000-1.000000...-1.000000-1.000000-1.000000-1.0000000.0-1.000000-1.000000-1.000000-1.000000-1.000000
25%-1.000000152.0000002.0000002.0000000.0000000.0000000.0000000.0000000.0000000.000000...0.0000000.0000000.0000000.0000000.00.0000000.0000000.0000000.0000000.000000
50%1.000000156.0000002.0000002.0000000.0000000.0000000.0000000.0000000.0000000.000000...0.0000000.0000000.0000000.0000000.00.0000000.0000000.0000000.0000000.000000
75%1.000000223.0000002.0000003.0000000.0000000.0000000.0000000.0000000.0000000.000000...0.0000000.0000000.0000000.0000000.00.0000000.0000000.0000000.0000000.000000
max1.000000261.0000009.0000003.0000001.0000001.0000001.0000001.0000001.0000001.000000...1.0000001.0000001.0000001.0000000.01.0000001.0000001.0000001.0000001.000000
\n", "

8 rows × 117 columns

\n", "
" ], "text/plain": [ " winner cluster_id game_mode game_type hero_0 \\\n", "count 92650.000000 92650.000000 92650.000000 92650.000000 92650.000000 \n", "mean 0.053038 175.864145 3.317572 2.384587 -0.001630 \n", "std 0.998598 35.658214 2.633070 0.486833 0.402004 \n", "min -1.000000 111.000000 1.000000 1.000000 -1.000000 \n", "25% -1.000000 152.000000 2.000000 2.000000 0.000000 \n", "50% 1.000000 156.000000 2.000000 2.000000 0.000000 \n", "75% 1.000000 223.000000 2.000000 3.000000 0.000000 \n", "max 1.000000 261.000000 9.000000 3.000000 1.000000 \n", "\n", " hero_1 hero_2 hero_3 hero_4 hero_5 \\\n", "count 92650.000000 92650.000000 92650.000000 92650.000000 92650.000000 \n", "mean -0.000971 0.000691 -0.000799 -0.002008 0.003173 \n", "std 0.467672 0.165052 0.355393 0.329348 0.483950 \n", "min -1.000000 -1.000000 -1.000000 -1.000000 -1.000000 \n", "25% 0.000000 0.000000 0.000000 0.000000 0.000000 \n", "50% 0.000000 0.000000 0.000000 0.000000 0.000000 \n", "75% 0.000000 0.000000 0.000000 0.000000 0.000000 \n", "max 1.000000 1.000000 1.000000 1.000000 1.000000 \n", "\n", " ... hero_103 hero_104 hero_105 hero_106 \\\n", "count ... 92650.000000 92650.000000 92650.000000 92650.000000 \n", "mean ... -0.001371 -0.000950 0.000885 0.000594 \n", "std ... 0.535024 0.206112 0.283985 0.155940 \n", "min ... -1.000000 -1.000000 -1.000000 -1.000000 \n", "25% ... 0.000000 0.000000 0.000000 0.000000 \n", "50% ... 0.000000 0.000000 0.000000 0.000000 \n", "75% ... 0.000000 0.000000 0.000000 0.000000 \n", "max ... 1.000000 1.000000 1.000000 1.000000 \n", "\n", " hero_107 hero_108 hero_109 hero_110 hero_111 \\\n", "count 92650.0 92650.000000 92650.000000 92650.000000 92650.000000 \n", "mean 0.0 0.001025 0.000648 -0.000227 -0.000043 \n", "std 0.0 0.220703 0.204166 0.168707 0.189868 \n", "min 0.0 -1.000000 -1.000000 -1.000000 -1.000000 \n", "25% 0.0 0.000000 0.000000 0.000000 0.000000 \n", "50% 0.0 0.000000 0.000000 0.000000 0.000000 \n", "75% 0.0 0.000000 0.000000 0.000000 0.000000 \n", "max 0.0 1.000000 1.000000 1.000000 1.000000 \n", "\n", " hero_112 \n", "count 92650.000000 \n", "mean 0.000896 \n", "std 0.139033 \n", "min -1.000000 \n", "25% 0.000000 \n", "50% 0.000000 \n", "75% 0.000000 \n", "max 1.000000 \n", "\n", "[8 rows x 117 columns]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_data.describe()\n", "#train_data.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Split Data into Train and Validation\n", "Now we want to see how well our classifier is performing, but we dont have the test data labels with us to check. What do we do ? So we split our dataset into train and validation. The idea is that we test our classifier on validation set in order to get an idea of how well our classifier works. This way we can also ensure that we dont [overfit](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/) on the train dataset. 
{ "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = train_data.drop('winner', axis=1)\n", "y = train_data['winner']\n", "# Hold-out validation split\n", "X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we have set the size of the validation data to 20% of the total data. You can change it and see what effect it has on the scores. To learn more about the train_test_split function, [click here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define the Classifier and Train\n", "Now we come to the juicy part. With the data prepared, we train a classifier. The classifier learns a function from the inputs to the corresponding outputs. There are many classifiers to choose from, for example [Logistic Regression](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc), [SVM](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47), [Random Forests](https://en.wikipedia.org/wiki/Random_forest), [Decision Trees](https://towardsdatascience.com/decision-trees-in-machine-learning-641b9c4e8052), etc.\n", "Tip: a good model does not depend solely on the classifier but also on the features (columns) you choose, so make sure to play with your data and keep only what is important." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "classifier = LogisticRegression()\n", "classifier.fit(X_train, y_train)\n", "\n", "# from sklearn.svm import SVC\n", "# clf = SVC(gamma='auto')\n", "# clf.fit(X_train, y_train)\n", "\n", "# from sklearn import tree\n", "# clf = tree.DecisionTreeClassifier()\n", "# clf = clf.fit(X_train, y_train)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have used [Logistic Regression](https://en.wikipedia.org/wiki/Logistic_regression) as the classifier here and left most of its parameters at their defaults. Setting more parameters can improve performance; to see the full list, visit [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html).\n", "Also given (commented out) are SVM and Decision Tree examples. Check out SVM's parameters [here](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) and Decision Tree's [here](https://scikit-learn.org/stable/modules/tree.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Got a warning?\n", "Don't worry, it only means the solver did not converge within the default number of iterations of the classifier defined above. Increase the number of iterations and see if the warning vanishes, but remember that more iterations also increase the running time. (Hint: `max_iter=500`.) A sketch of this is shown below." ] },
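{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, the optional sketch below refits the same baseline with `max_iter=500` and standardised features (scaling often helps the solver converge). The name `classifier_scaled` is only for illustration; the rest of the notebook keeps using `classifier` from the cell above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional: more iterations plus feature scaling to avoid the convergence warning.\n", "# The rest of the notebook still uses `classifier`.\n", "from sklearn.pipeline import make_pipeline\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "classifier_scaled = make_pipeline(StandardScaler(), LogisticRegression(max_iter=500))\n", "classifier_scaled.fit(X_train, y_train)" ] },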
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Predict on Validation\n", "Now we use the trained classifier to predict on the validation set and evaluate our model." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "y_pred = classifier.predict(X_val)" ] },
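{ "cell_type": "markdown", "metadata": {}, "source": [ "Before looking at individual predictions, a confusion matrix gives a compact summary of where the classifier goes wrong on the validation set. The cell below is a minimal, optional sketch using `metrics.confusion_matrix`, which is available through the `sklearn.metrics` import above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional: rows are actual classes (-1, 1), columns are predicted classes (-1, 1)\n", "print(metrics.confusion_matrix(y_val, y_pred, labels=[-1, 1]))" ] },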
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ActualPredicted
263891-1
55196-11
51250-11
255081-1
241281-1
2442-1-1
5638-1-1
3714-11
36579-11
10399-1-1
13464-1-1
71600-11
801621-1
707711
63431-11
785841-1
3141311
1339311
9084511
23339-1-1
13756-11
63563-1-1
81880-11
77591-1-1
2331111
\n", "
" ], "text/plain": [ " Actual Predicted\n", "26389 1 -1\n", "55196 -1 1\n", "51250 -1 1\n", "25508 1 -1\n", "24128 1 -1\n", "2442 -1 -1\n", "5638 -1 -1\n", "3714 -1 1\n", "36579 -1 1\n", "10399 -1 -1\n", "13464 -1 -1\n", "71600 -1 1\n", "80162 1 -1\n", "7077 1 1\n", "63431 -1 1\n", "78584 1 -1\n", "31413 1 1\n", "13393 1 1\n", "90845 1 1\n", "23339 -1 -1\n", "13756 -1 1\n", "63563 -1 -1\n", "81880 -1 1\n", "77591 -1 -1\n", "23311 1 1" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.DataFrame({'Actual': y_val, 'Predicted': y_pred})\n", "df1 = df.head(25)\n", "df1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluate the Performance\n", "We use the same metrics as that will be used for the test set. \n", "[F1 score](https://en.wikipedia.org/wiki/F1_score) and [ROC AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) are the metrics for this challenge" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "F1 score Error: 0.638888888888889\n", "ROC AUC Error: 0.5928579002999843\n" ] } ], "source": [ "print('F1 score Score:', metrics.f1_score(y_val, y_pred)) \n", "print('ROC AUC Score:', metrics.roc_auc_score(y_val, y_pred)) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Test Set\n", "Load the test data now" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": true }, "outputs": [], "source": [ "test_data = pd.read_csv('test.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Predict Test Set\n", "Time for the moment of truth! Predict on test set and time to make the submission." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "y_test = classifier.predict(test_data)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true }, "outputs": [], "source": [ "df = pd.DataFrame(y_test,columns=['winner'])\n", "df.to_csv('submission.csv',index=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## To download the generated csv in collab run the below command" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from google.colab import files\n", "files.download('submission.csv') " ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "To participate in the challenge click [here](https://www.aicrowd.com/challenges/dotaw-dota-2-prediction/)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 2 }