{ "cells": [ { "cell_type": "markdown", "metadata": { "_cell_guid": "79c7e3d0-c299-4dcb-8224-4455121ee9b0", "_uuid": "d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" }, "source": [ "# PUBG Finish Placement Prediction" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "18139ed2aa0cfba4e612ff407e2539015f2c8529" }, "source": [ "![](https://github.com/4ku/PUBG-prediction/raw/master/pictures/banner.png)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "40ab789846fdfa0fa1796079d89360f068c8348b" }, "source": [ "## 1. Feature and data explanation" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "e92b0c82116f77cb5e2ebd820acd5ca45850d041" }, "source": [ "First, a few words about the game. **PlayerUnknown's Battlegrounds (PUBG)** is an online multiplayer battle royale game. Up to 100 players are dropped onto an island empty-handed and must explore, scavenge, loot and eliminate other players until only one is left standing, all while the play zone continues to shrink.
\n", "Battle Royale-style video games have taken the world by storm, and PUBG has become very popular: with over 50 million copies sold, it is the fifth best selling game of all time and has millions of active monthly players.
\n" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "8056f6c3498457943c5c73c7421fb4a3b333be3c" }, "source": [ " " ] }, { "cell_type": "markdown", "metadata": { "_uuid": "7a70af929187faf54a85cf93134757044f876372" }, "source": [ " " ] }, { "cell_type": "markdown", "metadata": { "_uuid": "9937d57732879e09960277d5f17aac59eb7ec0ef" }, "source": [ "\n" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "e006b1e7959d2d660e3ec7e5a7ba0eb2dcdb4af4" }, "source": [ "**The task**: using a player's in-match statistics, predict that player's final placement, where 0 is last place and 1 is winner winner, chicken dinner. \n", "

\n", "The dataset contains over 65,000 games' worth of anonymized player data, which you can download from the [kaggle](https://www.kaggle.com/c/pubg-finish-placement-prediction/data) website. Each row is one player's stats at the end of a match.
\n", "The data comes from matches of all types: solos, duos, squads, and custom; there is no guarantee of there being 100 players per match, nor at most 4 players per group.
\n", "The statistics include things like a player's kills; match, group and personal IDs; distance walked; and so on.\n", "
**WinPlacePerc** is the target feature: the percentile winning placement, on a scale from 0 (last place) to 1 (first place).\n", "
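As a sketch, the percentile placement is presumably just a linear rescaling of the final group place computed off maxPlace (my assumption based on the field descriptions, not an official formula):

```python
def win_place_perc(place, max_place):
    # place 1 (winner) -> 1.0, place == max_place (last) -> 0.0
    # assumes max_place > 1, which holds for any real match
    return (max_place - place) / (max_place - 1)
```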

\n", "A solution to this task can be valuable for PUBG players: it helps them understand which parameters are important and which tactics to choose. Also, using the [PUBG Developer API](https://developer.pubg.com/) we can collect our own data with more features, which makes it realistic to create many different apps that help players - for example, a personal assistant that gives tips on which skills to train. \n", "

\n", "Let's look at the data." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "080e57e7bb0c7144f13a9b1c9f3e7bd189936fc0" }, "source": [ "## 2-3 Primary data analysis and visual data analysis" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "78b855c6fc2f462ade4e63198759321be1ef4842" }, "outputs": [], "source": [ "import numpy as np \n", "import pandas as pd \n", "import matplotlib.pyplot as plt\n", "import matplotlib.ticker as ticker\n", "import seaborn as sns\n", "import scipy.stats as sc\n", "import gc\n", "import warnings" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "78b855c6fc2f462ade4e63198759321be1ef4842" }, "outputs": [], "source": [ "plt.rcParams['figure.figsize'] = 15,8\n", "sns.set(rc={'figure.figsize':(15,8)})\n", "pd.options.display.float_format = '{:.2f}'.format\n", "warnings.filterwarnings('ignore')\n", "gc.enable()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "3a029f8cb658e3025c4e6c7a1307422f58625db5" }, "outputs": [], "source": [ "train = pd.read_csv('../input/train_V2.csv')\n", "test = pd.read_csv('../input/test_V2.csv')\n", "train.head()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "051839ab65d5b6ba0293ce092b76cd49ecebf629" }, "source": [ "### Data fields\n", "\n", "* **DBNOs** - Number of enemy players knocked.\n", "* **assists** - Number of enemy players this player damaged that were killed by teammates.\n", "* **boosts** - Number of boost items used.\n", "* **damageDealt** - Total damage dealt. Note: Self inflicted damage is subtracted.\n", "* **headshotKills** - Number of enemy players killed with headshots.\n", "* **heals** - Number of healing items used.\n", "* **Id** - Player’s Id\n", "* **killPlace** - Ranking in match of number of enemy players killed.\n", "* **killPoints** - Kills-based external ranking of player. (Think of this as an Elo ranking where only kills matter.) 
If there is a value other than -1 in rankPoints, then any 0 in killPoints should be treated as a “None”.\n", "* **killStreaks** - Max number of enemy players killed in a short amount of time.\n", "* **kills** - Number of enemy players killed.\n", "* **longestKill** - Longest distance between player and player killed at time of death. This may be misleading, as downing a player and driving away may lead to a large longestKill stat.\n", "* **matchDuration** - Duration of match in seconds.\n", "* **matchId** - ID to identify match. There are no matches that are in both the training and testing set.\n", "* **matchType** - String identifying the game mode that the data comes from. The standard modes are “solo”, “duo”, “squad”, “solo-fpp”, “duo-fpp”, and “squad-fpp”; other modes are from events or custom matches.\n", "* **rankPoints** - Elo-like ranking of player. This ranking is inconsistent and is being deprecated in the API’s next version, so use with caution. Value of -1 takes place of “None”.\n", "* **revives** - Number of times this player revived teammates.\n", "* **rideDistance** - Total distance traveled in vehicles measured in meters.\n", "* **roadKills** - Number of kills while in a vehicle.\n", "* **swimDistance** - Total distance traveled by swimming measured in meters.\n", "* **teamKills** - Number of times this player killed a teammate.\n", "* **vehicleDestroys** - Number of vehicles destroyed.\n", "* **walkDistance** - Total distance traveled on foot measured in meters.\n", "* **weaponsAcquired** - Number of weapons picked up.\n", "* **winPoints** - Win-based external ranking of player. (Think of this as an Elo ranking where only winning matters.) If there is a value other than -1 in rankPoints, then any 0 in winPoints should be treated as a “None”.\n", "* **groupId** - ID to identify a group within a match. 
If the same group of players plays in different matches, they will have a different groupId each time.\n", "* **numGroups** - Number of groups we have data for in the match.\n", "* **maxPlace** - Worst placement we have data for in the match. This may not match with numGroups, as sometimes the data skips over placements.\n", "* **winPlacePerc** - The target of prediction. This is a percentile winning placement, where 1 corresponds to 1st place, and 0 corresponds to last place in the match. It is calculated off of maxPlace, not numGroups, so it is possible to have missing chunks in a match" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "d0c71af91415d64b0ebc0bee539d763bcfbca13d" }, "outputs": [], "source": [ "train.info()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "f6e86d6f8062361ef15b3e324443a732bbd70c98" }, "source": [ "We have 4.5 million player stat records!
\n", "
\n", "Now check the dataset for missing values." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "d6709a3efeaf22c84ffbe367f91c4a15c8e5d4a6" }, "outputs": [], "source": [ "display(train[train.isnull().any(1)])\n", "display(test[test.isnull().any(1)])" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "4f755c9054a362061eff8bab76c8fa16cf359d77" }, "source": [ "There is only one row with a NaN value, so let's drop it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "5d27e34c10e6d3a52c4dc5bf530c0f4f662d2b1b" }, "outputs": [], "source": [ "train.drop(2744604, inplace=True)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "4d40355c6aad0bc8656a9c772e1d4a6b44ad703e" }, "source": [ "General info about each column" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "5276a33bd3e840eff97cf0dfc2f90dc9f0c06a7d" }, "outputs": [], "source": [ "train.describe()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "cd673dd5a580167326b972ce30b992a032e22291" }, "source": [ "We can already guess that the target feature has a uniform distribution: winPlacePerc is an already scaled feature, and after every match each player can take only one place." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "8e1a82661ed57ab99e016165781b0967bc4fa058" }, "outputs": [], "source": [ "train['winPlacePerc'].hist(bins=25);" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "8d07f3be4bc9df47ed2aee27f783597da29dc745" }, "source": [ "We can notice that the values 0 and 1 occur more often than others: a first and a last place exist in every match.
\n", "WinPlacePerc obviously has a uniform distribution, but let's still check the target feature for normality and skewness (because the task asks for it)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "56f4a8c6d6bfbd858ce7d76593821c25d355a53c" }, "outputs": [], "source": [ "print(sc.normaltest(train['winPlacePerc']))\n", "print('Skew: ', sc.skew(train['winPlacePerc']))" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "dca443712f07d896b827678af6c57ecf79a78e52" }, "source": [ "The p-value is zero, so this distribution is not normal.
\n", "The skew is close to zero, so the distribution is almost symmetric." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "3bcaa0dd75db93a6b8dfe95382149a196e905a15" }, "source": [ "Now look at the distribution of features with an upper limit (to get rid of outliers) and without zero values (because there are lots of zeros).\n", "
Also make boxplots to see how the target feature depends on the feature values." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "a24f765dec2d00914fe286b3866f76ac7d41b5df" }, "outputs": [], "source": [ "def featStat(featureName, constrain, plotType):\n", "    feat = train[featureName][train[featureName]>0]\n", "    data = train[[featureName,'winPlacePerc']].copy()\n", "    q99 = int(data[featureName].quantile(0.99))\n", "    plt.rcParams['figure.figsize'] = 15,5;\n", "    \n", "    if constrain!=None:\n", "        feat = feat[feat<constrain]\n", "    if plotType == 'hist':\n", "        feat.hist(bins=50)\n", "        plt.title(featureName)\n", "    if plotType == 'count':\n", "        sns.countplot(feat)\n", "        plt.title(featureName)\n", "    plt.show()\n", "    \n", "    # Cap the top 1% of values into a single 'q99+' bucket for the boxplot\n", "    data.loc[data[featureName] > q99, featureName] = q99+1\n", "    x_order = data.groupby(featureName).mean().reset_index()[featureName]\n", "    x_order.iloc[-1] = str(q99+1)+\"+\"\n", "    data[featureName][data[featureName] == q99+1] = str(q99+1)+\"+\"\n", "    \n", "    ax = sns.boxplot(x=featureName, y='winPlacePerc', data=data, color=\"#2196F3\", order = x_order);\n", "    ax.set_xlabel(featureName, size=14, color=\"#263238\")\n", "    ax.set_ylabel('WinPlacePerc', size=14, color=\"#263238\")\n", "    plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "7245c69feda9ba6e8a70dd620c84005a9328d039" }, "source": [ "**Kills and damage**" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "0224cc24569ae714a97a088634e29c344e9ca092" }, "source": [ " " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "5c10e3364247e499b9f8c2965e99441d19ed8f73" }, "outputs": [], "source": [ "featStat('kills',15,'count');\n", "plt.show();\n", "featStat('longestKill',400,'hist');\n", "plt.show();\n", "featStat('damageDealt',1000,'hist');" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "0e8d163e42dccf76c5ced02860522d2c2a8d03a3" }, "source": [ "**Heals and boosts**" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "e87c4ae6f949b0a027739349861121e80c9f6a94" }, "source": [ " " ] }, { "cell_type": "markdown", "metadata": { "_uuid": "ccffbea34a9bd4746839fc55a52e819f4eb09eef" }, "source": [ " " ] }, { "cell_type": "code", "execution_count": null, "metadata": { 
"_uuid": "379adb785bf2b1334f32cc5e898b5380aa6e6458" }, "outputs": [], "source": [ "featStat('heals',20,'count')\n", "plt.show()\n", "featStat('boosts',12,'count')" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "1f1bf583a6098376925fd12c2a0569bfbecc5fb8" }, "source": [ "**Distance**" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "89d5f8e09b2472d5e7177e0ba7e051a9402a8b90" }, "source": [ " " ] }, { "cell_type": "markdown", "metadata": { "_uuid": "4e6afdf5081fccdd08490b0944cfb6ffe47f1c38" }, "source": [ " " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "81bfcd0da08f794436c2f80817eecf7ca5903771" }, "outputs": [], "source": [ "featStat('walkDistance',5000,'hist')\n", "plt.show()\n", "featStat('swimDistance',500,'hist')\n", "plt.show()\n", "featStat('rideDistance',12000,'hist')" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "2cb6e1a0eb22d07ca32aeadad0a1b24ddcdbaf79" }, "source": [ "**Some other features**" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "dcb03b00abf6b22a43cf47f4aee202ee2d1ce220" }, "source": [ " " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "847b7ec7af84c4648cfb641dfcb459907d37f9c0" }, "outputs": [], "source": [ "featStat('weaponsAcquired',15,'count')\n", "plt.show()\n", "featStat('vehicleDestroys',None,'count')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "8269053cf825653285ffcfa39090bf72ed483f99" }, "outputs": [], "source": [ "features = ['kills', 'longestKill', 'damageDealt', 'heals', 'boosts', 'walkDistance', 'swimDistance', 'rideDistance', 'weaponsAcquired', 'vehicleDestroys']\n", "zeroPerc = ((train[features] == 0).sum(0) / len(train)*100).sort_values(ascending = False)\n", "sns.barplot(x=zeroPerc.index , y=zeroPerc, color=\"#2196F3\");\n", "plt.title(\"Percentage of zero values\")\n", "plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "3bb5743d303d3ff1c07368899fb1bcf1659b0db0" }, "source": [ "As we 
can see, as the values of these features increase, the probability of winning also increases. So the features described above correlate well with the target feature.
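The claim can be sanity-checked directly by correlating those features with the target. A minimal sketch with a toy frame standing in for `train` (column names as in the dataset, values made up):

```python
import pandas as pd

# toy stand-in for the real train frame
df = pd.DataFrame({
    'walkDistance': [100, 500, 1500, 3000],
    'boosts':       [0, 1, 3, 5],
    'winPlacePerc': [0.1, 0.4, 0.7, 1.0],
})
# correlation of each feature with the target
corr = df.corr()['winPlacePerc'].drop('winPlacePerc')
```

On the real data, the same one-liner over the full feature list gives the per-feature correlations.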
\n", "Plot the remaining features." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "3e43e33f6b8ee7bb9fade23b09989eab5990440d" }, "outputs": [], "source": [ "df = train.drop(columns=['Id','matchId','groupId','matchType']+features)\n", "df[(df>0) & (df<=df.quantile(0.99))].hist(bins=25,layout=(5,5),figsize=(15,15));\n", "plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "180edf83e593955428fdb45bf68a66fb9d967e06" }, "source": [ "### Feature correlations " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "78f57b2784d519c7a5965b4dbe6c60e1b4fd5ef7" }, "outputs": [], "source": [ "f,ax = plt.subplots(figsize=(15, 13))\n", "sns.heatmap(df.corr(), annot=True, fmt= '.1f',ax=ax,cbar=False)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "4f641bd081c13ff89ecc2b13178ea19c8a4e721a" }, "source": [ "Take the features that correlate most with the target feature." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "56762c05bb19273a02fd68f1665e20e316d870c5" }, "outputs": [], "source": [ "f,ax = plt.subplots(figsize=(11, 11))\n", "cols = abs(train.corr()).nlargest(6, 'winPlacePerc')['winPlacePerc'].index\n", "hm = sns.heatmap(np.corrcoef(train[cols].values.T), annot=True, square=True, fmt='.2f', yticklabels=cols.values, xticklabels=cols.values)\n", "print(\", \".join(cols[1:]), \" most correlate with target feature\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "fb522f9cb8aa6ab55688d07728a2d429c38a7e77" }, "source": [ "Let's make pairplots. 
We can clearly see the correlation with winPlacePerc (though with weaponsAcquired it is perhaps hardest to see)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "ef8e58feaea64bda49c2303905fe6a3a484f779b" }, "outputs": [], "source": [ "sns.set(font_scale=2)\n", "sns.pairplot(train, y_vars=[\"winPlacePerc\"], x_vars=cols[1:],height=8);\n", "sns.set(font_scale=1)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "9f92da388593b68203257c1c7bf7022f49c34849" }, "source": [ "### Match statistics" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "442a2b3822541dd68a7bd3962caf173992c5e0da" }, "outputs": [], "source": [ "print(\"Number of matches in train dataset:\",train['matchId'].nunique())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "1d1c7b07495ab519242c3f6f1f00741a8c7c5c30" }, "outputs": [], "source": [ "playersJoined = train.groupby('matchId')['matchId'].transform('count')\n", "sns.countplot(playersJoined[playersJoined>=75])\n", "plt.title('playersJoined');" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "c8906a97aed192ea8ddead5830f93344611cb0d1" }, "outputs": [], "source": [ "ngroupsByMatch = train.groupby('matchId')['groupId'].nunique()\n", "ax = sns.countplot(ngroupsByMatch)\n", "plt.title('Number of groups by match');\n", "ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%d'))\n", "ax.xaxis.set_major_locator(ticker.MultipleLocator(base=5)) #Starts from 0 not from 1:(" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "7402bc1adfdaba84c4f8e423511c7a3f8e219755" }, "outputs": [], "source": [ "train.matchDuration.hist(bins=50);" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "17785118c61ce2caa767b017eed98e9d1046e952" }, "source": [ "We can see 3 peaks on the second plot and 2 peaks in the match duration plot. Presumably, this depends on the match type." 
] }, { "cell_type": "markdown", "metadata": { "_uuid": "bf02612e8a1c05e03d441a36c97f31f9173cb157" }, "source": [ "**Some stats by matchType**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "8deaf6c8a982136884afb7580c01ca19e16e1176" }, "outputs": [], "source": [ "plt.rcParams['figure.figsize'] = 18,7;\n", "types = train.groupby('matchType').size().sort_values(ascending=False)\n", "sns.barplot(x=types.index,y=types.values);\n", "plt.title(\"Number of players by matchType\");\n", "plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "1b13f2410d9b1e5835f72144a487cbdb30f94e35" }, "source": [ "So, people usually play in squads or pairs (or maybe the data was just collected that way).

\n", "Finally, some numbers that describe each game type by number of players, groups, matches, etc.\n", "
In this table, np.size is the number of players and _num is the number of matches. We can see that maxPlace and numGroups are almost the same." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "a9e78b37f5ee7b702f810f8249b8d2f98c7f179d" }, "outputs": [], "source": [ "def _min(x):\n", "    return x.value_counts().values.min()\n", "def _max(x):\n", "    return x.value_counts().values.max()\n", "def _avg(x):\n", "    return x.value_counts().values.mean()\n", "def _med(x):\n", "    return np.median(x.value_counts().values)\n", "def _num(x):\n", "    return x.nunique()\n", "infostat = train.groupby('matchType').agg({\n", "    \"matchId\": [np.size, _num, _min,_med,_max], #np.size - number of players, _num - number of matches\n", "    \"groupId\": [_min,_med,_max],\n", "    \"matchDuration\": [min,np.median, max], \n", "    \"maxPlace\": [min,np.median,max],\n", "    \"numGroups\":[min,np.median,max]\n", "    }).sort_values(by=('matchId','size'),ascending=False) \n", "display(infostat)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "ba1adfa0fb2b438a89b4bd9e822617853af6c87c" }, "source": [ "## 4. Insights and found dependencies" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "4105178e4e8af6d08d607f1050c6db77e54fed18" }, "source": [ "      We found that walkDistance, killPlace, boosts, weaponsAcquired and damageDealt are the most correlated features. It's easy to guess why.\n", "
      If you are close to the top, it is likely that you walked a greater distance, because you have to stay inside the circle (the game zone). It is also likely that you found a good weapon and/or killed somebody. If you killed somebody, your enemy could have hurt you first, so it's better to use a boost afterwards. Near each killed enemy you can find his/her loot and probably acquire some of his/her weapons.\n", "
\n", "      Also we can see that a lot of people play in squads or duos (in groups). Players on one team have the same finish placement, and the final result depends on teamwork. So it's better to look at general statistics by team, not by separate player." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "13215f25bbb3600f108edee4e0773d71522a9c1b" }, "source": [ "*Game zones*\n", "![game zones](https://github.com/4ku/PUBG-prediction/raw/master/pictures/Circle%20zones.png)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "7fc1c0b072ee6e7da18d390906697fc2ae3fd7b3" }, "source": [ "## 5. Metrics selection" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "6c64cba8e33ddf5d95cc6479327bcdadbfebdfb2" }, "source": [ "   This task is a regression problem. For regression we typically choose among `mean absolute error` (MAE), `mean squared error` (MSE; root MSE also exists) and `mean squared log error` (MSLE; root MSLE exists too).
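For intuition, both main candidates are easy to compute by hand; a NumPy sketch on made-up predictions (illustrative numbers only):

```python
import numpy as np

y_true = np.array([0.0, 0.25, 0.5, 1.0])
y_pred = np.array([0.1, 0.25, 0.4, 0.9])

mae = np.mean(np.abs(y_true - y_pred))   # every error counts linearly
mse = np.mean((y_true - y_pred) ** 2)    # squaring makes large errors dominate
```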
\n", "   Our target has a uniform distribution over the range 0 to 1. MSLE is more appropriate for non-uniform targets, and MSE is usually used when large errors are particularly undesirable. Neither situation applies here, so **MAE** is convenient for us." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "2b75a481cbcf0226f0b0d7be23de476d3b802565" }, "source": [ "![](https://github.com/4ku/PUBG-prediction/raw/master/pictures/Love%20MAE.png)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "2b9dccbf826ab36a75cbe11ba7cce4ef4469a4ba" }, "source": [ "## 6. Model selection" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "7791bc78b3bdbf1aef756f66800095543dbda352" }, "source": [ "I decided to use **LightGBM**. It is convenient for large datasets (our case), it is faster than XGBoost, for example, while giving a good score at the same time, and it supports regression. The main drawback is that there are lots of parameters to tune." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "1b5c70cd966be14a7823bdee07c85dce3920d368" }, "source": [ " \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "3321fdbaf085ffcd2a82f166052e62f20fba1a52" }, "outputs": [], "source": [ "import lightgbm as lgb" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "ab500cd5903ca56f659f771bfcfe753362057f5a" }, "source": [ "## 7. Data preprocessing" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "56e7e08d178821881e34ec5f192de9b22e36d04e" }, "source": [ "As I already mentioned, we are going to group player statistics into teams (by groupId)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "b5583de570bb850ca3fbd5d2242781e898f57c22" }, "outputs": [], "source": [ "# Function that reduces memory usage. 
\n", "# I took this function from an existing kernel (https://www.kaggle.com/gemartin/load-data-reduce-memory-usage)\n", "def reduce_mem_usage(df):\n", "    \"\"\" Iterate through all the columns of a dataframe and modify the data type\n", "        to reduce memory usage.\n", "    \"\"\"\n", "    start_mem = df.memory_usage().sum() / 1024**2\n", "    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))\n", "\n", "    for col in df.columns:\n", "        col_type = df[col].dtype\n", "\n", "        if col_type != object:\n", "            c_min = df[col].min()\n", "            c_max = df[col].max()\n", "            if str(col_type)[:3] == 'int':\n", "                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:\n", "                    df[col] = df[col].astype(np.int8)\n", "                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:\n", "                    df[col] = df[col].astype(np.int16)\n", "                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:\n", "                    df[col] = df[col].astype(np.int32)\n", "                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:\n", "                    df[col] = df[col].astype(np.int64)\n", "            else:\n", "                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:\n", "                    df[col] = df[col].astype(np.float16)\n", "                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:\n", "                    df[col] = df[col].astype(np.float32)\n", "                else:\n", "                    df[col] = df[col].astype(np.float64)\n", "\n", "    end_mem = df.memory_usage().sum() / 1024**2\n", "    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))\n", "    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))\n", "    return df" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "9f27725dc3e9a268634f1e308ce33da22ffe9bb3" }, "source": [ "     In the next steps we will create new features, so this step will be repeated. At this stage we do a simple data preparation: we just group everything by team and then make a ranking within each match.\n", "
\n", " >  Ranking here means percentile scaling within a match: the lowest value of a column is mapped to a value near 0 (never below 0; the exact value depends on the distribution of ties) and the maximum value is mapped to 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "8af2548dcc4343003c57e44233f27af33174fe81" }, "outputs": [], "source": [ "def initial_preparing(df, Debug):\n", "    if Debug:\n", "        df = df[df['matchId'].isin(df['matchId'].unique()[:2000])]\n", "    # Drop these columns. The *Points features don't correlate with the target feature;\n", "    # more EDA is needed to understand how they work.\n", "    df.drop(columns=['killPoints','rankPoints','winPoints','matchType','maxPlace','Id'],inplace=True)\n", "    X = df.groupby(['matchId','groupId']).agg(np.mean)\n", "    X = reduce_mem_usage(X)\n", "    y = X['winPlacePerc'] \n", "    X.drop(columns=['winPlacePerc'],inplace=True)\n", "    X_ranked = X.groupby('matchId').rank(pct=True)\n", "    X = X.reset_index()[['matchId','groupId']].merge(X_ranked, how='left', on=['matchId', 'groupId'] )\n", "    X.drop(['matchId','groupId'],axis=1, inplace=True)\n", "    X = reduce_mem_usage(X)\n", "    return X, y" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "0cdf08fc40b14b1f11c55fdecba63fb0c60e70a9" }, "outputs": [], "source": [ "X_train, y = initial_preparing(train.copy(),False)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "bb9a22016371ce414cc5941bac26e72b112b99ef" }, "source": [ "Split our train dataset into the part we will train on (X_train; same name) and a holdout part on which we will check the error." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "af78babf06d39644137a6604942de2b5f553bd95" }, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "X_train, X_holdout, y_train, y_holdout = train_test_split(X_train, y, test_size=0.2, random_state=666)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "9ba939c916445b7117909c5035f9c4266150b606" }, "source": [ "## 8-9. Cross-validation and adjustment of model hyperparameters. Creation of new features and description of this process" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "f539858f9be8f27e4d7b8a30ae97d82474106d67" }, "outputs": [], "source": [ "from sklearn.model_selection import cross_val_score\n", "import sklearn.metrics\n", "from sklearn.model_selection import GridSearchCV" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "f539858f9be8f27e4d7b8a30ae97d82474106d67" }, "source": [ "I chose 5 folds for cross-validation. We have a big dataset, so more folds are unnecessary: 80% of the data is enough for the train part. Moreover, a higher number of folds would take much longer to compute." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "f539858f9be8f27e4d7b8a30ae97d82474106d67" }, "outputs": [], "source": [ "%%time\n", "lgtrain = lgb.Dataset(X_train, label=y_train.reset_index(drop=True))\n", "res = lgb.cv({'metric': 'mae'},lgtrain, nfold=5,stratified=False,seed=666)\n", "print(\"Mean score:\",res['l1-mean'][-1])\n", "gc.collect()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "e351e81ebc7814af2ddd091444876a85ca6abfab" }, "source": [ "So, our score is 0.0644. Not bad: it means our model's error is about +-6.44 placements (if there are 100 players on the server)." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "1b4f176ec874d93db7538a89fae9f1d14740ccc3" }, "source": [ "Let's add new features and make the ranking again." 
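, "A minimal sketch of the per-match percentile ranking used below (toy frame; pandas rank with pct=True; column names illustrative):

```python
import pandas as pd

df = pd.DataFrame({'matchId': ['m1', 'm1', 'm1'], 'kills': [0, 2, 5]})
# within each match, values are mapped into (0, 1]; the best group gets 1.0
df['kills_rank'] = df.groupby('matchId')['kills'].rank(pct=True)
```
"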
] }, { "cell_type": "markdown", "metadata": { "_uuid": "1349d270e33af7581f5f05a2d4ff1842fea1ee67" }, "source": [ "When we aggregate the dataset by groupId we create \"new\" features, i.e. we can aggregate in different ways.
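As a sketch of how one raw column turns into several team-level features via groupby aggregation (toy frame, not the real dataset):

```python
import pandas as pd

df = pd.DataFrame({'groupId': ['a', 'a', 'b'], 'boosts': [2, 1, 4]})
# one raw column -> three team-level features
team = df.groupby('groupId')['boosts'].agg(['sum', 'mean', 'size'])
```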
\n", "For example, 'boosts': sum is the total number of boosts used by one team." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "86d32799af224f7b12c9e170098024ea9f5dfd04" }, "outputs": [], "source": [ "team_features = {\n", "    'assists': [sum, np.mean, np.size], #np.size - size of team\n", "    'boosts' : [sum, np.var, np.mean], \n", "    'heals': [sum, np.var, np.mean],\n", "    'damageDealt': [np.var,min,max,np.mean],\n", "    'DBNOs': [np.var,max,np.mean],\n", "    'headshotKills': [max,np.mean],\n", "    'killPlaceScall':[sum, min,max, np.var, np.mean],\n", "    'kills': [ sum, max, np.var,np.mean],\n", "    'killStreaks': [max,np.var,np.mean],\n", "    'longestKill': [max, np.mean, np.var],\n", "    'revives': sum,\n", "    'rideDistance': [sum, np.mean,np.var],\n", "    'swimDistance': [np.var],\n", "    'teamKills': sum,\n", "    'vehicleDestroys': sum,\n", "    'walkDistance': [np.var,np.mean],\n", "    'weaponsAcquired': [np.mean],\n", "    'damageRate': [np.var,min,max,np.mean],\n", "    'headshotRate': [np.var,max,np.mean],\n", "    'killStreakRate': [np.var,np.max, np.mean],\n", "    'healthItems': [np.var, np.mean],\n", "    'healsOnKill': [ np.var, np.mean],\n", "    'sniper': [ np.var, np.mean],\n", "    'totalDistance': [sum, np.var, np.mean],\n", "    'totalDistancePen': [ sum ,np.var, np.mean],\n", "    'killsPerRoadDist': [np.mean],\n", "    'killsPerWalkDist': [np.mean],\n", "    'killsPerDist': [np.mean],\n", "    'distance_over_weapons': [np.mean],\n", "    'walkDistance_over_heals': [np.mean],\n", "    'skill': [np.var,np.mean]\n", "}" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "20369f3f90f5d59b413aee63aabf12640802cfb8" }, "source": [ "**New features**
\n", "
     `killPlaceScall` - scaled `killPlace` feature. Just divide `killPlace` on number of players in a match.\n", "
     `damageRate` - ratio `kills` and `damageDealt/100`. If `damageRate`>1, player killed enemies, who was already damaged. So it was more easies to kill them.\n", "If this feature <1, it means that player deal more damage than he/she kill - player had a difficult battle or just a little damage some players, whose he/she don't kill. \n", "
    `headshotRate` - percentage of headshot kills. Shows skill of player\n", "
    `killStreakRate` - percentage of killStreak from all kills. Also shows player skill\n", "
    `healthItems` - total number of health items (heals+boosts). \n", "
    `healsOnKill` - equal to `healsItems`/`kills`. It shows how good player was in a battle. If player don't use heals after kill, it probably means, that he/she don't take damage.\n", "
    `sniper` - equal to `longestKill`/100*`weaponsAcquired`. It shows player sniper skill. Usually snipers have a good weapon. To find this weapon, player more likeky need acquired a lot other weapons. Yea, it's strange feature.\n", "
    `totalDistance` - `rideDistance` + `walkDistance` + `swimDistance`. Big distance means that player survived for long period of time, so he/she will take a good final place.\n", "
    `totalDistancePen` - penalized `totalDistance`. It's needed to predict time of player game . So vehicle speed is approximately 5 times higher than player walk speed and swim speed is approximately 10 times lower than player walk speed.\n", "
    `killsPerRoadDist` - road kills per ride distance. This feature can show skill too: it's difficult to kill an enemy while driving a vehicle. \n", "
    `killsPerWalkDist` - non-road kills per walk distance. It reflects play style: whether the player camps or keeps moving.\n", "
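Restating the `killsPerWalkDist` formula from the `featuring` function as a standalone sketch (the helper name is mine):

```python
# Road kills are excluded, and +1 metre avoids division by zero for players who never moved.
def kills_per_walk_dist(kills, road_kills, walk_distance):
    return (kills - road_kills) / (walk_distance + 1)

print(kills_per_walk_dist(5, 0, 99.0))    # 0.05  - many kills per metre walked: a camper
print(kills_per_walk_dist(5, 0, 4999.0))  # the same kills while roaming: ~0.001
```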
    `killsPerDist` - simply a combination of `killsPerRoadDist` and `killsPerWalkDist`: all kills per total distance.\n", "
    `distance_over_weapons` - low values may mean the player searches for loot himself/herself and/or is not satisfied with his/her equipment; high values may mean the player just takes loot from killed players and/or already has good equipment. Of course, this is not always true.\n", "
    `walkDistance_over_heals` - may represent the number of battles per distance walked.\n", "
    `skill` - equal to `headshotKills` + `roadKills` - `teamKills` (team kills are subtracted as a penalty). Just one more indicator of player skill." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "5dee3502c7ff5ac5f4737184f1e93f7624972f61" }, "outputs": [], "source": [ "def featuring(df, isTrain, Debug):\n", " y=None\n", " if Debug:\n", " df = df[df['matchId'].isin(df['matchId'].unique()[:2000])]\n", " \n", " #Creating new features\n", " #_________________________________________________________________________________________\n", "\n", " nplayers = df.groupby('matchId')['matchId'].transform('count')\n", " df['killPlaceScall'] = df['killPlace'] / nplayers\n", " df['damageRate'] = df['kills']/(0.01*df['damageDealt'])\n", " df['headshotRate'] = df['headshotKills']/df['kills']\n", " df['killStreakRate'] = df['killStreaks']/df['kills']\n", " df['healthItems'] = df['heals'] + df['boosts']\n", " df['healsOnKill'] = df['healthItems']/df['kills']\n", " df['sniper'] = df['longestKill']/100*df['weaponsAcquired']\n", " df['totalDistance'] = df['rideDistance'] + df[\"walkDistance\"] + df[\"swimDistance\"]\n", " df['totalDistancePen'] = df['rideDistance']/5 + df[\"walkDistance\"] + df[\"swimDistance\"]*10\n", " df['killsPerRoadDist'] = df['roadKills'] / (df['rideDistance']+1)\n", " df['killsPerWalkDist'] = (df['kills']-df['roadKills']) / (df['walkDistance']+1)\n", " df['killsPerDist'] = df['kills']/(df['totalDistance']+1)\n", " df['distance_over_weapons'] = df['totalDistance'] / df['weaponsAcquired']\n", " df['walkDistance_over_heals'] = df['walkDistance']/100/df['heals']\n", " df[\"skill\"] = df[\"headshotKills\"] + df[\"roadKills\"] - df['teamKills'] \n", " df.fillna(0,inplace=True)\n", " df.replace(np.inf, 0, inplace=True)\n", " #_________________________________________________________________________________________\n", " \n", " ids = df[['matchId','groupId','Id']]\n", " 
df.drop(columns=['killPlace','killPoints','rankPoints','winPoints','matchType','maxPlace','Id'],inplace=True)\n", " \n", " tfeatures = team_features.copy()\n", " if isTrain:\n", " tfeatures['winPlacePerc'] = max\n", " X = df.groupby(['matchId','groupId']).agg(tfeatures)\n", " X.fillna(0,inplace=True)\n", " X.replace(np.inf, 1000000, inplace=True)\n", " X = reduce_mem_usage(X) \n", " if isTrain:\n", " y = X[('winPlacePerc','max')] \n", " X.drop(columns=[('winPlacePerc','max')],inplace=True)\n", " \n", " \n", " #Group dataset by matches. To each match apply ranking \n", " X_ranked = X.groupby('matchId').rank(pct=True) \n", " X = X.reset_index()[['matchId','groupId']].merge(X_ranked, suffixes=[\"\", \"_rank\"], how='left', on=['matchId', 'groupId'] )\n", "\n", " ids_after = X[['matchId','groupId']]\n", " ids_after.columns = ['matchId','groupId']\n", " \n", " X = X.drop(['matchId','groupId'],axis=1)\n", " X.columns = [a+\"_\"+b for a,b in X.columns]\n", " X = reduce_mem_usage(X)\n", " \n", " return X, y, ids,ids_after" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "04c1a5f2640300622474176c6cf4ee42b517c397" }, "outputs": [], "source": [ "%%time\n", "X_train, y, _,_ = featuring(train,True,False)\n", "X_test, _,ids_init,ids_after = featuring(test,False,False)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "c9f66650b564590289f43e454233ad368e200e9a" }, "source": [ "Split our train dataset again" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "04c1a5f2640300622474176c6cf4ee42b517c397" }, "outputs": [], "source": [ "X_train, X_holdout, y_train, y_holdout = train_test_split(X_train, y, test_size=0.2, random_state=666)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "06d762fbcac5f758d714d39051fe98fee1031401" }, "outputs": [], "source": [ "%%time\n", "lgtrain = lgb.Dataset(X_train, label=y_train.reset_index(drop=True))\n", "res = lgb.cv({'metric': 'mae'},lgtrain, 
nfold=5,stratified=False,seed=666)\n", "print(\"Mean score:\",res['l1-mean'][-1])" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "311c93e93d20b80704757d2aa03e5382b60d7b33" }, "source": [ "We get a significant improvement (the error almost halves), so the new features really do help the model understand the data.\n", "
\n", "
\n", "Now let's tune LightGBM. To do this, we are going to use GridSearchCV, which helps find the best parameters." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "3533ad965576226dcbb97ba586b6d2010f33bdb4" }, "outputs": [], "source": [ "gridParams = {\n", " 'num_leaves': [30,50,100], 'max_depth': [-1,8,15], \n", " 'min_data_in_leaf': [100,300,500], 'max_bin': [250,500], \n", " 'lambda_l1': [0.01], 'num_iterations': [5], \n", " 'nthread': [4], 'seed': [666],\n", " 'learning_rate': [0.05], 'metric': ['mae'],\n", " \"bagging_fraction\" : [0.7], \"bagging_seed\" : [0], \"colsample_bytree\" : [0.7]\n", " }\n", "model = lgb.LGBMRegressor()\n", "grid = GridSearchCV(model, gridParams,\n", " verbose=1,\n", " cv=5)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "6089741612997afbdb4f54c724bfbbb93b76a93b" }, "source": [ "We are going to tune `num_leaves`, `max_depth`, `min_data_in_leaf` and `max_bin`, because these are the main parameters in LightGBM. \n", "
    `num_leaves` - max number of leaves in one tree. It's the main parameter to control the complexity of the tree model.\n", "
    `max_depth` - limits the max depth of the tree model. This is used to deal with over-fitting; -1 means no limit.\n", "
    `min_data_in_leaf` - minimal number of data in one leaf. This is a very important parameter to prevent over-fitting in a leaf-wise tree. Its optimal value depends on the number of training samples and `num_leaves`.\n", "
    `max_bin` - max number of bins that feature values will be bucketed into. A small number of bins may reduce training accuracy but may improve generalization (i.e. help against over-fitting)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "c81f5180b1befefc07c37ae109c1623755e61539" }, "source": [ "Here we take only 500 000 teams out of 1 500 000. As we will see later (on the learning curve), that's enough to find the best params." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "6f29e7ddb99f4a97cc4c0d58f4dd0d6bc1b6f09f" }, "outputs": [], "source": [ "%%time\n", "grid.fit(X_train.iloc[:500000,:], y_train.iloc[:500000])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "783db9de7fcdca6cc653a5f353740ff7363dcdab" }, "outputs": [], "source": [ "print(\"Best params:\", grid.best_params_)\n", "print(\"\\nBest score:\", grid.best_score_)\n", "params = grid.best_params_" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "4bc2e73054a1e8a8c1a9179f3f8742c87085a48e" }, "source": [ "The best score here is worse than after cross-validation, because only 5 iterations were used, versus 100 in cross-validation. It will be fine once we train with a higher number of iterations using the parameters we have just found." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "e9f524f1d28a12ebf17cdcaf5579995400909f9e" }, "source": [ "## 10. Plotting training and validation curves" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "1c74588a2adf25871d1d7d701027ae18ed63c9e3" }, "source": [ "Now let's plot the learning curve for different training set sizes."
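As a sanity check on the cost of the grid search above — a sketch counting the candidate combinations for the four tuned parameters (the other `gridParams` entries have a single value each, so they don't multiply the count):

```python
from itertools import product

# Only the parameters with more than one candidate value matter for the count.
tuned = {
    'num_leaves': [30, 50, 100],
    'max_depth': [-1, 8, 15],
    'min_data_in_leaf': [100, 300, 500],
    'max_bin': [250, 500],
}
n_candidates = len(list(product(*tuned.values())))
print(n_candidates)      # 3*3*3*2 = 54 parameter combinations
print(n_candidates * 5)  # 270 model fits with cv=5
```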
] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "959f6cc8b23b6626e49af8fbd73f4f38606c12a5" }, "outputs": [], "source": [ "from sklearn.model_selection import validation_curve\n", "from sklearn.model_selection import learning_curve\n", "model = lgb.LGBMRegressor(learning_rate=0.05,nthread=4)\n", "\n", "def plot_with_err(x, data, **kwargs):\n", " mu, std = data.mean(1), data.std(1)\n", " lines = plt.plot(x, mu, '-', **kwargs)\n", " plt.fill_between(x, mu - std, mu + std, edgecolor='none',\n", " facecolor=lines[0].get_color(), alpha=0.2)\n", " \n", "def plot_learning_curve():\n", " train_sizes = [1000,5000,10000,50000,100000,500000]\n", " N_train, val_train, val_test = learning_curve(model,\n", " X_train, y_train, train_sizes=train_sizes, cv=5,\n", " scoring='neg_mean_absolute_error')\n", " plot_with_err(N_train, abs(val_train), label='training scores')\n", " plot_with_err(N_train, abs(val_test), label='validation scores')\n", " plt.xlabel('Training Set Size'); plt.ylabel('MAE')\n", " plt.legend()\n", "\n", "plot_learning_curve()" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "f1debb7b1b7a8c68bbb79fa42ff112b5a20ea35e" }, "source": [ "As we can see, with small training set sizes there is a big gap between the train and validation scores: the model overfits the training set because it lacks data.\n", "
But as the size increases, these curves converge. At a train size of 500 000 the difference is very small. That's why I took 500 000 rows instead of the whole trainset in GridSearchCV." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "09879ff1f8a4be7a9b08c38f656fdff3911a5e83" }, "source": [ "Now let's look at how the score depends on the number of iterations." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "ed1b5980d8538b470f34a5fbd951fc6ea5c202b0" }, "outputs": [], "source": [ "def iter_vs_score(num_iterations):\n", " val_train, val_test = validation_curve(model, X_train[:500000], y_train[:500000],\n", " 'num_iterations', num_iterations, cv=4,scoring='neg_mean_absolute_error', verbose=1)\n", " plot_with_err(num_iterations, abs(val_test), label='validation scores')\n", " plot_with_err(num_iterations, abs(val_train), label='training scores')\n", " plt.xlabel('Number of iterations'); plt.ylabel('MAE')\n", " plt.legend();\n", " plt.show();\n", "\n", "num_iterations_small = [5,10,20,30,100,200]\n", "iter_vs_score(num_iterations_small)\n", "num_iterations_big = [500,1000,5000,10000]\n", "iter_vs_score(num_iterations_big)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "ed1b5980d8538b470f34a5fbd951fc6ea5c202b0" }, "source": [ "For a small number of iterations, the error falls quickly; for a large number it keeps decreasing, but slowly. Notice also that the validation and training scores are approximately the same for a small number of iterations. For a big number of iterations, the training score continues to go down while the validation score decreases much more slowly. So the curves diverge, but there is no overfitting, because the validation score still keeps decreasing." ] }, { "cell_type": "markdown", "metadata": { "_uuid": "2e0eaec5cf48c57588ddab33361cd83fd87de83d" }, "source": [ "## 11. 
Prediction for test and hold-out samples" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "6e8bed644920160102d2dd158fa5b48f67aecfd0" }, "source": [ "Let's train a LightGBM model with the params we found with GridSearchCV. At the same time, we will compute the error on the hold-out set every 1000 iterations. The total number of iterations is 5000, which should be enough; with more iterations we wouldn't get a significant improvement and could even overfit." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "da3f690037963a0909501ab276ebf64bafde5bc4" }, "outputs": [], "source": [ "%%time\n", "lgtrain = lgb.Dataset(X_train, label=y_train)\n", "lgval = lgb.Dataset(X_holdout, label=y_holdout)\n", "\n", "params['num_iterations'] = 5000\n", "model = lgb.train(params, lgtrain, valid_sets=[lgtrain, lgval], early_stopping_rounds=200, verbose_eval=1000)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "90de07260ea30d5ace09f01c4907cee9b134d509" }, "source": [ "We get 0.0291 for the holdout set and 0.0242 for the train set. It's obviously better than our previous scores. Now let's make a prediction for the test set and write the results to `submission.csv`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_uuid": "da3f690037963a0909501ab276ebf64bafde5bc4" }, "outputs": [], "source": [ "pred_test = model.predict(X_test, num_iteration=model.best_iteration)\n", "\n", "ids_after['winPlacePerc'] = pred_test\n", "predict = ids_init.merge(ids_after, how='left', on=['groupId',\"matchId\"])['winPlacePerc']\n", "df_sub = pd.read_csv(\"../input/sample_submission_V2.csv\")\n", "df_test = pd.read_csv(\"../input/test_V2.csv\")\n", "df_sub['winPlacePerc'] = predict\n", "df_sub[[\"Id\", \"winPlacePerc\"]].to_csv(\"submission.csv\", index=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the public leaderboard I get 0.0272. 
Not bad)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "13acd1cb8db3ecded5e3c1799a62c3379242e260" }, "source": [ "## 12. Conclusions" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "c760db21d0bb9c0dd8482d7b520a68a433224e46" }, "source": [ "    We get a good score. But, of course, it can be better, and there are lots of ways to achieve that. For example, I dropped the `killPoints`, `rankPoints` and `winPoints` features; they may be helpful if interpreted correctly. Also, there are lots of cheaters in the game, so cheaters should be handled. I only slightly tuned LightGBM - we could find better parameters or even try another model.\n", "
    In the beginning, I mentioned the [PUBG Developer API](https://developer.pubg.com/). We could get more features with it and build a more complex model. It would be cool to create an app that gives you tips in real time during a match; this solution is a step toward that idea, or could simply help the PUBG community in other ways.\n" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "5681345740cf45769ded702cf28d3c2b11acb132" }, "source": [ "Thank you for reading and sorry for my English)" ] }, { "cell_type": "markdown", "metadata": { "_uuid": "08dfbf869bff2c249b7d6415b40f05af2d01042a" }, "source": [ " " ] }, { "cell_type": "markdown", "metadata": { "_uuid": "62f1a93679785acbbc298d9ec877b40a65b2336e" }, "source": [ "Efremov Ivan" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.3" } }, "nbformat": 4, "nbformat_minor": 2 }