{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", "\n", "##
Individual Project
##\n", "##
\"Predicting a Pulsar Star\"
##\n", "###
Author: Aleksei Demin
###" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1: Feature and data explanation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "
\n", "\n", "
\n", "\n", "\n", "### Introduction\n", "\n", " A pulsar is a highly magnetized rotating neutron star that emits a beam of electromagnetic radiation. This radiation can be observed only when the beam of emission is pointing toward Earth (much like the way a lighthouse can be seen only when the light is pointed in the direction of an observer), and is responsible for the pulsed appearance of emission. Neutron stars are very dense, and have short, regular rotational periods. This produces a very precise interval between pulses that ranges from milliseconds to seconds for an individual pulsar. Pulsars are believed to be one of the candidates for the source of ultra-high-energy cosmic rays (see also centrifugal mechanism of acceleration).\n", "\n", "\n", "Each pulsar produces a slightly different emission pattern, which varies slightly with each rotation . Thus a potential signal detection known as a 'candidate', is averaged over many rotations of the pulsar, as determined by the length of an observation. In the absence of additional info, each candidate could potentially describe a real pulsar. However in practice almost all detections are caused by radio frequency interference (RFI) and noise, making legitimate signals hard to find." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dataset information\n", "\n", "[Kaggle link](https://www.kaggle.com/pavanraj159/predicting-a-pulsar-star)\n", "\n", "\n", "Provided dataset contains 16,259 spurious examples caused by RFI/noise, and 1,639 real pulsar examples. Each row lists the variables first, and the class label is the final entry. The class labels used are 0 (negative) and 1 (positive)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data explanation\n", "\n", "Each object described by 8 continuous variables, and a single class target variable. \n", "\n", "The first four are simple statistics obtained from the integrated pulse profile (folded profile). This is an array of continuous variables that describe a longitude-resolved version of the signal that has been averaged in both time and frequency. The remaining four variables are similarly obtained from the DM-SNR curve.\n", "\n", "| № | Description |\n", "|-----------------|----------------------------------------|\n", "| 1 | Mean of the integrated profile |\n", "| 2 | Standard deviation of the integrated profile |\n", "| 3| Excess kurtosis of the integrated profile |\n", "| 4 | Skewness of the integrated profile |\n", "| 5 | Mean of the DM-SNR curve |\n", "| 6 | Standard deviation of the DM-SNR curve |\n", "| 7 | Excess kurtosis of the DM-SNR curve |\n", "| 8 | Skewness of the DM-SNR curve |\n", "| **target_class** | **Target variable** - Object is a pulsar or not |\n", "\n", "\n", "#### Small clarification about skewness and kurtosis\n", " Skewness assesses the extent to which a variable’s distribution is symmetrical. If the distribution of responses for a variable stretches toward the right or left tail of the distribution, then the distribution is referred to as skewed. Kurtosis is a measure of whether the distribution is too peaked (a very narrow distribution with most of the responses in the center).\n", " \n", "#### Small clarification about DM-SNR\n", "\n", "
\n", "\n", " Example of DM-SNR curve\n", "
\n", "\n", " One of parameter a space that be described is the Dispersion Measure (DM) of space. Dispersion is caused by the interstellar medium, and is different for every pulsar, depending on its distance and the number of electrons in the interstellar medium in the direction of the pulsar. Dispersion causes the lower frequencies of the signal to arrive later than the higher frequencies. This smears out, or disperses, the pulse. This smearing will completely obliterate the pulse if the signal is not dedispersed before folding.\n", "\n", "\n", "SNR - Pulse profile signal-to-noise ratio. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2: Primary data analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Import the necessary packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "import warnings\n", "import itertools\n", "\n", "from sklearn.metrics import roc_auc_score\n", "from sklearn.preprocessing import LabelBinarizer, StandardScaler\n", "from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold, learning_curve, validation_curve\n", "from sklearn.ensemble import RandomForestClassifier\n", "\n", "import lightgbm as lgb\n", "\n", "warnings.filterwarnings(\"ignore\")\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Read the data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pd.read_csv('pulsar_stars.csv')\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Data information" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Correlation between fields" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.corr()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Missing values" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.isnull().sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Target class" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data['target_class'].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Some conclusions after primary data analysis\n", "\n", "- we can observe quite large maximum values for all features, probably outliers\n", "- kurtosis and skewness does not correlate with the mean value and standard deviation, which corresponds to the theory\n", "- data types are all numeric and non-null, there's no need to do any transformations or cleaning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 3: Primary visual data analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Preprocessing" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame()\n", "\n", "columns_old = [' Mean of the integrated profile', ' Standard deviation of the integrated profile', ' Excess kurtosis of the integrated profile', ' Skewness of the integrated profile',\n", " ' Mean of the 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Part 3: Primary visual data analysis" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Preprocessing" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame()\n", "\n", "columns_old = [' Mean of the integrated profile', ' Standard deviation of the integrated profile', ' Excess kurtosis of the integrated profile', ' Skewness of the integrated profile',\n", "               ' Mean of the DM-SNR curve', ' Standard deviation of the DM-SNR curve', ' Excess kurtosis of the DM-SNR curve',\n", "               ' Skewness of the DM-SNR curve', 'target_class']\n", "columns_new = ['mean_profile', 'std_profile', 'kurtosis_profile', 'skewness_profile',\n", "               'mean_dmsnr', 'std_dmsnr', 'kurtosis_dmsnr', 'skewness_dmsnr', 'target_class']\n", "\n", "for i in range(len(columns_old)):\n", "    df[columns_new[i]] = data[columns_old[i]]\n", "\n", "df.head()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### First of all, let's look at the target class proportions" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(12,6))\n", "plt.subplot(122)\n", "plt.pie(data[\"target_class\"].value_counts().values,\n", "        labels=[\"Non-pulsar stars\",\"Pulsar stars\"],\n", "        autopct=\"%1.0f%%\",wedgeprops={\"linewidth\":4,\"edgecolor\":\"white\"})\n", "plt.subplots_adjust(wspace = .2)\n", "plt.title(\"Proportion of target variable in dataset\")\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Distribution of feature variables" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "columns_new = ['mean_profile', 'std_profile', 'kurtosis_profile', 'skewness_profile',\n", "               'mean_dmsnr', 'std_dmsnr', 'kurtosis_dmsnr', 'skewness_dmsnr']\n", "length = len(columns_new)\n", "colors = [\"r\",\"g\",\"b\",\"m\",\"y\",\"c\",\"k\",\"orange\"]\n", "\n", "plt.figure(figsize=(13,20))\n", "for i,j,k in itertools.zip_longest(columns_new,range(length),colors):\n", "    plt.subplot(length//2,length//4,j+1)\n", "    sns.distplot(df[i],color=k)\n", "    plt.title(i)\n", "    plt.subplots_adjust(hspace = .3)\n", "    plt.axvline(df[i].mean(),color = \"k\",linestyle=\"dashed\",label=\"MEAN\")\n", "    plt.axvline(df[i].std(),color = \"b\",linestyle=\"dotted\",label=\"STANDARD DEVIATION\")\n", "    plt.legend(loc=\"upper right\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Comparing the mean and std of features between target classes" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "compare = df.groupby(\"target_class\")[['mean_profile', 'std_profile', 'kurtosis_profile', 'skewness_profile',\n", "                                      'mean_dmsnr', 'std_dmsnr', 'kurtosis_dmsnr', 'skewness_dmsnr']].mean().reset_index()\n", "compare = compare.drop(\"target_class\",axis =1)\n", "\n", "compare.plot(kind=\"bar\",width=.6,figsize=(13,6),colormap=\"Set2\")\n", "plt.grid(True,alpha=.3)\n", "plt.title(\"Comparing mean of features for target class\")\n", "\n", "compare1 = df.groupby(\"target_class\")[['mean_profile', 'std_profile', 'kurtosis_profile', 'skewness_profile',\n", "                                       'mean_dmsnr', 'std_dmsnr', 'kurtosis_dmsnr', 'skewness_dmsnr']].std().reset_index()\n", "compare1 = compare1.drop(\"target_class\",axis=1)\n", "compare1.plot(kind=\"bar\",width=.6,figsize=(13,6),colormap=\"Set2\")\n", "plt.grid(True,alpha=.3)\n", "plt.title(\"Comparing std of features for target class\")\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### PairPlot of all features" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(data=df,\n", "             palette=\"husl\",\n", "             hue=\"target_class\",\n", "             vars=['mean_profile', 'std_profile', 'kurtosis_profile', 'skewness_profile',\n", "                   'mean_dmsnr', 'std_dmsnr', 'kurtosis_dmsnr', 'skewness_dmsnr'])\n", "\n", "plt.tight_layout()\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {},
"source": [ "#### Correlation HeatMap" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(16,12))\n", "sns.heatmap(data=df.corr(),annot=True,cmap=\"bone\",linewidths=1,fmt=\".2f\",linecolor=\"gray\")\n", "plt.title(\"Correlation Map\",fontsize=20)\n", "plt.tight_layout()\n", "plt.show() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Some conclusions after visual data analysis\n", "\n", "- high mean values of 'Mean of integrated profile' and 'Skewness of the DM-SNR curve' are relate to Non-pulsar objects\n", "- high mean value of 'Mean of the DM-SNR curve' is relate to Pulsar objects\n", "- high std value of 'Skewness of the DM-SNR curve' is relate to Non-pulsar objects\n", "- plots of distribution of kurtosis and skewness of integrated profile and parameters of DM-SNR curve have a long tails. Also we can observe it in table of second part of project (large maximum values in data discription table)\n", "- like in a part 2 of this project, on HeatMap we see that kurtosis and skewness does not correlate with the mean value and standard deviation, which corresponds to the theory" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 4: Insights and found dependencies" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Let's take a closer look at couple of pair plots" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(14,7))\n", "\n", "plt.subplot(121)\n", "plt.scatter(x = \"kurtosis_profile\",y = \"skewness_profile\",\n", " data=df[df[\"target_class\"] == 1],alpha=.7,\n", " label=\"Pulsar star\",s=30,color = \"g\",linewidths=.4,edgecolors=\"black\")\n", "plt.scatter(x = \"kurtosis_profile\",y = \"skewness_profile\",\n", " data=df[df[\"target_class\"] == 0],alpha=.6,\n", " label=\"Non-pulsar star\",s=30,color =\"r\",linewidths=.4,edgecolors=\"black\")\n", "\n", "plt.legend(loc =\"best\")\n", "plt.xlabel(\"Excess kurtosis of the integrated profile\")\n", "plt.ylabel(\"Skewness of the integrated profile\")\n", "plt.title(\"Scatter plot for skewness and kurtosis for target classes\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(14,7))\n", "\n", "plt.subplot(121)\n", "plt.scatter(x = \"skewness_profile\",y = \"mean_profile\",\n", " data=df[df[\"target_class\"] == 1],alpha=.7,\n", " label=\"Pulsar star\",s=30,color = \"g\",linewidths=.4,edgecolors=\"black\")\n", "plt.scatter(x = \"skewness_profile\",y = \"mean_profile\",\n", " data=df[df[\"target_class\"] == 0],alpha=.6,\n", " label=\"Non-pulsar star\",s=30,color =\"r\",linewidths=.4,edgecolors=\"black\")\n", "\n", "plt.legend(loc =\"best\")\n", "plt.xlabel(\"Skewness of the integrated profile\")\n", "plt.ylabel(\"Mean of the integrated profile\")\n", "plt.title(\"Scatter plot for skewness and kurtosis for target classes\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### So here we see that target can be easily separate on some features. It's powerfull insight that tell us what features are important.\n", "\n", "Thus example object with high value of skewness and kurtosis of integrated profile probably a Pulsar." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 5: Metrics selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For binary classification we have these metrics:\n", "- Accuracy\n", "- Precision\n", "- Recall\n", "- F-measure\n", "- ROC-AUC" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Current dataset have a significant disbalance in the target class (only 9% of objects are Pulsars). For this case i'll choose Area Under the Receiver Operating Characteristic Curve (ROC AUC) metric.\n", "\n", "\n", "ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.\n", "\n", "Area Under Curve (AUC) is one of the most widely used metrics for evaluation. The AUC represents a model’s ability to discriminate between positive and negative classes. An area of 1.0 represents a model that made all predictions perfectly. An area of 0.5 represents a model as good as random." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 6: Model selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### These are the most commonly used models for binary classification tasks. Also this dataset have a small number of objects and attributes. In this case I make this choice:\n", "\n", "- `RandomForestClassifier` - Composition of the decision trees. This model has good agregation abilities. Easy to configure hyperparameters to get a good result.\n", "- `Gradient boosting on decision trees (LightGBM)` - Gradient Boosting give us the ability to get better score when configuring hyperparameters.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 7: Data preprocessing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Splitting the Feature and Target" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = df.target_class.values\n", "df.drop([\"target_class\"],axis=1,inplace=True)\n", "X = df.values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Scaling\n", "Scale up the data for the linear model, create a scaler object and train it on the training part. Applying the obtained transformation to both samples" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scaler = StandardScaler()\n", "X_scaled = scaler.fit_transform(X)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Splitting the Train and Test\n", "We divide the data into training and test samples. In the test sampling we will take 30% of the data. Check that both train and test samples contains Pulsar objects." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.3, random_state=512)\n", "print('Propotion of target class for train sample: ', sum(y_train)/X_train.shape[0])\n", "print('Propotion of target class for test sample: ',sum(y_test)/X_test.shape[0]) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 8: Cross-validation and adjustment of model hyperparameters" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### We will divide the sample taking into account the balance of classes with 5-fold cross-validation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=512)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Define parameters for Random Forest" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "param_grid_for_forest = { \"n_estimators\": [10, 50, 70],\n", " \"max_depth\": [5, 10, 20],\n", " \"min_samples_split\": [5, 10, 20],\n", " \"min_samples_leaf\": [5, 10, 20]}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gs_forest = GridSearchCV(RandomForestClassifier(random_state=512), param_grid = param_grid_for_forest,\n", " scoring='roc_auc', cv = skf, n_jobs=-1)\n", "gs_forest.fit(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Let's look at the best parametrs which found from GridSearchCV and best score for this parameters" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Best parameters: ', gs_forest.best_params_)\n", "print('Best score: ', gs_forest.best_score_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Let's see what the model's score is on the test sample." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_rf = RandomForestClassifier(max_depth=10, min_samples_leaf=5, min_samples_split=5,\n", " n_estimators=70, random_state=512)\n", "model_rf.fit(X_train, y_train)\n", "print('ROC-AUC on validation set: ', roc_auc_score(y_test, model_rf.predict_proba(X_test)[:,1]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Define parameters for LightGBM" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "param_grid_for_lightgbm = {\n", " 'learning_rate': [0.01, 0.1, 0.5],\n", " 'n_estimators': [40, 200, 1000],\n", " 'num_leaves': [6, 10, 16],\n", " 'boosting_type' : ['gbdt'],\n", " 'objective' : ['binary'],\n", " 'random_state' : [512], \n", " 'colsample_bytree' : [0.66],\n", " 'subsample' : [0.75],\n", " 'reg_alpha' : [1,1.2],\n", " 'reg_lambda' : [1,1.4],\n", " }" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gs_lightgbm = GridSearchCV(lgb.LGBMClassifier(random_state=512), param_grid_for_lightgbm, verbose=3, cv=skf, n_jobs=-1)\n", "gs_lightgbm.fit(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Let's look at best parametrs which found from GridSearchCV and best score for this parameters" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Best parameters: ', gs_lightgbm.best_params_)\n", "print('Best score: ', gs_lightgbm.best_score_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Let's see which score the model gets on the test sample" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_gbm = lgb.LGBMClassifier(**gs_lightgbm.best_params_)\n", "model_gbm.fit(X_train, y_train)\n", "print('ROC-AUC on validation set: ', roc_auc_score(y_test, model_gbm.predict_proba(X_test)[:, 1]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 9: Creation of new features and description of this process" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### In this dataset I can't create new features. But I can try to drop some insignificant features and look at new scores." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#For GridSearch from Random Forest\n", "feat_importance = pd.DataFrame(df.columns, columns = ['features'])\n", "feat_importance['value'] = gs_forest.best_estimator_.feature_importances_\n", "feat_importance.sort_values('value')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#For GridSearch from LightGBM\n", "feat_importance = pd.DataFrame(df.columns, columns = ['features'])\n", "feat_importance['value'] = gs_lightgbm.best_estimator_.feature_importances_\n", "feat_importance.sort_values('value')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Let's try to drop kurtosis and skewness of DM-SNR curve and look how much score has changed" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_dropped = data.drop([' Excess kurtosis of the DM-SNR curve',' Skewness of the DM-SNR curve'], axis=1)\n", "y_new = df_dropped.target_class.values\n", "df_dropped.drop([\"target_class\"],axis=1,inplace=True)\n", "X_new = df_dropped.values" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_scaled_new = scaler.fit_transform(X_new)\n", "X_train1, X_test1, y_train1, y_test1 = train_test_split(X_scaled_new, y_new, test_size=0.3, random_state=512)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_rf_wo_2feat = RandomForestClassifier(max_depth=10, min_samples_leaf=5, min_samples_split=5,\n", " n_estimators=70, random_state=512)\n", "model_rf_wo_2feat.fit(X_train1, y_train1)\n", "print('ROC-AUC on validation set: ', roc_auc_score(y_test1, model_rf_wo_2feat.predict_proba(X_test1)[:,1]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_lgb_wo_2feat = lgb.LGBMClassifier(**gs_lightgbm.best_params_)\n", "model_lgb_wo_2feat.fit(X_train1, y_train1)\n", "print('ROC-AUC on validation set: ', roc_auc_score(y_test1, model_lgb_wo_2feat.predict_proba(X_test1)[:, 1]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Drop of two features improved the Random Forest model but reduced score of LightGBM model. For now better score have LightGBM fitted on whole dataset (ROC-AUC 0.9830). Further in project I will use only LightGBM fitted on whole dataset." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 10: Plotting training and validation curves" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Training and validation curves will be drawn from the best model - LightGBM" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,\n", " n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):\n", "\n", " plt.figure()\n", " plt.title(title)\n", " if ylim is not None:\n", " plt.ylim(*ylim)\n", " plt.xlabel(\"Training examples\")\n", " plt.ylabel(\"Score\")\n", " train_sizes, train_scores, test_scores = learning_curve(\n", " estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes, scoring='roc_auc')\n", " train_scores_mean = np.mean(train_scores, axis=1)\n", " train_scores_std = np.std(train_scores, axis=1)\n", " test_scores_mean = np.mean(test_scores, axis=1)\n", " test_scores_std = np.std(test_scores, axis=1)\n", " plt.grid()\n", "\n", " plt.fill_between(train_sizes, train_scores_mean - train_scores_std,\n", " train_scores_mean + train_scores_std, alpha=0.1,\n", " color=\"r\")\n", " plt.fill_between(train_sizes, test_scores_mean - test_scores_std,\n", " test_scores_mean + test_scores_std, alpha=0.1, color=\"g\")\n", " plt.plot(train_sizes, train_scores_mean, 'o-', color=\"r\",\n", " label=\"Training score\")\n", " plt.plot(train_sizes, test_scores_mean, 'o-', color=\"g\",\n", " label=\"Cross-validation score\")\n", "\n", " plt.legend(loc=\"best\")\n", " return plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(6, 4))\n", "plot_learning_curve(model_gbm, 'Training curve', X_train, y_train, cv=skf, n_jobs=-1);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a = np.arange(2, 20)\n", "val_train, val_test = validation_curve(model_gbm, X_train, y_train, param_name='num_leaves',\n", " param_range=a, cv=skf, scoring='roc_auc')\n", "\n", "def plot_with_err(x, data, **kwargs):\n", " mu, std = data.mean(1), data.std(1)\n", " lines = plt.plot(x, mu, '-', **kwargs)\n", " plt.fill_between(x, mu - std, mu + std, edgecolor='none',\n", " facecolor=lines[0].get_color(), alpha=0.2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = plt.figure(figsize=(20, 5))\n", "ax1 = fig.add_subplot(121)\n", "ax1.set_title('Validation curve')\n", "plot_with_err(a, val_train, label='training scores')\n", "plot_with_err(a, val_test, label='validation scores')\n", "plt.xlabel(r'num_leaves'); plt.ylabel('ROC-AUC')\n", "plt.legend();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Training curve: \n", "The curves haven't aligned, adding new data may improve the model.\n", "#### Validation curve:\n", "The quality of the model is reduced as the num_leaves value increases. Hyperparameter 'num_leaves' was choosen correctly. Large value of this parameter may cause overfitting.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 11: Prediction for test or hold-out samples" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The test sample was generated from the source dataset by randomly selecting 30% of the data from the original sample with keeping the proportions of the target variable. 
Predictions on the test sample are presented in Parts 8 and 9 of this project.\n", "- Random Forest has a ROC-AUC on the validation set (without the 2 features dropped in Part 9) of 0.9817\n", "- LightGBM has a ROC-AUC on the validation set of 0.9830" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Part 12: Conclusions" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### A model with good aggregation abilities has been created. The value of this model lies in the fact that it separates space noise from pulsars well. Potentially, this model may significantly reduce the time spent searching for pulsars.\n", "\n", "Possible further improvements of the model:\n", "- Adding new data to the dataset\n", "- Clipping the large maximum values (outliers) of some features\n", "- Better hyperparameter tuning (for example, increasing the number of grid-search iterations)\n", "- Implementing a neural network" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }