{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", " \n", "## [mlcourse.ai](mlcourse.ai) – Open Machine Learning Course \n", "###
Author: Andrei Rem\n", " \n", "##
Individual data analysis project\n", "###
Daily Happiness & Employee Turnover" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Research plan**\n", "\n", "[Part 1. Dataset and task explanation](#part1)
\n", "\n", "[Part 2. Complete raw data analysis](#part2)
\n", "\n", "[Part 2.1. Churn](#part2.1)
\n", "[Part 2.1.1. Churn. Dataset and features description](#part2.1.1)
\n", "[Part 2.1.2. Churn. Exploratory data analysis, visual analysis of the features](#part2.1.2)
\n", "[Part 2.1.3. Churn. Patterns, insights, peculiarities of data](#part2.1.3)
\n", "\n", "[Part 2.2. Votes](#part2.2)
\n", "[Part 2.2.1. Votes. Dataset and features description](#part2.2.1)
\n", "[Part 2.2.2. Votes. Exploratory data analysis, visual analysis of the features](#part2.2.2)
\n", "[Part 2.2.3. Votes. Patterns, insights, peculiarities of data](#part2.2.3)
\n", "\n", "[Part 2.3. Anonimized clean comments](#part2.3)
\n", "[Part 2.3.1. Anonimized clean comments. Dataset and features description](#part2.3.1)
\n", "[Part 2.3.2. Anonimized clean comments. Exploratory data analysis, visual analysis of the features](#part2.3.2)
\n", "[Part 2.3.3. Anonimized clean comments. Patterns, insights, peculiarities of data](#part2.3.3)
\n", "\n", "[Part 2.4. Interactions with comments](#part2.4)
\n", "[Part 2.4.1. Interactions with comments. Dataset and features description](#part2.4.1)
\n", "[Part 2.4.2. Interactions with comments. Exploratory data analysis, visual analysis of the features](#part2.4.2)
\n", "[Part 2.4.3. Interactions with comments. Patterns, insights, peculiarities of data](#part2.4.3)
\n", "\n", "[Part 2.5. Employee. Collect all data in one dataset](#part2.5)
\n", "[Part 2.5.1. Employee. Dataset and features description](#part2.5.1)
\n", "[Part 2.5.2. Employee. Feature engineering and description](#part2.5.2)
\n", "[Part 2.5.3. Employee. Exploratory data analysis, visual analysis of the features](#part2.5.3)
\n", "[Part 2.5.4. Employee. Patterns, insights, peculiarities of data](#part2.5.4)
\n", "\n", "[Part 3. Visual analysis of the features](#part3)
\n", "[Part 4. Patterns, insights, peculiarities of data](#part4)
\n", "[Part 5. Data preprocessing](#part5)
\n", "[Part 6. Feature engineering and description](#part6)
\n", "\n", "[Part 7. Cross-validation, hyperparameter tuning](#part7)
\n", "[Part 8. Validation and learning curves](#part8)
\n", "[Part 9. Prediction for hold-out and test samples](#part9)
\n", "[Part 10. Model evaluation with metrics description](#part10)
\n", "[Part 11. Conclusions](#part11)
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 1. Dataset and task explanation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

This project uses a dataset from the Happyforce app. The data is publicly available through Kaggle Datasets.

\n", "\n", "

The dataset description on Kaggle states:

\n", "
The data consists of four tables: votes, comments, interactions and churn. A vote was obtained when an employee opened the app and answered the question: How happy are you at work today? To vote the employee indicates their feeling by touching one of four icons that appeared on the screen. After the employee indicates their happiness level, a second screen appears where they can input a text explanation (usually a complaint, suggestion or comment), this is the comments table. Out of 4,356 employees, 2,638 employees commented at least once. Finally, in a third screen the employee can see their peers’ comments and like or dislike them, this data is stored in the interactions table. 3,516 employees liked or disliked at least one of their peers’ comments. The churn table contains when an employee churned (quit or was fired).
\n", "\n", "In this individual project, I will develop a model for predicting employee dismissal. For this, I will use data on employee happiness in the company.
\n", "**My target variable is the negation of the stillExists field in churn.csv**.\n", "\n", "

The dataset consists of four files:

\n", "
- churn.csv\n", "
- votes.csv\n", "
- comments_clean_anonimized.csv\n", "
- commentInteractions.csv\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**I have four data files; for clarity, I will carry out all stages of the analysis on each file sequentially. Later, I will combine all the data into one dataset and analyze it.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 2. Complete raw data analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Part 2.1. Churn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.1.1. Churn. Dataset and features description" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **churn.csv** file contains information about the date of the last interaction of an employee with the app.\n", "\n", "- **employee**: An integer id that identifies an employee inside a company.\n", "- **companyAlias**: A hash id for a given company.\n", "- **numVotes**: Total number of happiness votes emitted by the user.\n", "- **lastParticipationDate**: Date of the last interaction of the user with the app.\n", "- **stillExists**: Boolean that is True if the user is still registered in the app." 
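] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
The schema above maps directly onto pandas dtypes. As a minimal sketch (the rows, company alias and dates below are invented for illustration), loading with explicit dtypes and date parsing works like this:

```python
import io
import pandas as pd

# Invented two-row sample mimicking the churn.csv schema.
raw = io.StringIO(
    'employee,companyAlias,numVotes,lastParticipationDate,stillExists\n'
    '1,a9f3,12,2017-01-05,True\n'
    '2,a9f3,3,2016-11-30,False\n'
)
# parse_dates converts the date column; dtype pins the integer columns.
df = pd.read_csv(raw, parse_dates=['lastParticipationDate'],
                 dtype={'employee': 'int32', 'numVotes': 'int32'})
print(df['stillExists'].tolist())  # [True, False]
```

The True/False tokens are inferred as a boolean column automatically, so only the integer columns need an explicit dtype."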
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import seaborn as sns\n", "from matplotlib import pyplot as plt\n", "import warnings\n", "warnings.simplefilter('ignore')\n", "# from pylab import rcParams\n", "# rcParams['figure.figsize'] = 15, 10" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn = pd.read_csv('data/indiv/churn.csv', parse_dates=[3], \n", " dtype={\n", " 'employee': np.int32, 'numVotes': np.int32, 'stillExists': bool\n", " }\n", " ).drop_duplicates()\n", "churn.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.describe().T" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.1.2. Churn. Exploratory data analysis, visual analysis of the features" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the table above, we can see that there are employees with negative identifiers. Let's explore this question." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['employee'] < 0].head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You may notice that these employees have 0 votes, so I cannot use them for my task. These employees also all appear to have stillExists set to False." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['employee'] < 0][['numVotes', 'stillExists']].nunique()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Yes, indeed: these employees never voted, and none of them still work. 
In my opinion, this is an anomaly, and I will delete all information about employees with a negative identifier." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn = churn[churn['employee'] > 0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We do not need employees without votes; let's check whether there are any and remove them if so." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['numVotes'] == 0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn = churn[churn['numVotes'] > 0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also know from the dataset description that employee identifiers are unique only within a company. We will now analyze the companies and, for brevity, replace the company hashes with numbers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "companies = churn['companyAlias'].unique()\n", "print('We have %s companies' % len(companies))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "companies_map = {company: i for i, company in enumerate(companies)}\n", "churn['companyAlias'] = churn['companyAlias'].map(companies_map).astype('int')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, we have recoded the companies. Now we can create unique employee identifiers. Let's do that." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn['employee'] = churn['companyAlias'].astype(str)+\"_\"+churn['employee'].astype(str)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can see how many unique employees are left in our data set."
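] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
As a quick aside, here is a minimal sketch of the recoding step above (the company hashes and employee ids below are invented for illustration):

```python
import pandas as pd

# Toy stand-in for the churn table; the hashes are invented.
df = pd.DataFrame({'companyAlias': ['a9f3', 'a9f3', '07bc'],
                   'employee': [1, 2, 1]})

# Replace each company hash with a small integer code.
companies_map = {c: i for i, c in enumerate(df['companyAlias'].unique())}
df['companyAlias'] = df['companyAlias'].map(companies_map).astype('int')

# Compose a globally unique employee id: '<company>_<employee>'.
df['employee'] = df['companyAlias'].astype(str) + '_' + df['employee'].astype(str)
print(df['employee'].tolist())  # ['0_1', '0_2', '1_1']
```

Because `unique()` preserves the order of first appearance, the integer codes are stable for a fixed file, which is all we need here."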
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('We have %s unique employees' % churn['employee'].nunique())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check whether all of our employees are truly unique." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['companyAlias'] == 10]['employee'].value_counts().head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we see that employees can appear more than once in the churn table." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[(churn['employee'] == '10_118')].sort_values('lastParticipationDate')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Something is wrong here; let's keep only the latest record for each employee." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Employees that appear more than once\n", "emp_counts = churn['employee'].value_counts()\n", "duplicated_ids = emp_counts[emp_counts > 1].index\n", "\n", "# For each duplicated employee, drop all records except the latest one\n", "for emp_id in duplicated_ids:\n", "    churn.drop(\n", "        index=churn[churn['employee'] == emp_id].sort_values('lastParticipationDate').index[:-1],\n", "        inplace=True\n", "    )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[(churn['employee'] == '10_118')].sort_values('lastParticipationDate')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now everything is in order." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore how many employees each company has." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ 
"churn.groupby('companyAlias').size().sort_values(ascending=False).head(),\\\n", "churn.groupby('companyAlias').size().sort_values(ascending=True).head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.groupby('companyAlias').size().sort_values(ascending=False).plot(kind='bar');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It can be seen that company sizes fall off roughly exponentially, from 585 employees down to 2. Let's find the mean number of employees per company." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Mean number of employees: %.2f' % churn.groupby('companyAlias').size().mean())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the distribution of the number of votes among employees." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn['numVotes'].describe()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn['numVotes'].hist(bins=20);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that 75 percent of employees cast fewer than 66 votes. The number of votes is also distributed roughly exponentially.\n", "Let's explore the people with the most votes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.sort_values('numVotes', ascending=False).head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that they all work in the same company and most of them have not left yet; this could be a good feature.\n", "Let's look at the other companies."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['companyAlias'] != 10].sort_values('numVotes', ascending=False).head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can observe quite a large spread: the maximum is 404 votes in company 0 versus 740 in company 10. **Employees with a large number of votes are present in other companies as well.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **lastParticipationDate** field is the date of the last vote; let's examine it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.groupby('lastParticipationDate').size().plot();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the same data, but with a weekly sliding window." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.groupby('lastParticipationDate').size().rolling(window=7).mean().plot();" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['lastParticipationDate'] > '2017-03-07 00:00:00'].groupby('lastParticipationDate').size().plot();" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('The first record date - ', churn['lastParticipationDate'].min())\n", "print('The last record date - ', churn['lastParticipationDate'].max())\n", "td = (churn['lastParticipationDate'].max() - churn['lastParticipationDate'].min())\n", "print('We have data for %.2f years' % (td.days/365))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the number of last-participation dates grows strongly toward the end. This may be due to the growing popularity of the application, or something else."
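] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
The weekly smoothing used a few cells above is a plain rolling mean. A minimal sketch on an invented daily-count series (window of 3 instead of 7 so the toy series fits):

```python
import pandas as pd

# Invented daily counts, standing in for the number of records per day.
daily = pd.Series([2, 4, 6, 8, 10, 12, 14])

# Each point becomes the mean of itself and the two preceding points;
# the first window-1 points have no full window and come out as NaN.
smoothed = daily.rolling(window=3).mean()
print(smoothed.tolist()[2:])  # [4.0, 6.0, 8.0, 10.0, 12.0]
```

`rolling(window=7)` on the real series behaves the same way, just with a week-wide window."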
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "OK, let's create our **target attribute from stillExists**." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn['target'] = (~churn['stillExists']).astype('int')  # ~ inverts the boolean mask\n", "churn.drop('stillExists', axis=1, inplace=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the class balance of the target." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn['target'].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Please note that our target variable encodes whether the person still works: 1 - the employee no longer works, 0 - the employee still works." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn['target'].hist();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the average number of votes for different classes."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.groupby('target')['numVotes'].mean().plot(kind='bar');" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['target'] == 0]['numVotes'].mean(), \\\n", "churn[churn['target'] == 1]['numVotes'].mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, the averages do not differ much between the classes, although the average is still higher for those who are still working." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['target'] == 1].hist('numVotes', bins=20);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['target'] == 0].hist('numVotes', bins=20);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the class ratio within companies." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.groupby(['companyAlias', 'target'])['employee'].count().plot(kind='bar', figsize=(15, 10));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In my opinion, this is a **very interesting** graph; we can immediately notice two facts.\n", "First, large companies have more departed employees. Later we will look at how happiness correlates with the number of employees in a company.\n", "Second, there are companies in which all employees still work." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.groupby('companyAlias')['numVotes'].mean().sort_values(ascending=False).plot(kind='bar', figsize=(15, 10));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### 2.1.3. Churn. Patterns, insights, peculiarities of data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this data set, we noticed several important observations:\n", "1. 
Employees with negative identifiers are not suitable for our task, as they have no votes.\n", "2. We have both large companies (585 employees) and small ones (9 employees), and the bar chart showed that turnover is higher in large companies.\n", "3. There are companies in which all employees still work.\n", "4. The target class (departed employees) has a lower average number of votes.\n", "5. The number of votes increases over time.\n", "6. There are no gaps in the data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Part 2.2. Votes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.2.1. Votes. Dataset and features description" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data about the votes of the app users is stored in **votes.csv**. This file contains four columns:\n", "\n", "- **employee**: An integer id that identifies an employee inside a company.\n", "- **companyAlias**: A hash id for a given company.\n", "- **voteDate**: String representing the date a given vote was emitted.\n", "- **vote**: The numeric value of the vote emitted, ranging from 1 to 4." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes = pd.read_csv(\n", " 'data/indiv/votes.csv', parse_dates=[2],\n", " dtype={\n", " 'employee': np.int32, 'vote': np.int32\n", " }).drop_duplicates()\n", "votes.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to bring the employee and company identifiers into the same form as before."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes = votes[votes['employee'] > 0]\n", "votes['companyAlias'] = votes['companyAlias'].map(companies_map).astype('int')\n", "votes['employee'] = votes['companyAlias'].astype(str)+\"_\"+votes['employee'].astype(str)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.describe().T" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how many votes come from each company." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(votes['companyAlias'].value_counts().head());" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.2.2. Votes. Exploratory data analysis, visual analysis of the features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the distribution of votes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby('vote').size().plot(kind='bar');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that people are mostly **almost happy** in their companies.
\n", "It is interesting to look at the distribution of votes over different time periods, so let's add a couple of time features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes['day_of_week'] = votes['voteDate'].dt.dayofweek\n", "votes['month'] = votes['voteDate'].dt.month\n", "votes['year'] = votes['voteDate'].dt.year" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What time span do the votes cover?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(votes['voteDate'].min(), votes['voteDate'].max())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The votes cover the period **2014.06 - 2017.03**." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby('voteDate').size().plot();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This graph supports the hypothesis that the popularity of the application is growing.
\n", "Let's look at the same graph, but with a sliding window." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby('voteDate').size().rolling(window=31).mean().plot();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's plot the votes by month." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby('month').size().plot();" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby(['month', 'vote']).size().plot(kind='bar');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, the monthly pattern closely mirrors the last year, because that is when the audience grew the most.
\n", "Let's plot the votes by day of the week.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby(['day_of_week', 'vote']).size().plot(kind='bar');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general, each day mirrors the overall distribution, but you can see that on Friday more people vote 4, which suggests they are happier before the weekend. Some people vote even on weekends, and their share of 1-votes is higher than usual." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the distribution of votes by year." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby('year').size().plot(kind='bar');" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby(['year', 'vote']).size().plot(kind='bar');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The application is growing very actively: in just the first 3 months of 2017 it has already gained about half of the 2016 vote count." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how employees vote on average." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby('employee')['vote'].mean().sort_values().plot();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's look at the average vote per company."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "votes.groupby('companyAlias')['vote'].mean().sort_values().plot(kind='bar', figsize=(12, 8));" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x=votes.groupby('companyAlias')['vote'].mean(), y=churn.groupby('companyAlias')['employee'].size());\n", "plt.ylabel('Employee count')\n", "plt.xlabel('Mean employee happiness')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this graph, we see an important **relationship between employee happiness and company size**; the higher the number of employees, the less happy the employees are." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's plot the average churn against the average happiness level per company." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x=votes.groupby('companyAlias')['vote'].mean(), y=churn.groupby('companyAlias')['target'].mean());\n", "plt.xlabel('Mean employee happiness')\n", "plt.ylabel('Mean churn')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Intuitively, unhappy people should quit more often, but there is no strict dependence here." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### 2.2.3. Votes. Patterns, insights, peculiarities of data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this data set, we noticed several important observations:\n", "1. The data covers the period 2014.06 - 2017.03.\n", "2. The application is growing very actively.\n", "3. People are mostly almost happy in their companies and choose 3.\n", "4. On Fridays, more people vote 4.\n", "5. The correlation between employee happiness and company size is negative.\n", "6. There are no gaps in the data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Part 2.3. 
Anonimized clean comments" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.3.1. Anonimized clean comments. Dataset and features description" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is possible to find data about the comments written in the app in the **comments_clean_anonimized.csv** file, and it has the following structure:\n", "\n", "- **employee**: An integer id that identifies an employee inside a company.\n", "- **companyAlias**: A hash id for a given company.\n", "- **commentDate**: String representing the date a given comment was written.\n", "- **commentId**: A unique id for a given comment.\n", "- **comment**: Anonimized comment. It has the same length as the original comment. \n", "- **likes**: Number of likes that the comment received.\n", "- **dislikes**: Number of dislikes that the comment received." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments = pd.read_csv(\n", " 'data/indiv/comments_clean_anonimized.csv', parse_dates=[-1],\n", " dtype={\n", " 'employee': np.int32, 'likes': float, 'dislikes': float\n", " }).drop_duplicates()\n", "comments.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.describe()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments['txt'].fillna('', inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to bring the employee and company identifiers into the same form as before."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments = comments[comments['employee'] > 0]\n", "comments['companyAlias'] = comments['companyAlias'].map(companies_map).astype('int')\n", "comments['employee'] = comments['companyAlias'].astype(str)+\"_\"+comments['employee'].astype(str)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the comment text is anonymized, but we can still use it: we can take its length and create a flag for the presence of text.
\n", "We will also add a couple of time-related features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_len(txt):\n", "    if isinstance(txt, str):\n", "        return len(txt)\n", "    else:\n", "        return 0\n", "\n", "def text_exist(txt):\n", "    if isinstance(txt, str) and len(txt) > 0:\n", "        return 1\n", "    else:\n", "        return 0\n", "\n", "comments['txt_len'] = comments['txt'].apply(get_len)\n", "comments['txt_exist'] = comments['txt'].apply(text_exist)\n", "\n", "comments['day_of_week'] = comments['commentDate'].dt.dayofweek\n", "comments['month'] = comments['commentDate'].dt.month\n", "comments['year'] = comments['commentDate'].dt.year\n", "\n", "comments.drop(columns=['txt'], inplace=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.dropna(inplace=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments['likes'] = comments['likes'].astype('int64')\n", "comments['dislikes'] = comments['dislikes'].astype('int64')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.3.2. Anonimized clean comments. Exploratory data analysis, visual analysis of the features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how comments are distributed over time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.groupby('commentDate').size().plot(figsize=(12, 8));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Usually there are only one or two comments per day. We do not see comment volume growing over time, but there is a surge that we can explore."
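] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
As a quick aside, the length and presence features above could also be computed with vectorized string methods; a sketch on invented comment strings:

```python
import pandas as pd
import numpy as np

# Invented anonymized comments, including an empty and a missing value.
txt = pd.Series(['aaaa bb', '', np.nan, 'ccc'])

txt_len = txt.fillna('').str.len()     # length, treating NaN as empty
txt_exist = (txt_len > 0).astype(int)  # 1 if any text is present
print(txt_len.tolist())    # [7, 0, 0, 3]
print(txt_exist.tolist())  # [1, 0, 0, 1]
```

This produces the same values as the `apply`-based helpers, just without explicit Python-level loops."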
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments[(comments['commentDate'] > '2016-09-01') & (comments['commentDate'] < '2016-10-01')].groupby('employee').size().sort_values(ascending=False).head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments[\n", "    (comments['commentDate'] > '2016-09-01') & \n", "    (comments['commentDate'] < '2016-10-01') &\n", "    (comments['employee'] == '22_278')\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This employee writes messages of varying lengths, and judging by the likes, their colleagues appreciate them.
\n", "Let's check whether this employee still works." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn[churn['employee'] == '22_278']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Yes, this employee is still working. I assume that the **presence and length of the text** are good features." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How many comments does a single employee write per day at most?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.groupby(['commentDate', 'employee']).size().max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at a couple of graphs with likes and dislikes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(x='month', y='likes', data=comments);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(x='month', y='dislikes', data=comments);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(x='day_of_week', y='likes', data=comments);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Interestingly, there are not so many votes on weekends, but comments written on weekends collect a lot of likes, as seen in the diagram above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments['likes'].hist(bins=20);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments['dislikes'].hist(bins=20);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check the relationship between text length and the number of likes."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x=comments['txt_len'], y=comments['likes'])\n", "# plt.label(label='Correlation bitween length of text and likes')\n", "plt.xlabel('Lenght of text')\n", "plt.ylabel('Number of likes')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We cannot interpret the graph because of the text 30k long. Let's fix it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(x=comments[comments['txt_len'] < 3000]['txt_len'], y=comments[comments['txt_len'] < 3000]['likes'])\n", "plt.xlabel('Lenght of text')\n", "plt.ylabel('Number of likes')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is no clear relationship. A large number of likes can get short comments, and a little long ones." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is one dirty record, I know what this record is, it is the last one in this dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments[-1:]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perhaps, when downloading data from the application, it appeared." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments.dropna(inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lastly, we look at the correlation of features" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.heatmap(comments.corr());" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dependence of likes and length of the text is highlighted." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### 2.3.3. 
Anonimized clean comments. Patterns, insights, pecularities of data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Several important observations about this dataset:\n", "\n", "1. There is one extremely long comment (about 30k characters).\n", "2. There is only a slight relationship between text length and likes.\n", "3. No one in the data wrote more than 2 comments per day.\n", "4. Comments written on the weekend collect more likes.\n", "5. No data gaps, apart from one dirty record that we dropped." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Part 2.4. Interactions with comments" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.4.1. Interactions with comments. Dataset and features description" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The file **commentInteractions.csv** contains information about which employee liked or disliked a given comment.\n", "\n", "- **employee**: An integer id that identifies an employee inside a company.\n", "- **companyAlias**: A hash id for a given company.\n", "- **commentId**: A unique id for a given comment that allows us to relate it with the comments in anon_comments.csv.\n", "- **liked**: A boolean that is True if the employee liked the comment.\n", "- **disliked**: A boolean that is True if the employee disliked the comment." 
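,
    "\n",
    "\n",
    "Since **liked** and **disliked** are boolean flags, they should be mutually exclusive within a record. A minimal sketch of that check on toy data (the hypothetical `toy` frame stands in for the real file):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical interactions: liked and disliked should never both be True\n",
    "toy = pd.DataFrame({\n",
    "    'commentId': ['c1', 'c2', 'c3'],\n",
    "    'liked': [True, False, False],\n",
    "    'disliked': [False, True, False],\n",
    "})\n",
    "\n",
    "# Count records where both flags are set (expected to be zero)\n",
    "both = len(toy[toy['liked'] & toy['disliked']])\n",
    "print(both)\n",
    "```"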
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters = pd.read_csv('data/indiv/commentInteractions.csv',\n", " dtype={\n", " 'employee': np.int32, \n", " 'liked': np.int32, \n", " 'disliked': np.int32\n", " }).drop_duplicates().dropna()\n", "comments_inters.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters['companyAlias'] = comments_inters['companyAlias'].map(companies_map).astype('int')\n", "comments_inters = comments_inters[comments_inters['employee'] >= 0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create unique employees ids

" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters['employee'] = comments_inters['companyAlias'].astype(str)+\"_\"+comments_inters['employee'].astype(str)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How many likes and dislikes by companies" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters['companyAlias'].value_counts().head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Distribution of **likes** by company" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By one record employee can liked or disliked, but not together" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "len(comments_inters[\n", " (comments_inters['liked'] == False) & (comments_inters['disliked'] == False)\n", "]) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.4.2. Interactions with comments. 
Exploratory data analysis, visual analysis of the features" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters[comments_inters['liked'] == True].groupby('companyAlias').size().sort_values().plot(kind='bar');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Distribution of **dislikes** by company" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters[comments_inters['disliked'] == True].groupby('companyAlias').size().sort_values().plot(kind='bar');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As expected, big companies have the most likes and dislikes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the number of employees in each company and at the number of likes and dislikes per employee." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "churn.groupby('companyAlias')['employee'].size().sort_values().plot(kind='bar');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Key: interactions per employee; value: number of employees." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dict(zip((comments_inters.groupby('companyAlias').size() // churn.groupby('companyAlias').size()), churn.groupby('companyAlias').size()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We may notice that very small companies have no records." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(\n", " x=comments_inters.groupby('companyAlias').size() // churn.groupby('companyAlias').size(), \n", " y=churn.groupby('companyAlias').size()\n", ")\n", "plt.title('Interactions per employee vs. number of employees in company')\n", "plt.xlabel('likes and dislikes per employee')\n", "plt.ylabel('number of employees in company')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": 
{}, "source": [ "Let's see how much an employee can do interactions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters.groupby('employee').size().sort_values(ascending=False).head(50).plot(kind='bar');" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comments_inters.groupby('employee').size().mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is an assumption that the more active the employee, the higher the likelihood that he works. Let's see if these employees work yet." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "so_active_employees = comments_inters.groupby('employee').size().sort_values(ascending=False).keys()[:20]\n", "churn[churn['employee'].isin(so_active_employees)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Indeed, almost all employees still work." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### 2.4.3. Interactions with comments. Patterns, insights, pecularities of data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this data set, we noticed several important observations:\n", "\n", "1. There are companies in which employees made 200 interactions per employee.\n", "2. On average, the staff did 109 interactions\n", "3. There are employees who have done 3,500 interactions.\n", "4. There is a correlation between employee activity and that he works.\n", "5. No data gaps." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Part 2.5. Employee. Collect all data in one dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.5.1. Employee. Dataset and features description" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let me remind you that we plan to predict the churnout of employees.
\n", "To do this, we need to aggregate the data for each employee and add them to a single set of data.
\n", "The following signs seemed interesting to me:\n", "1. Voices (min, max, mean, std)\n", "2. Length and presence of comments (max, mean, std, sum)\n", "3. Reaction to comments of other employees(likes, dislikes) (min, max, mean, std)\n", "4. Reaction to employee comments (likes, dislikes) (min, max, mean, std)\n", "5. The average level of happiness in the company\n", "6. The number of people in the company\n", "7. The number of days between the first and last vote" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**'employee'** - unique identifier of the employee
\n", "**'companyAlias'** - unique company identifier
\n", "**'numVotes'** - the number of votes passed
\n", "**'target'** - is the target variable. Still working - 0, Not working - 1
\n", "\n", "**('vote', 'mean')** - the mean vote of the employee
\n", "**('vote', 'min')** - minimum employee’s vote
\n", "**('vote', 'max')** - maximum employee vote
\n", "\n", "**('dislikes', 'mean')** - the mean number of dislikes received for comments from colleagues
\n", "**('dislikes', 'sum')** - the total number of dislikes received for comments from colleagues
\n", "**('dislikes', 'min')** - the minimum number of dislikes received for comments from colleagues
\n", "**('dislikes', 'max')** - the maximum number of dislikes received for comments from colleagues
\n", "\n", "**('likes', 'mean')** - the mean number of likes received for comments from colleagues
\n", "**('likes', 'sum')** - the total number of likes received on their comments from colleagues
\n", "**('likes', 'min')** - the minimum number of likes received for comments from colleagues
\n", "**('likes', 'max')** - the maximum number of likes received for comments from colleagues
\n", "\n", "**('txt_len', 'mean')** - mean length of the comment text
\n", "**('txt_len', 'sum')** - total length of comment texts
\n", "**('txt_len', 'min')** - the minimum length of the comment text
\n", "**('txt_len', 'max')** - maximum comment text length
\n", "\n", "**('txt_exist', 'mean')** - percentage of votes with a comment
\n", "\n", "**('disliked', 'mean')** - percentage of dislikes from all interactions with colleagues comments
\n", "**('disliked', 'sum')** - the total number of dislikes that the employee put
\n", "\n", "**('liked', 'mean')** - percentage of likes from all interactions with colleagues comments
\n", "**('liked', 'sum')** - the total number of likes that the employee put
\n", "\n", "**'mean_vote_by_company'** - mean happiness of employees in a company
\n", "**'num_emploees_by_company'** - the number of employees in the company
\n", "**'first_last_vote_days'** - the number of days between the first and last vote
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.5.2. Employee. Feature engineering and description " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "employee = pd.DataFrame()\n", "employee = churn[['employee', 'companyAlias', 'numVotes', 'target']]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "agg_votes = votes[['employee', 'vote']].groupby('employee', as_index=False).agg({'vote': [np.mean, min, max]})\n", "\n", "agg_comments = comments[['employee', 'txt_len', 'likes', 'dislikes', 'txt_exist']]\\\n", " .groupby('employee', as_index=False).agg(\n", " {\n", " 'txt_len': [np.mean, np.sum, min, max],\n", " 'likes': [np.mean, np.sum, min, max],\n", " 'dislikes': [np.mean, np.sum, min, max],\n", " 'txt_exist': [np.mean]\n", " }\n", " )\n", "\n", "agg_employee_likes_dislokes = comments_inters[['employee', 'liked', 'disliked']].groupby('employee', as_index=False).agg(\n", " {\n", " 'liked': [np.mean, np.sum],\n", " 'disliked': [np.mean, np.sum],\n", " }\n", ")\n", "\n", "agg_work_long = votes[['employee', 'voteDate']].groupby('employee', as_index=False).agg({'voteDate': [min, max]})\n", "\n", "employee = employee.merge(agg_votes, on=['employee','employee'],how='left')\n", "employee = employee.merge(agg_comments, on=['employee','employee'],how='left')\n", "employee = employee.merge(agg_employee_likes_dislokes, on=['employee','employee'],how='left')\n", "employee = employee.merge(agg_work_long, on=['employee','employee'],how='left')\n", "\n", "employee['first_last_vote_days'] = (employee[('voteDate', 'max')] - employee[('voteDate', 'min')]).dt.days\n", "employee.drop([('voteDate', 'max'), ('voteDate', 'min')], axis=1, inplace=True)\n", "\n", "employee.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mean_vote_by_company = votes.groupby('companyAlias')['vote'].mean().to_dict()\n", 
"employee['mean_vote_by_company'] = employee['companyAlias'].map(mean_vote_by_company)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "emploies_by_company = churn.groupby('companyAlias')['employee'].count().to_dict()\n", "employee['num_emploees_by_company'] = employee['companyAlias'].map(emploies_by_company)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "employee.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "employee.fillna(0, inplace=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "employee.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "employee.describe().T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You may notice that the company identifier is a number, so it should not be, we make it a string" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "employee['companyAlias'] = employee['companyAlias'].astype('str')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also see target class 16%" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.5.3. Employee. 
Exploratory data analysis, visual analysis of the features " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "employee.hist(figsize=(12, 12));" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pylab import rcParams\n", "rcParams['figure.figsize'] = 10, 8\n", "sns.heatmap(employee.corr())\n", "plt.title('Features Correlation Heatmap',fontsize=24)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This heat map reveals a lot of interesting relationships.\n", "The target variable correlates well with features such as:\n", "(liked, mean), (txt_exist, mean), (txt_len, mean), (likes, mean), num_emploees_by_company, (vote, mean)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are the strongest correlations with the target feature." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "employee.corr()['target'].abs().sort_values(ascending=False)[:10]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "corr_feat = employee.corr()['target'].abs().sort_values(ascending=False)[:10].keys()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(y='mean_vote_by_company', x='target', data=employee);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how representatives of the two classes differ in giving dislikes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(y=('disliked', 'mean'), x='target', data=employee);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "People who are about to leave give fewer dislikes to the comments of other employees." 
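,
    "\n",
    "\n",
    "The boxplot can be complemented by simple group statistics. A sketch comparing a class-wise mean on toy data (made-up values; the flat `disliked_mean` column stands in for the `('disliked', 'mean')` column):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Toy stand-in for the employee frame (hypothetical values)\n",
    "toy = pd.DataFrame({\n",
    "    'target': [0, 0, 0, 1, 1],\n",
    "    'disliked_mean': [0.20, 0.15, 0.25, 0.05, 0.10],\n",
    "})\n",
    "\n",
    "# Mean share of dislikes given, per target class\n",
    "by_class = toy.groupby('target')['disliked_mean'].mean()\n",
    "print(by_class)\n",
    "```"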
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(y=('liked', 'mean'), x='target', data=employee);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And put less likes to comments of other employees." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(y=('txt_len', 'mean'), x='target', data=employee[employee[('txt_len', 'mean')] < 250]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And less write comments." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What about getting likes" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(y=('likes', 'mean'), x='target', data=employee[employee[('likes', 'mean')] < 20]);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.boxplot(y=('dislikes', 'mean'), x='target', data=employee[employee[('dislikes', 'mean')] < 2]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Part 2.5.4. Employee. Patterns, insights, pecularities of data " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can make the following assumptions:\n", "1. People actively participating in this application, presumably, also actively communicate with colleagues and they do not quit.\n", "2. If a person writes comments, then he most likely wants to work in a company.\n", "3. The more employees, the higher the turnover." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 3. Visual analysis of the features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 4. Patterns, insights, pecularities of data " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 5. Data preprocessing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 6. 
Feature engineering and description" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Parts 3, 4, 5, and 6 were done for each dataset separately, for ease of reading. Sorry if that complicates the assessment of my work, but in my opinion it is the logical structure." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 7. Cross-validation, hyperparameter tuning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we have a strong imbalance in the target feature, we will use train_test_split with stratify.
\n", "And for fit model we will use StratifiedKFold." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import RandomForestClassifier\n", "from sklearn.model_selection import GridSearchCV\n", "from sklearn.metrics import confusion_matrix, roc_auc_score\n", "from sklearn.model_selection import StratifiedKFold, train_test_split\n", "\n", "y = employee['target']\n", "X = employee.drop(['target', 'companyAlias'], axis=1).set_index('employee')\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check that the distribution of classes is the same." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y.value_counts()[0]/y.value_counts()[1], \\\n", "y_test.value_counts()[0]/y_test.value_counts()[1], \\\n", "y_train.value_counts()[0]/y_train.value_counts()[1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In our task, we predict the dismissal of an employee. In our problem there are 2 types of errors.\n", "First, we predicted that the person would quit, and he did not quit, that is, instead of 0, predict 1. This is an error F1.\n", "Second, we say that the person will remain, and he leaves. **This is a f2 error.** For our task, **it is more important**, because We try to minimize the costs associated with hiring new employees." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rf = RandomForestClassifier(random_state=17)\n", "rf.fit(X_train, y_train)\n", "# false negative - f2\n", "# [tn, fp]\n", "# [fn, tp]\n", "confusion_matrix(y_train, rf.predict(X_train))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "confusion_matrix(y_test, rf.predict(X_test))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import operator\n", "sorted(zip(X_train.columns, rf.feature_importances_), key=operator.itemgetter(1), reverse=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "skf = StratifiedKFold(n_splits=3)\n", "\n", "def fit_grid_rf(params, cv=skf, X_train=X_train, y_train=y_train):\n", " rf_for_grid = RandomForestClassifier(random_state=17, n_jobs=-1, bootstrap=True)\n", " rf_grid = GridSearchCV(estimator=rf_for_grid, param_grid=params, cv=skf)\n", " rf_grid.fit(X_train, y_train)\n", " return rf_grid\n", "\n", "def get_all_stats(grid, X_train=X_train, X_test=X_test, y_train=y_train, y_test=y_test):\n", " print('Best grid params: ', grid.best_params_)\n", " print('Best grid score: ', grid.best_score_)\n", " print('Confusion matrix for train: ', confusion_matrix(y_train, grid.best_estimator_.predict(X_train)))\n", " print('Confusion matrix for test: ', confusion_matrix(y_test, grid.best_estimator_.predict(X_test)))\n", " print('ROC AUC for train: ', roc_auc_score(y_train, grid.best_estimator_.predict_proba(X_train)[:, 1]))\n", " print('ROC AUC for test: ', roc_auc_score(y_test, grid.best_estimator_.predict_proba(X_test)[:, 1]))\n", " print('Best features', sorted(zip(X_train.columns, grid.best_estimator_.feature_importances_), key=operator.itemgetter(1), reverse=True)[:8])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try to train the first search grid. 
Here I give extra weight to the examples of the positive class." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### First grid search " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "rf_params = {\n", " 'n_estimators': [10, 50, 100],\n", " 'max_depth': [5, 10, 15],\n", " 'class_weight': [{0: 1, 1: 5}, {0: 1, 1: 9}],\n", " 'max_features': [0.2, 0.5, 0.7]\n", "}\n", "get_all_stats(fit_grid_rf(rf_params))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Second grid search" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "skf = StratifiedKFold(n_splits=3)\n", "\n", "rf_params = {\n", " 'n_estimators': [100, 120],\n", " 'max_depth': [15, 16],\n", " 'class_weight': [{0: 1, 1: 5}],\n", " 'max_features': [0.6, 0.7, 0.9],\n", " 'criterion': ['gini', 'entropy']\n", "}\n", "get_all_stats(fit_grid_rf(rf_params))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Third grid search" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "skf = StratifiedKFold(n_splits=3)\n", "\n", "rf_params = {\n", " 'n_estimators': [120],\n", " 'max_depth': [16],\n", " 'max_features': [0.7],\n", " 'class_weight': [{0: 1, 1: 5}]\n", "}\n", "final_grid = fit_grid_rf(rf_params)\n", "get_all_stats(final_grid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try to keep only the most important features." 
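,
    "\n",
    "\n",
    "Selecting the top features by importance can be sketched as follows (toy importances with hypothetical names):\n",
    "\n",
    "```python\n",
    "import operator\n",
    "\n",
    "# Hypothetical feature importances from a fitted forest\n",
    "importances = {'numVotes': 0.05, 'vote_mean': 0.30, 'txt_len_sum': 0.15, 'liked_mean': 0.50}\n",
    "\n",
    "# Sort by importance, descending, and keep the top 2\n",
    "ranked = sorted(importances.items(), key=operator.itemgetter(1), reverse=True)\n",
    "top = [name for name, _ in ranked[:2]]\n",
    "print(top)\n",
    "```"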
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ " X_train.columns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = employee['target']\n", "\n", "best_features = sorted(\n", " zip(\n", " X_train.columns, final_grid.best_estimator_.feature_importances_), key=operator.itemgetter(1), reverse=True\n", ")\n", "feats = []\n", "for feat in best_features:\n", " feats.append(feat[0])\n", "feats[:9]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result has not changed, 71 f2 errors are quite a lot." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most important feature for this model are:
\n", "**num_emploees_by_company'**
\n", "**('liked', 'mean')**
\n", "**('liked', 'sum')**
\n", "**first_last_vote_days**
\n", "**mean_vote_by_company**
\n", "**('txt_len', 'min')**
\n", "**numVotes**
\n", "**('vote', 'mean')**
\n", "**('txt_len', 'sum')**
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 9. Prediction for hold-out and test samples " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "get_all_stats(final_grid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "ROC AUC for holdout a.k.a test = 0.911" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 10. Model evaluation with metrics description" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I chose to use rock auk for simplicity, you can later develop it and use another metric" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part 11. Conclusions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This task is very important to reduce the costs of finding and adapting employees. In particular, this task is important for large companies, because the turnover in them is more.\n", "\n", "As a result of researching this data set, it was revealed that the dismissal of an employee does not greatly depend on whether he is happy, more dependent on the size of the company, how active the social life of the employee is, whether he receives likes from employees and how much he interacts with the application.\n", "\n", "In the future, you can try other models, such as logistic regression, perhaps it will also cope well with this task. You can also improve the predictive model by increasing the size of the data set." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 2 }