{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# *Exploring Hacker News Posts*\n", "\n", "In this project, we'll compare two different types of posts from [Hacker News](news.ycombinator.com), a popular site where technology related stories (or 'posts') are voted and commented upon. The two types of posts we'll explore begin with either Ask HN or Show HN.\n", "\n", "Users submit Ask HN posts to ask the Hacker News community a specific question, such as \"What is the best online course you've ever taken?\" Likewise, users submit Show HN posts to show the Hacker News community a project, product, or just generally something interesting.\n", "\n", "We'll specifically compare these two types of posts to determine the following:\n", "\n", "*Do Ask HN or Show HN receive more comments on average?*
\n", "*Do posts created at a certain time receive more comments on average?*

\n", "It should be noted that the [data set](https://www.kaggle.com/hacker-news/hacker-news-posts) we're working with was reduced from almost 300,000 rows to approximately 20,000 rows by removing all submissions that did not receive any comments, and then randomly sampling from the remaining submissions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Introduction and Getting Data Ready\n", "\n", "Here, we will make use of csv module and convert hackernews.csv file into a lists of lists.\n", "\n", "- _**headers**_ list will contain the column names\n", "- _**hn**_ list will cover the whole remaining data" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[['12224879',\n", " 'Interactive Dynamic Video',\n", " 'http://www.interactivedynamicvideo.com/',\n", " '386',\n", " '52',\n", " 'ne0phyte',\n", " '8/4/2016 11:52'],\n", " ['10975351',\n", " 'How to Use Open Source and Shut the Fuck Up at the Same Time',\n", " 'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/',\n", " '39',\n", " '10',\n", " 'josep2',\n", " '1/26/2016 19:30'],\n", " ['11964716',\n", " \"Florida DJs May Face Felony for April Fools' Water Joke\",\n", " 'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/',\n", " '2',\n", " '1',\n", " 'vezycash',\n", " '6/23/2016 22:20'],\n", " ['11919867',\n", " 'Technology ventures: From Idea to Enterprise',\n", " 'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429',\n", " '3',\n", " '1',\n", " 'hswarna',\n", " '6/17/2016 0:01'],\n", " ['10301696',\n", " 'Note by Note: The Making of Steinway L1037 (2007)',\n", " 'http://www.nytimes.com/2007/11/07/movies/07stein.html?_r=0',\n", " '8',\n", " '2',\n", " 'walterbell',\n", " '9/30/2015 4:12']]" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "f = open('hacker_news.csv')\n", "\n", "from csv import reader\n", "read_file = reader(f)\n", "data = list(read_file)\n", "\n", "headers = data[0]\n", "hn = data[1:]\n", "\n", "# Printing Data\n", "hn[:5]" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at']" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Printing Headers\n", "headers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Extracting Ask HN and Show HN Posts\n", "\n", "Using regex, we will search posts that beign with either Ask HN or Show HN posts.
And divide data into three lists:\n", "\n", "- show_posts\n", "- ask_posts\n", "- other_posts" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1744\n", "1162\n", "17194\n" ] } ], "source": [ "import re\n", "patterna = r\"^Ask HN\"\n", "patterns = r\"^Show HN\"\n", "\n", "ask_posts = []\n", "show_posts = []\n", "other_posts = []\n", "\n", "for row in hn:\n", " title = row[1]\n", " match1 = re.search(patterna, title, re.I)\n", " match2 = re.search(patterns, title, re.I)\n", " if match1:\n", " ask_posts.append(row)\n", " elif match2:\n", " show_posts.append(row)\n", " else:\n", " other_posts.append(row)\n", " \n", "print(len(ask_posts))\n", "print(len(show_posts))\n", "print(len(other_posts))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***Ask HN Posts***" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[['12296411',\n", " 'Ask HN: How to improve my personal website?',\n", " '',\n", " '2',\n", " '6',\n", " 'ahmedbaracat',\n", " '8/16/2016 9:55'],\n", " ['10610020',\n", " 'Ask HN: Am I the only one outraged by Twitter shutting down share counts?',\n", " '',\n", " '28',\n", " '29',\n", " 'tkfx',\n", " '11/22/2015 13:43'],\n", " ['11610310',\n", " 'Ask HN: Aby recent changes to CSS that broke mobile?',\n", " '',\n", " '1',\n", " '1',\n", " 'polskibus',\n", " '5/2/2016 10:14']]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ask_posts[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***Show HN Posts***" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[['10627194',\n", " 'Show HN: Wio Link ESP8266 Based Web of Things Hardware Development Platform',\n", " 'https://iot.seeed.cc',\n", " '26',\n", " '22',\n", " 'kfihihc',\n", " '11/25/2015 14:03'],\n", " ['10646440',\n", " 'Show HN: Something pointless I made',\n", " 'http://dn.ht/picklecat/',\n", " '747',\n", " '102',\n", " 'dhotson',\n", " '11/29/2015 22:46'],\n", " ['11590768',\n", " 'Show HN: Shanhu.io, a programming playground powered by e8vm',\n", " 'https://shanhu.io',\n", " '1',\n", " '1',\n", " 'h8liu',\n", " '4/28/2016 18:05']]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "show_posts[:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***Other Posts***" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[['12224879',\n", " 'Interactive Dynamic Video',\n", " 'http://www.interactivedynamicvideo.com/',\n", " '386',\n", " '52',\n", " 'ne0phyte',\n", " '8/4/2016 11:52'],\n", " ['10975351',\n", " 'How to Use Open Source and Shut the Fuck Up at the Same Time',\n", " 'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/',\n", " '39',\n", " '10',\n", " 'josep2',\n", " '1/26/2016 19:30'],\n", " ['11964716',\n", " \"Florida DJs May Face Felony for April Fools' Water Joke\",\n", " 'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/',\n", " '2',\n", " '1',\n", " 'vezycash',\n", " '6/23/2016 22:20']]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "other_posts[:3]" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Calculating the Average Number of Comments for Ask HN and Show HN Posts\n", "Now that we 
have separated the Ask HN and Show HN posts into different lists, we'll calculate the average number of comments each type of post receives." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Avg. Ask Comments: 14.038417431192661\n", "Avg. Show Comments: 10.31669535283993\n" ] } ], "source": [ "ask_comments = [int(i[4]) for i in ask_posts]\n", "show_comments = [int(i[4]) for i in show_posts]\n", "\n", "avg_ask_comments = sum(ask_comments)/len(ask_comments)\n", "avg_show_comments = sum(show_comments)/len(show_comments)\n", "\n", "print('Avg. Ask Comments: ', avg_ask_comments)\n", "print('Avg. Show Comments: ', avg_show_comments)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On average, Ask HN posts received about 36% more comments than Show HN posts.
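\n", "\n", "As a quick sanity check of that figure, the relative difference can be computed from the two averages above (a minimal sketch reusing the avg_ask_comments and avg_show_comments variables defined in the previous cell):\n", "\n", "```python\n", "# Relative difference between the two averages computed above\n", "increase = (avg_ask_comments / avg_show_comments - 1) * 100\n", "print('Ask HN posts receive about {:.0f}% more comments on average'.format(increase))  # ~36%\n", "```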
\n", "We will continue our further analysis, using the Ask HN posts only.\n", "\n", "***\n", "\n", "### Finding the Amount of Ask HN Posts and Comments by Hour Created\n", "Next, we'll determine if we can maximize the amount of comments an Ask HN post receives by creating it at a certain time. \n", "\n", "We'll do this by doing the following:\n", "\n", "- Finding the Amount of Ask HN posts created during each hour of day, along with the number of comments those posts received. \n", "- Then, we'll calculate the average amount of comments received by Ask HN posts created every hour." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "import datetime as dt\n", "\n", "created_at = []\n", "for row in ask_posts:\n", " created_at.append(row[6])\n", "\n", "result_list = list(zip(ask_comments, created_at))\n", "\n", "counts_by_hour = {}\n", "comments_by_hour = {}\n", "\n", "\n", "for row in result_list:\n", " date = dt.datetime.strptime(row[1], '%m/%d/%Y %H:%M')\n", " hour = date.strftime(\"%H\")\n", " \n", " if hour in counts_by_hour:\n", " counts_by_hour[hour] += 1\n", " comments_by_hour[hour] += row[0]\n", " else:\n", " counts_by_hour[hour] = 1\n", " comments_by_hour[hour] = row[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***Posts created per hour***" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'09': 45,\n", " '13': 85,\n", " '10': 59,\n", " '14': 107,\n", " '16': 108,\n", " '23': 68,\n", " '12': 73,\n", " '17': 100,\n", " '15': 116,\n", " '21': 109,\n", " '20': 80,\n", " '02': 58,\n", " '18': 109,\n", " '03': 54,\n", " '05': 46,\n", " '19': 110,\n", " '01': 60,\n", " '22': 71,\n", " '08': 48,\n", " '04': 47,\n", " '00': 55,\n", " '06': 44,\n", " '07': 34,\n", " '11': 58}" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "counts_by_hour" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***Comments Received on Posts created per hour.***" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'09': 251,\n", " '13': 1253,\n", " '10': 793,\n", " '14': 1416,\n", " '16': 1814,\n", " '23': 543,\n", " '12': 687,\n", " '17': 1146,\n", " '15': 4477,\n", " '21': 1745,\n", " '20': 1722,\n", " '02': 1381,\n", " '18': 1439,\n", " '03': 421,\n", " '05': 464,\n", " '19': 1188,\n", " '01': 683,\n", " '22': 479,\n", " '08': 492,\n", " '04': 337,\n", " '00': 447,\n", " '06': 397,\n", " '07': 267,\n", " '11': 641}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "comments_by_hour" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Calculating the Average Number of Comments for Ask HN Posts by Hour" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "avg_by_hour = [[i, comments_by_hour[i]/counts_by_hour[i]] for i in counts_by_hour] " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***Average Comments for Ask HN Posts by Hour***" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[['09', 5.5777777777777775],\n", " ['13', 14.741176470588234],\n", " ['10', 13.440677966101696],\n", " ['14', 13.233644859813085],\n", " ['16', 16.796296296296298],\n", " ['23', 7.985294117647059],\n", " ['12', 9.41095890410959],\n", " ['17', 11.46],\n", " ['15', 38.5948275862069],\n", " ['21', 
16.009174311926607],\n", " ['20', 21.525],\n", " ['02', 23.810344827586206],\n", " ['18', 13.20183486238532],\n", " ['03', 7.796296296296297],\n", " ['05', 10.08695652173913],\n", " ['19', 10.8],\n", " ['01', 11.383333333333333],\n", " ['22', 6.746478873239437],\n", " ['08', 10.25],\n", " ['04', 7.170212765957447],\n", " ['00', 8.127272727272727],\n", " ['06', 9.022727272727273],\n", " ['07', 7.852941176470588],\n", " ['11', 11.051724137931034]]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "avg_by_hour" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sorting and Printing Values from a List of Lists\n", "\n", "- Printing the top 5 hours during which a newly created post receives the most comments on average." ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Top 5 Hours for 'Ask HN' Comments\n", "15:00: 38.59 average comments per post\n", "02:00: 23.81 average comments per post\n", "20:00: 21.52 average comments per post\n", "16:00: 16.80 average comments per post\n", "21:00: 16.01 average comments per post\n" ] } ], "source": [ "avg_by_hour = sorted(avg_by_hour, key=lambda x: x[1], reverse=True)\n", "print(\"Top 5 Hours for 'Ask HN' Comments\")\n", "\n", "for row in avg_by_hour[:5]:\n", " time = dt.datetime.strptime(row[0], \"%H\")\n", " print(\"{}: {:.2f} average comments per post\".format(time.strftime(\"%H:%M\"), row[1]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The hour that receives the most comments per post on average is 15:00, with an average of 38.59 comments per post. There's an increase of about 60% in the average number of comments between the hour with the highest average and the hour with the second highest.
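\n", "\n", "That gap can be confirmed directly from the sorted avg_by_hour list produced by the cell above (a small sketch reusing that list):\n", "\n", "```python\n", "# Compare the two hours with the highest average number of comments\n", "top, second = avg_by_hour[0], avg_by_hour[1]\n", "gap = (top[1] / second[1] - 1) * 100\n", "print('{}:00 averages about {:.0f}% more comments than {}:00'.format(top[0], gap, second[0]))  # ~62%\n", "```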
\n", "\n", "According to the data set [documentation](https://www.kaggle.com/hacker-news/hacker-news-posts/home), the timezone used is Eastern Time in the US.
\n", "\n", "So, we (in the UAE) should post at around 00:00 GST (midnight)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Conclusion\n", "In this project, we analyzed Ask HN and Show HN posts to determine which type of post, and which posting time, receives the most comments on average. Based on our analysis, to maximize the number of comments a post receives, we'd recommend the post be categorized as Ask HN and created between 00:00 and 01:00 GST (3:00 pm - 4:00 pm EST).\n", "\n", "However, it should be noted that the data set we analyzed excluded posts without any comments. Given that, it's more accurate to say that of the posts that received comments, Ask HN posts received more comments on average, and Ask HN posts created between 00:00 and 01:00 GST received the most comments on average." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" } }, "nbformat": 4, "nbformat_minor": 2 }