{ "metadata": { "name": "" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "#Mining the Social Web, 2nd Edition\n", "\n", "##Chapter 5: . Mining Web Pages: Using Natural Language Processing to Understand Human Language, Summarize Blog Posts, and More\n", "\n", "This IPython Notebook provides an interactive way to follow along with and explore the numbered examples from [_Mining the Social Web (2nd Edition)_](http://bit.ly/135dHfs). The intent behind this notebook is to reinforce the concepts from the sample code in a fun, convenient, and effective way. This notebook assumes that you are reading along with the book and have the context of the discussion as you work through these exercises.\n", "\n", "In the somewhat unlikely event that you've somehow stumbled across this notebook outside of its context on GitHub, [you can find the full source code repository here](http://bit.ly/16kGNyb).\n", "\n", "## Copyright and Licensing\n", "\n", "You are free to use or adapt this notebook for any purpose you'd like. However, please respect the [Simplified BSD License](https://github.com/ptwobrussell/Mining-the-Social-Web-2nd-Edition/blob/master/LICENSE.txt) that governs its use." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: If you find yourself wanting to copy output files from this notebook back to your host environment, see the bottom of this notebook for one possible way to do it." ] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 1. Using boilerpipe to extract the text from a web page" ] }, { "cell_type": "code", "collapsed": false, "input": [ "from boilerpipe.extract import Extractor\n", "\n", "URL='http://radar.oreilly.com/2010/07/louvre-industrial-age-henry-ford.html'\n", "\n", "extractor = Extractor(extractor='ArticleExtractor', url=URL)\n", "\n", "print extractor.getText()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 2. Using feedparser to extract the text (and other fields) from an RSS or Atom feed" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import feedparser\n", "\n", "FEED_URL='http://feeds.feedburner.com/oreilly/radar/atom'\n", "\n", "fp = feedparser.parse(FEED_URL)\n", "\n", "for e in fp.entries:\n", " print e.title\n", " print e.links[0].href\n", " print e.content[0].value" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 3. Pseudocode for a breadth-first search" ] }, { "cell_type": "code", "collapsed": false, "input": [ "Create an empty graph\n", "Create an empty queue to keep track of nodes that need to be processed\n", "\n", "Add the starting point to the graph as the root node\n", "Add the root node to a queue for processing\n", "\n", "Repeat until some maximum depth is reached or the queue is empty:\n", " Remove a node from the queue \n", " For each of the node's neighbors: \n", " If the neighbor hasn't already been processed: \n", " Add it to the queue \n", " Add it to the graph \n", " Create an edge in the graph that connects the node and its neighbor" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Naive sentence detection based on periods**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "txt = \"Mr. Green killed Colonel Mustard in the study with the candlestick. Mr. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Naive sentence detection based on periods**" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "txt = \"Mr. Green killed Colonel Mustard in the study with the candlestick. Mr. Green is not a very nice fellow.\"\n", "print txt.split(\".\")" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**More sophisticated sentence detection**" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import nltk\n", "\n", "# Downloading nltk packages used in this example\n", "nltk.download('punkt')\n", "\n", "sentences = nltk.tokenize.sent_tokenize(txt)\n", "print sentences" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Tokenization of sentences**" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "tokens = [nltk.tokenize.word_tokenize(s) for s in sentences]\n", "print tokens" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Part of speech tagging for tokens**" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "# Downloading nltk packages used in this example\n", "nltk.download('maxent_treebank_pos_tagger')\n", "\n", "pos_tagged_tokens = [nltk.pos_tag(t) for t in tokens]\n", "print pos_tagged_tokens" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Named entity extraction/chunking for tokens**" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "# Downloading nltk packages used in this example\n", "nltk.download('maxent_ne_chunker')\n", "nltk.download('words')\n", "\n", "ne_chunks = nltk.batch_ne_chunk(pos_tagged_tokens)\n", "print ne_chunks\n", "print ne_chunks[0].pprint() # You can prettyprint each chunk in the tree" ], "language": "python", "metadata": {}, "outputs": [] },
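{ "cell_type": "markdown", "metadata": {}, "source": [ "**Creating the output directory (not from the book)**\n", "\n", "Example 4 below writes `feed.json` into `resources/ch05-webpages`, and the later examples read it back from there. If you are running this notebook from a location where that folder does not already exist, the next cell is one possible way to create it first; it is an illustrative convenience rather than part of the book's example code." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import os\n", "\n", "# Create the directory that the following examples write to and read from,\n", "# if it is not already there.\n", "out_dir = os.path.join('resources', 'ch05-webpages')\n", "if not os.path.isdir(out_dir):\n", "    os.makedirs(out_dir)\n", "\n", "print 'Output directory:', os.path.abspath(out_dir)" ], "language": "python", "metadata": {}, "outputs": [] },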
{ "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 4. Harvesting blog data by parsing feeds" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import os\n", "import sys\n", "import json\n", "import feedparser\n", "from BeautifulSoup import BeautifulStoneSoup\n", "from nltk import clean_html\n", "\n", "FEED_URL = 'http://feeds.feedburner.com/oreilly/radar/atom'\n", "\n", "def cleanHtml(html):\n", "    return BeautifulStoneSoup(clean_html(html),\n", "                              convertEntities=BeautifulStoneSoup.HTML_ENTITIES).contents[0]\n", "\n", "fp = feedparser.parse(FEED_URL)\n", "\n", "print \"Fetched %s entries from '%s'\" % (len(fp.entries), fp.feed.title)\n", "\n", "blog_posts = []\n", "for e in fp.entries:\n", "    blog_posts.append({'title': e.title, 'content'\n", "                      : cleanHtml(e.content[0].value), 'link': e.links[0].href})\n", "\n", "out_file = os.path.join('resources', 'ch05-webpages', 'feed.json')\n", "f = open(out_file, 'w')\n", "f.write(json.dumps(blog_posts, indent=1))\n", "f.close()\n", "\n", "print 'Wrote output file to %s' % (f.name, )" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 5. Using NLTK\u2019s NLP tools to process human language in blog data" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import json\n", "import nltk\n", "\n", "# Download nltk packages used in this example\n", "nltk.download('stopwords')\n", "\n", "BLOG_DATA = \"resources/ch05-webpages/feed.json\"\n", "\n", "blog_data = json.loads(open(BLOG_DATA).read())\n", "\n", "# Customize your list of stopwords as needed. Here, we add common\n", "# punctuation and contraction artifacts.\n", "\n", "stop_words = nltk.corpus.stopwords.words('english') + [\n", "    '.',\n", "    ',',\n", "    '--',\n", "    '\\'s',\n", "    '?',\n", "    ')',\n", "    '(',\n", "    ':',\n", "    '\\'',\n", "    '\\'re',\n", "    '\"',\n", "    '-',\n", "    '}',\n", "    '{',\n", "    u'\u2014',\n", "    ]\n", "\n", "for post in blog_data:\n", "    sentences = nltk.tokenize.sent_tokenize(post['content'])\n", "\n", "    words = [w.lower() for sentence in sentences for w in\n", "             nltk.tokenize.word_tokenize(sentence)]\n", "\n", "    fdist = nltk.FreqDist(words)\n", "\n", "    # Basic stats\n", "\n", "    num_words = sum([i[1] for i in fdist.items()])\n", "    num_unique_words = len(fdist.keys())\n", "\n", "    # Hapaxes are words that appear only once\n", "\n", "    num_hapaxes = len(fdist.hapaxes())\n", "\n", "    top_10_words_sans_stop_words = [w for w in fdist.items() if w[0]\n", "                                    not in stop_words][:10]\n", "\n", "    print post['title']\n", "    print '\\tNum Sentences:'.ljust(25), len(sentences)\n", "    print '\\tNum Words:'.ljust(25), num_words\n", "    print '\\tNum Unique Words:'.ljust(25), num_unique_words\n", "    print '\\tNum Hapaxes:'.ljust(25), num_hapaxes\n", "    print '\\tTop 10 Most Frequent Words (sans stop words):\\n\\t\\t', \\\n", "          '\\n\\t\\t'.join(['%s (%s)'\n", "                         % (w[0], w[1]) for w in top_10_words_sans_stop_words])\n", "    print" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 6. A document summarization algorithm based principally upon sentence detection and frequency analysis within sentences" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import json\n", "import nltk\n", "import numpy\n", "\n", "BLOG_DATA = \"resources/ch05-webpages/feed.json\"\n", "\n", "N = 100 # Number of words to consider\n", "CLUSTER_THRESHOLD = 5 # Distance between words to consider\n", "TOP_SENTENCES = 5 # Number of sentences to return for a \"top n\" summary\n", "\n", "# Approach taken from \"The Automatic Creation of Literature Abstracts\" by H.P. Luhn\n", "\n", "def _score_sentences(sentences, important_words):\n", "    scores = []\n", "    sentence_idx = -1\n", "\n", "    for s in [nltk.tokenize.word_tokenize(s) for s in sentences]:\n", "\n", "        sentence_idx += 1\n", "        word_idx = []\n", "\n", "        # For each word in the word list...\n", "        for w in important_words:\n", "            try:\n", "                # Compute an index for where any important words occur in the sentence.\n", "\n", "                word_idx.append(s.index(w))\n", "            except ValueError, e: # w not in this particular sentence\n", "                pass\n", "\n", "        word_idx.sort()\n", "\n", "        # It is possible that some sentences may not contain any important words at all.\n", "        if len(word_idx) == 0: continue\n", "\n", "        # Using the word index, compute clusters by using a max distance threshold\n", "        # for any two consecutive words.\n", "\n", "        clusters = []\n", "        cluster = [word_idx[0]]\n", "        i = 1\n", "        while i < len(word_idx):\n", "            if word_idx[i] - word_idx[i - 1] < CLUSTER_THRESHOLD:\n", "                cluster.append(word_idx[i])\n", "            else:\n", "                clusters.append(cluster[:])\n", "                cluster = [word_idx[i]]\n", "            i += 1\n", "        clusters.append(cluster)\n", "\n", "        # Score each cluster. The max score for any given cluster is the score\n", "        # for the sentence.\n", "\n", "        max_cluster_score = 0\n", "        for c in clusters:\n", "            significant_words_in_cluster = len(c)\n", "            total_words_in_cluster = c[-1] - c[0] + 1\n", "            score = 1.0 * significant_words_in_cluster \\\n", "                * significant_words_in_cluster / total_words_in_cluster\n", "\n", "            if score > max_cluster_score:\n", "                max_cluster_score = score\n", "\n", "        # Use the highest cluster score as the sentence's score.\n", "        scores.append((sentence_idx, max_cluster_score))\n", "\n", "    return scores\n", "\n", "def summarize(txt):\n", "    sentences = [s for s in nltk.tokenize.sent_tokenize(txt)]\n", "    normalized_sentences = [s.lower() for s in sentences]\n", "\n", "    words = [w.lower() for sentence in normalized_sentences for w in\n", "             nltk.tokenize.word_tokenize(sentence)]\n", "\n", "    fdist = nltk.FreqDist(words)\n", "\n", "    top_n_words = [w[0] for w in fdist.items()\n", "                   if w[0] not in nltk.corpus.stopwords.words('english')][:N]\n", "\n", "    scored_sentences = _score_sentences(normalized_sentences, top_n_words)\n", "\n", "    # Summarization Approach 1:\n", "    # Filter out nonsignificant sentences by using the average score plus a\n", "    # fraction of the std dev as a filter\n", "\n", "    avg = numpy.mean([s[1] for s in scored_sentences])\n", "    std = numpy.std([s[1] for s in scored_sentences])\n", "    mean_scored = [(sent_idx, score) for (sent_idx, score) in scored_sentences\n", "                   if score > avg + 0.5 * std]\n", "\n", "    # Summarization Approach 2:\n", "    # Another approach would be to return only the top N ranked sentences\n", "\n", "    top_n_scored = sorted(scored_sentences, key=lambda s: s[1])[-TOP_SENTENCES:]\n", "    top_n_scored = sorted(top_n_scored, key=lambda s: s[0])\n", "\n", "    # Decorate the post object with summaries\n", "\n", "    return dict(top_n_summary=[sentences[idx] for (idx, score) in top_n_scored],\n", "                mean_scored_summary=[sentences[idx] for (idx, score) in mean_scored])\n", "\n", "blog_data = json.loads(open(BLOG_DATA).read())\n", "\n", "for post in blog_data:\n", "\n", "    post.update(summarize(post['content']))\n", "\n", "    print post['title']\n", "    print '=' * len(post['title'])\n", "    print\n", "    print 'Top N Summary'\n", "    print '-------------'\n", "    print ' '.join(post['top_n_summary'])\n", "    print\n", "    print 'Mean Scored Summary'\n", "    print '-------------------'\n", "    print ' '.join(post['mean_scored_summary'])\n", "    print" ], "language": "python", "metadata": {}, "outputs": [] },
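{ "cell_type": "markdown", "metadata": {}, "source": [ "**A toy illustration of the Luhn-style cluster scoring (not from the book)**\n", "\n", "Assuming the cell above has been run, the next cell calls `_score_sentences` on a single made-up sentence with a hand-picked list of \"important\" words so you can see the scoring in isolation: each cluster of nearby important words scores (number of important words in the cluster) squared, divided by the number of words the cluster spans, and a sentence takes its highest cluster score." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "# Illustrative only: score one made-up sentence against a hand-picked list of\n", "# \"important\" words. Requires _score_sentences and CLUSTER_THRESHOLD from the\n", "# previous cell.\n", "\n", "toy_sentences = ['mr. green killed colonel mustard in the study with the candlestick.']\n", "toy_important_words = ['green', 'colonel', 'mustard']\n", "\n", "# The three important words sit close together (within CLUSTER_THRESHOLD of one\n", "# another), so they form a single cluster whose score is\n", "# (important words in cluster)**2 / (words spanned by the cluster).\n", "print _score_sentences(toy_sentences, toy_important_words)" ], "language": "python", "metadata": {}, "outputs": [] },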

{ "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 7. Visualizing document summarization results with HTML output" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import os\n", "import json\n", "import nltk\n", "import numpy\n", "from IPython.display import IFrame\n", "from IPython.core.display import display\n", "\n", "BLOG_DATA = \"resources/ch05-webpages/feed.json\"\n", "\n", "HTML_TEMPLATE = \"\"\"<html>\n", "    <head>\n", "        <title>%s</title>\n", "        <meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"/>\n", "    </head>\n", "    <body>%s</body>\n", "</html>\"\"\"\n", "\n", "blog_data = json.loads(open(BLOG_DATA).read())\n", "\n", "for post in blog_data:\n", "\n", "    # Uses previously defined summarize function.\n", "    post.update(summarize(post['content']))\n", "\n", "    # You could also store a version of the full post with key sentences marked up\n", "    # for analysis with simple string replacement...\n", "\n", "    for summary_type in ['top_n_summary', 'mean_scored_summary']:\n", "        post[summary_type + '_marked_up'] = '<p>%s</p>'
 % (post['content'], )\n", "        for s in post[summary_type]:\n", "            post[summary_type + '_marked_up'] = \\\n", "                post[summary_type + '_marked_up'].replace(s, '<strong>%s</strong>' % (s, ))\n", "\n", "        filename = post['title'].replace(\"?\", \"\") + '.summary.' + summary_type + '.html'\n", "        f = open(os.path.join('resources', 'ch05-webpages', filename), 'w')\n", "        html = HTML_TEMPLATE % (post['title'] + \\\n", "            ' Summary', post[summary_type + '_marked_up'],)\n", "\n", "        f.write(html.encode('utf-8'))\n", "        f.close()\n", "\n", "        print \"Data written to\", f.name\n", "\n", "# Display any of these files with an inline frame. This displays the\n", "# last file processed by using the last value of f.name...\n", "\n", "print \"Displaying %s:\" % f.name\n", "display(IFrame('files/%s' % f.name, '100%', '600px'))" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 8. Extracting entities from a text with NLTK" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import nltk\n", "import json\n", "\n", "BLOG_DATA = \"resources/ch05-webpages/feed.json\"\n", "\n", "blog_data = json.loads(open(BLOG_DATA).read())\n", "\n", "for post in blog_data:\n", "\n", "    sentences = nltk.tokenize.sent_tokenize(post['content'])\n", "    tokens = [nltk.tokenize.word_tokenize(s) for s in sentences]\n", "    pos_tagged_tokens = [nltk.pos_tag(t) for t in tokens]\n", "\n", "    # Flatten the list since we're not using sentence structure\n", "    # and sentences are guaranteed to be separated by a special\n", "    # POS tuple such as ('.', '.')\n", "\n", "    pos_tagged_tokens = [token for sent in pos_tagged_tokens for token in sent]\n", "\n", "    all_entity_chunks = []\n", "    previous_pos = None\n", "    current_entity_chunk = []\n", "    for (token, pos) in pos_tagged_tokens:\n", "\n", "        if pos == previous_pos and pos.startswith('NN'):\n", "            current_entity_chunk.append(token)\n", "        elif pos.startswith('NN'):\n", "            if current_entity_chunk != []:\n", "\n", "                # Note that current_entity_chunk could be a duplicate when appended,\n", "                # so frequency analysis again becomes a consideration\n", "\n", "                all_entity_chunks.append((' '.join(current_entity_chunk), pos))\n", "            current_entity_chunk = [token]\n", "\n", "        previous_pos = pos\n", "\n", "    # Store the chunks as an index for the document\n", "    # and account for frequency while we're at it...\n", "\n", "    post['entities'] = {}\n", "    for c in all_entity_chunks:\n", "        post['entities'][c] = post['entities'].get(c, 0) + 1\n", "\n", "    # For example, we could display just the title-cased entities\n", "\n", "    print post['title']\n", "    print '-' * len(post['title'])\n", "    proper_nouns = []\n", "    for (entity, pos) in post['entities']:\n", "        if entity.istitle():\n", "            print '\\t%s (%s)' % (entity, post['entities'][(entity, pos)])\n", "    print" ], "language": "python", "metadata": {}, "outputs": [] },
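{ "cell_type": "markdown", "metadata": {}, "source": [ "**A quick sanity check of the noun chunking heuristic (not from the book)**\n", "\n", "The next cell applies a slightly simplified version of the chunking logic above to a single sentence: any run of consecutive tokens tagged with an NN* part of speech is collapsed into one candidate entity. It assumes that nltk and the tagger models downloaded earlier in this notebook are available, and it is only an illustration, not part of the book's example code." ] },
{ "cell_type": "code", "collapsed": false, "input": [ "# Illustrative only: collapse consecutive NN* tokens into candidate entities.\n", "# (Slightly simpler than the cell above, which also compares adjacent POS tags.)\n", "\n", "sample = 'Mr. Green killed Colonel Mustard in the study with the candlestick.'\n", "tagged = nltk.pos_tag(nltk.tokenize.word_tokenize(sample))\n", "\n", "chunks, current = [], []\n", "for (token, pos) in tagged:\n", "    if pos.startswith('NN'):\n", "        current.append(token)\n", "    else:\n", "        if current:\n", "            chunks.append(' '.join(current))\n", "        current = []\n", "if current:\n", "    chunks.append(' '.join(current))\n", "\n", "print chunks" ], "language": "python", "metadata": {}, "outputs": [] },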
{ "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 9. Discovering interactions between entities" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import nltk\n", "import json\n", "\n", "BLOG_DATA = \"resources/ch05-webpages/feed.json\"\n", "\n", "def extract_interactions(txt):\n", "    sentences = nltk.tokenize.sent_tokenize(txt)\n", "    tokens = [nltk.tokenize.word_tokenize(s) for s in sentences]\n", "    pos_tagged_tokens = [nltk.pos_tag(t) for t in tokens]\n", "\n", "    entity_interactions = []\n", "    for sentence in pos_tagged_tokens:\n", "\n", "        all_entity_chunks = []\n", "        previous_pos = None\n", "        current_entity_chunk = []\n", "\n", "        for (token, pos) in sentence:\n", "\n", "            if pos == previous_pos and pos.startswith('NN'):\n", "                current_entity_chunk.append(token)\n", "            elif pos.startswith('NN'):\n", "                if current_entity_chunk != []:\n", "                    all_entity_chunks.append((' '.join(current_entity_chunk),\n", "                                              pos))\n", "                current_entity_chunk = [token]\n", "\n", "            previous_pos = pos\n", "\n", "        if len(all_entity_chunks) > 1:\n", "            entity_interactions.append(all_entity_chunks)\n", "        else:\n", "            entity_interactions.append([])\n", "\n", "    assert len(entity_interactions) == len(sentences)\n", "\n", "    return dict(entity_interactions=entity_interactions,\n", "                sentences=sentences)\n", "\n", "blog_data = json.loads(open(BLOG_DATA).read())\n", "\n", "# Display selected interactions on a per-sentence basis\n", "\n", "for post in blog_data:\n", "\n", "    post.update(extract_interactions(post['content']))\n", "\n", "    print post['title']\n", "    print '-' * len(post['title'])\n", "    for interactions in post['entity_interactions']:\n", "        print '; '.join([i[0] for i in interactions])\n", "    print" ], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "heading", "level": 2, "metadata": {}, "source": [ "Example 10. Visualizing interactions between entities with HTML output" ] },
{ "cell_type": "code", "collapsed": false, "input": [ "import os\n", "import json\n", "import nltk\n", "from IPython.display import IFrame\n", "from IPython.core.display import display\n", "\n", "BLOG_DATA = \"resources/ch05-webpages/feed.json\"\n", "\n", "HTML_TEMPLATE = \"\"\"<html>\n", "    <head>\n", "        <title>%s</title>\n", "        <meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\"/>\n", "    </head>\n", "    <body>%s</body>\n", "</html>\"\"\"\n", "\n", "blog_data = json.loads(open(BLOG_DATA).read())\n", "\n", "for post in blog_data:\n", "\n", "    post.update(extract_interactions(post['content']))\n", "\n", "    # Display output as markup with entities presented in bold text\n", "\n", "    post['markup'] = []\n", "\n", "    for sentence_idx in range(len(post['sentences'])):\n", "\n", "        s = post['sentences'][sentence_idx]\n", "        for (term, _) in post['entity_interactions'][sentence_idx]:\n", "            s = s.replace(term, '<strong>%s</strong>' % (term, ))\n", "\n", "        post['markup'] += [s]\n", "\n", "    filename = post['title'].replace(\"?\", \"\") + '.entity_interactions.html'\n", "    f = open(os.path.join('resources', 'ch05-webpages', filename), 'w')\n", "    html = HTML_TEMPLATE % (post['title'] + ' Interactions',\n", "                            ' '.join(post['markup']),)\n", "    f.write(html.encode('utf-8'))\n", "    f.close()\n", "\n", "    print \"Data written to\", f.name\n", "\n", "    # Display any of these files with an inline frame. This displays the\n", "    # last file processed by using the last value of f.name...\n", "\n", "    print \"Displaying %s:\" % f.name\n", "    display(IFrame('files/%s' % f.name, '100%', '600px'))" ], "language": "python", "metadata": {}, "outputs": [] }
], "metadata": {} } ] }