{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Regex" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this lesson, we'll learn about a useful tool in the NLP toolkit: regex.\n", "\n", "Let's consider two motivating examples:\n", "\n", "#### 1. The phone number problem\n", "\n", "Suppose we are given some data that includes phone numbers:\n", "\n", "123-456-7890\n", "\n", "123 456 7890\n", "\n", "101 Howard\n", "\n", "Some of the phone numbers have different formats (hyphens, no hyphens). Also, there are some errors in the data-- 101 Howard isn't a phon number! How can we find all the phone numbers?\n", "\n", "#### 2. Creating our own tokens\n", "\n", "In the previous lessons, we used sklearn or fastai to tokenize our text. What if we want to do it ourselves?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The phone number problem" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose we are given some data that includes phone numbers:\n", "\n", "123-456-7890\n", "\n", "123 456 7890\n", "\n", "(123)456-7890\n", "\n", "101 Howard\n", "\n", "Some of the phone numbers have different formats (hyphens, no hyphens, parentheses). Also, there are some errors in the data-- 101 Howard isn't a phone number! How can we find all the phone numbers?" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will attempt this without regex, but will see that this quickly leads to lot of if/else branching statements and isn't a veyr promising approach:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Attempt 1 (without regex)" ] }, { "cell_type": "code", "execution_count": 77, "metadata": {}, "outputs": [], "source": [ "phone1 = \"123-456-7890\"\n", "\n", "phone2 = \"123 456 7890\"\n", "\n", "not_phone1 = \"101 Howard\"" ] }, { "cell_type": "code", "execution_count": 78, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'0123456789'" ] }, "execution_count": 78, "metadata": {}, "output_type": "execute_result" } ], "source": [ "string.digits" ] }, { "cell_type": "code", "execution_count": 79, "metadata": {}, "outputs": [], "source": [ "def check_phone(inp):\n", " valid_chars = string.digits + ' -()'\n", " for char in inp:\n", " if char not in valid_chars:\n", " return False\n", " return True" ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [], "source": [ "assert(check_phone(phone1))\n", "assert(check_phone(phone2))\n", "assert(not check_phone(not_phone1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Attempt 2 (without regex)" ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [], "source": [ "not_phone2 = \"1234\"" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [ { "ename": "AssertionError", "evalue": "", "output_type": "error", "traceback": [ "\u001b[0;31m-----------------------------------\u001b[0m", "\u001b[0;31mAssertionError\u001b[0mTraceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0;32massert\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;32mnot\u001b[0m 
\u001b[0mcheck_phone\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnot_phone2\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mAssertionError\u001b[0m: " ] } ], "source": [ "assert(not check_phone(not_phone2))" ] }, { "cell_type": "code", "execution_count": 83, "metadata": {}, "outputs": [], "source": [ "def check_phone(inp):\n", "    nums = string.digits\n", "    valid_chars = nums + ' -()'\n", "    num_counter = 0\n", "    for char in inp:\n", "        if char not in valid_chars:\n", "            return False\n", "        if char in nums:\n", "            num_counter += 1\n", "    if num_counter==10:\n", "        return True\n", "    else:\n", "        return False" ] }, { "cell_type": "code", "execution_count": 84, "metadata": {}, "outputs": [], "source": [ "assert(check_phone(phone1))\n", "assert(check_phone(phone2))\n", "assert(not check_phone(not_phone1))\n", "assert(not check_phone(not_phone2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Attempt 3 (without regex)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But we also need to extract the digits!"
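, "\n", "A minimal sketch of the extraction step without regex, using `str.isdigit` (the function name here is illustrative, not from the lesson):\n", "\n", "```python\n", "def extract_digits(inp):\n", "    # keep only the digit characters\n", "    return ''.join(c for c in inp if c.isdigit())\n", "\n", "extract_digits(\"(123)456-7890\")  # → '1234567890'\n", "```"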
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Also, what about:\n", "\n", "34!NA5098gn#213ee2" ] }, { "cell_type": "code", "execution_count": 85, "metadata": {}, "outputs": [ { "ename": "AssertionError", "evalue": "", "output_type": "error", "traceback": [ "\u001b[0;31m-----------------------------------\u001b[0m", "\u001b[0;31mAssertionError\u001b[0mTraceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0mnot_phone3\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m\"34 50 98 21 32\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 3\u001b[0;31m \u001b[0;32massert\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;32mnot\u001b[0m \u001b[0mcheck_phone\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnot_phone3\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mAssertionError\u001b[0m: " ] } ], "source": [ "not_phone3 = \"34 50 98 21 32\"\n", "\n", "assert(not check_phone(not_phone3))" ] }, { "cell_type": "code", "execution_count": 86, "metadata": {}, "outputs": [ { "ename": "AssertionError", "evalue": "", "output_type": "error", "traceback": [ "\u001b[0;31m-----------------------------------\u001b[0m", "\u001b[0;31mAssertionError\u001b[0mTraceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0mnot_phone4\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m\"(34)(50)()()982132\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 3\u001b[0;31m \u001b[0;32massert\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;32mnot\u001b[0m \u001b[0mcheck_phone\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnot_phone3\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", 
"\u001b[0;31mAssertionError\u001b[0m: " ] } ], "source": [ "not_phone4 = \"(34)(50)()()982132\"\n", "\n", "assert(not check_phone(not_phone3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is getting increasingly unwieldy. We need a different approach." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introducing regex" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Useful regex resources:\n", "\n", "- https://regexr.com/\n", "- http://callumacrae.github.io/regex-tuesday/\n", "- https://regexone.com/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Best practice: Be as specific as possible.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Parts of the following section were adapted from Brian Spiering, who taught the MSDS [NLP elective last summer](https://github.com/brianspiering/nlp-course)." ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "slideshow": { "slide_type": "slide" } }, "source": [ "### What is regex?\n", "\n", "Regular expressions is a pattern matching language. \n", "\n", "Instead of writing `0 1 2 3 4 5 6 7 8 9`, you can write `[0-9]` or `\\d`" ] }, { "cell_type": "markdown", "metadata": { "hidden": true, "slideshow": { "slide_type": "fragment" } }, "source": [ "It is Domain Specific Language (DSL). Powerful (but limited language). \n", "\n", "**What other DSLs do you already know?**\n", "- SQL \n", "- Markdown\n", "- TensorFlow" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Matching Phone Numbers (The \"Hello, world!\" of Regex)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "`[0-9][0-9][0-9]-[0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]` matches US telephone number." 
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Refactored: `\\d\\d\\d-\\d\\d\\d-\\d\\d\\d\\d`\n", "\n", "A **metacharacter** is one or more special characters that have a unique meaning and are NOT used as literals in the search expression. For example \"\\d\" means any digit.\n", "\n", "**Metacharacters are the special sauce of regex.**" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "\n", "\n", "Quantifiers\n", "-----\n", "\n", "Allow you to specify how many times the preceding expression should match. \n", "\n", "`{}` is an extact qualifer\n", "\n", "Refactored: `\\d{3}-\\d{3}-\\d{4}`" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Unexact quantifiers\n", "-----\n", "\n", "1. `?` question mark - zero or one \n", "2. `*` star - zero or more\n", "3. `+` plus sign - one or more | " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Regex can look really weird, since it's so concise" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best (only?) way to learn it is through practice. Otherwise, you feel like you're just reading lists of rules.\n", "\n", "Let's take 15 minutes to begin working through the lessons on [regexone](https://regexone.com/)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Reminder: Be as specific as possible!**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Pros & Cons of Regex" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "**What are the advantages of regex?**\n", "\n", "1. Concise and powerful pattern matching DSL\n", "2. Supported by many computer languages, including SQL" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "**What are the disadvantages of regex?**\n", "\n", "1. Brittle \n", "2. 
Hard to write; complex patterns can be hard to get correct\n", "3. Hard to read" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Revisiting tokenization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous lessons, we used a tokenizer. Now, let's learn how we could do this ourselves, and get a better understanding of tokenization." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What if we needed to create our own tokens?" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "import re" ] }, { "cell_type": "code", "execution_count": 139, "metadata": {}, "outputs": [], "source": [ "re_punc = re.compile(\"([\\\"\\''().,;:/_?!—\\-])\") # add spaces around punctuation\n", "re_apos = re.compile(r\"n ' t \") # n't\n", "re_bpos = re.compile(r\" ' s \") # 's\n", "re_mult_space = re.compile(r\" +\") # replace multiple spaces with just one\n", "\n", "def simple_toks(sent):\n", "    sent = re_punc.sub(r\" \\1 \", sent)\n", "    sent = re_apos.sub(r\" n't \", sent)\n", "    sent = re_bpos.sub(r\" 's \", sent)\n", "    sent = re_mult_space.sub(' ', sent)\n", "    return sent.lower().split()" ] }, { "cell_type": "code", "execution_count": 140, "metadata": {}, "outputs": [], "source": [ "text = \"I don't know who Kara's new friend is-- is it 'Mr. Toad'?\"" ] }, { "cell_type": "code", "execution_count": 141, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"i do n't know who kara 's new friend is - - is it ' mr . toad ' ?\"" ] }, "execution_count": 141, "metadata": {}, "output_type": "execute_result" } ], "source": [ "' '.join(simple_toks(text))" ] }, { "cell_type": "code", "execution_count": 145, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I don ' t know who Kara ' s new friend is - - is it ' Mr . Toad ' ? 
\"" ] }, "execution_count": 145, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text2 = re_punc.sub(r\" \\1 \", text); text2" ] }, { "cell_type": "code", "execution_count": 144, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I do n't know who Kara ' s new friend is - - is it ' Mr . Toad ' ? \"" ] }, "execution_count": 144, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text3 = re_apos.sub(r\" n't \", text2); text3" ] }, { "cell_type": "code", "execution_count": 146, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I do n't know who Kara 's new friend is - - is it ' Mr . Toad ' ? \"" ] }, "execution_count": 146, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text4 = re_bpos.sub(r\" 's \", text3); text4" ] }, { "cell_type": "code", "execution_count": 147, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I do n't know who Kara 's new friend is - - is it ' Mr . Toad ' ? \"" ] }, "execution_count": 147, "metadata": {}, "output_type": "execute_result" } ], "source": [ "re_mult_space.sub(' ', text4)" ] }, { "cell_type": "code", "execution_count": 155, "metadata": {}, "outputs": [], "source": [ "sentences = ['All this happened, more or less.',\n", " 'The war parts, anyway, are pretty much true.',\n", " \"One guy I knew really was shot for taking a teapot that wasn't his.\",\n", " 'Another guy I knew really did threaten to have his personal enemies killed by hired gunmen after the war.',\n", " 'And so on.',\n", " \"I've changed all their names.\"]" ] }, { "cell_type": "code", "execution_count": 170, "metadata": {}, "outputs": [], "source": [ "tokens = list(map(simple_toks, sentences))" ] }, { "cell_type": "code", "execution_count": 171, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[['all', 'this', 'happened', ',', 'more', 'or', 'less', '.'],\n", " ['the',\n", " 'war',\n", " 'parts',\n", " ',',\n", " 'anyway',\n", " ',',\n", " 'are',\n", " 'pretty',\n", " 'much',\n", " 
'true',\n", " '.'],\n", " ['one',\n", " 'guy',\n", " 'i',\n", " 'knew',\n", " 'really',\n", " 'was',\n", " 'shot',\n", " 'for',\n", " 'taking',\n", " 'a',\n", " 'teapot',\n", " 'that',\n", " 'was',\n", " \"n't\",\n", " 'his',\n", " '.'],\n", " ['another',\n", " 'guy',\n", " 'i',\n", " 'knew',\n", " 'really',\n", " 'did',\n", " 'threaten',\n", " 'to',\n", " 'have',\n", " 'his',\n", " 'personal',\n", " 'enemies',\n", " 'killed',\n", " 'by',\n", " 'hired',\n", " 'gunmen',\n", " 'after',\n", " 'the',\n", " 'war',\n", " '.'],\n", " ['and', 'so', 'on', '.'],\n", " ['i', \"'\", 've', 'changed', 'all', 'their', 'names', '.']]" ] }, "execution_count": 171, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokens" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we have our tokens, we need to convert them to integer ids. We will also need to know our vocabulary, and have a way to convert between words and ids." ] }, { "cell_type": "code", "execution_count": 172, "metadata": {}, "outputs": [], "source": [ "import collections" ] }, { "cell_type": "code", "execution_count": 177, "metadata": {}, "outputs": [], "source": [ "PAD = 0; SOS = 1\n", "\n", "def toks2ids(sentences):\n", "    voc_cnt = collections.Counter(t for sent in sentences for t in sent)\n", "    vocab = sorted(voc_cnt, key=voc_cnt.get, reverse=True)\n", "    vocab.insert(PAD, \"<PAD>\")\n", "    vocab.insert(SOS, \"<SOS>\")\n", "    w2id = {w:i for i,w in enumerate(vocab)}\n", "    ids = [[w2id[t] for t in sent] for sent in sentences]\n", "    return ids, vocab, w2id, voc_cnt" ] }, { "cell_type": "code", "execution_count": 178, "metadata": {}, "outputs": [], "source": [ "ids, vocab, w2id, voc_cnt = toks2ids(tokens)" ] }, { "cell_type": "code", "execution_count": 179, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[5, 13, 14, 3, 15, 16, 17, 2],\n", " [6, 7, 18, 3, 19, 3, 20, 21, 22, 23, 2],\n", " [24, 8, 4, 9, 10, 11, 25, 26, 27, 28, 29, 30, 11, 31, 12, 2],\n", " [32, 8, 4, 9, 10, 33, 34, 35, 36, 12, 
37, 38, 39, 40, 41, 42, 43, 6, 7, 2],\n", " [44, 45, 46, 2],\n", " [4, 47, 48, 49, 5, 50, 51, 2]]" ] }, "execution_count": 179, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ids" ] }, { "cell_type": "code", "execution_count": 180, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['<PAD>',\n", " '<SOS>',\n", " '.',\n", " ',',\n", " 'i',\n", " 'all',\n", " 'the',\n", " 'war',\n", " 'guy',\n", " 'knew',\n", " 'really',\n", " 'was',\n", " 'his',\n", " 'this',\n", " 'happened',\n", " 'more',\n", " 'or',\n", " 'less',\n", " 'parts',\n", " 'anyway',\n", " 'are',\n", " 'pretty',\n", " 'much',\n", " 'true',\n", " 'one',\n", " 'shot',\n", " 'for',\n", " 'taking',\n", " 'a',\n", " 'teapot',\n", " 'that',\n", " \"n't\",\n", " 'another',\n", " 'did',\n", " 'threaten',\n", " 'to',\n", " 'have',\n", " 'personal',\n", " 'enemies',\n", " 'killed',\n", " 'by',\n", " 'hired',\n", " 'gunmen',\n", " 'after',\n", " 'and',\n", " 'so',\n", " 'on',\n", " \"'\",\n", " 've',\n", " 'changed',\n", " 'their',\n", " 'names']" ] }, "execution_count": 180, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vocab" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q: what could be another name of the `vocab` variable above?" 
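, "\n", "Hint: `vocab` maps ids back to tokens, while `w2id` maps tokens to ids, so together they let us round-trip. A self-contained sketch of the same idea on a toy sentence (mirroring `toks2ids` above, without the special tokens):\n", "\n", "```python\n", "import collections\n", "\n", "toy = [['all', 'this', 'happened', ',', 'more', 'or', 'less', '.']]\n", "voc_cnt = collections.Counter(t for sent in toy for t in sent)\n", "vocab = sorted(voc_cnt, key=voc_cnt.get, reverse=True)\n", "w2id = {w: i for i, w in enumerate(vocab)}\n", "ids = [[w2id[t] for t in sent] for sent in toy]\n", "# ids -> tokens is just indexing into vocab\n", "decoded = [[vocab[i] for i in sent] for sent in ids]\n", "assert decoded == toy\n", "```"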
] }, { "cell_type": "code", "execution_count": 181, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'': 0,\n", " '': 1,\n", " '.': 2,\n", " ',': 3,\n", " 'i': 4,\n", " 'all': 5,\n", " 'the': 6,\n", " 'war': 7,\n", " 'guy': 8,\n", " 'knew': 9,\n", " 'really': 10,\n", " 'was': 11,\n", " 'his': 12,\n", " 'this': 13,\n", " 'happened': 14,\n", " 'more': 15,\n", " 'or': 16,\n", " 'less': 17,\n", " 'parts': 18,\n", " 'anyway': 19,\n", " 'are': 20,\n", " 'pretty': 21,\n", " 'much': 22,\n", " 'true': 23,\n", " 'one': 24,\n", " 'shot': 25,\n", " 'for': 26,\n", " 'taking': 27,\n", " 'a': 28,\n", " 'teapot': 29,\n", " 'that': 30,\n", " \"n't\": 31,\n", " 'another': 32,\n", " 'did': 33,\n", " 'threaten': 34,\n", " 'to': 35,\n", " 'have': 36,\n", " 'personal': 37,\n", " 'enemies': 38,\n", " 'killed': 39,\n", " 'by': 40,\n", " 'hired': 41,\n", " 'gunmen': 42,\n", " 'after': 43,\n", " 'and': 44,\n", " 'so': 45,\n", " 'on': 46,\n", " \"'\": 47,\n", " 've': 48,\n", " 'changed': 49,\n", " 'their': 50,\n", " 'names': 51}" ] }, "execution_count": 181, "metadata": {}, "output_type": "execute_result" } ], "source": [ "w2id" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "What are the uses of RegEx?\n", "---\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "1. Find / Search\n", "1. Find & Replace\n", "2. Cleaning" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Don't forgot about Python's `str` methods\n", "-----\n", "\n", "`str.`\n", " \n", "str.find()" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "str.find?" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Regex vs. String methods\n", "-----\n", "\n", "1. String methods are easier to understand.\n", "1. 
String methods express the intent more clearly. \n", "\n", "-----\n", "\n", "1. Regex handles much broader use cases.\n", "1. Regex can be language-independent.\n", "1. Regex can be faster at scale." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What about unicode?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "message = \"😒🎦 🤢🍕\"\n", "\n", "re_frown = re.compile(r\"😒|🤢\")\n", "re_frown.sub(r\"😊\", message)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Regex Errors:" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "__False positives__ (Type I): Matching strings that we should __not__ have\nmatched\n", "\n", "__False negatives__ (Type II): __Not__ matching strings that we should have matched" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Reducing the error rate for a task often involves two antagonistic efforts:\n", "\n", "1. Minimizing false positives\n", "2. Minimizing false negatives\n", "\n", "**Important to have tests for both!**" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "In a perfect world, you would be able to minimize both, but in reality you often have to trade one for the other." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Useful Tools:\n", "----\n", "- [Regex cheatsheet](http://www.cheatography.com/davechild/cheat-sheets/regular-expressions/)\n", "- [regexr.com](http://regexr.com/) Realtime regex engine\n", "- [pythex.org](https://pythex.org/) Realtime Python regex engine" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Summary\n", "----\n", "\n", "1. We use regex as a metalanguage to find string patterns in blocks of text\n", "1. 
`r\"\"` raw strings are your friends for Python regex\n", "1. Regex matching is just binary classification, so the same performance metrics apply\n", "1. You'll make a lot of mistakes in regex 😩. \n", " - False Positive: Matching strings you should not have matched\n", " - False Negative: Missing strings you should have matched" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "
" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "
\n", "
\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Regex Terms\n", "----\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "- __target string__:\tThis term describes the string that we will be searching, that is, the string in which we want to find our match or search pattern.\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "- __search expression__: The pattern we use to find what we want. Most commonly called the regular expression. \n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "- __literal__:\tA literal is any character we use in a search or matching expression, for example, to find 'ind' in 'windows' the 'ind' is a literal string - each character plays a part in the search, it is literally the string we want to find." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "- __metacharacter__: A metacharacter is one or more special characters that have a unique meaning and are NOT used as literals in the search expression. For example \".\" means any character.\n", "\n", "Metacharacters are the special sauce of regex." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "- __escape sequence__:\tAn escape sequence is a way of indicating that we want to use a metacharacters as a literal. " ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [ "In a regular expression an escape sequence involves placing the metacharacter \\ (backslash) in front of the metacharacter that we want to use as a literal. \n", "\n", "`'\\.'` means find literal period character (not match any character)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Regex Workflow\n", "---\n", "1. 
Create pattern in Plain English\n", "2. Map to regex language\n", "3. Make sure results are correct:\n", " - No false negatives: captures all examples of the pattern\n", " - No false positives: everything captured really is an example of the pattern\n", "4. Don't over-engineer your regex. \n", " - Your goal is to Get Stuff Done, not write the best regex in the world\n", " - Filtering before and after is okay." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 1 }