{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<small><small><i>\n",
    "All the IPython Notebooks in **[Python Natural Language Processing](https://github.com/milaan9/Python_Python_Natural_Language_Processing)** lecture series by **[Dr. Milaan Parmar](https://www.linkedin.com/in/milaanparmar/)** are available @ **[GitHub](https://github.com/milaan9)**\n",
    "</i></small></small>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/milaan9/Python_Python_Natural_Language_Processing/blob/main/01_Tokenization_NLP.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "2mc2QuFzHACp"
   },
   "source": [
    "# 01 Tokenization by Python"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5tkJYfGNQs39"
   },
   "source": [
    "- **Tokenization is a way of separating a piece of text into smaller units called tokens. Here, tokens can be either words, characters, or subwords.**\n",
    "- **Tokenization is the act of breaking up a sequence of strings into pieces such as words, keywords, phrases, symbols and other elements called tokens. Tokens can be individual words, phrases or even whole sentences. In the process of tokenization, some characters like punctuation marks are discarded.**"
   ]
  },
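  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the three granularities in plain Python. The subword split here uses a tiny hand-made vocabulary and a greedy longest-match loop; real subword tokenizers (BPE, WordPiece) learn their vocabulary from data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of token granularities (not a trained tokenizer)\n",
    "sentence = \"I felt unhappiness today\"\n",
    "\n",
    "# Word tokens: split on whitespace\n",
    "print(sentence.split())\n",
    "\n",
    "# Character tokens: every character is a token\n",
    "print(list(\"unhappiness\"))\n",
    "\n",
    "# Subword tokens: greedy longest-match against a toy vocabulary\n",
    "vocab = {\"un\", \"happi\", \"ness\", \"felt\", \"today\", \"i\"}\n",
    "\n",
    "def subwords(word, vocab):\n",
    "    pieces, i = [], 0\n",
    "    while i < len(word):\n",
    "        for j in range(len(word), i, -1):  # try the longest piece first\n",
    "            if word[i:j].lower() in vocab:\n",
    "                pieces.append(word[i:j])\n",
    "                i = j\n",
    "                break\n",
    "        else:                              # no piece matched: emit one character\n",
    "            pieces.append(word[i])\n",
    "            i += 1\n",
    "    return pieces\n",
    "\n",
    "print(subwords(\"unhappiness\", vocab))  # ['un', 'happi', 'ness']"
   ]
  },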
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "aIr4gmR1ebiw",
    "outputId": "445027b3-a826-4950-f8e0-043fd259f5b0"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['I', 'll', 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart', '']\n"
     ]
    }
   ],
   "source": [
    "# Split by Whitespace\n",
    "\n",
    "import re\n",
    "text = \"I\\'ll always be there with you forever in your heart.!\"\n",
    "words = re.split(r'\\W+', text)\n",
    "print(words[:100])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4nlEszKsHACv"
   },
   "source": [
    "Here It didn't recognise **`.`** at the last of the sentence."
   ]
  },
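  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible work-around (a sketch): **`re.findall`** with a pattern that allows an internal apostrophe keeps contractions like **`I'll`** together, and since punctuation is never matched, no empty string appears."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "text = \"I'll always be there with you forever in your heart.!\"\n",
    "\n",
    "# \\w+ matches a word; (?:'\\w+)? optionally adds a contraction suffix like 'll\n",
    "tokens = re.findall(r\"\\w+(?:'\\w+)?\", text)\n",
    "print(tokens)"
   ]
  },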
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PF44c1m4ITRu"
   },
   "source": [
    "**Remove punctuations and separate the word**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "HneykDD2gC4k",
    "outputId": "81314e08-1e56-4f4a-fe0c-8052a3dad513"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['Ill', 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart']\n"
     ]
    }
   ],
   "source": [
    "import string\n",
    "import re\n",
    "\n",
    "# split into words by white space\n",
    "words = text.split()\n",
    "\n",
    "# prepare regex for char filtering\n",
    "re_punc = re.compile('[%s]' % re.escape(string.punctuation))\n",
    "\n",
    "# remove punctuation from each word\n",
    "stripped = [re_punc.sub('', w) for w in words]\n",
    "print(stripped[:100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "nlE9FFEXgDEs",
    "outputId": "c2c01f66-7d5e-4153-decb-ec211bfe9062"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[\"I'll\", 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart!']\n"
     ]
    }
   ],
   "source": [
    "# string.printable inverse of string.punctuation\n",
    "\n",
    "re_print = re.compile('[^%s]' % re.escape(string.printable))\n",
    "result = [re_print.sub('', w) for w in words]\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Uf_8fwtrgDRT",
    "outputId": "d9be8202-6c2f-4641-9012-c48335da0725"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[\"i'll\", 'always', 'be', 'there', 'with', 'you', 'forever', 'in', 'your', 'heart!']\n"
     ]
    }
   ],
   "source": [
    "# Normalizing Case\n",
    "\n",
    "# split into words by white space\n",
    "words = text.split()\n",
    "# convert to lower case\n",
    "words = [word.lower() for word in words]\n",
    "print(words[:100])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "XC1YBN3qJZuv"
   },
   "source": [
    "# Spacy\n",
    "\n",
    "Spacy is an open-source software python library used in advanced natural language processing and machine learning. It will be used to build information extraction, natural language understanding systems, and to pre-process text for deep learning."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AQfx-EN2wjhz"
   },
   "source": [
    "Install:\n",
    "```py\n",
    "!pip install -U spacy\n",
    "!python -m spacy download en_core_web_sm\n",
    "```\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "ND0h9KA5eirK",
    "outputId": "127a139a-72ab-4bf9-8b01-730743f9a57b"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: spacy in /usr/local/lib/python3.7/dist-packages (2.2.4)\n",
      "Collecting spacy\n",
      "  Downloading spacy-3.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.0 MB)\n",
      "\u001b[K     |████████████████████████████████| 6.0 MB 8.3 MB/s \n",
      "\u001b[?25hRequirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from spacy) (2.11.3)\n",
      "Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.7/dist-packages (from spacy) (1.0.5)\n",
      "Collecting thinc<8.1.0,>=8.0.12\n",
      "  Downloading thinc-8.0.13-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (628 kB)\n",
      "\u001b[K     |████████████████████████████████| 628 kB 33.5 MB/s \n",
      "\u001b[?25hCollecting typer<0.5.0,>=0.3.0\n",
      "  Downloading typer-0.4.0-py3-none-any.whl (27 kB)\n",
      "Collecting langcodes<4.0.0,>=3.2.0\n",
      "  Downloading langcodes-3.2.1.tar.gz (173 kB)\n",
      "\u001b[K     |████████████████████████████████| 173 kB 59.4 MB/s \n",
      "\u001b[?25hCollecting spacy-loggers<2.0.0,>=1.0.0\n",
      "  Downloading spacy_loggers-1.0.1-py3-none-any.whl (7.0 kB)\n",
      "Collecting spacy-legacy<3.1.0,>=3.0.8\n",
      "  Downloading spacy_legacy-3.0.8-py2.py3-none-any.whl (14 kB)\n",
      "Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy) (3.0.5)\n",
      "Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/local/lib/python3.7/dist-packages (from spacy) (2.23.0)\n",
      "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from spacy) (21.0)\n",
      "Collecting pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4\n",
      "  Downloading pydantic-1.8.2-cp37-cp37m-manylinux2014_x86_64.whl (10.1 MB)\n",
      "\u001b[K     |████████████████████████████████| 10.1 MB 45.8 MB/s \n",
      "\u001b[?25hCollecting pathy>=0.3.5\n",
      "  Downloading pathy-0.6.1-py3-none-any.whl (42 kB)\n",
      "\u001b[K     |████████████████████████████████| 42 kB 1.3 MB/s \n",
      "\u001b[?25hRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from spacy) (57.4.0)\n",
      "Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.7/dist-packages (from spacy) (1.19.5)\n",
      "Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy) (0.4.1)\n",
      "Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.7/dist-packages (from spacy) (4.62.3)\n",
      "Requirement already satisfied: typing-extensions<4.0.0.0,>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from spacy) (3.7.4.3)\n",
      "Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy) (2.0.5)\n",
      "Collecting catalogue<2.1.0,>=2.0.6\n",
      "  Downloading catalogue-2.0.6-py3-none-any.whl (17 kB)\n",
      "Requirement already satisfied: wasabi<1.1.0,>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from spacy) (0.8.2)\n",
      "Collecting srsly<3.0.0,>=2.4.1\n",
      "  Downloading srsly-2.4.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (451 kB)\n",
      "\u001b[K     |████████████████████████████████| 451 kB 48.3 MB/s \n",
      "\u001b[?25hRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from catalogue<2.1.0,>=2.0.6->spacy) (3.6.0)\n",
      "Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->spacy) (2.4.7)\n",
      "Requirement already satisfied: smart-open<6.0.0,>=5.0.0 in /usr/local/lib/python3.7/dist-packages (from pathy>=0.3.5->spacy) (5.2.1)\n",
      "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (1.24.3)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (2021.5.30)\n",
      "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (3.0.4)\n",
      "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (2.10)\n",
      "Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.7/dist-packages (from typer<0.5.0,>=0.3.0->spacy) (7.1.2)\n",
      "Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->spacy) (2.0.1)\n",
      "Building wheels for collected packages: langcodes\n",
      "  Building wheel for langcodes (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Created wheel for langcodes: filename=langcodes-3.2.1-py3-none-any.whl size=169396 sha256=9531f6c74ca24a2e58cea16da88a53faefff68155c8068bb2c3399a6b8c5a15f\n",
      "  Stored in directory: /root/.cache/pip/wheels/12/9c/b3/d42c928e622075d3b6056733125190086e44c9230878e6eb2b\n",
      "Successfully built langcodes\n",
      "Installing collected packages: catalogue, typer, srsly, pydantic, thinc, spacy-loggers, spacy-legacy, pathy, langcodes, spacy\n",
      "  Attempting uninstall: catalogue\n",
      "    Found existing installation: catalogue 1.0.0\n",
      "    Uninstalling catalogue-1.0.0:\n",
      "      Successfully uninstalled catalogue-1.0.0\n",
      "  Attempting uninstall: srsly\n",
      "    Found existing installation: srsly 1.0.5\n",
      "    Uninstalling srsly-1.0.5:\n",
      "      Successfully uninstalled srsly-1.0.5\n",
      "  Attempting uninstall: thinc\n",
      "    Found existing installation: thinc 7.4.0\n",
      "    Uninstalling thinc-7.4.0:\n",
      "      Successfully uninstalled thinc-7.4.0\n",
      "  Attempting uninstall: spacy\n",
      "    Found existing installation: spacy 2.2.4\n",
      "    Uninstalling spacy-2.2.4:\n",
      "      Successfully uninstalled spacy-2.2.4\n",
      "Successfully installed catalogue-2.0.6 langcodes-3.2.1 pathy-0.6.1 pydantic-1.8.2 spacy-3.2.0 spacy-legacy-3.0.8 spacy-loggers-1.0.1 srsly-2.4.2 thinc-8.0.13 typer-0.4.0\n",
      "Collecting en-core-web-sm==3.2.0\n",
      "  Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.2.0/en_core_web_sm-3.2.0-py3-none-any.whl (13.9 MB)\n",
      "\u001b[K     |████████████████████████████████| 13.9 MB 94 kB/s \n",
      "\u001b[?25hRequirement already satisfied: spacy<3.3.0,>=3.2.0 in /usr/local/lib/python3.7/dist-packages (from en-core-web-sm==3.2.0) (3.2.0)\n",
      "Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (1.0.1)\n",
      "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (21.0)\n",
      "Requirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2.11.3)\n",
      "Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.8 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (3.0.8)\n",
      "Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (0.4.1)\n",
      "Requirement already satisfied: srsly<3.0.0,>=2.4.1 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2.4.2)\n",
      "Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (3.2.1)\n",
      "Requirement already satisfied: thinc<8.1.0,>=8.0.12 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (8.0.13)\n",
      "Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (4.62.3)\n",
      "Requirement already satisfied: wasabi<1.1.0,>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (0.8.2)\n",
      "Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2.0.6)\n",
      "Requirement already satisfied: pathy>=0.3.5 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (0.6.1)\n",
      "Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2.23.0)\n",
      "Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (3.0.5)\n",
      "Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (1.19.5)\n",
      "Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (57.4.0)\n",
      "Requirement already satisfied: pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (1.8.2)\n",
      "Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (1.0.5)\n",
      "Requirement already satisfied: typer<0.5.0,>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (0.4.0)\n",
      "Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2.0.5)\n",
      "Requirement already satisfied: typing-extensions<4.0.0.0,>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (3.7.4.3)\n",
      "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from catalogue<2.1.0,>=2.0.6->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (3.6.0)\n",
      "Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2.4.7)\n",
      "Requirement already satisfied: smart-open<6.0.0,>=5.0.0 in /usr/local/lib/python3.7/dist-packages (from pathy>=0.3.5->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (5.2.1)\n",
      "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (3.0.4)\n",
      "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (1.24.3)\n",
      "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2.10)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2021.5.30)\n",
      "Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.7/dist-packages (from typer<0.5.0,>=0.3.0->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (7.1.2)\n",
      "Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->spacy<3.3.0,>=3.2.0->en-core-web-sm==3.2.0) (2.0.1)\n",
      "Installing collected packages: en-core-web-sm\n",
      "  Attempting uninstall: en-core-web-sm\n",
      "    Found existing installation: en-core-web-sm 2.2.5\n",
      "    Uninstalling en-core-web-sm-2.2.5:\n",
      "      Successfully uninstalled en-core-web-sm-2.2.5\n",
      "Successfully installed en-core-web-sm-3.2.0\n",
      "\u001b[38;5;2m✔ Download and installation successful\u001b[0m\n",
      "You can now load the package via spacy.load('en_core_web_sm')\n"
     ]
    }
   ],
   "source": [
    "!pip install -U spacy\n",
    "!python -m spacy download en_core_web_sm\n",
    "\n",
    "import spacy\n",
    "nlp = spacy.load('en_core_web_sm')\n",
    "# Here en_core means core english,web_sm means small language,We import core english language here.\n",
    "# This is spacy's internal english language"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "kMNC9Bg5em1H",
    "outputId": "5e20c9e6-e9ff-41b4-9535-20ba7a5713fd"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\"I'll always be there with you forever in your mind!\"\n"
     ]
    }
   ],
   "source": [
    "string = '\"I\\'ll always be there with you forever in your mind!\"'\n",
    "print(string)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "7lBckJcXe0qH",
    "outputId": "058663e8-0bdc-4a7b-86d1-bc4f99ab795f"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\" | I | 'll | always | be | there | with | you | forever | in | your | mind | ! | \" | "
     ]
    }
   ],
   "source": [
    "# Here we will break the string into token and print in text\n",
    "\n",
    "doc = nlp(string)\n",
    "for token in doc:\n",
    "    print(token.text, end=' | ')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "ZkWH-w-BKlXk",
    "outputId": "8276135f-a56e-4298-b9c1-ee384756a8f5"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"I'll always be there with you forever in your mind!\""
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "doc"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "sV4C5FwUe9AW",
    "outputId": "6dbb350a-4e5a-4843-8726-9dd098c69d1a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "I\n",
      "'m\n",
      "always\n",
      "here\n",
      "to\n",
      "help\n",
      "you\n",
      "all\n",
      "!\n",
      "Email:milaanparmar9@gmail.com\n",
      "or\n",
      "visit\n",
      "more\n",
      "at\n",
      "https://github.com/milaan9\n",
      "!\n"
     ]
    }
   ],
   "source": [
    "# Here we break the string into unicode token\n",
    "\n",
    "doc2 = nlp(u\"I'm always here to help you all! Email:milaanparmar9@gmail.com or visit more at https://github.com/milaan9!\")\n",
    "for t in doc2:\n",
    "    print(t)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "HzJHS44WfRRW",
    "outputId": "8c1953f2-912a-46cb-dcb8-58c24d3b1581"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "A\n",
      "5\n",
      "km\n",
      "NYC\n",
      "cab\n",
      "ride\n",
      "costs\n",
      "$\n",
      "10.30\n"
     ]
    }
   ],
   "source": [
    "doc3 = nlp(u'A 5km NYC cab ride costs $10.30')\n",
    "for t in doc3:\n",
    "    print(t)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "c5Sg8JJsfdBF",
    "outputId": "95dd0a33-9ac3-4eca-e167-6b7bd5b93a33"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Let\n",
      "'s\n",
      "visit\n",
      "St.\n",
      "Louis\n",
      "in\n",
      "the\n",
      "U.S.\n",
      "next\n",
      "year\n",
      ".\n"
     ]
    }
   ],
   "source": [
    "doc4 = nlp(u\"Let's visit St. Louis in the U.S. next year.\")\n",
    "for t in doc4:\n",
    "    print(t)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "aaqmKOO5MKc3"
   },
   "source": [
    "**see the number of word used in the string**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "xkCo6hyBffhl",
    "outputId": "3d43245a-4cc9-4025-b955-431d6c69a234"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "11"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(doc4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "dQq3QuY_MRtt"
   },
   "source": [
    "**see the number of vocabolary or character used in the string**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Gt_7rMkOfhmk",
    "outputId": "264316cf-4211-4ce4-af09-562b73711486"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "802"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# How many vocabolary present in token\n",
    "len(doc.vocab)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "jRUxCMlEMb95"
   },
   "source": [
    "**Print the 3rd word from the string**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "wgwP4IoZfi-E",
    "outputId": "1c09b033-3e52-4367-98b8-bb2dd08a406e"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "better"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "doc5 = nlp(u'It is better to give than to receive.')\n",
    "# Retrieve the third token:\n",
    "doc5[2]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HI9TiGloMkI2"
   },
   "source": [
    "**Print the words from 3rd word to 4th word**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "OtyHwNwxflgF",
    "outputId": "f6c130f3-f898-4c98-ca14-aff923002e79"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "better to give"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Retrieve three tokens from the middle:\n",
    "doc5[2:5]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "dQa1r-iWfnNE",
    "outputId": "b6b307d6-5fc8-4e71-ed73-125d0aec78d0"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "than to receive."
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Retrieve the last four tokens:\n",
    "doc5[-4:]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "id": "otu1NnvZfpiU"
   },
   "outputs": [],
   "source": [
    "doc6 = nlp(u'My dinner was horrible.')\n",
    "doc7 = nlp(u'Your dinner was delicious.')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "3Evq_cTRM2pO"
   },
   "source": [
    "**We can't store any word from one line to the any word of other line/sentence**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 180
    },
    "id": "6UNwsoHIfri0",
    "outputId": "d8acdb4c-66a6-45e1-a2f2-52a449644076"
   },
   "outputs": [
    {
     "ename": "TypeError",
     "evalue": "ignored",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mTypeError\u001b[0m                                 Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-18-d4fb8c39c40b>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[0;31m# Try to change \"My dinner was horrible\" to \"My dinner was delicious\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mdoc6\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdoc7\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
      "\u001b[0;31mTypeError\u001b[0m: 'spacy.tokens.doc.Doc' object does not support item assignment"
     ]
    }
   ],
   "source": [
    "# Try to change \"My dinner was horrible\" to \"My dinner was delicious\"\n",
    "doc6[3] = doc7[3]"
   ]
  },
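  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since a `Doc` is immutable, one simple work-around (a sketch) is to rebuild the text with the replacement word and run the pipeline again:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Rebuild the text from the tokens (text_with_ws keeps the original spacing),\n",
    "# swap in the replacement word, and re-parse\n",
    "words = [t.text_with_ws for t in doc6]\n",
    "words[3] = doc7[3].text_with_ws  # 'horrible' -> 'delicious'\n",
    "doc6_fixed = nlp(''.join(words))\n",
    "print(doc6_fixed)  # My dinner was delicious."
   ]
  },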
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0NeK7IKK2wJf"
   },
   "source": [
    "### Explain which type of token is this"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "FAwFVwGAftIk",
    "outputId": "9bd6593f-455b-44e9-aba0-cc84d7f8d37e"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Apple | to | build | a | Hong | Kong | factory | for | $ | 6 | million | \n",
      "----\n",
      "Apple - ORG - Companies, agencies, institutions, etc.\n",
      "Hong Kong - GPE - Countries, cities, states\n",
      "$6 million - MONEY - Monetary values, including unit\n"
     ]
    }
   ],
   "source": [
    "# We can explain which type of token is this\n",
    "doc8 = nlp(u'Apple to build a Hong Kong factory for $6 million')\n",
    "\n",
    "for token in doc8:\n",
    "    print(token.text, end=' | ')\n",
    "\n",
    "print('\\n----')\n",
    "# ents shows all entities present in all tokens\n",
    "for ent in doc8.ents:\n",
    "    print(ent.text+' - '+ent.label_+' - '+str(spacy.explain(ent.label_)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "IoslO9CoHAC5"
   },
   "source": [
    "Here it shows \"Apple\" may be a companies,agencies,institution.Similarly like Hong Kong may be a countries,cities.....etc"
   ]
  },
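  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each entity span also records where it sits in the original string; a short sketch using the `start_char`/`end_char` offsets:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Every entity span carries its character offsets into the original text\n",
    "for ent in doc8.ents:\n",
    "    print(ent.text, '->', ent.label_, '(chars', ent.start_char, 'to', ent.end_char, ')')"
   ]
  },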
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "sbba5DeCfvm0",
    "outputId": "4507f984-8099-4e11-ea14-9869bbb36d02"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(doc8.ents)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "93CItEYtfx3U",
    "outputId": "de022ab5-0f72-4747-9554-f0d9c001e42e"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Autonomous cars\n",
      "insurance liability\n",
      "manufacturers\n"
     ]
    }
   ],
   "source": [
    "doc9 = nlp(u\"Autonomous cars shift insurance liability toward manufacturers.\")\n",
    "# noun_chunks finds all noun present in all tokens\n",
    "for chunk in doc9.noun_chunks:\n",
    "    print(chunk.text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "_I8g6wx0fz50",
    "outputId": "ccee1ad2-1b57-4915-c00b-21c99ddb3ace"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Red cars\n",
      "higher insurance rates\n"
     ]
    }
   ],
   "source": [
    "doc10 = nlp(u\"Red cars do not carry higher insurance rates.\")\n",
    "\n",
    "for chunk in doc10.noun_chunks:\n",
    "    print(chunk.text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "tQCHFOLIf1TD",
    "outputId": "a9a8beba-02ad-4ef0-d73b-fe4f66c1265e"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "He\n",
      "a one-eyed, one-horned, flying, purple people-eater\n"
     ]
    }
   ],
   "source": [
    "doc11 = nlp(u\"He was a one-eyed, one-horned, flying, purple people-eater.\")\n",
    "\n",
    "for chunk in doc11.noun_chunks:\n",
    "    print(chunk.text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 398
    },
    "id": "YOFRWbWkf2jj",
    "outputId": "0ed4f197-52d6-49fd-9311-768efc9e0167"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<span class=\"tex2jax_ignore\"><svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xml:lang=\"en\" id=\"5f0d4b75e08f4c98a278f20b7e57e478-0\" class=\"displacy\" width=\"1370\" height=\"357.0\" direction=\"ltr\" style=\"max-width: none; height: 357.0px; color: #000000; background: #ffffff; font-family: Arial; direction: ltr\">\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"50\">Apple</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"50\">PROPN</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"160\">is</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"160\">AUX</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"270\">going</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"270\">VERB</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"380\">to</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"380\">PART</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"490\">build</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"490\">VERB</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"600\">a</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"600\">DET</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"710\">U.K.</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"710\">PROPN</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"820\">factory</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"820\">NOUN</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"930\">for</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"930\">ADP</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1040\">$</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1040\">SYM</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1150\">6</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1150\">NUM</tspan>\n",
       "</text>\n",
       "\n",
       "<text class=\"displacy-token\" fill=\"currentColor\" text-anchor=\"middle\" y=\"267.0\">\n",
       "    <tspan class=\"displacy-word\" fill=\"currentColor\" x=\"1260\">million.</tspan>\n",
       "    <tspan class=\"displacy-tag\" dy=\"2em\" fill=\"currentColor\" x=\"1260\">NUM</tspan>\n",
       "</text>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-0\" stroke-width=\"2px\" d=\"M70,222.0 C70,112.0 260.0,112.0 260.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-0\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">nsubj</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M70,224.0 L62,212.0 78,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-1\" stroke-width=\"2px\" d=\"M180,222.0 C180,167.0 255.0,167.0 255.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-1\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">aux</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M180,224.0 L172,212.0 188,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-2\" stroke-width=\"2px\" d=\"M400,222.0 C400,167.0 475.0,167.0 475.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-2\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">aux</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M400,224.0 L392,212.0 408,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-3\" stroke-width=\"2px\" d=\"M290,222.0 C290,112.0 480.0,112.0 480.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-3\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">xcomp</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M480.0,224.0 L488.0,212.0 472.0,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-4\" stroke-width=\"2px\" d=\"M620,222.0 C620,112.0 810.0,112.0 810.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-4\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">det</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M620,224.0 L612,212.0 628,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-5\" stroke-width=\"2px\" d=\"M730,222.0 C730,167.0 805.0,167.0 805.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-5\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">compound</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M730,224.0 L722,212.0 738,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-6\" stroke-width=\"2px\" d=\"M510,222.0 C510,57.0 815.0,57.0 815.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-6\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">dobj</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M815.0,224.0 L823.0,212.0 807.0,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-7\" stroke-width=\"2px\" d=\"M510,222.0 C510,2.0 930.0,2.0 930.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-7\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">prep</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M930.0,224.0 L938.0,212.0 922.0,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-8\" stroke-width=\"2px\" d=\"M1060,222.0 C1060,112.0 1250.0,112.0 1250.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-8\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">quantmod</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M1060,224.0 L1052,212.0 1068,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-9\" stroke-width=\"2px\" d=\"M1170,222.0 C1170,167.0 1245.0,167.0 1245.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-9\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">compound</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M1170,224.0 L1162,212.0 1178,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "\n",
       "<g class=\"displacy-arrow\">\n",
       "    <path class=\"displacy-arc\" id=\"arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-10\" stroke-width=\"2px\" d=\"M950,222.0 C950,57.0 1255.0,57.0 1255.0,222.0\" fill=\"none\" stroke=\"currentColor\"/>\n",
       "    <text dy=\"1.25em\" style=\"font-size: 0.8em; letter-spacing: 1px\">\n",
       "        <textPath xlink:href=\"#arrow-5f0d4b75e08f4c98a278f20b7e57e478-0-10\" class=\"displacy-label\" startOffset=\"50%\" side=\"left\" fill=\"currentColor\" text-anchor=\"middle\">pobj</textPath>\n",
       "    </text>\n",
       "    <path class=\"displacy-arrowhead\" d=\"M1255.0,224.0 L1263.0,212.0 1247.0,212.0\" fill=\"currentColor\"/>\n",
       "</g>\n",
       "</svg></span>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Here we can see how \"displacy render\" shows which type of token are these\n",
    "# and relationships between them \n",
    "from spacy import displacy\n",
    "\n",
    "doc = nlp(u'Apple is going to build a U.K. factory for $6 million.')\n",
    "displacy.render(doc, style='dep', jupyter=True, options={'distance': 110})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 52
    },
    "id": "31zs1BD1f4oj",
    "outputId": "c8fc1643-5286-4a23-d4f4-88f333290912"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<span class=\"tex2jax_ignore\"><div class=\"entities\" style=\"line-height: 2.5; direction: ltr\">Over \n",
       "<mark class=\"entity\" style=\"background: #bfe1d9; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;\">\n",
       "    the last quarter\n",
       "    <span style=\"font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem\">DATE</span>\n",
       "</mark>\n",
       " \n",
       "<mark class=\"entity\" style=\"background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;\">\n",
       "    Apple\n",
       "    <span style=\"font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem\">ORG</span>\n",
       "</mark>\n",
       " sold \n",
       "<mark class=\"entity\" style=\"background: #e4e7d2; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;\">\n",
       "    nearly 20 thousand\n",
       "    <span style=\"font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem\">CARDINAL</span>\n",
       "</mark>\n",
       " \n",
       "<mark class=\"entity\" style=\"background: #bfeeb7; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;\">\n",
       "    iPods\n",
       "    <span style=\"font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem\">PRODUCT</span>\n",
       "</mark>\n",
       " for a profit of \n",
       "<mark class=\"entity\" style=\"background: #e4e7d2; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;\">\n",
       "    $6 million\n",
       "    <span style=\"font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem\">MONEY</span>\n",
       "</mark>\n",
       ".</div></span>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Showing types of token and there relationships in a line\n",
    "doc = nlp(u'Over the last quarter Apple sold nearly 20 thousand iPods for a profit of $6 million.')\n",
    "displacy.render(doc, style='ent', jupyter=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Cp9FS2jif6Mz",
    "outputId": "4c43ccd9-d089-49dd-c70b-ae148ce08365"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Using the 'dep' visualizer\n",
      "Serving on http://0.0.0.0:5000 ...\n",
      "\n",
      "Shutting down server on port 5000.\n"
     ]
    }
   ],
   "source": [
    "doc = nlp(u'This is a sentence.')\n",
    "displacy.serve(doc, style='dep')\n",
    "\n",
    "# In-case ▶ runtime is too long press ■ Stop "
   ]
  },
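  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To avoid the blocking server, `displacy.render` with `jupyter=False` returns the markup as a string that can be written to a file instead (a sketch; the filename is arbitrary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from spacy import displacy\n",
    "\n",
    "doc = nlp(u'This is a sentence.')\n",
    "\n",
    "# jupyter=False makes render() return the markup instead of displaying it;\n",
    "# page=True wraps the SVG in a complete HTML page\n",
    "html = displacy.render(doc, style='dep', jupyter=False, page=True)\n",
    "with open('sentence_dep.html', 'w', encoding='utf-8') as f:\n",
    "    f.write(html)"
   ]
  },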
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nzvL0nQM1DtC"
   },
   "source": [
    "# **Tokenization - KN**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "vazxHZOeKtBo",
    "outputId": "5b226f29-24fc-4e5f-e49b-6d7635ff1841"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (3.2.5)\n",
      "Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from nltk) (1.15.0)\n"
     ]
    }
   ],
   "source": [
    "!pip install nltk"
   ]
  },
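  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal NLTK tokenization sketch. It downloads only the `punkt` tokenizer models (rather than the full `all` collection fetched below), so it runs on its own:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import nltk\n",
    "nltk.download('punkt')  # sentence/word tokenizer models\n",
    "\n",
    "from nltk.tokenize import sent_tokenize, word_tokenize\n",
    "\n",
    "text = \"I'll always be there with you. Forever in your heart!\"\n",
    "\n",
    "# Sentence tokens, then word tokens; note NLTK splits \"I'll\" into 'I' and \"'ll\"\n",
    "print(sent_tokenize(text))\n",
    "print(word_tokenize(text))"
   ]
  },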
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "G9kpihDJ1ONU",
    "outputId": "63c7e578-9fb9-4761-a425-66a42adf21a7"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[nltk_data] Downloading collection 'all'\n",
      "[nltk_data]    | \n",
      "[nltk_data]    | Downloading package abc to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/abc.zip.\n",
      "[nltk_data]    | Downloading package alpino to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/alpino.zip.\n",
      "[nltk_data]    | Downloading package biocreative_ppi to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/biocreative_ppi.zip.\n",
      "[nltk_data]    | Downloading package brown to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/brown.zip.\n",
      "[nltk_data]    | Downloading package brown_tei to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/brown_tei.zip.\n",
      "[nltk_data]    | Downloading package cess_cat to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/cess_cat.zip.\n",
      "[nltk_data]    | Downloading package cess_esp to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/cess_esp.zip.\n",
      "[nltk_data]    | Downloading package chat80 to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/chat80.zip.\n",
      "[nltk_data]    | Downloading package city_database to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/city_database.zip.\n",
      "[nltk_data]    | Downloading package cmudict to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/cmudict.zip.\n",
      "[nltk_data]    | Downloading package comparative_sentences to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/comparative_sentences.zip.\n",
      "[nltk_data]    | Downloading package comtrans to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package conll2000 to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/conll2000.zip.\n",
      "[nltk_data]    | Downloading package conll2002 to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/conll2002.zip.\n",
      "[nltk_data]    | Downloading package conll2007 to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package crubadan to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/crubadan.zip.\n",
      "[nltk_data]    | Downloading package dependency_treebank to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/dependency_treebank.zip.\n",
      "[nltk_data]    | Downloading package dolch to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/dolch.zip.\n",
      "[nltk_data]    | Downloading package europarl_raw to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/europarl_raw.zip.\n",
      "[nltk_data]    | Downloading package floresta to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/floresta.zip.\n",
      "[nltk_data]    | Downloading package framenet_v15 to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/framenet_v15.zip.\n",
      "[nltk_data]    | Downloading package framenet_v17 to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/framenet_v17.zip.\n",
      "[nltk_data]    | Downloading package gazetteers to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/gazetteers.zip.\n",
      "[nltk_data]    | Downloading package genesis to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/genesis.zip.\n",
      "[nltk_data]    | Downloading package gutenberg to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/gutenberg.zip.\n",
      "[nltk_data]    | Downloading package ieer to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/ieer.zip.\n",
      "[nltk_data]    | Downloading package inaugural to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/inaugural.zip.\n",
      "[nltk_data]    | Downloading package indian to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/indian.zip.\n",
      "[nltk_data]    | Downloading package jeita to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package kimmo to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/kimmo.zip.\n",
      "[nltk_data]    | Downloading package knbc to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package lin_thesaurus to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/lin_thesaurus.zip.\n",
      "[nltk_data]    | Downloading package mac_morpho to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/mac_morpho.zip.\n",
      "[nltk_data]    | Downloading package machado to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package masc_tagged to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package moses_sample to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping models/moses_sample.zip.\n",
      "[nltk_data]    | Downloading package movie_reviews to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/movie_reviews.zip.\n",
      "[nltk_data]    | Downloading package names to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/names.zip.\n",
      "[nltk_data]    | Downloading package nombank.1.0 to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package nps_chat to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/nps_chat.zip.\n",
      "[nltk_data]    | Downloading package omw to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/omw.zip.\n",
      "[nltk_data]    | Downloading package opinion_lexicon to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/opinion_lexicon.zip.\n",
      "[nltk_data]    | Downloading package paradigms to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/paradigms.zip.\n",
      "[nltk_data]    | Downloading package pil to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/pil.zip.\n",
      "[nltk_data]    | Downloading package pl196x to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/pl196x.zip.\n",
      "[nltk_data]    | Downloading package ppattach to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/ppattach.zip.\n",
      "[nltk_data]    | Downloading package problem_reports to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/problem_reports.zip.\n",
      "[nltk_data]    | Downloading package propbank to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package ptb to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/ptb.zip.\n",
      "[nltk_data]    | Downloading package product_reviews_1 to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/product_reviews_1.zip.\n",
      "[nltk_data]    | Downloading package product_reviews_2 to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/product_reviews_2.zip.\n",
      "[nltk_data]    | Downloading package pros_cons to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/pros_cons.zip.\n",
      "[nltk_data]    | Downloading package qc to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/qc.zip.\n",
      "[nltk_data]    | Downloading package reuters to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package rte to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/rte.zip.\n",
      "[nltk_data]    | Downloading package semcor to /root/nltk_data...\n",
      "[nltk_data]    | Downloading package senseval to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/senseval.zip.\n",
      "[nltk_data]    | Downloading package sentiwordnet to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/sentiwordnet.zip.\n",
      "[nltk_data]    | Downloading package sentence_polarity to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/sentence_polarity.zip.\n",
      "[nltk_data]    | Downloading package shakespeare to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/shakespeare.zip.\n",
      "[nltk_data]    | Downloading package sinica_treebank to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/sinica_treebank.zip.\n",
      "[nltk_data]    | Downloading package smultron to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/smultron.zip.\n",
      "[nltk_data]    | Downloading package state_union to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/state_union.zip.\n",
      "[nltk_data]    | Downloading package stopwords to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/stopwords.zip.\n",
      "[nltk_data]    | Downloading package subjectivity to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/subjectivity.zip.\n",
      "[nltk_data]    | Downloading package swadesh to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/swadesh.zip.\n",
      "[nltk_data]    | Downloading package switchboard to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/switchboard.zip.\n",
      "[nltk_data]    | Downloading package timit to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/timit.zip.\n",
      "[nltk_data]    | Downloading package toolbox to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/toolbox.zip.\n",
      "[nltk_data]    | Downloading package treebank to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/treebank.zip.\n",
      "[nltk_data]    | Downloading package twitter_samples to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/twitter_samples.zip.\n",
      "[nltk_data]    | Downloading package udhr to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/udhr.zip.\n",
      "[nltk_data]    | Downloading package udhr2 to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/udhr2.zip.\n",
      "[nltk_data]    | Downloading package unicode_samples to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/unicode_samples.zip.\n",
      "[nltk_data]    | Downloading package universal_treebanks_v20 to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    | Downloading package verbnet to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/verbnet.zip.\n",
      "[nltk_data]    | Downloading package verbnet3 to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/verbnet3.zip.\n",
      "[nltk_data]    | Downloading package webtext to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/webtext.zip.\n",
      "[nltk_data]    | Downloading package wordnet to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/wordnet.zip.\n",
      "[nltk_data]    | Downloading package wordnet31 to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/wordnet31.zip.\n",
      "[nltk_data]    | Downloading package wordnet_ic to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/wordnet_ic.zip.\n",
      "[nltk_data]    | Downloading package words to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/words.zip.\n",
      "[nltk_data]    | Downloading package ycoe to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/ycoe.zip.\n",
      "[nltk_data]    | Downloading package rslp to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping stemmers/rslp.zip.\n",
      "[nltk_data]    | Downloading package maxent_treebank_pos_tagger to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping taggers/maxent_treebank_pos_tagger.zip.\n",
      "[nltk_data]    | Downloading package universal_tagset to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping taggers/universal_tagset.zip.\n",
      "[nltk_data]    | Downloading package maxent_ne_chunker to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping chunkers/maxent_ne_chunker.zip.\n",
      "[nltk_data]    | Downloading package punkt to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping tokenizers/punkt.zip.\n",
      "[nltk_data]    | Downloading package book_grammars to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping grammars/book_grammars.zip.\n",
      "[nltk_data]    | Downloading package sample_grammars to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping grammars/sample_grammars.zip.\n",
      "[nltk_data]    | Downloading package spanish_grammars to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping grammars/spanish_grammars.zip.\n",
      "[nltk_data]    | Downloading package basque_grammars to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping grammars/basque_grammars.zip.\n",
      "[nltk_data]    | Downloading package large_grammars to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping grammars/large_grammars.zip.\n",
      "[nltk_data]    | Downloading package tagsets to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping help/tagsets.zip.\n",
      "[nltk_data]    | Downloading package snowball_data to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    | Downloading package bllip_wsj_no_aux to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping models/bllip_wsj_no_aux.zip.\n",
      "[nltk_data]    | Downloading package word2vec_sample to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping models/word2vec_sample.zip.\n",
      "[nltk_data]    | Downloading package panlex_swadesh to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    | Downloading package mte_teip5 to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/mte_teip5.zip.\n",
      "[nltk_data]    | Downloading package averaged_perceptron_tagger to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping taggers/averaged_perceptron_tagger.zip.\n",
      "[nltk_data]    | Downloading package averaged_perceptron_tagger_ru to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping\n",
      "[nltk_data]    |       taggers/averaged_perceptron_tagger_ru.zip.\n",
      "[nltk_data]    | Downloading package perluniprops to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping misc/perluniprops.zip.\n",
      "[nltk_data]    | Downloading package nonbreaking_prefixes to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping corpora/nonbreaking_prefixes.zip.\n",
      "[nltk_data]    | Downloading package vader_lexicon to\n",
      "[nltk_data]    |     /root/nltk_data...\n",
      "[nltk_data]    | Downloading package porter_test to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping stemmers/porter_test.zip.\n",
      "[nltk_data]    | Downloading package wmt15_eval to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping models/wmt15_eval.zip.\n",
      "[nltk_data]    | Downloading package mwa_ppdb to /root/nltk_data...\n",
      "[nltk_data]    |   Unzipping misc/mwa_ppdb.zip.\n",
      "[nltk_data]    | \n",
      "[nltk_data]  Done downloading collection all\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Tokenization of paragraphs/sentences\n",
    "import nltk\n",
    "# nltk.download(\"popular\") # use this to download all popular libraries in nltk\n",
    "nltk.download('all')"
   ]
  },
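  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Note:** `nltk.download('all')` fetches the entire NLTK data collection, which is large and slow to download. For the sentence and word tokenizers used below, fetching just the **Punkt** models is enough. A minimal sketch (on newer NLTK releases you may additionally need the `punkt_tab` resource):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Lighter alternative: fetch only the Punkt models, which both\n",
    "# nltk.sent_tokenize() and nltk.word_tokenize() rely on\n",
    "import nltk\n",
    "nltk.download('punkt')"
   ]
  },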
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "id": "TEzGQqAV1OnN"
   },
   "outputs": [],
   "source": [
    "paragraph = \"\"\"I have three visions for India. In 3000 years of our history, people from all over \n",
    "               the world have come and invaded us, captured our lands, conquered our minds. \n",
    "               From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,\n",
    "               the French, the Dutch, all of them came and looted us, took over what was ours. \n",
    "               Yet we have not done this to any other nation. We have not conquered anyone. \n",
    "               We have not grabbed their land, their culture, \n",
    "               their history and tried to enforce our way of life on them. \n",
    "               Why? Because we respect the freedom of others.That is why my \n",
    "               first vision is that of freedom. I believe that India got its first vision of \n",
    "               this in 1857, when we started the War of Independence. It is this freedom that\n",
    "               we must protect and nurture and build on. If we are not free, no one will respect us.\n",
    "               My second vision for India’s development. For fifty years we have been a developing nation.\n",
    "               It is time we see ourselves as a developed nation. We are among the top 5 nations of the world\n",
    "               in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling.\n",
    "               Our achievements are being globally recognised today. Yet we lack the self-confidence to\n",
    "               see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect?\n",
    "               I have a third vision. India must stand up to the world. Because I believe that unless India \n",
    "               stands up to the world, no one will respect us. Only strength respects strength. We must be \n",
    "               strong not only as a military power but also as an economic power. Both must go hand-in-hand. \n",
    "               My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of \n",
    "               space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.\n",
    "               I was lucky to have worked with all three of them closely and consider this the great opportunity of my life. \n",
    "               I see four milestones in my career\"\"\"\n",
    "               "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "id": "PT4J91Ql1Okd"
   },
   "outputs": [],
   "source": [
    "# Tokenizing sentences\n",
    "sentences = nltk.sent_tokenize(paragraph)\n",
    "\n",
    "# Tokenizing words\n",
    "words = nltk.word_tokenize(paragraph)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "vlJeJOjj1Ofg",
    "outputId": "2c62504e-e25a-4e6d-d155-ce2dc046c7fe"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['I have three visions for India.',\n",
       " 'In 3000 years of our history, people from all over \\n               the world have come and invaded us, captured our lands, conquered our minds.',\n",
       " 'From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,\\n               the French, the Dutch, all of them came and looted us, took over what was ours.',\n",
       " 'Yet we have not done this to any other nation.',\n",
       " 'We have not conquered anyone.',\n",
       " 'We have not grabbed their land, their culture, \\n               their history and tried to enforce our way of life on them.',\n",
       " 'Why?',\n",
       " 'Because we respect the freedom of others.That is why my \\n               first vision is that of freedom.',\n",
       " 'I believe that India got its first vision of \\n               this in 1857, when we started the War of Independence.',\n",
       " 'It is this freedom that\\n               we must protect and nurture and build on.',\n",
       " 'If we are not free, no one will respect us.',\n",
       " 'My second vision for India’s development.',\n",
       " 'For fifty years we have been a developing nation.',\n",
       " 'It is time we see ourselves as a developed nation.',\n",
       " 'We are among the top 5 nations of the world\\n               in terms of GDP.',\n",
       " 'We have a 10 percent growth rate in most areas.',\n",
       " 'Our poverty levels are falling.',\n",
       " 'Our achievements are being globally recognised today.',\n",
       " 'Yet we lack the self-confidence to\\n               see ourselves as a developed nation, self-reliant and self-assured.',\n",
       " 'Isn’t this incorrect?',\n",
       " 'I have a third vision.',\n",
       " 'India must stand up to the world.',\n",
       " 'Because I believe that unless India \\n               stands up to the world, no one will respect us.',\n",
       " 'Only strength respects strength.',\n",
       " 'We must be \\n               strong not only as a military power but also as an economic power.',\n",
       " 'Both must go hand-in-hand.',\n",
       " 'My good fortune was to have worked with three great minds.',\n",
       " 'Dr. Vikram Sarabhai of the Dept.',\n",
       " 'of \\n               space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.',\n",
       " 'I was lucky to have worked with all three of them closely and consider this the great opportunity of my life.',\n",
       " 'I see four milestones in my career']"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sentences"
   ]
  },
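  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Two quirks are visible in the sentence list above: **`others.That`** stays inside one sentence because the period has no space after it, and **`Dept.`** is wrongly treated as a sentence boundary while **`Dr.`** is correctly recognised as an abbreviation. The Punkt sentence tokenizer relies on spacing and a learned abbreviation list. A small illustrative check shows that restoring the missing space fixes the first problem:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# With the missing space restored, Punkt splits this into two sentences\n",
    "nltk.sent_tokenize(\"Because we respect the freedom of others. That is why my first vision is that of freedom.\")"
   ]
  },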
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "578Z4zGd1ObF",
    "outputId": "03b50f2b-c02b-4ef2-b706-4d249cfa1a84"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['I',\n",
       " 'have',\n",
       " 'three',\n",
       " 'visions',\n",
       " 'for',\n",
       " 'India',\n",
       " '.',\n",
       " 'In',\n",
       " '3000',\n",
       " 'years',\n",
       " 'of',\n",
       " 'our',\n",
       " 'history',\n",
       " ',',\n",
       " 'people',\n",
       " 'from',\n",
       " 'all',\n",
       " 'over',\n",
       " 'the',\n",
       " 'world',\n",
       " 'have',\n",
       " 'come',\n",
       " 'and',\n",
       " 'invaded',\n",
       " 'us',\n",
       " ',',\n",
       " 'captured',\n",
       " 'our',\n",
       " 'lands',\n",
       " ',',\n",
       " 'conquered',\n",
       " 'our',\n",
       " 'minds',\n",
       " '.',\n",
       " 'From',\n",
       " 'Alexander',\n",
       " 'onwards',\n",
       " ',',\n",
       " 'the',\n",
       " 'Greeks',\n",
       " ',',\n",
       " 'the',\n",
       " 'Turks',\n",
       " ',',\n",
       " 'the',\n",
       " 'Moguls',\n",
       " ',',\n",
       " 'the',\n",
       " 'Portuguese',\n",
       " ',',\n",
       " 'the',\n",
       " 'British',\n",
       " ',',\n",
       " 'the',\n",
       " 'French',\n",
       " ',',\n",
       " 'the',\n",
       " 'Dutch',\n",
       " ',',\n",
       " 'all',\n",
       " 'of',\n",
       " 'them',\n",
       " 'came',\n",
       " 'and',\n",
       " 'looted',\n",
       " 'us',\n",
       " ',',\n",
       " 'took',\n",
       " 'over',\n",
       " 'what',\n",
       " 'was',\n",
       " 'ours',\n",
       " '.',\n",
       " 'Yet',\n",
       " 'we',\n",
       " 'have',\n",
       " 'not',\n",
       " 'done',\n",
       " 'this',\n",
       " 'to',\n",
       " 'any',\n",
       " 'other',\n",
       " 'nation',\n",
       " '.',\n",
       " 'We',\n",
       " 'have',\n",
       " 'not',\n",
       " 'conquered',\n",
       " 'anyone',\n",
       " '.',\n",
       " 'We',\n",
       " 'have',\n",
       " 'not',\n",
       " 'grabbed',\n",
       " 'their',\n",
       " 'land',\n",
       " ',',\n",
       " 'their',\n",
       " 'culture',\n",
       " ',',\n",
       " 'their',\n",
       " 'history',\n",
       " 'and',\n",
       " 'tried',\n",
       " 'to',\n",
       " 'enforce',\n",
       " 'our',\n",
       " 'way',\n",
       " 'of',\n",
       " 'life',\n",
       " 'on',\n",
       " 'them',\n",
       " '.',\n",
       " 'Why',\n",
       " '?',\n",
       " 'Because',\n",
       " 'we',\n",
       " 'respect',\n",
       " 'the',\n",
       " 'freedom',\n",
       " 'of',\n",
       " 'others.That',\n",
       " 'is',\n",
       " 'why',\n",
       " 'my',\n",
       " 'first',\n",
       " 'vision',\n",
       " 'is',\n",
       " 'that',\n",
       " 'of',\n",
       " 'freedom',\n",
       " '.',\n",
       " 'I',\n",
       " 'believe',\n",
       " 'that',\n",
       " 'India',\n",
       " 'got',\n",
       " 'its',\n",
       " 'first',\n",
       " 'vision',\n",
       " 'of',\n",
       " 'this',\n",
       " 'in',\n",
       " '1857',\n",
       " ',',\n",
       " 'when',\n",
       " 'we',\n",
       " 'started',\n",
       " 'the',\n",
       " 'War',\n",
       " 'of',\n",
       " 'Independence',\n",
       " '.',\n",
       " 'It',\n",
       " 'is',\n",
       " 'this',\n",
       " 'freedom',\n",
       " 'that',\n",
       " 'we',\n",
       " 'must',\n",
       " 'protect',\n",
       " 'and',\n",
       " 'nurture',\n",
       " 'and',\n",
       " 'build',\n",
       " 'on',\n",
       " '.',\n",
       " 'If',\n",
       " 'we',\n",
       " 'are',\n",
       " 'not',\n",
       " 'free',\n",
       " ',',\n",
       " 'no',\n",
       " 'one',\n",
       " 'will',\n",
       " 'respect',\n",
       " 'us',\n",
       " '.',\n",
       " 'My',\n",
       " 'second',\n",
       " 'vision',\n",
       " 'for',\n",
       " 'India',\n",
       " '’',\n",
       " 's',\n",
       " 'development',\n",
       " '.',\n",
       " 'For',\n",
       " 'fifty',\n",
       " 'years',\n",
       " 'we',\n",
       " 'have',\n",
       " 'been',\n",
       " 'a',\n",
       " 'developing',\n",
       " 'nation',\n",
       " '.',\n",
       " 'It',\n",
       " 'is',\n",
       " 'time',\n",
       " 'we',\n",
       " 'see',\n",
       " 'ourselves',\n",
       " 'as',\n",
       " 'a',\n",
       " 'developed',\n",
       " 'nation',\n",
       " '.',\n",
       " 'We',\n",
       " 'are',\n",
       " 'among',\n",
       " 'the',\n",
       " 'top',\n",
       " '5',\n",
       " 'nations',\n",
       " 'of',\n",
       " 'the',\n",
       " 'world',\n",
       " 'in',\n",
       " 'terms',\n",
       " 'of',\n",
       " 'GDP',\n",
       " '.',\n",
       " 'We',\n",
       " 'have',\n",
       " 'a',\n",
       " '10',\n",
       " 'percent',\n",
       " 'growth',\n",
       " 'rate',\n",
       " 'in',\n",
       " 'most',\n",
       " 'areas',\n",
       " '.',\n",
       " 'Our',\n",
       " 'poverty',\n",
       " 'levels',\n",
       " 'are',\n",
       " 'falling',\n",
       " '.',\n",
       " 'Our',\n",
       " 'achievements',\n",
       " 'are',\n",
       " 'being',\n",
       " 'globally',\n",
       " 'recognised',\n",
       " 'today',\n",
       " '.',\n",
       " 'Yet',\n",
       " 'we',\n",
       " 'lack',\n",
       " 'the',\n",
       " 'self-confidence',\n",
       " 'to',\n",
       " 'see',\n",
       " 'ourselves',\n",
       " 'as',\n",
       " 'a',\n",
       " 'developed',\n",
       " 'nation',\n",
       " ',',\n",
       " 'self-reliant',\n",
       " 'and',\n",
       " 'self-assured',\n",
       " '.',\n",
       " 'Isn',\n",
       " '’',\n",
       " 't',\n",
       " 'this',\n",
       " 'incorrect',\n",
       " '?',\n",
       " 'I',\n",
       " 'have',\n",
       " 'a',\n",
       " 'third',\n",
       " 'vision',\n",
       " '.',\n",
       " 'India',\n",
       " 'must',\n",
       " 'stand',\n",
       " 'up',\n",
       " 'to',\n",
       " 'the',\n",
       " 'world',\n",
       " '.',\n",
       " 'Because',\n",
       " 'I',\n",
       " 'believe',\n",
       " 'that',\n",
       " 'unless',\n",
       " 'India',\n",
       " 'stands',\n",
       " 'up',\n",
       " 'to',\n",
       " 'the',\n",
       " 'world',\n",
       " ',',\n",
       " 'no',\n",
       " 'one',\n",
       " 'will',\n",
       " 'respect',\n",
       " 'us',\n",
       " '.',\n",
       " 'Only',\n",
       " 'strength',\n",
       " 'respects',\n",
       " 'strength',\n",
       " '.',\n",
       " 'We',\n",
       " 'must',\n",
       " 'be',\n",
       " 'strong',\n",
       " 'not',\n",
       " 'only',\n",
       " 'as',\n",
       " 'a',\n",
       " 'military',\n",
       " 'power',\n",
       " 'but',\n",
       " 'also',\n",
       " 'as',\n",
       " 'an',\n",
       " 'economic',\n",
       " 'power',\n",
       " '.',\n",
       " 'Both',\n",
       " 'must',\n",
       " 'go',\n",
       " 'hand-in-hand',\n",
       " '.',\n",
       " 'My',\n",
       " 'good',\n",
       " 'fortune',\n",
       " 'was',\n",
       " 'to',\n",
       " 'have',\n",
       " 'worked',\n",
       " 'with',\n",
       " 'three',\n",
       " 'great',\n",
       " 'minds',\n",
       " '.',\n",
       " 'Dr.',\n",
       " 'Vikram',\n",
       " 'Sarabhai',\n",
       " 'of',\n",
       " 'the',\n",
       " 'Dept',\n",
       " '.',\n",
       " 'of',\n",
       " 'space',\n",
       " ',',\n",
       " 'Professor',\n",
       " 'Satish',\n",
       " 'Dhawan',\n",
       " ',',\n",
       " 'who',\n",
       " 'succeeded',\n",
       " 'him',\n",
       " 'and',\n",
       " 'Dr.',\n",
       " 'Brahm',\n",
       " 'Prakash',\n",
       " ',',\n",
       " 'father',\n",
       " 'of',\n",
       " 'nuclear',\n",
       " 'material',\n",
       " '.',\n",
       " 'I',\n",
       " 'was',\n",
       " 'lucky',\n",
       " 'to',\n",
       " 'have',\n",
       " 'worked',\n",
       " 'with',\n",
       " 'all',\n",
       " 'three',\n",
       " 'of',\n",
       " 'them',\n",
       " 'closely',\n",
       " 'and',\n",
       " 'consider',\n",
       " 'this',\n",
       " 'the',\n",
       " 'great',\n",
       " 'opportunity',\n",
       " 'of',\n",
       " 'my',\n",
       " 'life',\n",
       " '.',\n",
       " 'I',\n",
       " 'see',\n",
       " 'four',\n",
       " 'milestones',\n",
       " 'in',\n",
       " 'my',\n",
       " 'career']"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "words"
   ]
  },
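  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the word-level output, punctuation marks such as **`.`**, **`,`** and **`?`** become tokens of their own, and the curly apostrophe in `Isn’t` splits it into `Isn`, `’`, `t`. If you only want word tokens, one reasonable option (a sketch, not the only approach) is NLTK's `RegexpTokenizer`, which discards punctuation during tokenization:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep only alphanumeric word tokens; punctuation is never emitted\n",
    "from nltk.tokenize import RegexpTokenizer\n",
    "\n",
    "word_only = RegexpTokenizer(r'\\w+')\n",
    "word_only.tokenize(\"Isn’t this incorrect? We are among the top 5 nations.\")"
   ]
  },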
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "HLJ28p011OTv"
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "1_Tokenization_NLP.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}