{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Python Tutorial 2: Hidden Markov Models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**(C) 2016-2024 by [Damir Cavar](http://damir.cavar.me/) <>**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Version:** 1.3, January 2024" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Download:** This and various other Jupyter notebooks are available from my [GitHub repo](https://github.com/dcavar/python-tutorial-notebooks)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**License:** [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CA BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Prerequisites:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install -U nltk" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install -U numpy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a tutorial about developing simple [Part-of-Speech taggers](https://en.wikipedia.org/wiki/Part-of-speech_tagging) using Python 3.x, the [NLTK](http://nltk.org/) (Bird et al., 2009), and a [Hidden Markov Model](https://en.wikipedia.org/wiki/Hidden_Markov_model) ([HMM](https://en.wikipedia.org/wiki/Hidden_Markov_model))." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This tutorial was developed as part of the course material for the course Advanced Natural Language Processing in the [Computational Linguistics Program](http://cl.indiana.edu/) of the [Department of Linguistics](http://www.indiana.edu/~lingdept/) at [Indiana University](https://www.indiana.edu/)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hidden Markov Models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given a [Hidden Markov Model](https://en.wikipedia.org/wiki/Hidden_Markov_model) ([HMM](https://en.wikipedia.org/wiki/Hidden_Markov_model)), we want to calculate the probability of a state at a certain time, given some evidence via some sequence of emissions. Let us assume the following [HMM](https://en.wikipedia.org/wiki/Hidden_Markov_model) as described in Chapter 9.2 of [Manning and Schütze](http://nlp.stanford.edu/fsnlp/) (1999):" ] }, { "cell_type": "markdown", "metadata": { "caption": "Hidden Markov Model: Crazy Soda Machine", "label": "fig:HMMCSM", "widefigure": true }, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The emission probabilities for *cola*, *iced tea*, and *lemonade* when the machine is in state *Cola Pref.* are:" ] }, { "cell_type": "markdown", "metadata": { "caption": "Hidden Markov Model: Crazy Soda Machine Cola Pref. Emissions", "label": "fig:HMMEmissionsColaPref", "widefigure": true }, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When the machine is in the *Iced Tea Pref.* state, the emission probabilities for the three different beverages are:" ] }, { "cell_type": "markdown", "metadata": { "caption": "Hidden Markov Model: Crazy Soda Machine Iced Tea Pref. Emissions", "label": "fig:HMMEmissionsIcedTeaPref", "widefigure": true }, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can describe the machine using state and emission matrices. 
{ "cell_type": "markdown", "metadata": { "caption": "Hidden Markov Model: Crazy Soda Machine State Transition Table", "label": "tbl:HMMStateTransitionTable", "widefigure": true }, "source": [ "| | Cola Pref. | Iced Tea Pref.\n", "| -: | :-: | :-:\n", "| **Cola Pref.** | 0.7 | 0.3\n", "| **Iced Tea Pref.** | 0.5 | 0.5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The emission matrix is:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "| | cola | iced tea | lemonade\n", "| -: | :-: | :-: | :-:\n", "| **Cola Pref.** | 0.6 | 0.1 | 0.3\n", "| **Iced Tea Pref.** | 0.1 | 0.7 | 0.2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can describe the probability that the soda machine starts in a particular state using an initial probability matrix. If we want to state that the machine always starts in the *Cola Pref.* state, we specify the following initial probability matrix, which assigns a probability of 0 to the state *Iced Tea Pref.* as a start state:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "| State | Prob.\n", "| -: | :-:\n", "| *Cola Pref.* | 1.0\n", "| *Iced Tea Pref.* | 0.0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can represent these matrices using Python [numpy](http://www.numpy.org) data structures. If you are using the [Anaconda release](https://www.continuum.io/downloads), the [numpy](http://www.numpy.org) module should already be part of your installation. To import the [numpy](http://www.numpy.org) module, use the following command:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The state matrix could be coded in the following way:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "stateMatrix = numpy.matrix(\"0.7 0.3; 0.5 0.5\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can inspect the *stateMatrix* content:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "matrix([[0.7, 0.3],\n", " [0.5, 0.5]])" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stateMatrix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Multiplying the state matrix with itself yields the two-step transition probabilities. For example, the probability of going from *Cola Pref.* to *Cola Pref.* in exactly two steps is $0.7 \\cdot 0.7 + 0.3 \\cdot 0.5 = 0.64$:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "matrix([[0.64, 0.36],\n", " [0.6 , 0.4 ]])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stateMatrix.dot(stateMatrix)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The emission matrix can be coded in the same way:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "emissionMatrix = numpy.matrix(\"0.6 0.1 0.3; 0.1 0.7 0.2\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can inspect the *emissionMatrix* content:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "matrix([[0.6, 0.1, 0.3],\n", " [0.1, 0.7, 0.2]])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "emissionMatrix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally we can encode the initial probability matrix as:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "initialMatrix = numpy.matrix(\"1 0\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can inspect it using:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "matrix([[1, 0]])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "initialMatrix" ] },
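{ "cell_type": "markdown", "metadata": {}, "source": [ "Given these matrices, we can compute the distribution over the hidden states after any number of transitions by multiplying the initial probability matrix with powers of the state matrix. A minimal sketch, assuming only the matrices defined above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Distribution over the hidden states after one and after two transitions,\n", "# starting from the initial distribution (the machine starts in Cola Pref.):\n", "print(initialMatrix.dot(stateMatrix))                   # expected: [[0.7 0.3]]\n", "print(initialMatrix.dot(stateMatrix).dot(stateMatrix))  # expected: [[0.64 0.36]]" ] },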
{ "cell_type": "markdown", "metadata": {}, "source": [ "Given some observation of outputs from the crazy soda machine, we want to calculate the most likely sequence of hidden states. Assume that we observe the following:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*(Figure: Observed output sequence: cola, lemonade, cola)*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are asking for the most likely sequence of states $X(t-1),\\ X(t),\\ X(t+1)$ for the given observation that the machine is delivering the products *cola lemonade cola* in that particular sequence. We can calculate the probability by assuming that we start in the *Cola Pref.* state, as expressed by the initial probability matrix. That is, the paths or transitions starting from any other start state have a $0$ probability and are thus irrelevant. For only two observations *cola lemonade* there are 4 possible paths through the [HMM](https://en.wikipedia.org/wiki/Hidden_Markov_model) above, that is: *Cola Pref.* and *Iced Tea Pref.*, *Cola Pref.* and *Cola Pref.*, *Iced Tea Pref.* and *Iced Tea Pref.*, *Iced Tea Pref.* and *Cola Pref.*. For three observations *cola lemonade cola* we have 8 paths with a probability larger than $0$ through our [HMM](https://en.wikipedia.org/wiki/Hidden_Markov_model). To compute the probability for a certain sequence to be emitted by our [HMM](https://en.wikipedia.org/wiki/Hidden_Markov_model) taking **one path**, we **multiply** the state transition probability and the output emission probability at every time point of the observation, and take the **product over all time points**. We then **sum up** these path probabilities over **all possible paths** through our [HMM](https://en.wikipedia.org/wiki/Hidden_Markov_model) to compute the overall probability of our observation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the *Cola Pref.* state we have a probability to transit to the *Cola Pref.* state of $P(Cola\\ Pref.\\ |\\ Cola\\ Pref.) = 0.7$, and the likelihood to emit a *cola* there is $P(cola\\ |\\ Cola\\ Pref.) = 0.6$. That is, for the first step we would have to multiply:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$P(Cola\\ Pref.\\ |\\ Cola\\ Pref.)\\ *\\ P(cola\\ |\\ Cola\\ Pref.) = 0.7 * 0.6 = 0.42$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now have to multiply this first step result with the probabilities of the next steps of a given path. Let us assume that our machine decides to transit to the *Iced Tea Pref.* state with $P(Iced\\ Tea\\ Pref.\\ |\\ Cola\\ Pref.) = 0.3$. The emission probability for *lemonade* would then be $P(lemonade\\ |\\ Iced\\ Tea\\ Pref.) = 0.2$. The probability for this sub-path so far, given the observation *cola lemonade*, would thus be:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$P(Cola\\ Pref.\\ |\\ Cola\\ Pref.) * P(cola\\ |\\ Cola\\ Pref.) * P(Iced\\ Tea\\ Pref.\\ |\\ Cola\\ Pref.) * P(lemonade\\ |\\ Iced\\ Tea\\ Pref.) =$$ $$0.7 * 0.6 * 0.3 * 0.2 = 0.0252$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us assume that the next step the machine takes is a step back to the $Cola\\ Pref.$ state. The likelihood of this transition is $P(Cola\\ Pref.\\ |\\ Iced\\ Tea\\ Pref.) = 0.5$. The likelihood to emit *cola* from the target state is $P(cola\\ |\\ Cola\\ Pref.) = 0.6$. The likelihood of observing the emissions (*cola, lemonade, cola*) in that particular sequence when starting in the *Cola Pref.* state and taking a path through the states (*Cola Pref., Iced Tea Pref., Cola Pref.*) is thus $0.7 * 0.6 * 0.3 * 0.2 * 0.5 * 0.6 = 0.00756$. To compute the overall probability of the observation, we would sum up this path probability with the respective probabilities of all the other possible paths." ] },
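{ "cell_type": "markdown", "metadata": {}, "source": [ "To make this sum over paths concrete, here is a minimal brute-force sketch, assuming only the numpy matrices defined above; the *states* and *drinks* tuples are introduced here just for indexing and readable output. It enumerates every state sequence for the observation *cola lemonade cola* and sums up the path probabilities:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from itertools import product\n", "\n", "states = (\"Cola Pref.\", \"Iced Tea Pref.\")\n", "drinks = (\"cola\", \"iced tea\", \"lemonade\")\n", "observation = (\"cola\", \"lemonade\", \"cola\")\n", "obs = [drinks.index(d) for d in observation]\n", "\n", "total = 0.0\n", "for start in range(len(states)):\n", "    if initialMatrix[0, start] == 0:\n", "        continue  # paths from this start state have probability 0\n", "    # for each observation the machine makes one transition and\n", "    # emits one drink from the target state of that transition\n", "    for path in product(range(len(states)), repeat=len(obs)):\n", "        p = float(initialMatrix[0, start])\n", "        previous = start\n", "        for t, s in enumerate(path):\n", "            p *= stateMatrix[previous, s] * emissionMatrix[s, obs[t]]\n", "            previous = s\n", "        total += p\n", "        print([states[s] for s in path], p)\n", "print(\"P(cola lemonade cola) =\", total)" ] },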
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Decoding" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Forward Algorithm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more detailed introductions and overviews see for example [Jurafsky and Martin](https://web.stanford.edu/~jurafsky/slp3/) (2014, draft) [Chapter 8](https://web.stanford.edu/~jurafsky/slp3/8.pdf), or [Wikipedia](https://en.wikipedia.org/wiki/Forward_algorithm)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we wanted to calculate the probability of a specific observation like *cola lemonade cola* for a given HMM, we would need to calculate the probabilities of all possible paths through the HMM and sum those up in the end. To calculate the probability over all possible paths, we would have to sum, over all state sequences, the product of the initial probability, the transition probabilities, and the emission probabilities along each sequence:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$P(\\mathbf{O}) = \\sum_{\\mathbf{S}} P(\\mathbf{O} \\cap \\mathbf{S}) =\n", " \\sum_{s_{i_1}, \\ldots, s_{i_T}} (a_{i_1 k_1} \\cdot v_{i_1}) \\cdot\n", " \\prod_{t=2}^T p_{i_{t-1} i_t} \\cdot a_{i_t k_t} $$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This equation is computationally demanding. It would require $O(2Tn^T)$ calculations. To reduce the computational effort, the Forward Algorithm makes use of a dynamic programming strategy that stores all intermediate results in a trellis or matrix: each cell of the trellis holds the probability of the observation prefix up to time $t$ ending in a particular state, so every intermediate result is computed only once and is reused by all paths that share it." ] },
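{ "cell_type": "markdown", "metadata": {}, "source": [ "The following is a minimal sketch of the Forward Algorithm for the crazy soda machine, assuming the numpy matrices defined above and the convention used in this tutorial that the machine makes one transition before each emission. The function name *forward* and the *drinks* parameter are introduced here only for illustration:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def forward(observation, init, trans, emis, drinks=(\"cola\", \"iced tea\", \"lemonade\")):\n", "    # trellis[t, s] = probability of the first t+1 emissions, ending in state s\n", "    obs = [drinks.index(d) for d in observation]\n", "    n = trans.shape[0]\n", "    trellis = numpy.zeros((len(obs), n))\n", "    # initialization: one transition from the start distribution, then emit\n", "    for s in range(n):\n", "        trellis[0, s] = sum(init[0, r] * trans[r, s] for r in range(n)) * emis[s, obs[0]]\n", "    # recursion: extend all paths by one transition and one emission,\n", "    # summing over the possible predecessor states\n", "    for t in range(1, len(obs)):\n", "        for s in range(n):\n", "            trellis[t, s] = sum(trellis[t - 1, r] * trans[r, s] for r in range(n)) * emis[s, obs[t]]\n", "    # termination: sum over the states at the last time step\n", "    return trellis, trellis[-1].sum()\n", "\n", "trellis, p = forward((\"cola\", \"lemonade\", \"cola\"), initialMatrix, stateMatrix, emissionMatrix)\n", "print(trellis)\n", "print(\"P(cola lemonade cola) =\", p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The trellis has $T \\cdot n$ cells and each cell sums over $n$ predecessor states, so this needs only on the order of $T n^2$ operations; the resulting probability should match the brute-force sum above." ] },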
{ "cell_type": "markdown", "metadata": {}, "source": [ "### The Forward-Backward Algorithm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Coming soon" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Viterbi Algorithm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Coming soon" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training an HMM tagger using NLTK" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use the Penn Treebank sample included in the NLTK data to train the HMM tagger. To import the treebank use the following code:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "from nltk.corpus import treebank" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to import the HMM module as well, using the following code:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "from nltk.tag import hmm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can instantiate an HMM-Trainer object and assign it to a *trainer* variable using:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "trainer = hmm.HiddenMarkovModelTrainer()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can recall from the Part-of-Speech tagging tutorial, the function *tagged_sents()* returns a list of sentences, each of which is a list of token-tag tuples. The following call returns the first two tagged sentences from the Penn Treebank sample:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[('Pierre', 'NNP'),\n", " ('Vinken', 'NNP'),\n", " (',', ','),\n", " ('61', 'CD'),\n", " ('years', 'NNS'),\n", " ('old', 'JJ'),\n", " (',', ','),\n", " ('will', 'MD'),\n", " ('join', 'VB'),\n", " ('the', 'DT'),\n", " ('board', 'NN'),\n", " ('as', 'IN'),\n", " ('a', 'DT'),\n", " ('nonexecutive', 'JJ'),\n", " ('director', 'NN'),\n", " ('Nov.', 'NNP'),\n", " ('29', 'CD'),\n", " ('.', '.')],\n", " [('Mr.', 'NNP'),\n", " ('Vinken', 'NNP'),\n", " ('is', 'VBZ'),\n", " ('chairman', 'NN'),\n", " ('of', 'IN'),\n", " ('Elsevier', 'NNP'),\n", " ('N.V.', 'NNP'),\n", " (',', ','),\n", " ('the', 'DT'),\n", " ('Dutch', 'NNP'),\n", " ('publishing', 'VBG'),\n", " ('group', 'NN'),\n", " ('.', '.')]]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "treebank.tagged_sents()[:2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The NLTK HMM module offers supervised and unsupervised training methods. Here we train an HMM using the supervised (Maximum Likelihood Estimate) method and the Penn Treebank sample:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "tagger = trainer.train_supervised(treebank.tagged_sents())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can print out some information about the resulting tagger:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tagger" ] },
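{ "cell_type": "markdown", "metadata": {}, "source": [ "The tagger above was trained on the complete treebank sample, so testing it on sentences from that sample would be overly optimistic. To get a rough idea of its quality, one could hold out part of the sample for evaluation. A minimal sketch, assuming a current NLTK version; the 3000-sentence split point is an arbitrary choice, and in older NLTK versions the *accuracy* method is called *evaluate*:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# hold out the tail of the treebank sample as unseen test data\n", "tagged = treebank.tagged_sents()\n", "trainData = tagged[:3000]\n", "testData = tagged[3000:]\n", "\n", "# train only on the training portion and report tagging accuracy on the rest\n", "testTagger = trainer.train_supervised(trainData)\n", "print(testTagger.accuracy(testData))" ] },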
{ "cell_type": "markdown", "metadata": {}, "source": [ "The tagger expects a list of tokens as a parameter. We can import the NLTK *word_tokenize* function to tokenize an input sentence before tagging it:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "jupyter": { "outputs_hidden": true } }, "outputs": [], "source": [ "from nltk import word_tokenize" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function *word_tokenize* takes a string as a parameter and returns a list of tokens:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['Today', 'is', 'a', 'good', 'day', '.']" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "word_tokenize(\"Today is a good day.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can tag an input sentence by providing the output of the tokenizer as the input parameter to the *tag* method of the *tagger* object:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "c:\\Users\\damir\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\nltk\\tag\\hmm.py:334: RuntimeWarning: overflow encountered in cast\n", " X[i, j] = self._transitions[si].logprob(self._states[j])\n", "c:\\Users\\damir\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\nltk\\tag\\hmm.py:336: RuntimeWarning: overflow encountered in cast\n", " O[i, k] = self._output_logprob(si, self._symbols[k])\n", "c:\\Users\\damir\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\nltk\\tag\\hmm.py:332: RuntimeWarning: overflow encountered in cast\n", " P[i] = self._priors.logprob(si)\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "c:\\Users\\damir\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\nltk\\tag\\hmm.py:364: RuntimeWarning: overflow encountered in cast\n", " O[i, k] = self._output_logprob(si, self._symbols[k])\n" ] }, { "data": { "text/plain": [ "[('My', 'PRP$'),\n", " ('funny', 'JJ'),\n", " ('dog', 'NNP'),\n", " ('can', 'NNP'),\n", " ('catch', 'NNP'),\n", " ('a', 'NNP'),\n", " ('stick', 'NNP'),\n", " ('at', 'NNP'),\n", " ('six', 'NNP'),\n", " ('feet', 'NNP'),\n", " ('in', 'NNP'),\n", " ('the', 'NNP'),\n", " ('air', 'NNP'),\n", " ('.', 'NNP')]" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tagger.tag(word_tokenize(\"My funny dog can catch a stick at six feet in the air.\"))" ] },
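{ "cell_type": "markdown", "metadata": {}, "source": [ "Note that the tagger mislabels most of this sentence: several of its words do not occur in the small treebank sample, and with the default Maximum Likelihood estimates such unseen words receive zero probability, which derails the decoding and is presumably also what the overflow warnings above point to. The *tag* method uses Viterbi decoding internally; if we are only interested in the most probable tag sequence itself, the NLTK HMM tagger also exposes it via the *best_path* method. A minimal sketch, assuming a current NLTK version:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# the most probable tag sequence for the tokenized sentence (Viterbi decoding)\n", "tagger.best_path(word_tokenize(\"Today is a good day.\"))" ] },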
{ "cell_type": "markdown", "metadata": {}, "source": [ "# References" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Bird, Steven, Ewan Klein, and Edward Loper (2009) *[Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit](http://www.nltk.org/book/)*. O'Reilly Media.\n", "\n", "Jurafsky, Daniel and James H. Martin (2014) *[Speech and Language Processing](https://web.stanford.edu/~jurafsky/slp3/)*. Online draft of September 1, 2014.\n", "\n", "Manning, Chris and Hinrich Schütze (1999) *[Foundations of Statistical Natural Language Processing](http://nlp.stanford.edu/fsnlp/)*. MIT Press, Cambridge, MA.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**(C) 2016-2024 by [Damir Cavar](http://damir.cavar.me/)**" ] } ], "metadata": { "anaconda-cloud": { "environment": null, "summary": "A tutorial on Hidden Markov Models.", "url": "https://anaconda.org/dcavar/python-tutorial-hmm" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": false, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": false, "user_envs_cfg": false }, "latex_metadata": { "affiliation": "Indiana University, Bloomington, IN", "author": "Damir Cavar", "title": "Python Tutorial: Hidden Markov Models" }, "toc": { "base_numbering": 1, "nav_menu": { "height": "177px", "width": "252px" }, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }