{ "cells": [ { "cell_type": "markdown", "id": "08cba77c", "metadata": {}, "source": [ "# Dynamic Models \n", "\n", "- **[1]** (##) Given the Markov property\n", "\\begin{equation*}\n", "p(x_n|x_{n-1},x_{n-2},\\ldots,x_1) = p(x_n|x_{n-1}) \\tag{A1}\n", "\\end{equation*}\n", "proof that, for any $n$,\n", "\\begin{align*}\n", "p(x_n,x_{n-1},&\\ldots,x_{k+1},x_{k-1},\\ldots,x_1|x_k) = \\\\\n", "&p(x_n,x_{n-1},\\ldots,x_{k+1}|x_k) \\cdot p(x_{k-1},x_{k-2},\\ldots,x_1|x_k) \\tag{A2}\\,.\n", "\\end{align*}\n", "In other words, proof that, if the Markov property A1 holds, then, given the \"present\" ($x_k$), the \"future\" $(x_n,x_{n-1},\\ldots,x_{k+1})$ is _independent_ of the \"past\" $(x_{k-1},x_{k-2},\\ldots,x_1)$.\n", "> First, we rewrite A2 as\n", "\\begin{align*}\n", "p(&x_n,x_{n-1},\\ldots,x_{k+1},x_{k-1},\\ldots,x_1|x_k) = \\frac{p(x_n,x_{n-1},\\ldots,x_1)}{p(x_k)} \\\\\n", "&= \\frac{p(x_n,x_{n-1},\\ldots,x_{k+1}|x_k,\\ldots,x_1) \\cdot p(x_k,x_{k-1},\\ldots,x_1)}{p(x_k)} \\\\\n", "&= p(x_n,x_{n-1},\\ldots,x_{k+1}|x_k,\\ldots,x_1) \\cdot p(x_{k-1},\\ldots,x_1|x_k) \\tag{A3}\n", "\\end{align*}\n", "The first term in A3 can be simplified if A1 holds to \n", "\\begin{align*}\n", "p(x_n,&x_{n-1},\\ldots,x_{k+1}|x_k,x_{k-1},\\ldots,x_1) \\\\\n", "&= p(x_n|x_{n-1},x_{n-2},\\ldots,x_1) \\cdot p(x_{n-1}|x_{n-2},x_{n-3},\\ldots,x_1) \\cdots \\\\\n", "&\\quad \\cdots p(x_{k+1}|x_{k},x_{k-2},\\ldots,x_1) \\\\\n", "&= p(x_n|x_{n-1},x_{n-2},\\ldots,x_k) \\cdot p(x_{n-1}|x_{n-2},x_{n-3},\\ldots,x_k) \\cdots \\\\\n", "&\\quad \\cdots p(x_{k+1}|x_{k}) \\\\\n", "&= p(x_n,x_{n-1},\\ldots,x_{k+1}|x_k) \\tag{A4}\n", "\\end{align*}\n", "Substitution of A4 into A3 leads to A2. QED.\n", "\n", "- **[2]** (#) \n", " (a) What's the difference between a hidden Markov model and a linear Dynamical system? \n", " > HMM has binary-valued (on-off) states, where the LDS has continuously valued states. \n", " \n", " (b) For the same number of state variables, which of these two models has a larger memory capacity, and why? \n", " > The latter holds more capacity because, eg, a 16-bit representation of a continuously-valued variable holds $2^{16}$ different states.\n", " \n", " \n", "- **[3]** (#) \n", "(a) What is the 1st-order Markov assumption? \n", "(b) Derive the joint probability distribution $p(x_{1:T},z_{0:T})$ (where $x_t$ and $z_t$ are observed and latent variables respectively) for the state-space model with transition and observation models $p(z_t|z_{t-1})$ and $p(x_t|z_t)$. \n", "(c) What is a Hidden Markov Model (HMM)? \n", "(d) What is a Linear Dynamical System (LDS)? \n", "(e) What is a Kalman Filter? \n", "(f) How does the Kalman Filter relate to the LDS? \n", "(g) Explain the popularity of Kalman filtering and HMMs? \n", "(h) How relates a HMM to a GMM? \n", "> (a) An auto-regressive model is first-order Markov if $$p(x_t|x_{t-1},x_{t-2},\\ldots,x_1) = p(x_t|x_{t-1})\\,.$$ \n", "> (b) $$p(x_{1:T},z_{0:T}) = p(z_0)\\prod_{t=1}^Tp(z_t|z_{t-1}) \\prod_{t=1}^T p(x_t|z_t)$$ \n", "> (c) A HMM is a state-space model (as described in (b)) where the latent variable $z_t$ is discretely valued. Iow, the HMM has hidden clusters. \n", "> (d) An LDS is a state-space model (also described by the eq in (b)), but now the latent variable $z_t$ is continuously valued. \n", "> (e) A Kalman filter is a recursive solution to the inference problem $p(z_t|x_t,x_{t-1},\\dots,x_1)$, based on a state estimate at the previous time step $p(z_{t-1}|x_{t-1},x_{t-2},\\dots,x_1)$ and a new observation $x_t$. Basically, it's a recursive filter that updates the optimal Bayesian estimate of the current state $z_t$ based on all past observations $x_t,x_{t-1},\\dots,x_1$. \n", "> (f) The LDS describes a (generative) *model*. The Kalman filter does not describe a model, but rather describes an *inference task* on the LDS model. \n", "> (g) The LDS and HMM models are both quite general and flexible generative probabilistic models for time series. There exists very efficient algorithms for executing the latent state inference tasks (Kalman filter for LDS and there is a similar algorithm for the HMM). That makes these models flexible and practical. Hence the popularity of these models. \n", "> (h) An HMM can be interpreted as a Gaussian-Mixture-model-over-time. \n" ] }, { "cell_type": "code", "execution_count": null, "id": "95d36191", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Julia 1.5.2", "language": "julia", "name": "julia-1.5" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.5.2" } }, "nbformat": 4, "nbformat_minor": 5 }