{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 分類のためのモデル\n",
"\n",
"前の章で、データとモデルとアルゴリズムの仕組みを紹介したときの具体例として、線形回帰を取り上げた。ここでは、分類課題を見据えたモデルを実装し、前章の内容を補完する。用語については、「__分類__」と「__識別__」を同義的に扱う。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__目次__\n",
"\n",
"- 識別機のベースクラス\n",
"- 多クラスのロジスティック回帰\n",
"\n",
"___"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## 識別機のベースクラス\n",
"\n",
"実数値を予測する回帰課題と並んで、離散的なラベルを予測する分類課題が大変重要である。その予測をするシステムのことを主に「識別機」と呼ぶ。前に用意したモデルのベースクラス`Model`を踏襲して、識別機の新しい範疇を作るため、以下のように`Classifier`を実装する。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import models\n",
"\n",
"class Classifier(models.Model):\n",
" '''\n",
" Generic classifier model, an object with methods\n",
" for both training and evaluating classifiers.\n",
" '''\n",
"\n",
" def __init__(self, data=None, name=None):\n",
" super(Classifier, self).__init__(name=name)\n",
"\n",
" # If given data, collect information about labels.\n",
" if data is not None:\n",
" self.labels = self.get_labels(data=data) # all unique labels.\n",
" self.nc = self.labels.size # number of unique labels.\n",
"\n",
"\n",
" def onehot(self, y):\n",
" '''\n",
" A function for encoding y into a one-hot vector.\n",
" Inputs:\n",
" - y is a (k,1) array, taking values in {0,1,...,nc-1}.\n",
" '''\n",
" nc = self.nc\n",
" k = y.shape[0]\n",
" C = np.zeros((k,nc), dtype=y.dtype)\n",
"\n",
" for i in range(k):\n",
" j = y[i,0] # assumes y has only one column.\n",
" C[i,j] = 1\n",
"\n",
" return C\n",
" \n",
"\n",
" def get_labels(self, data):\n",
" '''\n",
" Get all the (unique) labels that appear in the data.\n",
" '''\n",
" A = (data.y_tr is None)\n",
" B = (data.y_te is None)\n",
"\n",
" if (A and B):\n",
" raise ValueError(\"No label data provided!\")\n",
" else:\n",
" if A:\n",
" out_labels = np.unique(data.y_te)\n",
" elif B:\n",
" out_labels = np.unique(data.y_tr)\n",
" else:\n",
" out_labels = np.unique(np.concatenate((data.y_tr,\n",
" data.y_te), axis=0))\n",
" count = out_labels.size\n",
" return out_labels.reshape((count,1))\n",
"\n",
"\n",
" def classify(self, X):\n",
" '''\n",
" Must be implemented by sub-classes.\n",
" '''\n",
" raise NotImplementedError\n",
"\n",
"\n",
" def class_perf(self, y_est, y_true):\n",
" '''\n",
" Given class label estimates and true values,\n",
" compute the fraction of correct classifications\n",
" made for each label, yielding typical binary\n",
" classification performance metrics.\n",
"\n",
" Input:\n",
" y_est and y_true are (k x 1) matrices of labels.\n",
"\n",
" Output:\n",
" Returns a dictionary with two components, (1) being\n",
" the fraction of correctly classified labels, and\n",
" (2) being a dict of per-label precison/recall/F1\n",
" scores. \n",
" '''\n",
" \n",
" # First, get the classification rate.\n",
" k = y_est.size\n",
" num_correct = (y_est == y_true).sum()\n",
" frac_correct = num_correct / k\n",
" frac_incorrect = 1.0 - frac_correct\n",
"\n",
" # Then, get precision/recall for each class.\n",
" prec_rec = { i:None for i in range(self.nc) } # initialize\n",
"\n",
" for c in range(self.nc):\n",
"\n",
" idx_c = (y_true == c)\n",
" idx_notc = (idx_c == False)\n",
"\n",
" TP = (y_est[idx_c] == c).sum()\n",
" FN = idx_c.sum() - TP\n",
" FP = (y_est[idx_notc] == c).sum()\n",
" TN = idx_notc.sum() - FP\n",
"\n",
" # Precision.\n",
" if (TP == 0 and FP == 0):\n",
" prec = 0\n",
" else:\n",
" prec = TP / (TP+FP)\n",
"\n",
" # Recall.\n",
" if (TP == 0 and FN == 0):\n",
" rec = 0\n",
" else:\n",
" rec = TP / (TP+FN)\n",
"\n",
" # F1 (harmonic mean of precision and recall).\n",
" if (prec == 0 or rec == 0):\n",
" f1 = 0\n",
" else:\n",
" f1 = 2 * prec * rec / (prec + rec)\n",
"\n",
" prec_rec[c] = {\"P\": prec,\n",
" \"R\": rec,\n",
" \"F1\": f1}\n",
"\n",
" return {\"rate\": frac_incorrect,\n",
" \"PRF1\": prec_rec}\n",
"\n",
" # Need to re-implement l_tr, l_te, g_tr, and\n",
" # g_te in order to utilize C_*, the one-hot\n",
" # representation. Doing it this way lets us\n",
" # get around having to compute it every time\n",
" # we evaluate a loss/grad.\n",
" def l_tr(self, w, data, n_idx=None, lamreg=None):\n",
" if n_idx is None:\n",
" return self.l_imp(w=w, X=data.X_tr,\n",
" C=self.C_tr,\n",
" lamreg=lamreg)\n",
" else:\n",
" return self.l_imp(w=w, X=data.X_tr[n_idx,:],\n",
" C=self.C_tr[n_idx,:],\n",
" lamreg=lamreg)\n",
" \n",
" def l_te(self, w, data, n_idx=None, lamreg=None):\n",
" if n_idx is None:\n",
" return self.l_imp(w=w, X=data.X_te,\n",
" C=self.C_te,\n",
" lamreg=lamreg)\n",
" else:\n",
" return self.l_imp(w=w, X=data.X_te[n_idx,:],\n",
" C=self.C_te[n_idx,:],\n",
" lamreg=lamreg)\n",
"\n",
" def g_tr(self, w, data, n_idx=None, lamreg=None):\n",
" if n_idx is None:\n",
" return self.g_imp(w=w, X=data.X_tr,\n",
" C=self.C_tr,\n",
" lamreg=lamreg)\n",
" else:\n",
" return self.g_imp(w=w, X=data.X_tr[n_idx,:],\n",
" C=self.C_tr[n_idx,:],\n",
" lamreg=lamreg)\n",
" \n",
" def g_te(self, w, data, n_idx=None, lamreg=None):\n",
" if n_idx is None:\n",
" return self.g_imp(w=w, X=data.X_te,\n",
" C=self.C_te,\n",
" lamreg=lamreg)\n",
" else:\n",
" return self.g_imp(w=w, X=data.X_te[n_idx,:],\n",
" C=self.C_te[n_idx,:],\n",
" lamreg=lamreg)"
]
},
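{
"cell_type": "markdown",
"metadata": {},
"source": [
"動作を確かめるために、`onehot()`と同じ変換を小さな配列で再現してみる。下記はNumPyだけに依存する簡易的なスケッチであり、ラベルの値や配列の大きさは説明のためにこちらで適当に与えた仮のものである。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Five observations with labels in {0,1,2}, so nc = 3.\n",
"y = np.array([0, 2, 1, 2, 0]).reshape((5,1))\n",
"nc = 3\n",
"\n",
"# Same logic as Classifier.onehot(): one row per observation.\n",
"C = np.zeros((y.shape[0], nc), dtype=y.dtype)\n",
"for i in range(y.shape[0]):\n",
"    C[i, y[i,0]] = 1\n",
"\n",
"print(C)\n",
"print(C.sum(axis=1))  # each row sums to exactly one."
]
},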
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__練習問題__\n",
"\n",
"0. 次のメソッドの役割を説明すること。`onehot()`, `get_labels()`, `class_perf()`.\n",
"0. `class_perf`が何を返すか説明すること。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## 多クラスのロジスティック回帰\n",
"\n",
"簡明で使い勝手の良い多クラス識別モデルとして、多クラスロジスティック回帰は代表的である。要素を整理しておこう。\n",
"\n",
"- データ: 事例とラベルからなるペア$(x,y)$. このデータは$x \\in \\mathbb{R}^{d}$および$y \\in \\{0,1,\\ldots,K\\}$となっており、クラスの数は$K+1$である。\n",
"\n",
"- モデル: 事例$x$を与えられたときの各クラスの条件付き確率を、パラメータ$w_{0},w_{1},\\ldots,w_{K} \\in \\mathbb{R}^{d}$を用いながら下記の通りにモデル化する。\n",
"\n",
"\\begin{align*}\n",
"P\\{y = j | x\\} = \\frac{\\exp(w_{j}^{T}x)}{\\sum_{k=0}^{K}\\exp(w_{k}^{T}x)}.\n",
"\\end{align*}\n",
"\n",
"- 損失関数: 負の対数尤度$(-1) \\log P\\{y | x\\}$をサンプル全体に対して最小にする(詳細は後ほど)。"
]
},
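{
"cell_type": "markdown",
"metadata": {},
"source": [
"上のモデル式をそのまま小さな数値例で計算してみる。重み$w_{0},\\ldots,w_{K}$と入力$x$は乱数で与えた仮のものである。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"d, K = 3, 2  # d features; K+1 = 3 classes.\n",
"rng = np.random.default_rng(1)\n",
"W = rng.normal(size=(K+1, d))  # hypothetical weights, one row per class.\n",
"x = rng.normal(size=(d,))\n",
"\n",
"# P(y=j|x) = exp(w_j.x) / sum_k exp(w_k.x), for j = 0,...,K.\n",
"scores = W.dot(x)\n",
"probs = np.exp(scores) / np.exp(scores).sum()\n",
"print(probs)\n",
"print(probs.sum())  # sums to one by construction."
]
},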
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我々の目的はあくまで実装することであるから、計算の手助けとなるような表記を導入する。特に、損失関数を求めやすい形で表わすと便利である。まず、独立に得られた$n$個の標本$(x_{1},y_{1}), \\ldots, (x_{n},y_{n})$を基に、各$i=1,\\ldots,n$と$j=0,\\ldots,K$に対して、以下のように定義する。\n",
"\n",
"\\begin{align*}\n",
"p_{ij} & = P\\{y_{i} = j | x_{i}\\}\\\\\n",
"c_{ij} & = I\\{y_{i} = j\\}.\n",
"\\end{align*}\n",
"\n",
"インデックス$j$を追って、すべての要素をベクトルに並べ、以下のように記す。\n",
"\n",
"\\begin{align*}\n",
"p_{i} & = (p_{i0},\\ldots,p_{iK})\\\\\n",
"c_{i} & = (c_{i0},\\ldots,c_{iK}).\n",
"\\end{align*}\n",
"\n",
"この$c_{i}$は元のラベル$y_{i}$のone-hotベクトル表現で、使い勝手が良いものである。\n",
"\n",
"この表記にしたがって、$i$番目の事例のクラス$j$である確率は$p_{ij}^{c_{ij}}$と書ける。事例ごとの損失関数を定義するならば、以下のごとく負の対数尤度を使うことが自然である。\n",
"\n",
"\\begin{align*}\n",
"L(w;x_{i},y_{i}) & = (-1)\\log \\prod_{j=0}^{K} p_{ij}^{c_{ij}}\\\\\n",
"& = (-1)\\sum_{j=0}^{K} c_{ij} \\log p_{ij}\\\\\n",
"& = (-1)\\sum_{j=0}^{K} c_{ij}\\left(w_{j}^{T} x_{i} - \\log\\left( \\sum_{k=0}^{K}\\exp(w_{k}^{T}x_{i}) \\right) \\right)\\\\\n",
"& = \\log\\left( \\sum_{k=0}^{K}\\exp(w_{k}^{T}x_{i}) \\right) - \\sum_{j=0}^{K} c_{ij}w_{j}^{T} x_{i}.\n",
"\\end{align*}\n",
"\n",
"これまでと同様に、ERM学習則(経験期待損失最小化)に従い、このロスのサンプル平均を目的関数とする(但し、$n$をかけている)。\n",
"\n",
"\\begin{align*}\n",
"\\min_{w} \\sum_{i=1}^{n} L(w;x_{i},y_{i}) \\to \\hat{w}.\n",
"\\end{align*}\n",
"\n",
"$w$の関数として、この目的関数は微分可能であり、凸関数でもある。実際に最小化に手をつけるとき、勾配降下法はしばしば使われる。この$L(w;x,y)$の勾配ベクトルを$(w,x_{i},y_{i})$という点において、$w$について求めると、いささか詳細を省きながら下記のように書ける。\n",
"\n",
"\\begin{align*}\n",
"\\nabla L(w;x_{i},y_{i}) = (p_{i}-c_{i}) \\otimes x_{i}.\n",
"\\end{align*}\n",
"\n",
"この演算子$\\otimes$はクロネッカー積と呼ばれ、2つの行列に対して定義されている。たとえば、$U$が$a \\times b$で、$V$が$c \\times d$であるとすると、これらのクロネッカー積は\n",
"\n",
"\\begin{align*}\n",
"U \\otimes V =\n",
"\\begin{bmatrix}\n",
"u_{1,1}V & \\cdots & u_{1,b}V\\\\\n",
"\\vdots & \\ddots & \\vdots\\\\\n",
"u_{a,1}V & \\cdots & u_{a,b}V\n",
"\\end{bmatrix}\n",
"\\end{align*}\n",
"\n",
"という形を取る。言うまでもなく、各$u_{i,j1}V$は$c \\times d$の区分(ブロック)行列である。よって、全体として、この積$U \\otimes V$の寸法は$ac \\times bd$となる。今回の$\\nabla L(w;x_{i},y_{i})$については、$(p_{i}-c_{i})$の寸法が$1 \\times (K+1)$で、$x_{i}$の寸法が$1 \\times d$であるため、勾配ベクトルは$1 \\times d(K+1)$となる。\n",
"\n",
"クロネッカー積は簡単なループで自分で実装できるのだが、効率的なものはNumpyに標準搭載されているので、それを使うと何の苦労もなくロジスティック回帰の実装ができる。\n",
"\n",
"__パラメータの数について:__ 制御の対象となるパラメータを$w = (w_{0},\\ldots,w_{K}$として、$d(k+1)$を決めていくことはまったく問題ない。ただし、確率をモデル化していることから、$K$個のクラスの条件付き確率が定まれば、必然的に残りの1クラスの確率も決まる。たとえば、すでに$w_{1},\\ldots,w_{K}$の候補が定まっているのであれば、当然$p_{i1},\\ldots,p_{iK}$も定まるので、残っている0番目のクラスの条件付き確率は自ずと$p_{i0} = 1 - \\sum_{j=1}^{K}p_{ij}$で決まってくる。そう考えると、最初から$w_{0}=(0,\\ldots,0)$と固定して、その他の$dK$個のパラメータだけ決めれば良い、ということになる。`Classifier`を踏襲する`LogisticReg`を以下の通りに実装しているが、パラメータの数は$dK$である。"
]
},
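{
"cell_type": "markdown",
"metadata": {},
"source": [
"実装に移る前に、勾配の式に現れるクロネッカー積の寸法を、`np.kron`を使った小さな数値例で確かめておく。下記の値は説明のためにこちらで適当に与えた仮のものである。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"K, d = 2, 3\n",
"pc = np.array([0.2, -0.5, 0.3])  # stand-in for (p_i - c_i), length K+1.\n",
"x = np.array([1.0, 2.0, 3.0])    # stand-in for x_i, length d.\n",
"\n",
"# Blocks pc[0]*x, pc[1]*x, pc[2]*x, concatenated in order.\n",
"g = np.kron(pc, x)\n",
"print(g)\n",
"print(g.size)  # d*(K+1) = 9, matching the shape claim above."
]
},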
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class LogisticReg(Classifier):\n",
" '''\n",
" Multi-class logistic regression model.\n",
" '''\n",
"\n",
" def __init__(self, data=None):\n",
" \n",
" # Given data info, load up the (X,y) data.\n",
" super(LogisticReg, self).__init__(data=data)\n",
" \n",
" # Convert original labels to a one-hot binary representation.\n",
" if data.y_tr is not None:\n",
" self.C_tr = self.onehot(y=data.y_tr)\n",
" if data.y_te is not None:\n",
" self.C_te = self.onehot(y=data.y_te)\n",
" \n",
" \n",
" def classify(self, w, X):\n",
" '''\n",
" Given learned weights (w) and a matrix of one or\n",
" more observations, classify them as {0,...,nc-1}.\n",
"\n",
" Input:\n",
" w is a (d x 1) matrix of weights.\n",
" X is a (k x numfeat) matrix of k observations.\n",
" NOTE: k can be anything, the training/test sample size.\n",
"\n",
" Output:\n",
" A vector of length k, housing labels in {0,...,nc-1}.\n",
" '''\n",
" \n",
" k, numfeat = X.shape\n",
" A = np.zeros((self.nc,k), dtype=np.float32)\n",
" \n",
" # Get activations, with last row as zeros.\n",
" A[:-1,:] = w.reshape((self.nc-1, numfeat)).dot(X.T)\n",
" \n",
" # Now convert activations to conditional probabilities.\n",
" maxes = np.max(A, axis=0) # largest score for each obs.\n",
" A = A - maxes\n",
" A = np.exp(A)\n",
" A = A / A.sum(axis=0) # (nc x k).\n",
" \n",
" # Assign classes with highest probability, (k x 1) array.\n",
" return A.argmax(axis=0).reshape((k,1))\n",
"\n",
"\n",
" def l_imp(self, w, X, C, lamreg=None):\n",
" '''\n",
" Implementation of the multi-class logistic regression\n",
" loss function.\n",
"\n",
" Input:\n",
" w is a (d x 1) matrix of weights.\n",
" X is a (k x numfeat) matrix of k observations.\n",
" C is a (k x nc) matrix giving a binarized encoding of the\n",
" class labels for each observation; each row a one-hot vector.\n",
" lam is a non-negative regularization parameter.\n",
" NOTE: k can be anything, the training/test sample size.\n",
"\n",
" Output:\n",
" A vector of length k with losses evaluated at k points.\n",
" '''\n",
" \n",
" k, numfeat = X.shape\n",
" A = np.zeros((self.nc,k), dtype=np.float64)\n",
" \n",
" # Get activations, with last row as zeros.\n",
" A[:-1,:] = w.reshape((self.nc-1, numfeat)).dot(X.T)\n",
" \n",
" # Raw activations of all the correct weights.\n",
" cvec = (A*C.T).sum(axis=0)\n",
" \n",
" # Compute the negative log-likelihoods.\n",
" maxes = np.max(A, axis=0)\n",
" err = (np.log(np.exp(A-maxes).sum(axis=0))+maxes)-cvec\n",
"\n",
" # Return the losses (all data points), with penalty if needed.\n",
" if lamreg is None:\n",
" return err\n",
" else:\n",
" penalty = lamreg * np.linalg.norm(W)**2\n",
" return err + penalty\n",
" \n",
" \n",
" def l_tr(self, w, data, n_idx=None, lamreg=None):\n",
" if n_idx is None:\n",
" return self.l_imp(w=w, X=data.X_tr,\n",
" C=self.C_tr,\n",
" lamreg=lamreg)\n",
" else:\n",
" return self.l_imp(w=w, X=data.X_tr[n_idx,:],\n",
" C=self.C_tr[n_idx,:],\n",
" lamreg=lamreg)\n",
" \n",
" def l_te(self, w, data, n_idx=None, lamreg=None):\n",
" if n_idx is None:\n",
" return self.l_imp(w=w, X=data.X_te,\n",
" C=self.C_te,\n",
" lamreg=lamreg)\n",
" else:\n",
" return self.l_imp(w=w, X=data.X_te[n_idx,:],\n",
" C=self.C_te[n_idx,:],\n",
" lamreg=lamreg)\n",
" \n",
" \n",
" def g_imp(self, w, X, C, lamreg=0):\n",
" '''\n",
" Implementation of the gradient of the loss function used in\n",
" multi-class logistic regression.\n",
"\n",
" Input:\n",
" w is a (d x 1) matrix of weights.\n",
" X is a (k x numfeat) matrix of k observations.\n",
" C is a (k x nc) matrix giving a binarized encoding of the\n",
" class labels for each observation; each row a one-hot vector.\n",
" lamreg is a non-negative regularization parameter.\n",
" NOTE: k can be anything, the training/test sample size.\n",
"\n",
" Output:\n",
" A (k x d) matrix of gradients eval'd at k points.\n",
" '''\n",
" \n",
" k, numfeat = X.shape\n",
" A = np.zeros((self.nc,k), dtype=np.float32)\n",
" \n",
" # Get activations, with last row as zeros.\n",
" A[:-1,:] = w.reshape((self.nc-1, numfeat)).dot(X.T)\n",
" \n",
" # Now convert activations to conditional probabilities.\n",
" maxes = np.max(A, axis=0) # largest score for each obs.\n",
" A = A - maxes\n",
" A = np.exp(A)\n",
" A = A / A.sum(axis=0) # (nc x k).\n",
" \n",
" # Initialize a large matrix (k x d) to house per-point grads.\n",
" G = np.zeros((k,w.size), dtype=w.dtype)\n",
"\n",
" for i in range(k):\n",
" # A very tall vector (i.e., just one \"axis\").\n",
" G[i,:] = np.kron(a=(A[:-1,i]-C[i,:-1]), b=X[i,:])\n",
" # Note we carefully remove the last elements.\n",
" \n",
" if lamreg is None:\n",
" return G\n",
" else:\n",
" return G + lamreg*2*w.T\n",
" \n",
"\n",
" def g_tr(self, w, data, n_idx=None, lamreg=None):\n",
" if n_idx is None:\n",
" return self.g_imp(w=w, X=data.X_tr,\n",
" C=self.C_tr,\n",
" lamreg=lamreg)\n",
" else:\n",
" return self.g_imp(w=w, X=data.X_tr[n_idx,:],\n",
" C=self.C_tr[n_idx,:],\n",
" lamreg=lamreg)\n",
" \n",
" def g_te(self, w, data, n_idx=None, lamreg=None):\n",
" if n_idx is None:\n",
" return self.g_imp(w=w, X=data.X_te,\n",
" C=self.C_te,\n",
" lamreg=lamreg)\n",
" else:\n",
" return self.g_imp(w=w, X=data.X_te[n_idx,:],\n",
" C=self.C_te[n_idx,:],\n",
" lamreg=lamreg)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__練習問題__\n",
"\n",
"0. 上記`LogisticReg`の`classify`, `l_imp`および`g_imp`というメソッドの中身を説明すること。\n",
"\n",
"0. もしクラスが2つしかなく、$y \\in \\{0,1\\}$という状況であれば、従来のロジスティック回帰では、$d$次元ベクトル$w$を一つだけ決めるようになっている。その$w$は$x \\mapsto f(w^{T}x) = P\\{y = 1 | x\\}$の写像に使う。この$f(u) = 1/(1+\\exp(-u))$はロジスティック関数と呼ばれる。このときの勾配はかなり単純な形になる。そこで、2クラスロジスティック回帰のモデルを、上記の`np.kron`を__使わずに__、実装してみること。自作の`g_imp`と`LogisticReg`の`g_imp`の出力が数値的に一致することを自分で確かめること。\n",
"\n",
"0. なぜ`l_tr`, `l_te`, `g_tr`, `g_te`をわざわざ実装しなおしたか。\n",
"\n",
"0. `maxes`を使って、何をしているか。何のためにこの演算を行なっているか。ちなみに、任意のベクトル$\\mathbf{u}$とスカラー$a$に対して、以下の等式が成り立つことを利用している。\n",
"\n",
"\\begin{align*}\n",
"\\text{softmax}(\\mathbf{u}+a) & = \\text{softmax}(\\mathbf{u})\\\\\n",
"\\log \\left( \\sum_{j} \\exp(u_{j}) \\right) & = a + \\log \\left( \\sum_{j} \\exp(u_{j}-a) \\right)\n",
"\\end{align*}\n"
]
},
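{
"cell_type": "markdown",
"metadata": {},
"source": [
"最後の練習問題に関連して、`maxes`を引く演算の数値的な効果は、下記のような極端な入力(こちらで与えた仮の値)で簡単に確かめられる。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"u = np.array([1000.0, 1000.5])\n",
"\n",
"# Naive evaluation: exp(1000) overflows, so the result is inf.\n",
"naive = np.log(np.exp(u).sum())\n",
"\n",
"# Subtracting the max first keeps all intermediate values finite.\n",
"a = u.max()\n",
"stable = a + np.log(np.exp(u - a).sum())\n",
"print(naive)\n",
"print(stable)"
]
},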
{
"cell_type": "markdown",
"metadata": {},
"source": [
"___"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}