{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Tutorial to use explainability metrics to evaluate lime explanations via AIX360\n", "- AIX360/metrics/local_metrics.py defines following explainablity metrics: faithfulness_metric and monotonicity_metric. This notebook borrows an example from original lime tutorial to show how to invoke these AIX360 methods to evaluate the quality of the lime explanations. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['/Users/ppedemon/git.repos/AIX360/examples/metrics', '/Users/ppedemon/miniconda3/envs/aix360/lib/python37.zip', '/Users/ppedemon/miniconda3/envs/aix360/lib/python3.7', '/Users/ppedemon/miniconda3/envs/aix360/lib/python3.7/lib-dynload', '', '/Users/ppedemon/miniconda3/envs/aix360/lib/python3.7/site-packages', '/Users/ppedemon/miniconda3/envs/aix360/lib/python3.7/site-packages/IPython/extensions', '/Users/ppedemon/.ipython', '..\\\\', '..\\\\..\\\\']\n" ] } ], "source": [ "import os, sys\n", "sys.path.append(\"..\\\\\")\n", "sys.path.append(\"..\\\\..\\\\\")\n", "print(sys.path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### LIME Tabular example\n", "- Example from https://github.com/marcotcr/lime/blob/master/doc/notebooks/Tutorial%20-%20continuous%20and%20categorical%20features.ipynb" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from __future__ import print_function\n", "from IPython.display import Markdown, display\n", "from matplotlib import pyplot as plt\n", "import sklearn\n", "import sklearn.datasets\n", "import sklearn.ensemble\n", "import numpy as np\n", "import lime\n", "np.random.seed(1)\n", "\n", "\n", "from lime.lime_tabular import LimeTabularExplainer\n", "from aix360.metrics import faithfulness_metric, monotonicity_metric" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "iris = 
sklearn.datasets.load_iris()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "train, test, labels_train, labels_test = sklearn.model_selection.train_test_split(\n", "    iris.data, iris.target, train_size=0.80)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',\n", "                       max_depth=None, max_features='auto', max_leaf_nodes=None,\n", "                       min_impurity_decrease=0.0, min_impurity_split=None,\n", "                       min_samples_leaf=1, min_samples_split=2,\n", "                       min_weight_fraction_leaf=0.0, n_estimators=500,\n", "                       n_jobs=None, oob_score=False, random_state=None,\n", "                       verbose=0, warm_start=False)" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rf = sklearn.ensemble.RandomForestClassifier(n_estimators=500)\n", "rf.fit(train, labels_train)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9666666666666667" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sklearn.metrics.accuracy_score(labels_test, rf.predict(test))" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "explainer = LimeTabularExplainer(\n", "    train,\n", "    feature_names=iris.feature_names,\n", "    class_names=iris.target_names,\n", "    discretize_continuous=True)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "<class 'lime.lime_tabular.LimeTabularExplainer'>\n" ] } ], "source": [ "print(type(explainer))" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "i = np.random.randint(0, test.shape[0])\n", "exp = explainer.explain_instance(test[i], rf.predict_proba, num_features=4, top_labels=1)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { 
"text/html": [ "\n", " \n", " \n", "
\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "exp.show_in_notebook(show_table=True, show_all=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using AIX360 Explainability metrics to evaluate quality of explanations ###" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Get the local explanation and find the weights assigned to the features. Create a array of base (don't care) values for comparison. For iris dataset, we assume a base value of 0 for each atribute." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Faithfulness: 0.9850836218974064\n", "Monotonity: True\n" ] } ], "source": [ "predicted_class = rf.predict(test[i].reshape(1,-1))[0]\n", "le = exp.local_exp[predicted_class]\n", "\n", "m = exp.as_map()\n", "\n", "x = test[i]\n", "coefs = np.zeros(x.shape[0])\n", "\n", "for v in le:\n", " coefs[v[0]] = v[1]\n", "\n", "\n", "base = np.zeros(x.shape[0])\n", "\n", "\n", "print(\"Faithfulness: \", faithfulness_metric(rf, x, coefs, base))\n", "print(\"Monotonity: \", monotonicity_metric(rf, x, coefs, base))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "while the Faithfulness metric deems the explanation to be weak, it is considered to be ok using the Monotonicity metric.\n", "\n", "Lets explore further by evaluating these metrics on the entire test set." 
] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "% of test records where explanation is monotonic 0.8333333333333334\n" ] } ], "source": [ "ncases = test.shape[0]\n", "mon = np.zeros(ncases)\n", "for i in range(ncases):\n", " predicted_class = rf.predict(test[i].reshape(1,-1))[0]\n", " exp = explainer.explain_instance(test[i], rf.predict_proba, num_features=4, top_labels=1)\n", " le = exp.local_exp[predicted_class]\n", " m = exp.as_map()\n", " \n", " x = test[i]\n", " coefs = np.zeros(x.shape[0])\n", " \n", " for v in le:\n", " coefs[v[0]] = v[1]\n", "\n", " mon[i] = monotonicity_metric(rf, test[i], coefs, base)\n", "print(\"% of test records where explanation is monotonic\",np.mean(mon))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "More than 80% of the explanations are monotonic. Hence, the LIME explanations are fairly good using this measure." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Faithfulness metric mean: 0.47354600998894836\n", "Faithfulness metric std. dev.: 0.6256893306331033\n" ] } ], "source": [ "fait = np.zeros(ncases)\n", "for i in range(ncases):\n", " predicted_class = rf.predict(test[i].reshape(1,-1))[0]\n", " exp = explainer.explain_instance(test[i], rf.predict_proba, num_features=4, top_labels=1)\n", " le = exp.local_exp[predicted_class]\n", " m = exp.as_map()\n", " \n", " x = test[i]\n", " coefs = np.zeros(x.shape[0])\n", " \n", " for v in le:\n", " coefs[v[0]] = v[1]\n", " fait[i] = faithfulness_metric(rf, test[i], coefs, base)\n", "\n", "print(\"Faithfulness metric mean: \",np.mean(fait))\n", "print(\"Faithfulness metric std. dev.:\", np.std(fait))\n", "\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The value of the faithfulness metric can be between -1.0 and 1.0. 
So, a mean value of around 0.5 shows that the LIME explanations are fairly good via this metric.\n", "Moreover, the high std. deviation shows that the distribution has probably a large number of cases with high correlation. So, we look at a histogram of faithfulness metric values for all the cases." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXoAAAEICAYAAABRSj9aAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAWlklEQVR4nO3df7BkZX3n8fdHENxFdxnkBvk1DK4sLjFxtO6iib9QEQEt0V2jM7VGMLijrm7FWrcixqrgktoEd8tYZcgGJzgBo0GihjgpUBxRC82COlAD8kOcAbGYYWQGB1CiYsDv/tHnbppL99x7u/vemXl8v6q6+pznec453zn3zqfPPd19TqoKSVK7nrCnC5AkLS6DXpIaZ9BLUuMMeklqnEEvSY0z6CWpcQa99klJ7kpy8oD2FyW5fU/UJO2tDHo1paq+VlXHzzUuyQeSfGIpapL2NINemrAk++/pGqR+Br32ZSuT3JTkwSSXJXlSkpOSbJ0ZkOS9SbYl+XGS25O8PMmpwO8Db0zyUJIbu7FHJFmfZFeSLUn+c996/kWSS5Lcn+S2JL83azt3ddu6CfjHJPsnOSfJHd22b03yur7xZyX5hyQfTvJAkjuT/GbXfneSHUnOXJK9qOZ55KF92RuAU4GfAf8AnAV8Z6YzyfHAu4B/X1X3JFkB7FdVdyT5I+AZVfWmvvV9CrgZOAJ4JrAhyR1V9WXgXGAF8HTgIODKAfWsBl4F3FdVjyS5A3gR8APgt4BPJHlGVW3vxj8PuAh4KvA/uu3/PfAM4CXAZ5N8tqoeGnkPSXhEr33bR6rqnqraRS8gV87qfxQ4EDghyROr6q6qumPQipIcDbwAeG9V/ayqNtEL4Td3Q94A/FFV3V9VW4GPDKnn7qr6KUBVfbqr7xdVdRmwGTixb/z3quovq+pR4DLgaOC8qnq4qr4I/Jxe6EtjMei1L/tB3/RPgCf3d1bVFuDdwAeAHUk+leSIIes6AthVVT/ua/s+cGRf/919ff3TA9uSvDnJpu7UzAPAs4BD+4bc2zc98+Iwu+0x/yZpFAa9mlZVf11VLwSOAQr44EzXrKH3AIckeUpf23JgWze9HTiqr+/oQZubmUhyDPAX9E4dPbWqDqZ3Wigj/lOkkRn0alaS45O8LMmB9M7j/xT4Rdd9L7AiyRMAqupu4P8Cf9y9qfvrwNnAzEcw/wZ4X5JlSY6kF+C7cxC94N/Z1fIWekf00pIz6NWyA4Hzgfvoneb5FeB9Xd+nu+cfJrmhm15N7w3Xe4DLgXOr6ktd33nAVuB7wJeAzwAPD9twVd0KfAi4lt6Lyq/Re8NYWnLxxiPSwiV5B7Cqql6yp2uR5uIRvTQPSQ5P8oIkT+g+tvkeekf90l7Pz9FL83MA8FHgWOABep95/z97tCJpnjx1I0mN89SNJDVurzx1c+ihh9aKFSv2dBmStM+4/vrr76uqqUF9e2XQr1ixgo0bN+7pMiRpn5Hk+8P6PHUjSY0z6CWpcQa9JDXOoJekxhn0ktQ4g16SGjdn0Cc5OslXunte3pLkd7v2Q5JsSLK5e142ZPkzuzGbvQemJC29+
RzRPwK8p6pOAJ4PvDPJCcA5wNVVdRxwdTf/GEkOoXevzefRu4XaucNeECRJi2POoK+q7VV1Qzf9Y+A2erdXOwO4pBt2CfDaAYu/EthQVbuq6n5gA72bOUuSlsiCvhmbZAXwHOAbwGF9d7P/AXDYgEWO5LH30dzKP9+Dc/a61wBrAJYvX76QsiTtpVacc8WeLmGfctf5r1qU9c77zdgkTwY+C7y7qn7U31e9S2COdRnMqlpbVdNVNT01NfByDZKkEcwr6JM8kV7If7Kq/rZrvjfJ4V3/4cCOAYtu47E3UT6Kf77ZsiRpCcznUzcBPgbcVlV/0te1Hpj5FM2ZwOcGLH4VcEp3Q+VlwCldmyRpiczniP4FwG8DL0uyqXucTu+my69Ishk4uZsnyXSSiwCqahfwh8C3usd5XZskaYnM+WZsVX0dyJDulw8YvxF4a9/8OmDdqAVKksbjN2MlqXEGvSQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWqcQS9JjTPoJalxBr0kNc6gl6TGGfSS1DiDXpIaZ9BLUuMMeklqnEEvSY2b88YjSdYBrwZ2VNWzurbLgOO7IQcDD1TVygHL3gX8GHgUeKSqpidUtyRpnuYMeuBi4ALg4zMNVfXGmekkHwIe3M3yL62q+0YtUJI0nvncSvCaJCsG9XU3Dn8D8LLJliVJmpRxz9G/CLi3qjYP6S/gi0muT7JmzG1JkkYwn1M3u7MauHQ3/S+sqm1JfgXYkOQ7VXXNoIHdC8EagOXLl49ZliRpxshH9En2B/4DcNmwMVW1rXveAVwOnLibsWurarqqpqempkYtS5I0yzinbk4GvlNVWwd1JjkoyVNmpoFTgJvH2J4kaQRzBn2SS4FrgeOTbE1ydte1ilmnbZIckeTKbvYw4OtJbgS+CVxRVV+YXOmSpPmYz6duVg9pP2tA2z3A6d30ncCzx6xPkjQmvxkrSY0z6CWpcQa9JDXOoJekxhn0ktQ4g16SGmfQS1LjDHpJapxBL0mNM+glqXEGvSQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWqcQS9JjZvPrQTXJdmR5Oa+tg8k2ZZkU/c4fciypya5PcmWJOdMsnBJ0vzM54j+YuDUAe0frqqV3ePK2Z1J9gP+DDgNOAFYneSEcYqVJC3cnEFfVdcAu0ZY94nAlqq6s6p+DnwKOGOE9UiSxjDOOfp3JbmpO7WzbED/kcDdffNbu7aBkqxJsjHJxp07d45RliSp36hB/+fAvwFWAtuBD41bSFWtrarpqpqempoad3WSpM5IQV9V91bVo1X1C+Av6J2mmW0bcHTf/FFdmyRpCY0U9EkO75t9HXDzgGHfAo5LcmySA4BVwPpRtidJGt3+cw1IcilwEnBokq3AucBJSVYCBdwFvK0bewRwUVWdXlWPJHkXcBWwH7Cuqm5ZlH+FJGmoOYO+qlYPaP7YkLH3AKf3zV8JPO6jl5KkpeM3YyWpcQa9JDXOoJekxhn0ktQ4g16SGmfQS1LjDHpJapxBL0mNM+glqXEGvSQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWqcQS9JjTPoJalxcwZ9knVJdiS5ua/tfyf5TpKbklye5OAhy96V5NtJNiXZOMnCJUnzM58j+ouBU2e1bQCeVVW/DnwXeN9uln9pVa2squnRSpQkjWPOoK+qa4Bds9q+WFWPdLPXAUctQm2SpAmYxDn63wE+P6SvgC8muT7Jmt2tJMmaJBuTbNy5c+cEypIkwZhBn+T9wCPAJ4cMeWFVPRc4DXhnkhcPW1dVra2q6aqanpqaGqcsSVKfkYM+yVnAq4H/VFU1aExVbeuedwCXAyeOuj1J0mhGCvokpwK/B7ymqn4yZMxBSZ4yMw2cAtw8aKwkafHM5+OVlwLXAscn2ZrkbOAC4CnAhu6jkxd2Y49IcmW36GHA15PcCHwTuKKqvrAo/wpJ0lD7zzWgqlYPaP7YkLH3AKd303cCzx6rOknS2Pxmr
CQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWqcQS9JjTPoJalxBr0kNc6gl6TGGfSS1DiDXpIaZ9BLUuMMeklqnEEvSY0z6CWpcQa9JDVuXkGfZF2SHUlu7ms7JMmGJJu752VDlj2zG7M5yZmTKlySND/zPaK/GDh1Vts5wNVVdRxwdTf/GEkOAc4FngecCJw77AVBkrQ45hX0VXUNsGtW8xnAJd30JcBrByz6SmBDVe2qqvuBDTz+BUOStIjGOUd/WFVt76Z/ABw2YMyRwN1981u7tsdJsibJxiQbd+7cOUZZkqR+E3kztqoKqDHXsbaqpqtqempqahJlSZIYL+jvTXI4QPe8Y8CYbcDRffNHdW2SpCUyTtCvB2Y+RXMm8LkBY64CTkmyrHsT9pSuTZK0ROb78cpLgWuB45NsTXI2cD7wiiSbgZO7eZJMJ7kIoKp2AX8IfKt7nNe1SZKWyP7zGVRVq4d0vXzA2I3AW/vm1wHrRqpOkjQ2vxkrSY0z6CWpcQa9JDXOoJekxhn0ktQ4g16SGmfQS1LjDHpJapxBL0mNM+glqXEGvSQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWqcQS9JjRs56JMcn2RT3+NHSd49a8xJSR7sG/MH45csSVqIed1KcJCquh1YCZBkP2AbcPmAoV+rqlePuh1J0ngmderm5cAdVfX9Ca1PkjQhkwr6VcClQ/p+I8mNST6f5FeHrSDJmiQbk2zcuXPnhMqSJI0d9EkOAF4DfHpA9w3AMVX1bOBPgb8btp6qWltV01U1PTU1NW5ZkqTOJI7oTwNuqKp7Z3dU1Y+q6qFu+krgiUkOncA2JUnzNImgX82Q0zZJnpYk3fSJ3fZ+OIFtSpLmaeRP3QAkOQh4BfC2vra3A1TVhcDrgXckeQT4KbCqqmqcbUqSFmasoK+qfwSeOqvtwr7pC4ALxtmGJGk8fjNWkhpn0EtS4wx6SWqcQS9JjTPoJalxBr0kNc6gl6TGGfSS1DiDXpIaZ9BLUuMMeklqnEEvSY0z6CWpcQa9JDXOoJekxhn0ktQ4g16SGjd20Ce5K8m3k2xKsnFAf5J8JMmWJDclee6425Qkzd9YtxLs89Kqum9I32nAcd3jecCfd8+SpCWwFKduzgA+Xj3XAQcnOXwJtitJYjJH9AV8MUkBH62qtbP6jwTu7pvf2rVt7x+UZA2wBmD58uUjF7PinCtGXvaX0V3nv2pPlyBpkU3iiP6FVfVceqdo3pnkxaOspKrWVtV0VU1PTU1NoCxJEkwg6KtqW/e8A7gcOHHWkG3A0X3zR3VtkqQlMFbQJzkoyVNmpoFTgJtnDVsPvLn79M3zgQerajuSpCUx7jn6w4DLk8ys66+r6gtJ3g5QVRcCVwKnA1uAnwBvGXObkqQFGCvoq+pO4NkD2i/smy7gneNsR5I0Or8ZK0mNM+glqXEGvSQ1zqCXpMZN6lo32kf5TeKF8ZvE2hd5RC9JjTPoJalxBr0kNc6gl6TGGfSS1DiDXpIaZ9BLUuMMeklqnEEvSY0z6CWpcQa9JDXOoJekxo0c9EmOTvKVJLcmuSXJ7w4Yc1KSB5Ns6h5/MF65kqSFGufqlY8A76mqG7obhF+fZENV3Tpr3Neq6tVjbEeSNIaRj+irantV3dBN/xi4DThyUoVJkiZjIufok6wAngN8Y0D3byS5Mcnnk/zqbtaxJsnGJBt37tw5ibIkSUwg6JM8Gfgs8O6q+tGs7huAY6rq2cCfAn83bD1VtbaqpqtqempqatyyJEmdsYI+yRPphfwnq+pvZ/dX1Y+q6qFu+krgiUkOHWebkqSFGedTNwE+BtxWVX8yZMzTunEkObHb3g9H3aYkaeHG+dTNC4DfBr6dZFPX9vvAcoCquhB4PfCOJI8APwVWVVWNsU1J0gKNHPRV9XUgc4y5ALhg1G1IksbnN2MlqXEGvSQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWqcQS9JjTPoJalx41wCQfqls
+KcK/Z0CdKCeUQvSY0z6CWpcQa9JDXOoJekxhn0ktQ4g16SGmfQS1Ljxr05+KlJbk+yJck5A/oPTHJZ1/+NJCvG2Z4kaeHGuTn4fsCfAacBJwCrk5wwa9jZwP1V9Qzgw8AHR92eJGk04xzRnwhsqao7q+rnwKeAM2aNOQO4pJv+DPDyJLu9z6wkabLGuQTCkcDdffNbgecNG1NVjyR5EHgqcN/slSVZA6zpZh9KcvuIdR06aP17AetaGOtaGOtamL2yrnxwrLqOGdax11zrpqrWAmvHXU+SjVU1PYGSJsq6Fsa6Fsa6FuaXra5xTt1sA47umz+qaxs4Jsn+wL8GfjjGNiVJCzRO0H8LOC7JsUkOAFYB62eNWQ+c2U2/HvhyVdUY25QkLdDIp266c+7vAq4C9gPWVdUtSc4DNlbVeuBjwF8l2QLsovdisNjGPv2zSKxrYaxrYaxrYX6p6ooH2JLUNr8ZK0mNM+glqXH7ZNAn+a0ktyT5RZKhH0UadomG7g3kb3Ttl3VvJk+irkOSbEiyuXteNmDMS5Ns6nv8LMlru76Lk3yvr2/lUtXVjXu0b9vr+9r35P5ameTa7ud9U5I39vVNdH+Nc0mPJO/r2m9P8spx6hihrv+W5NZu/1yd5Ji+voE/0yWq66wkO/u2/9a+vjO7n/vmJGfOXnaR6/pwX03fTfJAX9+i7K8k65LsSHLzkP4k+UhX801JntvXN/6+qqp97gH8O+B44KvA9JAx+wF3AE8HDgBuBE7o+v4GWNVNXwi8Y0J1/S/gnG76HOCDc4w/hN6b1P+ym78YeP0i7K951QU8NKR9j+0v4N8Cx3XTRwDbgYMnvb929/vSN+a/ABd206uAy7rpE7rxBwLHduvZbwnremnf79A7Zura3c90ieo6C7hgwLKHAHd2z8u66WVLVdes8f+V3gdJFnt/vRh4LnDzkP7Tgc8DAZ4PfGOS+2qfPKKvqtuqaq5vzg68REOSAC+jd0kG6F2i4bUTKq3/kg/zWe/rgc9X1U8mtP1hFlrX/7en91dVfbeqNnfT9wA7gKkJbb/fOJf0OAP4VFU9XFXfA7Z061uSuqrqK32/Q9fR+07LYpvP/hrmlcCGqtpVVfcDG4BT91Bdq4FLJ7TtoarqGnoHdcOcAXy8eq4DDk5yOBPaV/tk0M/ToEs0HEnvEgwPVNUjs9on4bCq2t5N/wA4bI7xq3j8L9n/7P50+3CSA5e4ricl2ZjkupnTSexF+yvJifSO0u7oa57U/hr2+zJwTLc/Zi7pMZ9lF7OufmfTOzKcMehnupR1/cfu5/OZJDNfsNwr9ld3iutY4Mt9zYu1v+YyrO6J7Ku95hIIsyX5EvC0AV3vr6rPLXU9M3ZXV/9MVVWSoZ9d7V6tf43e9xBmvI9e4B1A7/O07wXOW8K6jqmqbUmeDnw5ybfphdnIJry//go4s6p+0TWPvL9alORNwDTwkr7mx/1Mq+qOwWuYuL8HLq2qh5O8jd5fQy9bom3PxyrgM1X1aF/bntxfi2avDfqqOnnMVQy7RMMP6f1ZtH93VDbo0g0j1ZXk3iSHV9X2Lph27GZVbwAur6p/6lv3zNHtw0n+EvjvS1lXVW3rnu9M8lXgOcBn2cP7K8m/Aq6g9yJ/Xd+6R95fAyzkkh5b89hLesxn2cWsiyQn03vxfElVPTzTPuRnOongmrOuquq/3MlF9N6TmVn2pFnLfnUCNc2rrj6rgHf2Nyzi/prLsLonsq9aPnUz8BIN1XuH4yv0zo9D7xINk/oLof+SD3Ot93HnBruwmzkv/lpg4Dv0i1FXkmUzpz6SHAq8ALh1T++v7md3Ob3zl5+Z1TfJ/TXOJT3WA6vS+1TOscBxwDfHqGVBdSV5DvBR4DVVtaOvfeDPdAnrOrxv9jXAbd30VcApXX3LgFN47F+2i1pXV9sz6b25eW1f22Lur7msB97cffrm+cCD3YHMZPbVYrzDvNgP4HX0zlU9DNwLXNW1HwFc2TfudOC79
F6R39/X/nR6/xG3AJ8GDpxQXU8FrgY2A18CDunap4GL+satoPdK/YRZy38Z+Da9wPoE8OSlqgv4zW7bN3bPZ+8N+wt4E/BPwKa+x8rF2F+Dfl/onQp6TTf9pO7fv6XbH0/vW/b93XK3A6dN+Pd9rrq+1P0/mNk/6+f6mS5RXX8M3NJt/yvAM/uW/Z1uP24B3rKUdXXzHwDOn7Xcou0vegd127vf5a303kt5O/D2rj/0buR0R7ft6b5lx95XXgJBkhrX8qkbSRIGvSQ1z6CXpMYZ9JLUOINekhpn0EtS4wx6SWrc/wNt8mUfBHlEcwAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "plt.hist(fait, bins = [-1.0,-0.5,0,0.5,1.0]) \n", "plt.title(\"histogram\") \n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This shows that most the explanations produced by LIME are 'faithful'. Only a few of the explanations are not good using this metric" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 2 }