{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# KNOWLEDGE\n", "\n", "The [knowledge](https://github.com/aimacode/aima-python/blob/master/knowledge.py) module covers **Chapter 19: Knowledge in Learning** from Stuart Russel's and Peter Norvig's book *Artificial Intelligence: A Modern Approach*.\n", "\n", "Execute the cell below to get started." ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "from knowledge import *\n", "\n", "from notebook import pseudocode, psource" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## CONTENTS\n", "\n", "* Overview\n", "* Version-Space Learning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## OVERVIEW\n", "\n", "Like the [learning module](https://github.com/aimacode/aima-python/blob/master/learning.ipynb), this chapter focuses on methods for generating a model/hypothesis for a domain. Unlike though the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis.\n", "\n", "### First-Order Logic\n", "\n", "Usually knowledge in this field is represented as **first-order logic**, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called **goal predicate**, for new examples given a hypothesis. We learn this hypothesis by infering knowledge from some given examples.\n", "\n", "### Representation\n", "\n", "In this module, we use dictionaries to represent examples, with keys the attribute names and values the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. 
Inside these dictionaries/disjunctions we have conjunctions.\n", "\n", "For example, say we want to predict if an animal (cat or dog) will take an umbrella, given whether it is raining and whether the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example:\n", "\n", "`{'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}`\n", "\n", "A hypothesis can be the following:\n", "\n", "`[{'Species': 'Cat'}]`\n", "\n", "which means an animal will take an umbrella if and only if it is a cat.\n", "\n", "### Consistency\n", "\n", "We say that an example `e` is **consistent** with a hypothesis `h` if the value the hypothesis assigns to `e` is the same as `e['GOAL']`. If the above example and hypothesis are `e` and `h` respectively, then `e` is consistent with `h`, since `e['Species'] == 'Cat'`. For `e = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}`, the example is no longer consistent with `h`, since the value assigned to `e` is *False* while `e['GOAL']` is *True*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## VERSION-SPACE LEARNING\n", "\n", "### Overview\n", "\n", "**Version-Space Learning** is a general method of learning in logic-based domains. We generate the set of all possible hypotheses in the domain and then iteratively remove the hypotheses that are inconsistent with the examples. The set of remaining hypotheses is called the **version space**. Because hypotheses are removed until we end up with a set of hypotheses consistent with all the examples, the algorithm is sometimes called the **candidate elimination** algorithm.\n", "\n", "After we update the set on an example, all the hypotheses in the set are consistent with that example. So, when all the examples have been processed, all the remaining hypotheses in the set are consistent with all the examples. That means we can pick a hypothesis at random and always get a valid one."
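The filtering idea described above can be sketched in a few lines. A minimal, self-contained illustration (the helper names `predict` and `consistent` are hypothetical stand-ins, not the module's own functions, which are shown below):

```python
def predict(example, hypothesis):
    # A hypothesis holds for an example if any of its disjuncts matches,
    # i.e. every attribute-value pair of that disjunct agrees with the example.
    return any(all(example.get(attr) == value for attr, value in disjunct.items())
               for disjunct in hypothesis)

def consistent(example, hypothesis):
    # An example is consistent with a hypothesis if the predicted
    # value equals the example's 'GOAL' value.
    return predict(example, hypothesis) == example['GOAL']

examples = [
    {'Species': 'Cat', 'Rain': 'Yes', 'GOAL': True},
    {'Species': 'Dog', 'Rain': 'Yes', 'GOAL': False},
]

# A tiny candidate set; candidate elimination keeps only the hypotheses
# consistent with every example seen so far.
V = [[{'Species': 'Cat'}], [{'Rain': 'Yes'}], [{'Species': 'Dog'}]]
for e in examples:
    V = [h for h in V if consistent(e, h)]

print(V)  # [[{'Species': 'Cat'}]]
```

Here `[{'Rain': 'Yes'}]` survives the first example but is eliminated by the second, while `[{'Species': 'Dog'}]` is eliminated immediately; only `[{'Species': 'Cat'}]` remains consistent with both examples.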
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Pseudocode" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### AIMA3e\n", "__function__ Version-Space-Learning(_examples_) __returns__ a version space \n", " __local variables__: _V_, the version space: the set of all hypotheses \n", "\n", " _V_ ← the set of all hypotheses \n", " __for each__ example _e_ in _examples_ __do__ \n", "   __if__ _V_ is not empty __then__ _V_ ← Version-Space-Update(_V_, _e_) \n", " __return__ _V_ \n", "\n", "---\n", "__function__ Version-Space-Update(_V_, _e_) __returns__ an updated version space \n", " _V_ ← \\{_h_ ∈ _V_ : _h_ is consistent with _e_\\} \n", "\n", "---\n", "__Figure ??__ The version space learning algorithm. It finds a subset of _V_ that is consistent with all the _examples_." ], "text/plain": [ "" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pseudocode('Version-Space-Learning')" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Implementation\n", "\n", "The set of hypotheses is represented by a list and each hypothesis is represented by a list of dictionaries, each dictionary a disjunction. For each example in the given examples we update the version space with the function `version_space_update`. In the end, we return the version-space.\n", "\n", "Before we can start updating the version space, we need to generate it. We do that with the `all_hypotheses` function, which builds a list of all the possible hypotheses (including hypotheses with disjunctions). 
The function works like this: first it finds the possible values for each attribute (using `values_table`), then it builds all the attribute combinations (and adds them to the hypotheses set), and finally it builds the combinations of all the disjunctions (which in this case are the hypotheses built from the attribute combinations).\n", "\n", "You can read the code for all the functions by running the cells below:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def version_space_learning(examples):\n",
       "    """ [Figure 19.3]\n",
       "    The version space is a list of hypotheses, which in turn are a list\n",
       "    of dictionaries/disjunctions."""\n",
       "    V = all_hypotheses(examples)\n",
       "    for e in examples:\n",
       "        if V:\n",
       "            V = version_space_update(V, e)\n",
       "\n",
       "    return V\n",
       "\n",
       "\n",
       "def version_space_update(V, e):\n",
       "    return [h for h in V if is_consistent(e, h)]\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(version_space_learning, version_space_update)" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def all_hypotheses(examples):\n",
       "    """Build a list of all the possible hypotheses"""\n",
       "    values = values_table(examples)\n",
       "    h_powerset = powerset(values.keys())\n",
       "    hypotheses = []\n",
       "    for s in h_powerset:\n",
       "        hypotheses.extend(build_attr_combinations(s, values))\n",
       "\n",
       "    hypotheses.extend(build_h_combinations(hypotheses))\n",
       "\n",
       "    return hypotheses\n",
       "\n",
       "\n",
       "def values_table(examples):\n",
       "    """Build a table with all the possible values for each attribute.\n",
       "    Returns a dictionary with keys the attribute names and values a list\n",
       "    with the possible values for the corresponding attribute."""\n",
       "    values = defaultdict(lambda: [])\n",
       "    for e in examples:\n",
       "        for k, v in e.items():\n",
       "            if k == 'GOAL':\n",
       "                continue\n",
       "\n",
       "            mod = '!'\n",
       "            if e['GOAL']:\n",
       "                mod = ''\n",
       "\n",
       "            if mod + v not in values[k]:\n",
       "                values[k].append(mod + v)\n",
       "\n",
       "    values = dict(values)\n",
       "    return values\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(all_hypotheses, values_table)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def build_attr_combinations(s, values):\n",
       "    """Given a set of attributes, builds all the combinations of values.\n",
       "    If the set holds more than one attribute, recursively builds the\n",
       "    combinations."""\n",
       "    if len(s) == 1:\n",
       "        # s holds just one attribute, return its list of values\n",
       "        k = values[s[0]]\n",
       "        h = [[{s[0]: v}] for v in values[s[0]]]\n",
       "        return h\n",
       "\n",
       "    h = []\n",
       "    for i, a in enumerate(s):\n",
       "        rest = build_attr_combinations(s[i+1:], values)\n",
       "        for v in values[a]:\n",
       "            o = {a: v}\n",
       "            for r in rest:\n",
       "                t = o.copy()\n",
       "                for d in r:\n",
       "                    t.update(d)\n",
       "                h.append([t])\n",
       "\n",
       "    return h\n",
       "\n",
       "\n",
       "def build_h_combinations(hypotheses):\n",
       "    """Given a set of hypotheses, builds and returns all the combinations of the\n",
       "    hypotheses."""\n",
       "    h = []\n",
       "    h_powerset = powerset(range(len(hypotheses)))\n",
       "\n",
       "    for s in h_powerset:\n",
       "        t = []\n",
       "        for i in s:\n",
       "            t.extend(hypotheses[i])\n",
       "        h.append(t)\n",
       "\n",
       "    return h\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(build_attr_combinations, build_h_combinations)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example\n", "\n", "Since the set of all possible hypotheses is enormous and would take a long time to generate, we will come up with another, even smaller domain. We will try and predict whether we will have a party or not given the availability of pizza and soda. Let's do it:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "party = [\n", " {'Pizza': 'Yes', 'Soda': 'No', 'GOAL': True},\n", " {'Pizza': 'Yes', 'Soda': 'Yes', 'GOAL': True},\n", " {'Pizza': 'No', 'Soda': 'No', 'GOAL': False}\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Even though it is obvious that no-pizza no-party, we will run the algorithm and see what other hypotheses are valid." ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n", "True\n", "False\n" ] } ], "source": [ "V = version_space_learning(party)\n", "for e in party:\n", " guess = False\n", " for h in V:\n", " if guess_value(e, h):\n", " guess = True\n", " break\n", "\n", " print(guess)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results are correct for the given examples. Let's take a look at the version space:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "959\n", "[{'Pizza': 'Yes'}, {'Soda': 'Yes'}]\n", "[{'Pizza': 'Yes'}, {'Pizza': '!No', 'Soda': 'No'}]\n", "True\n" ] } ], "source": [ "print(len(V))\n", "\n", "print(V[5])\n", "print(V[10])\n", "\n", "print([{'Pizza': 'Yes'}] in V)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are almost 1000 hypotheses in the set. 
You can see that even with just two attributes the version space is very large.\n", "\n", "Our initial prediction is indeed in the set of hypotheses. Also, the two other random hypotheses we got are consistent with the examples (since they both include the \"Pizza is available\" disjunction)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Minimal Consistent Determination" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This algorithm is based on a straightforward attempt to find the simplest determination consistent with the observations. A determination P > Q says that if two examples match on P, then they must also match on Q. A determination is therefore consistent with a set of examples if every pair that matches on the predicates on the left-hand side also matches on the goal predicate." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Pseudocode" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the pseudocode for this algorithm:" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### AIMA3e\n", "__function__ Minimal-Consistent-Det(_E_, _A_) __returns__ a set of attributes \n", " __inputs__: _E_, a set of examples \n", "     _A_, a set of attributes, of size _n_ \n", "\n", " __for__ _i_ = 0 __to__ _n_ __do__ \n", "   __for each__ subset _Ai_ of _A_ of size _i_ __do__ \n", "     __if__ Consistent-Det?(_Ai_, _E_) __then return__ _Ai_ \n", "\n", "---\n", "__function__ Consistent-Det?(_A_, _E_) __returns__ a truth value \n", " __inputs__: _A_, a set of attributes \n", "     _E_, a set of examples \n", " __local variables__: _H_, a hash table \n", "\n", " __for each__ example _e_ __in__ _E_ __do__ \n", "   __if__ some example in _H_ has the same values as _e_ for the attributes _A_ \n", "    but a different classification __then return__ _false_ \n", "   store the class of _e_ in _H_, indexed by the values for attributes _A_ of the 
example _e_ \n", " __return__ _true_ \n", "\n", "---\n", "__Figure ??__ An algorithm for finding a minimal consistent determination." ], "text/plain": [ "" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pseudocode('Minimal-Consistent-Det')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can read the code for the above algorithm by running the cells below:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def minimal_consistent_det(E, A):\n",
       "    """Return a minimal set of attributes which give consistent determination"""\n",
       "    n = len(A)\n",
       "\n",
       "    for i in range(n + 1):\n",
       "        for A_i in combinations(A, i):\n",
       "            if consistent_det(A_i, E):\n",
       "                return set(A_i)\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(minimal_consistent_det)" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def consistent_det(A, E):\n",
       "    """Check if the attributes(A) is consistent with the examples(E)"""\n",
       "    H = {}\n",
       "\n",
       "    for e in E:\n",
       "        attr_values = tuple(e[attr] for attr in A)\n",
       "        if attr_values in H and H[attr_values] != e['GOAL']:\n",
       "            return False\n",
       "        H[attr_values] = e['GOAL']\n",
       "\n",
       "    return True\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(consistent_det)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We already know that no-pizza-no-party but we will still check it through the `minimal_consistent_det` algorithm." ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'Pizza'}\n" ] } ], "source": [ "print(minimal_consistent_det(party, {'Pizza', 'Soda'}))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also check it on some other example. Let's consider the following example :" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "conductance = [\n", " {'Sample': 'S1', 'Mass': 12, 'Temp': 26, 'Material': 'Cu', 'Size': 3, 'GOAL': 0.59},\n", " {'Sample': 'S1', 'Mass': 12, 'Temp': 100, 'Material': 'Cu', 'Size': 3, 'GOAL': 0.57},\n", " {'Sample': 'S2', 'Mass': 24, 'Temp': 26, 'Material': 'Cu', 'Size': 6, 'GOAL': 0.59},\n", " {'Sample': 'S3', 'Mass': 12, 'Temp': 26, 'Material': 'Pb', 'Size': 2, 'GOAL': 0.05},\n", " {'Sample': 'S3', 'Mass': 12, 'Temp': 100, 'Material': 'Pb', 'Size': 2, 'GOAL': 0.04},\n", " {'Sample': 'S4', 'Mass': 18, 'Temp': 100, 'Material': 'Pb', 'Size': 3, 'GOAL': 0.04},\n", " {'Sample': 'S4', 'Mass': 18, 'Temp': 100, 'Material': 'Pb', 'Size': 3, 'GOAL': 0.04},\n", " {'Sample': 'S5', 'Mass': 24, 'Temp': 100, 'Material': 'Pb', 'Size': 4, 'GOAL': 0.04},\n", " {'Sample': 'S6', 'Mass': 36, 'Temp': 26, 'Material': 'Pb', 'Size': 6, 'GOAL': 0.05},\n", "]\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we check the `minimal_consistent_det` algorithm on the above example:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'Temp', 
'Material'}\n" ] } ], "source": [ "print(minimal_consistent_det(conductance, {'Mass', 'Temp', 'Material', 'Size'}))" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'Temp', 'Size', 'Mass'}\n" ] } ], "source": [ "print(minimal_consistent_det(conductance, {'Mass', 'Temp', 'Size'}))\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.3" } }, "nbformat": 4, "nbformat_minor": 2 }