{ "metadata": { "name": "" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Symbols and Symbol-like representation in neurons\n", "\n", "- We've seen how to represent vectors in neurons\n", " - And how to compute functions on those vectors\n", " - And dynamical systems\n", "- But how can we do anything like human language?\n", " - How could we represent the fact that \"the number after 8 is 9\"\n", " - Or \"dogs chase cats\"\n", " - Or \"Anne knows that Bill thinks that Charlie likes Dave\"\n", "- Does the NEF help us at all with this?\n", " - Or is this just too hard a problem yet?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Traditional Cognitive Science\n", "\n", "- Lots of theories that work with structured information like this\n", "- Pretty much all of them use some representation framework like this:\n", " - `after(eight, nine)`\n", " - `chase(dogs, cats)`\n", " - `knows(Anne, thinks(Bill, likes(Charlie, Dave)))`\n", "- Or perhaps\n", " - `[number:eight next:nine]`\n", " - `[subject:dogs action:chase object:cats]`\n", " - `[subject:Anne action:knows object:[subject:Bill action:thinks object:[subject:Charlie action:likes object:Dave]]]`\n", "- Cognitive models manipulate these sorts of representations\n", " - mental arithmetic\n", " - driving a car\n", " - using a GUI\n", " - parsing language\n", " - etc etc\n", "- Seems to match well to behavioural data, so something like this should be right\n", "- So how can we do this in neurons?\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Possible solutions\n", "\n", "- Oscilations\n", " - \"red square and blue circle\"\n", " - Different patterns of activity for RED, SQUARE, BLUE, and CIRCLE\n", " - Have the patterns for RED and SQUARE happen, then BLUE and CIRCLE, then back to RED and SQUARE\n", " - More complex structures possible too:\n", " - E.g. 
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Possible solutions\n",
 "\n",
 "- Oscillations\n",
 "  - \"red square and blue circle\"\n",
 "  - Different patterns of activity for RED, SQUARE, BLUE, and CIRCLE\n",
 "  - Have the patterns for RED and SQUARE happen, then BLUE and CIRCLE, then back to RED and SQUARE\n",
 "  - More complex structures possible too:\n",
 "    - E.g. the LISA architecture\n",
 "\n",
 "- Problems\n",
 "  - What controls this oscillation?\n",
 "  - How is it controlled?\n",
 "  - How do we deal with the exponential explosion of nodes needed?\n",
 "\n",
 "- Implementing Symbol Systems in Neurons\n",
 "  - Build a general-purpose symbol-binding system\n",
 "  - Lots of temporary pools of neurons\n",
 "  - Ways to temporarily associate them with particular concepts\n",
 "  - Ways to temporarily associate pools together\n",
 "  - Neural Blackboard Architecture\n",
 "\n",
 "- Problems\n",
 "  - Very particular structure (doesn't seem to match biology)\n",
 "  - Uses a very large number of neurons (~500 million) to be flexible enough for simple sentences\n",
 "  - And that's just to represent the sentence, never mind controlling and manipulating it\n"
] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Vector Symbolic Architectures\n",
 "\n",
 "- There is an alternate approach\n",
 "- Something that's similar to the symbolic approach, but much more tied to biology\n",
 "  - Most of the same capabilities as the classic symbol systems\n",
 "  - But not all\n",
 "- Based on vectors and functions on those vectors\n",
 "  - There is a vector for each concept\n",
 "  - Build up structures by doing math on those vectors\n",
 "\n",
 "- Example\n",
 "  - blue square and red circle\n",
 "  - can't just do BLUE+SQUARE+RED+CIRCLE\n",
 "    - that sum would be identical to the representation of \"red square and blue circle\"\n",
 "  - need some other operation as well\n",
 "  - requirements\n",
 "    - input 2 vectors, get a new vector as output\n",
 "    - reversible (given the output and one of the input vectors, generate the other input vector)\n",
 "    - output vector is highly dissimilar to either input vector\n",
 "      - unlike addition, where the output is highly similar\n"
] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "- Lots of different options\n",
 "  - for binary vectors, XOR works pretty well\n",
 "  - for continuous vectors we use circular convolution\n",
 "- Why?\n",
 "  - Extensively studied (Plate, 1997: Holographic Reduced Representations)\n",
 "  - Easy to approximately invert (circular correlation)\n",
 "- `BLUE` $\\circledast$ `SQUARE + RED` $\\circledast$ `CIRCLE`\n",
 "\n",
 "- Lots of nice properties\n",
 "  - Can store complex structures\n",
 "    - `[number:eight next:nine]`\n",
 "    - `NUMBER` $\\circledast$ `EIGHT + NEXT` $\\circledast$ `NINE`\n",
 "    - `[subject:Anne action:knows object:[subject:Bill action:thinks object:[subject:Charlie action:likes object:Dave]]]`\n",
 "    - `SUBJ` $\\circledast$ `ANNE + ACT` $\\circledast$ `KNOWS + OBJ` $\\circledast$ `(SUBJ` $\\circledast$ `BILL + ACT` $\\circledast$ `THINKS + OBJ` $\\circledast$ `(SUBJ` $\\circledast$ `CHARLIE + ACT` $\\circledast$ `LIKES + OBJ` $\\circledast$ `DAVE))`\n",
 "  - But gracefully degrades!\n",
 "    - as the representation gets more complex, the accuracy of breaking it apart decreases\n",
 "  - Keeps similarity information\n",
 "    - if `RED` is similar to `PINK`, then `RED` $\\circledast$ `CIRCLE` is similar to `PINK` $\\circledast$ `CIRCLE`\n",
 "\n",
 "- But rather complicated\n",
 "  - Seems like a weird operation for neurons to do"
] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Circular convolution in the NEF\n",
 "\n",
 "- Or is it?\n",
 "- Circular convolution is a whole bunch ($D^2$) of multiplies\n",
 "- But it can also be written as a Fourier transform, an elementwise multiply, and an inverse Fourier transform\n",
 "- The discrete Fourier transform is just a linear operation\n",
 "- So that's just $D$ pairwise multiplies\n",
 "- In fact, circular convolution turns out to be *exactly* what the NEF shows neurons are good at: linear transformations and pairwise products"
] },
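{ "cell_type": "markdown", "metadata": {}, "source": [
 "- A quick way to get a feel for binding and unbinding is to try it on random vectors outside of any neural model\n",
 "  - The sketch below is plain NumPy (not Nengo): it builds `BLUE` $\\circledast$ `SQUARE + RED` $\\circledast$ `CIRCLE` with the FFT trick above and then unbinds with the approximate inverse\n",
 "  - The vocabulary vectors and the choice of $D$ are arbitrary; this is just an illustration of the math, not part of the neural model"
] },
{ "cell_type": "code", "collapsed": false, "input": [
 "# Illustrative NumPy sketch of circular-convolution binding and unbinding.\n",
 "# Vocabulary vectors are random unit vectors; D is an arbitrary choice.\n",
 "import numpy as np\n",
 "\n",
 "rng = np.random.RandomState(0)\n",
 "D = 256\n",
 "\n",
 "def unit(v):\n",
 "    return v / np.linalg.norm(v)\n",
 "\n",
 "def cconv(a, b):\n",
 "    # circular convolution = FFT, elementwise multiply, inverse FFT\n",
 "    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))\n",
 "\n",
 "def inv(a):\n",
 "    # approximate inverse (involution): keep the first element, reverse the rest\n",
 "    return np.concatenate(([a[0]], a[1:][::-1]))\n",
 "\n",
 "BLUE, SQUARE, RED, CIRCLE = (unit(rng.randn(D)) for _ in range(4))\n",
 "trace = cconv(BLUE, SQUARE) + cconv(RED, CIRCLE)\n",
 "\n",
 "# \"what was bound to CIRCLE?\" -- RED should be the clear winner\n",
 "guess = cconv(trace, inv(CIRCLE))\n",
 "for name, v in [('BLUE', BLUE), ('SQUARE', SQUARE), ('RED', RED), ('CIRCLE', CIRCLE)]:\n",
 "    print('%s: %.3f' % (name, np.dot(guess, v)))\n"
], "language": "python", "metadata": {}, "outputs": [] },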
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Building a memory in Nengo\n"
] },
{ "cell_type": "code", "collapsed": false, "input": [
 "import nengo\n",
 "import nengo.spa as spa\n",
 "\n",
 "D = 64\n",
 "\n",
 "model = spa.SPA(label='Binding')\n",
 "with model:\n",
 "    model.a = spa.Buffer(D)\n",
 "    model.b = spa.Buffer(D)\n",
 "    model.c = spa.Buffer(D)\n",
 "    model.q = spa.Buffer(D)\n",
 "    model.r = spa.Buffer(D)\n",
 "    model.cortical = spa.Cortical(spa.Actions(\n",
 "        'c = a*b',    # bind a and b (circular convolution) and route the result into c\n",
 "        'c = c',      # feed c back to itself so it acts as a memory\n",
 "        'r = c*~q'),  # unbind: convolve c with the approximate inverse of q\n",
 "        synapse=0.1)\n",
 "\n",
 "    nengo.Probe(model.a.state.output)\n",
 "    nengo.Probe(model.b.state.output)\n",
 "    nengo.Probe(model.c.state.output)\n",
 "    nengo.Probe(model.q.state.output)\n",
 "    nengo.Probe(model.r.state.output)\n"
], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 2 },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "- How does this work so well?\n",
 "  - Exploiting the features of high-dimensional space\n",
 "\n",
 "- Memory capacity increases with dimensionality\n",
 "  - Also dependent on the number of different possible items in memory (vocabulary size)\n",
 "- 512 dimensions is sufficient to store ~8 pairs, with a vocabulary size of 100,000 terms\n",
 "  - Note that this is what's needed for storing simple sentences"
] },
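{ "cell_type": "markdown", "metadata": {}, "source": [
 "- The capacity claim above can be checked at the pure-vector level, before any neurons are involved\n",
 "  - The sketch below (plain NumPy, not Nengo) superimposes an increasing number of role-filler pairs in one trace and measures how often unbinding still recovers the right filler from a small vocabulary\n",
 "  - The dimensionalities, vocabulary size, and trial count are arbitrary illustrative choices, so the exact numbers will differ from the figures quoted above"
] },
{ "cell_type": "code", "collapsed": false, "input": [
 "# Illustrative vector-level capacity check (no neurons): how often does\n",
 "# unbinding still pick out the correct filler as more pairs are stored?\n",
 "# All sizes here are arbitrary choices for illustration.\n",
 "import numpy as np\n",
 "\n",
 "rng = np.random.RandomState(1)\n",
 "\n",
 "def unit(v):\n",
 "    return v / np.linalg.norm(v)\n",
 "\n",
 "def cconv(a, b):\n",
 "    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))\n",
 "\n",
 "def inv(a):\n",
 "    return np.concatenate(([a[0]], a[1:][::-1]))\n",
 "\n",
 "def accuracy(D, n_pairs, vocab_size=200, trials=20):\n",
 "    correct = 0\n",
 "    for _ in range(trials):\n",
 "        vocab = np.array([unit(rng.randn(D)) for _ in range(vocab_size)])\n",
 "        roles = [unit(rng.randn(D)) for _ in range(n_pairs)]\n",
 "        fillers = rng.permutation(vocab_size)[:n_pairs]\n",
 "        trace = sum(cconv(roles[i], vocab[fillers[i]]) for i in range(n_pairs))\n",
 "        # unbind the first role and check that its filler is the closest vocabulary item\n",
 "        guess = cconv(trace, inv(roles[0]))\n",
 "        correct += int(np.argmax(np.dot(vocab, guess)) == fillers[0])\n",
 "    return correct / float(trials)\n",
 "\n",
 "for D in (64, 256, 512):\n",
 "    print('D=%d: %s' % (D, [accuracy(D, k) for k in (2, 4, 8)]))\n"
], "language": "python", "metadata": {}, "outputs": [] },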
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Symbol-like manipulation\n",
 "\n",
 "- Can do a lot of standard symbol stuff\n",
 "- Have to explicitly bind and unbind to manipulate the data\n",
 "- Less accuracy for more complex structures\n",
 "- But we can also do more with these representations\n",
 "\n",
 "### Raven's Progressive Matrices\n",
 "\n",
 "- An IQ test that's generally considered to be the best at measuring general-purpose \"fluid\" intelligence\n",
 "  - nonverbal (so it's not measuring language skills, and fairly unbiased across cultures, hopefully)\n",
 "  - fill in the blank\n",
 "  - given eight possible answers; pick one\n",
 "\n",
 "- This is not an actual question from the test\n",
 "  - The test is copyrighted\n",
 "  - They don't want the test to leak out, because it's been the same set of 60 questions since 1936\n",
 "  - But the real questions do look like that\n",
 "\n",
 "- How can we model people doing this task?\n",
 "- A fair number of different attempts\n",
 "  - None neural\n",
 "  - Generally use the approach of building in a large set of different types of patterns to look for, and then trying them all in turn\n",
 "  - Which seems wrong for a test that's supposed to be about flexible, fluid intelligence\n",
 "\n",
 "- Does this vector approach offer an alternative?\n",
 "\n",
 "- First we need to represent the different patterns as vectors\n",
 "  - This is a hard image-interpretation problem\n",
 "  - Still ongoing work here\n",
 "  - So we'll skip it and start with things in vector form\n",
 "\n",
 "- How do we represent a picture?\n",
 "  - `SHAPE` $\\circledast$ `ARROW + NUMBER` $\\circledast$ `ONE + DIRECTION` $\\circledast$ `UP`\n",
 "  - can do variations like this for all the pictures\n",
 "  - fairly consistent with most assumptions about how people represent complex scenes\n",
 "  - but that part is not being modelled (yet!)\n",
 "\n",
 "- We have shown that it's possible to build these sorts of representations up directly from visual stimuli\n",
 "  - With a very simple vision system that can only recognize a few different shapes\n",
 "  - And where items have to be shown sequentially, as it has no way of moving its eyes\n",
 "\n",
 "- The memory of the list is built up by using a basal ganglia action-selection system to control feeding values into an integrator\n",
 "  - The thought bubble shows how close the decoded values are to the ideal\n",
 "  - Notice the forgetting!\n",
 "\n",
 "- The same system can be used to do a version of the Raven's Matrices task\n",
 "\n",
 "- `S1 = ONE` $\\circledast$ `P1`\n",
 "- `S2 = ONE` $\\circledast$ `P1 + ONE` $\\circledast$ `P2`\n",
 "- `S3 = ONE` $\\circledast$ `P1 + ONE` $\\circledast$ `P2 + ONE` $\\circledast$ `P3`\n",
 "- `S4 = FOUR` $\\circledast$ `P1`\n",
 "- `S5 = FOUR` $\\circledast$ `P1 + FOUR` $\\circledast$ `P2`\n",
 "- `S6 = FOUR` $\\circledast$ `P1 + FOUR` $\\circledast$ `P2 + FOUR` $\\circledast$ `P3`\n",
 "- `S7 = FIVE` $\\circledast$ `P1`\n",
 "- `S8 = FIVE` $\\circledast$ `P1 + FIVE` $\\circledast$ `P2`\n",
 "\n",
 "- What is `S9`?\n"
] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "- Let's figure out what the transformation is (`X'` denotes the approximate inverse of `X`)\n",
 "- `T1 = S2` $\\circledast$ `S1'`\n",
 "- `T2 = S3` $\\circledast$ `S2'`\n",
 "- `T3 = S5` $\\circledast$ `S4'`\n",
 "- `T4 = S6` $\\circledast$ `S5'`\n",
 "- `T5 = S8` $\\circledast$ `S7'`\n",
 "\n",
 "- `T = (T1 + T2 + T3 + T4 + T5)/5`\n",
 "- `S9 = S8` $\\circledast$ `T`\n",
 "\n",
 "- `S9 = FIVE` $\\circledast$ `P1 + FIVE` $\\circledast$ `P2 + FIVE` $\\circledast$ `P3`\n",
 "\n",
 "- This becomes a novel way of manipulating structured information\n",
 "  - Exploiting the fact that it is a vector underneath\n",
 "  - [A spiking neural model applied to the study of human performance and cognitive decline on Raven's Advanced Progressive Matrices](http://www.sciencedirect.com/science/article/pii/S0160289613001542)\n"
] },
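{ "cell_type": "markdown", "metadata": {}, "source": [
 "- This induction step can also be sketched at the pure-vector level with NumPy, with random vectors standing in for the actual visual representations\n",
 "  - The sketch below builds `S1` through `S8`, averages the pairwise transformations, applies the result to `S8`, and checks that the answer is far more similar to `FIVE` $\\circledast$ `P1 + FIVE` $\\circledast$ `P2 + FIVE` $\\circledast$ `P3` than to a mismatched candidate\n",
 "  - The dimensionality and the similarity comparison are illustrative choices, not the procedure from the paper linked above"
] },
{ "cell_type": "code", "collapsed": false, "input": [
 "# Illustrative NumPy check of the induction step above (no neurons).\n",
 "# Random vectors stand in for the visual representations; D is arbitrary.\n",
 "import numpy as np\n",
 "\n",
 "rng = np.random.RandomState(2)\n",
 "D = 512\n",
 "\n",
 "def unit(v):\n",
 "    return v / np.linalg.norm(v)\n",
 "\n",
 "def cconv(a, b):\n",
 "    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))\n",
 "\n",
 "def inv(a):\n",
 "    return np.concatenate(([a[0]], a[1:][::-1]))\n",
 "\n",
 "ONE, FOUR, FIVE, P1, P2, P3 = (unit(rng.randn(D)) for _ in range(6))\n",
 "\n",
 "S1 = cconv(ONE, P1)\n",
 "S2 = S1 + cconv(ONE, P2)\n",
 "S3 = S2 + cconv(ONE, P3)\n",
 "S4 = cconv(FOUR, P1)\n",
 "S5 = S4 + cconv(FOUR, P2)\n",
 "S6 = S5 + cconv(FOUR, P3)\n",
 "S7 = cconv(FIVE, P1)\n",
 "S8 = S7 + cconv(FIVE, P2)\n",
 "\n",
 "# average the 'what turns each cell into the next one' transformations\n",
 "T = (cconv(S2, inv(S1)) + cconv(S3, inv(S2)) +\n",
 "     cconv(S5, inv(S4)) + cconv(S6, inv(S5)) +\n",
 "     cconv(S8, inv(S7))) / 5.0\n",
 "\n",
 "S9 = cconv(S8, T)\n",
 "ideal = cconv(FIVE, P1) + cconv(FIVE, P2) + cconv(FIVE, P3)\n",
 "wrong = cconv(FOUR, P1) + cconv(FOUR, P2) + cconv(FOUR, P3)\n",
 "print('similarity to ideal answer:      %.3f' % np.dot(unit(S9), unit(ideal)))\n",
 "print('similarity to mismatched answer: %.3f' % np.dot(unit(S9), unit(wrong)))\n"
], "language": "python", "metadata": {}, "outputs": [] },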
{ "cell_type": "markdown", "metadata": {}, "source": [
 "- Things to note\n",
 "  - Memory slowly decays\n",
 "  - If you push in a new pair for too long, it can wipe out the old pair(s)\n",
 "    - Note that this relies on the saturation behaviour of NEF networks\n",
 "    - Kind of like an implicit normalization\n"
] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Cognitive Control\n",
 "\n",
 "- How do we control these systems?\n",
 "  - Lots of components\n",
 "  - Each component computes some particular function\n",
 "  - Need to selectively route information between components\n",
 "- Standard cortex-basal ganglia-thalamus loop\n"
] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "- Compute functions of the cortical state to determine the utility of each action\n",
 "- The basal ganglia select the action with the highest utility\n",
 "  - using the [Gurney, Prescott, and Redgrave, 2001](http://neuroinformatics.usc.edu/mediawiki/images/3/37/Gurney_etal_01_A_computational_model_of_action_selection_in_the_basal_ganglia_-_II.pdf) model, converted to spiking neurons\n",
 "- The thalamus has routing connections between cortical areas\n",
 "  - if an action is not selected, its routing neurons are inhibited\n",
 "\n",
 "- Good match to timing data\n",
 "\n",
 "- [Dynamic Behaviour of a Spiking Model of Action Selection in the Basal Ganglia](http://compneuro.uwaterloo.ca/files/publications/stewart.2010.pdf)\n"
] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Example: Simple Association"
] },
{ "cell_type": "code", "collapsed": false, "input": [
 "import nengo\n",
 "import nengo.spa as spa\n",
 "\n",
 "model = spa.SPA(label=\"SPA1\")\n",
 "with model:\n",
 "    model.state = spa.Buffer(16)\n",
 "    model.motor = spa.Buffer(16)\n",
 "    # each rule's utility is dot(state, X); the selected rule routes its value to motor\n",
 "    actions = spa.Actions(\n",
 "        'dot(state, DOG) --> motor=BARK',\n",
 "        'dot(state, CAT) --> motor=MEOW',\n",
 "        'dot(state, RAT) --> motor=SQUEAK',\n",
 "        'dot(state, COW) --> motor=MOO',\n",
 "    )\n",
 "    model.bg = spa.BasalGanglia(actions)\n",
 "    model.thalamus = spa.Thalamus(model.bg)\n",
 "\n",
 "    nengo.Probe(model.state.state.output)\n",
 "    nengo.Probe(model.motor.state.output)\n",
 "    nengo.Probe(model.bg.input)\n",
 "    nengo.Probe(model.thalamus.actions.output)\n"
], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Example: Sequence"
] },
{ "cell_type": "code", "collapsed": false, "input": [
 "import nengo\n",
 "import nengo.spa as spa\n",
 "\n",
 "model = spa.SPA(label=\"SPA2\")\n",
 "with model:\n",
 "    model.state = spa.Buffer(16)\n",
 "    # each rule drives state to the next letter, so the model cycles A -> B -> ... -> E -> A\n",
 "    actions = spa.Actions(\n",
 "        'dot(state, A) --> state=B',\n",
 "        'dot(state, B) --> state=C',\n",
 "        'dot(state, C) --> state=D',\n",
 "        'dot(state, D) --> state=E',\n",
 "        'dot(state, E) --> state=A',\n",
 "    )\n",
 "    model.bg = spa.BasalGanglia(actions)\n",
 "    model.thalamus = spa.Thalamus(model.bg)\n",
 "\n",
 "    nengo.Probe(model.state.state.output)\n",
 "    nengo.Probe(model.bg.input)\n",
 "    nengo.Probe(model.thalamus.actions.output)\n"
], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Example: Input"
] },
{ "cell_type": "code", "collapsed": false, "input": [
 "import nengo\n",
 "import nengo.spa as spa\n",
 "\n",
 "model = spa.SPA(label=\"SPA3\")\n",
 "with model:\n",
 "    model.state = spa.Buffer(16)\n",
 "    actions = spa.Actions(\n",
 "        'dot(state, A) --> state=B',\n",
 "        'dot(state, B) --> state=C',\n",
 "        'dot(state, C) --> state=D',\n",
 "        'dot(state, D) --> state=E',\n",
 "        'dot(state, E) --> state=A',\n",
 "    )\n",
 "    model.bg = spa.BasalGanglia(actions)\n",
 "    model.thalamus = spa.Thalamus(model.bg)\n",
 "\n",
 "    def state_in(t):\n",
 "        # drive state with C for the first 100 ms, then present nothing (the zero vector)\n",
 "        if t < 0.1:\n",
 "            return 'C'\n",
 "        else:\n",
 "            return '0'\n",
 "    model.input = spa.Input(state=state_in)\n",
 "\n",
 "    nengo.Probe(model.state.state.output)\n",
 "    nengo.Probe(model.bg.input)\n",
 "    nengo.Probe(model.thalamus.actions.output)\n"
], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Example: Routing"
] },
{ "cell_type": "code", "collapsed": false, "input": [
 "import nengo\n",
 "import nengo.spa as spa\n",
 "\n",
 "model = spa.SPA(label=\"SPA4\")\n",
 "with model:\n",
 "    model.vision = spa.Buffer(16)\n",
 "    model.state = spa.Buffer(16)\n",
 "    actions = spa.Actions(\n",
 "        # if any letter is visible, route it from vision into state; otherwise run the sequence\n",
 "        'dot(vision, A+B+C+D+E) --> state=vision',\n",
 "        'dot(state, A) --> state=B',\n",
 "        'dot(state, B) --> state=C',\n",
 "        'dot(state, C) --> state=D',\n",
 "        'dot(state, D) --> state=E',\n",
 "        'dot(state, E) --> state=A',\n",
 "    )\n",
 "    model.bg = spa.BasalGanglia(actions)\n",
 "    model.thalamus = spa.Thalamus(model.bg)\n",
 "\n",
 "    def vision_in(t):\n",
 "        if t < 0.1:\n",
 "            return 'C'\n",
 "        else:\n",
 "            return '0'\n",
 "    model.input = spa.Input(vision=vision_in)\n",
 "\n",
 "    nengo.Probe(model.state.state.output)\n",
 "    nengo.Probe(model.vision.state.output)\n",
 "    nengo.Probe(model.bg.input)\n",
 "    nengo.Probe(model.thalamus.actions.output)\n"
], "language": "python", "metadata": {}, "outputs": [] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "### Spaun\n",
 "\n",
 "- This process is the basis for building Spaun\n"
] }
], "metadata": {} } ] }