{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": { "ExecuteTime": { "end_time": "2016-12-02T17:30:29.181539", "start_time": "2016-12-02T17:30:29.172204" }, "run_control": { "frozen": false, "read_only": false }, "slideshow": { "slide_type": "skip" } }, "outputs": [ { "data": { "text/plain": [ "{'controls': 'false',\n", " 'progress': 'true',\n", " 'start_slideshow_at': 'selected',\n", " 'theme': 'white',\n", " 'transition': 'none'}" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Reveal.js\n", "from notebook.services.config import ConfigManager\n", "cm = ConfigManager()\n", "cm.update('livereveal', {\n", " 'theme': 'white',\n", " 'transition': 'none',\n", " 'controls': 'false',\n", " 'progress': 'true',\n", "})" ] }, { "cell_type": "markdown", "metadata": { "run_control": { "frozen": false, "read_only": false }, "slideshow": { "slide_type": "slide" } }, "source": [ "# Representating Words as Vectors " ] }, { "cell_type": "markdown", "metadata": { "ExecuteTime": { "end_time": "2016-12-02T16:54:38.546624", "start_time": "2016-12-02T16:54:38.540051" }, "cell_style": "center", "slideshow": { "slide_type": "slide" } }, "source": [ "# Outline\n", "\n", "* Representations of Words\n", " * Motivation\n", " * Sparse Binary Representations\n", " * Dense Continuous Representations\n", "* Unsupervised Learning of Word Representations\n", " * Motivation\n", " * Sparse Co-occurence Representations\n", " * Neural Word Representations\n", "* Additional Reading" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "![Word representations visualised in two dimensions](../img/word_representations.svg)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Why talk about representations? ##\n", "\n", "* Machine Learning, features are representations\n", "* Better representations, better performance\n", "* Representation Learning (\"Deep Learning\"), trendy" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## What makes a good representation? ##\n", "\n", "1. Representations are **distinct**\n", "2. **Similar** words have **similar** representations" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Formal Task ##\n", "\n", "* Words: $w$\n", "* Vocabulary: $\\mathbb{V} (\\forall_{i} w_{i} \\in \\mathbb{V})$\n", "* Find representation function: $f(w_{i}) = r_{i}$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Binary Representations ##" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Binary Representations ##\n", "\n", "* Map words to unique positive non-zero integers\n", "* $f_{id}(w) \\mapsto \\mathbb{N^{*}}$\n", "* $g(w, i) = {\\left\\lbrace\n", " \\begin{array}{ll}\n", " 1 & \\textrm{if }~i = f_{id}(w) \\\\\n", " 0 & \\textrm{otherwise} \\\\\n", " \\end{array}\\right.}$\n", "* \"One-hot\" vector\n", "* $f_{sb}(w) = (g(w, 1), \\ldots, g(w, |V|))$\n", "* $f_{sb}(w) \\mapsto \\{0,1\\}^{|V|}$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Binary Example ##\n", "\n", "* $\\mathbb{V} = \\{\\textrm{apple}, \\textrm{orange}, \\textrm{rabbit}\\}$\n", "* $f_{id}(\\textrm{apple}) = 1, \\ldots$, $f_{id}(\\textrm{rabbit}) = 3$\n", "* $f_{sb}(\\textrm{apple}) = (1, 0, 0)$\n", "* $f_{sb}(\\textrm{orange}) = (0, 1, 0)$\n", "* $f_{sb}(\\textrm{rabbit}) = (0, 0, 1)$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Binary Visualised ##\n", "\n", "![Sparse binary representations visualised](../img/sparse_binary.svg)\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Cosine Similarity ##\n", "\n", "* $cos(u, v) = \\frac{u \\cdot v}{||u|| ||v||}$\n", "* $cos(u, v) \\mapsto [-1, 1]$\n", "* $cos(u, v) = 1$; identical\n", "* $cos(u, v) = -1$; opposites\n", "* $cos(u, v) = 0$; orthogonal" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Cosine Similarity Visualised ##\n", "\n", "![Cosine similarity visualisation](http://blog.christianperone.com/wp-content/uploads/2013/09/cosinesimilarityfq1.png)\n", "\n", "http://blog.christianperone.com/2013/09/machine-learning-cosine-similarity-for-vector-space-models-part-iii/" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Binary Similarities ##\n", "\n", "* $cos(f_{sb}(\\textrm{apple}), f_{sb}(\\textrm{rabbit})) = 0$\n", "* $cos(f_{sb}(\\textrm{apple}), f_{sb}(\\textrm{orange})) = 0$\n", "* $cos(f_{sb}(\\textrm{orange}), f_{sb}(\\textrm{rabbit})) = 0 $" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Continuous Representations ##" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Continuous Representations ##\n", "\n", "* $f_{id}(w) \\mapsto \\mathbb{N}^{*}$\n", "* \"Embed\" words as matrix rows\n", "* Dimensionality: $d$ (hyperparameter)\n", "* $W \\in \\mathbb{R}^{|\\mathbb{V}| \\times d}$\n", "* $f_{dc}(w) = W_{f_{id}(w), :}$\n", "* $f_{dc}(w) \\mapsto \\mathbb{R}^{d}$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Continuous Example ##\n", "\n", "* $\\mathbb{V} = \\{\\textrm{apple}, \\textrm{orange}, \\textrm{rabbit}\\}$\n", "* $d = 2$\n", "* $W \\in \\mathbb{R}^{3 \\times 2}$\n", "* $f_{id}(\\textrm{apple}) = 1, \\ldots, f_{id}(\\textrm{rabbit}) = 3$\n", "* $f_{dc}(\\textrm{apple}) = (1.0, 1.0)$\n", "* $f_{dc}(\\textrm{orange}) = (0.9, 1.0)$\n", "* $f_{dc}(\\textrm{rabbit}) = (0.1, 0.5)$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Continuous Visualised ##\n", "\n", "![Visualisation of dense continuous word representations](../img/dense_continuous.svg)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Continuous Similarities ##\n", "\n", "* $cos(f_{dc}(\\textrm{apple}),f_{dc}(\\textrm{rabbit})) \\approx 0.83$\n", "* $cos(f_{dc}(\\textrm{apple}),f_{dc}(\\textrm{orange})) \\approx 1.0$\n", "* $cos(f_{dc}(\\textrm{orange}),f_{dc}(\\textrm{rabbit})) \\approx 0.86$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Unsupervised Learning of Word Representations #" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Why not supervised? ##\n", "\n", "![Size comparison between annotated and unannotated data](../img/annotated_vs_unannotated_data.svg)\n", "\n", "* Also, inherent incompleteness..." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Linguistic Inspirations ##\n", "\n", "* \"Oculist and eye-doctor … occur in almost the same environments. … If $A$ and $B$ have almost identical environments we say that they are synonyms.\" – Zellig Harris (1954)\n", "* \"You shall know a word by the company it keeps.\" – John Rupert Firth (1957)\n", "* Akin to \"meaning is use\" – Wittgenstein (1953)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Co-occurence Representations ##" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Co-occurences ##\n", "\n", "* Collected from a large collection of *raw* text\n", "\n", "1. \"…comparing an **apple** to an **orange**…\"\n", "2. \"…an **apple** and **orange** from Florida…\"\n", "3. \"…my **rabbit** is not shaped like an **orange**…\" (yes, there is always **noise** in the data)\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Co-occurence Representations ##\n", "\n", "* The number of times words co-occur in a text collection\n", "* $C \\in \\mathbb{N}^{|V| \\times |V|}$\n", "* $f_{id}(\\textrm{apple}) = 1, \\ldots, f_{id}(\\textrm{rabbit}) = 3$\n", "* $C = \\begin{pmatrix}\n", " 2 & 2 & 0 \\\\\n", " 2 & 3 & 1 \\\\\n", " 0 & 1 & 1 \\\\\n", " \\end{pmatrix}$\n", "* $f_{cs}(w) = C_{f_{id}(w), :}$\n", "* $f_{cs}(w) \\mapsto \\mathbb{N}^{|V|}$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Co-occurence Example ##\n", "\n", "* $\\mathbb{V} = \\{\\textrm{apple}, \\textrm{orange}, \\textrm{rabbit}\\}$\n", "* $f_{id}(\\textrm{apple}) = 1, \\ldots, f_{id}(\\textrm{rabbit}) = 3$\n", "* $f_{cs}(\\textrm{apple}) = (2, 2, 0)$\n", "* $f_{cs}(\\textrm{orange}) = (2, 3, 1)$\n", "* $f_{cs}(\\textrm{rabbit}) = (0, 1, 1)$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Sparse Co-occurence Similarities ##\n", "\n", "* $cos(f_{cs}(\\textrm{apple}), f_{cs}(\\textrm{rabbit})) \\approx 0.50$\n", "* $cos(f_{cs}(\\textrm{apple}), f_{cs}(\\textrm{orange})) \\approx 0.94$\n", "* $cos(f_{cs}(\\textrm{orange}), f_{cs}(\\textrm{rabbit})) \\approx 0.76$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Dense Co-occurence Representations #" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Matrix Factorisation ##\n", "\n", "![Matrix factorisation visualisation](../img/matrix_factorisation.svg)\n", "\n", "* $C \\in \\mathbb{R}^{|V| \\times |V|}$\n", "* $U \\in \\mathbb{R}^{|V| \\times d}$\n", "* $V \\in \\mathbb{R}^{d \\times |V|}$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Co-occurence Representations ##\n", "\n", "* Factorise $C$\n", "* $U \\approx \\begin{pmatrix}\n", " -1.26 & 0.65 \\\\\n", " -1.72 & -0.24 \\\\\n", " -0.46 & -0.89 \\\\\n", " \\end{pmatrix}$\n", "* $f_{id}(\\textrm{apple}) = 1, \\ldots, f_{id}(\\textrm{rabbit}) = 3$\n", "* $f_{cd}(w) = U_{f_{id}(w), :}$\n", "* $f_{cd}(w) \\mapsto \\mathbb{R}^{d}$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Co-occurence Example ##\n", "\n", "* $\\mathbb{V} = \\{\\textrm{apple}, \\textrm{orange}, \\textrm{rabbit}\\}$\n", "* $f_{id}(\\textrm{apple}) = 1, \\ldots, f_{id}(\\textrm{rabbit}) = 3$\n", "* $f_{cd}(\\textrm{apple}) = (-1.26, 0.65)$\n", "* $f_{cd}(\\textrm{orange}) = (-1.72, -0.24)$\n", "* $f_{cd}(\\textrm{rabbit}) = (-0.46, -0.89)$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Co-occurence Visualised ##\n", "\n", "![Dene co-occurence representations visualised](../img/dense_cooccurences.svg)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Dense Co-occurence Similarities ##\n", "\n", "* $cos(f_{cd}(\\textrm{apple}), f_{cd}(\\textrm{rabbit})) \\approx 0.00$\n", "* $cos(f_{cd}(\\textrm{apple}), f_{cd}(\\textrm{orange})) \\approx 0.82$\n", "* $cos(f_{cd}(\\textrm{orange}), f_{cd}(\\textrm{rabbit})) \\approx 0.58$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Neural Word Representations #" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Learning by Slot Filling ##\n", "\n", "* \"…I had some **_____** for breakfast today…\"\n", "* Good: *cereals*\n", "* Bad: *airplanes*" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Unsupervised Loss Function ##\n", "\n", "* $w \\in \\mathbb{V}$; $c \\in \\mathbb{V}$\n", "* $D = ((c, w),\\ldots)$; observed co-occurences\n", "* $D' = ((c, w),\\ldots)$; \"noise samples\"\n", "* $\\textrm{max}~p((c, w) \\in D | W) - p((c, w) \\in D' | W)$\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Neural Skip-Gram Model ##\n", "\n", "* $W^{w} \\in \\mathbb{R}^{|\\mathbb{V}| \\times d}$; $W^{c} \\in \\mathbb{R}^{|\\mathbb{V}| \\times d}$\n", "* $D = ((c, w),\\ldots)$; $D' = ((c, w),\\ldots)$\n", "* $\\sigma(x) = \\frac{1}{1 + \\textrm{exp}(-x)}$\n", "* $p((c, w) \\in D | W^{w}, W^{c}) = \\sigma(W^{c}_{f_{id}(c),:} \\cdot W^{w}_{f_{id}(w),:})$\n", "* $\\arg\\max\\limits_{W^{w},W^{c}} \\sum\\limits_{(w,c) \\in D} \\log \\sigma(W^{c}_{f_{id}(c),:} \\cdot W^{w}_{f_{id}(w),:}) \\\\ + \\sum\\limits_{(w,c) \\in D'} \\log \\sigma(-W^{c}_{f_{id}(c),:} \\cdot W^{w}_{f_{id}(w),:})$\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Neural Representation ##\n", "\n", "* Learned using [word2vec](https://code.google.com/p/word2vec/)\n", "* Google News data, $~1,000,000,000$ words\n", "* $|\\mathbb{V}| = 3,000,000$\n", "* $d = 300$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Neural Representation Example ##\n", "\n", "* $f_{n}(\\textrm{apple}) = (-0.06, -0.16, \\ldots, 0.34)$\n", "* $f_{n}(\\textrm{orange}) = (-0.10, -0.18, \\ldots, 0.08)$\n", "* $f_{n}(\\textrm{rabbit}) = (0.02, 0.11, \\ldots, 0.11)$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Neural Representation Similarities ##\n", "\n", "* $cos(f_{n}(\\textrm{apple}), f_{n}(\\textrm{rabbit})) \\approx 0.34$\n", "* $cos(f_{n}(\\textrm{apple}), f_{n}(\\textrm{orange})) \\approx 0.39$\n", "* $cos(f_{n}(\\textrm{orange}), f_{n}(\\textrm{rabbit})) \\approx 0.20$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Neural Representations Visualised ##\n", "\n", "![Word representations visualised in two dimensions](../img/word_representations.svg)\n", "\n", "* Dimensionality reduction using [t-SNE](https://lvdmaaten.github.io/tsne/)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Neural Representations Visualised (zoomed) ##\n", "\n", "![Word representations visualised in two dimensions, zoomed in on a small cluster](../img/word_representations_zoom.svg)\n", "\n", "* Dimensionality reduction using [t-SNE](https://lvdmaaten.github.io/tsne/)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Word Representation Algebra ##\n", "\n", "* $f_{n}(\\textrm{king}) - f_{n}(\\textrm{man}) + f_{n}(\\textrm{woman}) \\approx f_{n}(\\textrm{queen})$\n", "* $f_{n}(\\textrm{Paris}) - f_{n}(\\textrm{France}) + f_{n}(\\textrm{Italy}) \\approx f_{n}(\\textrm{Rome})$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Beyond Word Representations #" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Beyond Word Representations #\n", "\n", "![TreeRNN composing a simple sentence](../img/sashimi_socher_tree.svg)\n", "\n", "* How do we compose phrases and sentences?\n", "* Tree Recurrent Neural Networks ([Socher et al., 2010](http://www.socher.org/uploads/Main/2010SocherManningNg.pdf))\n", "* Long Short-Term Memory Recurrent Neural Networks ([Hochreiter and Schmidhuber, 1997](http://www.mitpressjournals.org/doi/abs/10.1162/neco.1997.9.8.1735))\n", "* The latter is now **much** more popular.\n", "* More about this on Friday with Pasquale!" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Summary #" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Summary #\n", "\n", "* Moving from features to \"representations\"\n", "* Representations limits what we can learn and our generalisation power\n", "* Many ways to learn representations (there are more than what we covered)\n", "* Neural representations\n", " * Most popular today\n", " * Only ones to capture analogies (why, we don't know...)\n", "* Adding representations trained unsupervisedly improves performance on supervised tasks (semi-supervised learning)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Additional Reading #\n", "\n", "* [\"Word Representations: A Simple and General Method for Semi-Supervised Learning\"](http://www.aclweb.org/anthology/P/P10/P10-1040.pdf) by Turian et al. (2010)\n", "* [\"Representation Learning: A Review and New Perspectives\"](https://arxiv.org/abs/1206.5538) by Bengio et al. (2012)\n", "* [\"Linguistic Regularities in Continuous Space Word Representations\"](http://www.aclweb.org/anthology/N/N13/N13-1090.pdf) by Mikolov et al. (2013a) ([video](http://techtalks.tv/talks/linguistic-regularities-in-continuous-space-word-representations/58471/))\n", "* [\"Distributed Representations of Words and Phrases and their Compositionality\"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality) by Mikolov et al. (2013b)\n", "* [\"word2vec Explained: deriving Mikolov et al.'s negative-sampling word-embedding method\"](https://arxiv.org/abs/1402.3722) by Goldberg and Levy (2014)\n", "* [\"Neural Word Embedding as Implicit Matrix Factorization\"](http://papers.nips.cc/paper/5477-neural-word-embedding-as-implicit-matrix-factorization) by Levy and Goldberg (2014)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "# Potential MSc Projects #" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "## MSc Project Supervision ##\n", "\n", "* \"Great tradition\" of the UCLMR group\n", "* This year – so far – three projects turned into papers! (matters in industry too!)\n", "* I intend to supervise up to two students\n", "* We also have more members that supervise projects\n", "* Tim will present project proposals next Monday" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "## Unsupervisedly Learning Phrase/Sentence Representations ##\n", "\n", "![Skip-Thought model visualisation](../img/skip-thought.svg)\n", "\n", "* Can we apply the lessons of this lecture to phrases and sentences?\n", "* Can we do so in a computationally efficient manner?\n", "* Will similar patterns arise? If so, can we observe them?\n", "* Can we detect abstractions and transfer them?\n", "\n", "## Reading ##\n", "\n", "* [\"Skip-Thought Vectors\"](https://arxiv.org/abs/1402.3722) by Kiros et al. (2015)\n", "\n", "* [\"Semi-supervised Sequence Learning\"](https://papers.nips.cc/paper/5949-semi-supervised-sequence-learning) by Dai and Le (2016)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "## Learning to Generate ##\n", "\n", "![Visualisation of the idea of learning to generate data](../img/learning_to_generate.png)\n", "\n", "* Use Reinforcement Learning to generate data that fits your training distribution.\n", "* Can we enforce diversity in the generated data?\n", "* What happens when training data is abundant?\n", "\n", "### Reading ###\n", "\n", "* [\"Learning to Generate Textual Data\"](http://www.aclweb.org/anthology/D/D16/D16-1167.pdf) by Bouchard et al. (2016)\n", "* [\"Data Programming: Creating Large Training Sets, Quickly\"](http://papers.nips.cc/paper/6523-data-programming-creating-large-training-sets-quickly) by Ratner et al. (2016)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "skip" } }, "source": [ "# Thank you for your attention #\n", "\n", "### ご清聴ありがとうございました ###\n", "\n", "### Tack för er uppmärksamhet ###" ] } ], "metadata": { "celltoolbar": "Slideshow", "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.2" } }, "nbformat": 4, "nbformat_minor": 1 }