{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Perceptron Learning in Python" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**(C) 2017-2024 by [Damir Cavar](http://damir.cavar.me/)**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Download:** This and various other Jupyter notebooks are available from my [GitHub repo](https://github.com/dcavar/python-tutorial-for-ipython)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**License:** [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CA BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Prerequisites:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install -U numpy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a tutorial related to the discussion of machine learning and NLP in the class *Machine Learning for NLP: Deep Learning* (Topics in Artificial Intelligence) taught at Indiana University in Spring 2018, and Fall 2023." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What is a Perceptron?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are many online examples and tutorials on perceptrons and learning. Here is a list of some articles:\n", "- [Wikipedia on Perceptrons](https://en.wikipedia.org/wiki/Perceptron)\n", "- Jurafsky and Martin (ed. 3), Chapter 8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is an example that I have taken from a draft of the 3rd edition of Jurafsky and Martin, with slight modifications:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We import *numpy* and use its *exp* function. We could use the same function from the *math* module, or some other module like *scipy*. The *sigmoid* function is defined as in the textbook:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "def sigmoid(z):\n", " return 1 / (1 + np.exp(-z))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our example data, **weights** $w$, **bias** $b$, and **input** $x$ are defined as:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "w = np.array([0.2, 0.3, 0.8])\n", "b = 0.5\n", "x = np.array([0.5, 0.6, 0.1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our neural unit would compute $z$ as the **dot-product** $w \\cdot x$ and add the **bias** $b$ to it. The sigmoid function defined above will convert this $z$ value to the **activation value** $a$ of the unit:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "z: 0.86\n", "a: 0.7026606543447315\n" ] } ], "source": [ "z = w.dot(x) + b\n", "print(\"z:\", z)\n", "print(\"a:\", sigmoid(z))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The XOR Problem" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The power of neural units comes from combining them into larger networks. Minsky and Papert (1969): A single neural unit cannot compute the simple logical function XOR." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The task is to implement a simple **perceptron** to compute logical operations like AND, OR, and XOR.\n", "\n", "- Input: $x_1$ and $x_2$\n", "- Bias: $b = -1$ for AND; $b = 0$ for OR\n", "- Weights: $w = [1, 1]$\n", "\n", "with the following activation function:\n", "\n", "$$\n", "y = \\begin{cases}\n", " \\ 0 & \\quad \\text{if } w \\cdot x + b \\leq 0\\\\\n", " \\ 1 & \\quad \\text{if } w \\cdot x + b > 0\n", " \\end{cases}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can define this threshold function in Python as:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def activation(z):\n", " if z > 0:\n", " return 1\n", " return 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For AND we could implement a perceptron as:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 AND 0: 0\n", "1 AND 0: 0\n", "0 AND 1: 0\n", "1 AND 1: 1\n" ] } ], "source": [ "w = np.array([1, 1])\n", "b = -1\n", "x = np.array([0, 0])\n", "print(\"0 AND 0:\", activation(w.dot(x) + b))\n", "x = np.array([1, 0])\n", "print(\"1 AND 0:\", activation(w.dot(x) + b))\n", "x = np.array([0, 1])\n", "print(\"0 AND 1:\", activation(w.dot(x) + b))\n", "x = np.array([1, 1])\n", "print(\"1 AND 1:\", activation(w.dot(x) + b))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For OR we could implement a perceptron as:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 OR 0: 0\n", "1 OR 0: 1\n", "0 OR 1: 1\n", "1 OR 1: 1\n" ] } ], "source": [ "w = np.array([1, 1])\n", "b = 0\n", "x = np.array([0, 0])\n", "print(\"0 OR 0:\", activation(w.dot(x) + b))\n", "x = np.array([1, 0])\n", "print(\"1 OR 0:\", activation(w.dot(x) + b))\n", "x = np.array([0, 1])\n", "print(\"0 OR 1:\", activation(w.dot(x) + b))\n", "x = np.array([1, 1])\n", "print(\"1 OR 1:\", activation(w.dot(x) + b))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With this narrow definition of a perceptron, it seems not possible to implement an XOR logic perceptron. The restriction is that there is a threshold function that is binary and piecewise linear." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As one student in my 2020 L645 class, Kazuki Yabe, points out, with a different activation function and a different weight vector, one unit can of course handle XOR. If we use the following activation function:\n", "\n", "$$\n", "y = \\begin{cases}\n", " \\ 0 & \\quad \\text{if } w \\cdot x + b \\neq 0.5\\\\\n", " \\ 1 & \\quad \\text{if } w \\cdot x + b = 0.5\n", " \\end{cases}\n", "$$" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def bactivation(z):\n", " if z == 0.5:\n", " return 1\n", " else: return 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we assume the weights to be set to 0.5 and the bias to 0, one unit can handle the XOR logic:\n", "\n", "- Input: $x_1$ and $x_2$\n", "- Bias: $b = 0$ for XOR\n", "- Weights: $w = [0.5, 0.5]$" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 OR 0: 0\n", "1 OR 0: 1\n", "0 OR 1: 1\n", "1 OR 1: 0\n" ] } ], "source": [ "w = np.array([0.5, 0.5])\n", "b = 0\n", "x = np.array([0, 0])\n", "print(\"0 OR 0:\", bactivation(w.dot(x) + b))\n", "x = np.array([1, 0])\n", "print(\"1 OR 0:\", bactivation(w.dot(x) + b))\n", "x = np.array([0, 1])\n", "print(\"0 OR 1:\", bactivation(w.dot(x) + b))\n", "x = np.array([1, 1])\n", "print(\"1 OR 1:\", bactivation(w.dot(x) + b))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This particular activation function is of course not differentiable, and it remains to be shown that the weights can be learned, but nevertheless, a single unit can be identified that solves the XOR problem.\n", "\n", "The difference between Minsky and Papert's (1969) definition of a perceptron and this unit is that - as Julia Hockenmaier pointed out - a perceptron is defined to have a decision function that would be binary and piecewise linear. This means that the unit that solves the XOR problem is not compatible with the definition of perceptron as in Minsky and Papert (1969) (p.c. Julia Hockenmaier)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tri-Perceptron XOR Solution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is a proposed solution in [Goodfellow et al. (2016)](https://www.deeplearningbook.org/) for the XOR problem, using a network with two layers of ReLU-based units." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![XOR Network](XOR_Network.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This two layer and three perceptron network solves the problem." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more deiscussion on this problem, consult:\n", "\n", "- [Wikipedia on the XOR problem](https://en.wikipedia.org/wiki/Perceptron)\n", "- [Solving XOR with a single Perceptron](https://medium.com/@lucaspereira0612/solving-xor-with-a-single-perceptron-34539f395182)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**(C) 2017-2024 by [Damir Cavar](http://damir.cavar.me/)**" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.1" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": false, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": false, "user_envs_cfg": false }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": false, "sideBar": false, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": false, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }