{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Complementary Notebook: Appropriate Operators to Approximate Connectives and Quantifiers\n", "\n", "This notebook is a complement to the tutorial on operators.\n", "\n", "Logical connectives in LTN are grounded using fuzzy semantics. However, while all fuzzy logic operators make sense when simply *querying* the language, not every operator is equally suited for *learning* in LTN. This notebook details the motivations behind some choice of operators over the others." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Init Plugin\n", "Init Graph Optimizer\n", "Init Kernel\n" ] } ], "source": [ "import ltn\n", "import tensorflow as tf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Querying\n", "\n", "One can access the most common fuzzy logic operators in `ltn.fuzzy_ops`. They are implemented using tensorflow primitives.\n", "\n", "We here compare\n", "- the product t-norm: $u \\land_{\\mathrm{prod}} v = uv$,\n", "- the Lukasiewicz t-norm: $u \\land_{\\mathrm{luk}} v = \\max(u+v-1,0)$,\n", "- the minimum aggregator: $\\min(u_1,\\dots,u_n)$,\n", "- the p-mean error aggregator (generalized mean of the deviations w.r.t. the truth): $\\mathrm{pME}(u_1,\\dots,u_n) = 1 - \\biggl( \\frac{1}{n} \\sum\\limits_{i=1}^n (1-u_i)^p \\biggr)^{\\frac{1}{p}}$.\n", "\n", "Each operator obviously conveys very different meanings, but they can all make sense depending on the intent of the query." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Metal device set to: Apple M1\n", "\n", "systemMemory: 16.00 GB\n", "maxCacheSize: 5.33 GB\n", "\n", "tf.Tensor(0.28, shape=(), dtype=float32)\n", "tf.Tensor(0.100000024, shape=(), dtype=float32)\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2021-09-24 17:11:13.306148: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.\n", "2021-09-24 17:11:13.306250: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )\n" ] } ], "source": [ "x1 = tf.constant(0.4)\n", "x2 = tf.constant(0.7)\n", "\n", "# the stable keyword is explained at the end of the notebook\n", "and_prod = ltn.fuzzy_ops.And_Prod(stable=False)\n", "and_luk = ltn.fuzzy_ops.And_Luk()\n", "\n", "print(and_prod(x1,x2))\n", "print(and_luk(x1,x2))" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(0.1, shape=(), dtype=float32)\n", "tf.Tensor(0.31339926, shape=(), dtype=float32)\n" ] } ], "source": [ "xs = tf.constant([1.,1.,1.,0.5,0.3,0.2,0.2,0.1])\n", "\n", "# the stable keyword is explained at the end of the notebook\n", "forall_min = ltn.fuzzy_ops.Aggreg_Min()\n", "forall_pME = ltn.fuzzy_ops.Aggreg_pMeanError(p=4, stable=False)\n", "\n", "print(forall_min(xs))\n", "print(forall_pME(xs))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Learning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While all operators are suitable in a querying setting, this not the case in a learning setting. 
Indeed, many fuzzy logic operators have derivatives not suitable for gradient-based algorithms. For more details, read [van Krieken et al., *Analyzing Differentiable Fuzzy Logic Operators*, 2020](https://arxiv.org/abs/2002.06100).\n", "\n", "We here give simple illustrations of such gradient issues.\n", "\n", "#### 1. Vanishing Gradients\n", "\n", "Some operators have vanishing gradients on some part of their domains.\n", "\n", "e.g. in $u \\land_{\\mathrm{luk}} v = \\max(u+v-1,0)$, if $u+v-1 < 0$, the gradients vanish." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.0\n", "[0.0, 0.0]\n" ] } ], "source": [ "x1 = tf.constant(0.3)\n", "x2 = tf.constant(0.5)\n", "\n", "with tf.GradientTape() as tape:\n", " tape.watch(x1)\n", " tape.watch(x2)\n", " y = and_luk(x1,x2)\n", "res = y.numpy()\n", "gradients = [v.numpy() for v in tape.gradient(y,[x1,x2])]\n", "print(res)\n", "print(gradients)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2. Single-Passing Gradients\n", "\n", "Some operators have gradients propagating to only one input at a time, meaning that all other inputs will not benefit from learning at this step.\n", "\n", "e.g. in $\\min(u_1,\\dots,u_n)$." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.1\n", "[0. 0. 0. 0. 0. 0. 0. 1.]\n" ] } ], "source": [ "xs = tf.constant([1.,1.,1.,0.5,0.3,0.2,0.2,0.1])\n", "\n", "with tf.GradientTape() as tape:\n", " tape.watch(xs)\n", " y = forall_min(xs)\n", "res = y.numpy()\n", "gradients = tape.gradient(y,xs).numpy()\n", "print(res)\n", "print(gradients)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3. Exploding Gradients\n", "\n", "Some operators have exploding gradients on some part of their domains.\n", "\n", "e.g. in $\\mathrm{pME}(u_1,\\dots,u_n) = 1 - \\biggl( \\frac{1}{n} \\sum\\limits_{i=1}^n (1-u_i)^p \\biggr)^{\\frac{1}{p}}$, on the edge case where all inputs are $1.0$." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1.0\n", "[nan nan nan]\n" ] } ], "source": [ "xs = tf.constant([1.,1.,1.])\n", "\n", "with tf.GradientTape() as tape:\n", " tape.watch(xs)\n", " y = forall_pME(xs,p=4)\n", "res = y.numpy()\n", "gradients = tape.gradient(y,xs).numpy()\n", "print(res)\n", "print(gradients)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Stable Product Configuration\n", "\n", "#### Product Configuration\n", "\n", "Our general recommendation is to use the following \"product configuration\":\n", "* not: the standard negation $\\lnot u = 1-u$,\n", "* and: the product t-norm $u \\land v = uv$,\n", "* or: the product t-conorm (probabilistic sum) $u \\lor v = u+v-uv$,\n", "* implication: the Reichenbach implication $u \\rightarrow v = 1 - u + uv$,\n", "* existential quantification (\"exists\"): the generalized mean (p-mean) $\\mathrm{pM}(u_1,\\dots,u_n) = \\biggl( \\frac{1}{n} \\sum\\limits_{i=1}^n u_i^p \\biggr)^{\\frac{1}{p}} \\qquad p \\geq 1$,\n", "* universal quantification (\"for all\"): the generalized mean of \"the deviations w.r.t. the truth\" (p-mean error) $\\mathrm{pME}(u_1,\\dots,u_n) = 1 - \\biggl( \\frac{1}{n} \\sum\\limits_{i=1}^n (1-u_i)^p \\biggr)^{\\frac{1}{p}} \\qquad p \\geq 1$.\n", "\n", "#### \"Stable\"\n", "\n", "As is, this configuration is not fully exempt from issues. 
The product t-norm has vanishing gradients on the edge case $u=v=0$. The product t-conorm has vanishing gradients on the edge case $u=v=1$. The Reichenbach implication has vanishing gradients on the edge case $u=0$, $v=1$. p-mean has exploding gradients on the edge case $u_1=\\dots=u_n=0$. p-mean error has exploding gradients on the edge case $u_1=\\dots=u_n=1$. \n", "However, all these issues happen on edge cases and can easily be fixed using the following \"trick\":\n", "- if the edge case happens when an input $u$ is $0$, we modify every input with $u' = (1-\\epsilon)u+\\epsilon$,\n", "- if the edge case happens when an input $u$ is $1$, we modify every input with $u' = (1-\\epsilon)u$,\n", "\n", "where $\\epsilon$ is a small positive value (e.g. $1\\mathrm{e}{-5}$).\n", "\n", "This gives us stabilized versions of such operators. One can trigger the stable behavior by using the boolean keyword `stable`. One can set a default value for `stable` when initializing the operator, or use different values at each call of the operator." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.9999\n", "[0.33329996 0.33329996 0.33329996]\n" ] } ], "source": [ "xs = tf.constant([1.,1.,1.])\n", "\n", "with tf.GradientTape() as tape:\n", " tape.watch(xs)\n", " # the exploding gradient problem is solved\n", " y = forall_pME(xs,p=4,stable=True)\n", "res = y.numpy()\n", "gradients = tape.gradient(y,xs).numpy()\n", "print(res)\n", "print(gradients)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The hyper-parameter $p$ in the generalized means\n", "\n", "$p$ offers flexibility in writing more or less strict formulas, to account for outliers in the data depending on the application. Note that this can have strong implications for training." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.31339926\n", "[0. 0. 0. 0.0482733 0.13246194 0.19772749\n", " 0.19772749 0.28152987]\n" ] } ], "source": [ "xs = tf.constant([1.,1.,1.,0.5,0.3,0.2,0.2,0.1])\n", "\n", "with tf.GradientTape() as tape:\n", " tape.watch(xs)\n", " y = forall_pME(xs,p=4)\n", "res = y.numpy()\n", "gradients = tape.gradient(y,xs).numpy()\n", "print(res)\n", "print(gradients)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.1815753\n", "[0.0000000e+00 0.0000000e+00 0.0000000e+00 1.0733546e-05 6.4146910e-03\n", " 8.1100456e-02 8.1100456e-02 7.6018697e-01]\n" ] } ], "source": [ "xs = tf.constant([1.,1.,1.,0.5,0.3,0.2,0.2,0.1])\n", "\n", "with tf.GradientTape() as tape:\n", " tape.watch(xs)\n", " y = forall_pME(xs,p=20)\n", "res = y.numpy()\n", "gradients = tape.gradient(y,xs).numpy()\n", "print(res)\n", "print(gradients)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While it can be tempting to set a high value for $p$ when querying, in a learning setting, this can quickly lead to a \"single-passing\" operator that will focus too much on outliers at each step (i.e., gradients overfitting one input at this step, potentially harming the training of the others). We recommend not setting $p$ too high when learning."
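] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wrap-up: instantiating the stable product configuration\n", "\n", "As a minimal sketch, the (non-executed) cell below instantiates the full product configuration recommended above, with the stable behavior enabled and a moderate value of $p$ (here $p=2$), in line with the advice not to set $p$ too high for learning. Only `And_Prod` and `Aggreg_pMeanError` appear earlier in this notebook; the other operator names (`Not_Std`, `Or_ProbSum`, `Implies_Reichenbach`, `Aggreg_pMean`) are assumed to follow the same naming scheme in `ltn.fuzzy_ops`, so check the version of the library you have installed before relying on them." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch of the stable product configuration recommended above.\n", "# Assumption: operator names other than And_Prod and Aggreg_pMeanError follow the ltn.fuzzy_ops naming scheme used earlier.\n", "not_std = ltn.fuzzy_ops.Not_Std()  # 1 - u\n", "and_prod = ltn.fuzzy_ops.And_Prod(stable=True)  # u*v\n", "or_probsum = ltn.fuzzy_ops.Or_ProbSum(stable=True)  # u + v - u*v\n", "implies_reichenbach = ltn.fuzzy_ops.Implies_Reichenbach(stable=True)  # 1 - u + u*v\n", "exists_pM = ltn.fuzzy_ops.Aggreg_pMean(p=2, stable=True)  # generalized mean\n", "forall_pME = ltn.fuzzy_ops.Aggreg_pMeanError(p=2, stable=True)  # generalized mean of the deviations w.r.t. the truth\n", "\n", "u, v = tf.constant(0.4), tf.constant(0.7)\n", "xs = tf.constant([1.,1.,1.,0.5,0.3,0.2,0.2,0.1])\n", "\n", "print(not_std(u))\n", "print(and_prod(u,v))\n", "print(or_probsum(u,v))\n", "print(implies_reichenbach(u,v))\n", "print(exists_pM(xs))\n", "print(forall_pME(xs))"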
] } ], "metadata": { "interpreter": { "hash": "889985fd10eb245a43f2ae5f5aa0c555254f5b898fe16071f1c89d06fa8d76a2" }, "kernelspec": { "display_name": "Python 3.9.6 64-bit ('tf-py39': conda)", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.6" } }, "nbformat": 4, "nbformat_minor": 4 }