{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Grounding in LTN (cont.)\n", "\n", "This tutorial explains how to ground connectives and quantifiers. It expects some familiarity with the first tutorial on grounding non-logical symbols (constants, variables, functions and predicates)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Init Plugin\n", "Init Graph Optimizer\n", "Init Kernel\n" ] } ], "source": [ "import ltn\n", "import numpy as np\n", "import tensorflow as tf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Connectives\n", "\n", "LTN suppports various logical connectives. They are grounded using fuzzy semantics. We have implemented the most common fuzzy logic operators using tensorflow primitives in `ltn.fuzzy_ops`. We recommend to use the following configuration:\n", "* not: the standard negation $\\lnot u = 1-u$,\n", "* and: the product t-norm $u \\land v = uv$,\n", "* or: the product t-conorm (probabilistic sum) $u \\lor v = u+v-uv$,\n", "* implication: the Reichenbach implication $u \\rightarrow v = 1 - u + uv$,\n", "\n", "where $u$ and $v$ denote two truth values in $[0,1]$. For more details on choosing the right operators for your task, read the complementary notebook." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())\n", "And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())\n", "Or = ltn.Wrapper_Connective(ltn.fuzzy_ops.Or_ProbSum())\n", "Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The wrapper `ltn.Wrapper_Connective` allows to use the operators with LTN formulas. It takes care of combining sub-formulas that have different variables appearing in them (the sub-formulas may have different dimensions that need to be \"broadcasted\")." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2021-09-24 17:02:27.556414: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.\n", "2021-09-24 17:02:27.556508: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Metal device set to: Apple M1\n", "\n", "systemMemory: 16.00 GB\n", "maxCacheSize: 5.33 GB\n", "\n" ] }, { "data": { "text/plain": [ "ltn.Formula(tensor=0.01775427535176277, free_vars=[])" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = ltn.Variable('x',np.random.normal(0.,1.,(10,2))) # 10 values in R²\n", "y = ltn.Variable('y',np.random.normal(0.,2.,(5,2))) # 5 values in R²\n", "\n", "c1 = ltn.Constant([0.5,0.0], trainable=False)\n", "c2 = ltn.Constant([4.0,2.0], trainable=False)\n", "\n", "Eq = ltn.Predicate.Lambda(lambda args: tf.exp(-tf.norm(args[0]-args[1],axis=1))) # predicate measuring similarity\n", "\n", "Eq([c1,c2])" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.9822457432746887, free_vars=[])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Not(Eq([c1,c2]))" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.9824644327163696, free_vars=[])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Implies(Eq([c1,c2]), Eq([c2,c1]))" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=[4.4328589e-03 1.8856935e-04 3.9932053e-03 2.6313865e-03 8.2341093e-04\n", " 9.7307027e-05 9.2917297e-04 1.2955565e-03 5.4630609e-03 5.0503397e-03], free_vars=['x'])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Notice the dimension of the outcome: the result is evaluated for every x. \n", "And(Eq([x,c1]), Eq([x,c2]))" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=[[0.52351284 0.5660489 0.502086 0.54841334 0.8179424 ]\n", " [0.12897979 0.279613 0.09810974 0.3607202 0.25842512]\n", " [0.45820022 0.35860953 0.32815725 0.4508675 0.6152009 ]\n", " [0.53618777 0.1908272 0.17470731 0.29135692 0.3339786 ]\n", " [0.25108603 0.342749 0.20701537 0.41542915 0.50837725]\n", " [0.11609086 0.11124759 0.0567009 0.7048274 0.15309101]\n", " [0.2687245 0.30622828 0.20789589 0.47038752 0.51323897]\n", " [0.3076315 0.3449486 0.25006413 0.45706868 0.59890985]\n", " [0.49300438 0.39925045 0.37364164 0.46637693 0.64494586]\n", " [0.28069535 0.30437022 0.3101134 0.28034654 0.38135415]], free_vars=['x', 'y'])" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Notice the dimensions of the outcome: the result is evaluated for every x and y.\n", "# Notice also that y did not appear in the 1st argument of `Or`; \n", "# the connective broadcasts the results of its two arguments to match.\n", "Or(Eq([x,c1]), Eq([x,y]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Quantifiers\n", "\n", "LTN suppports universal and existential quantification. They are grounded using aggregation operators. We recommend using the following two operators:\n", "\n", "- existential quantification (\"exists\"): \n", "the generalized mean (`pMean`) $\\mathrm{pM}(u_1,\\dots,u_n) = \\biggl( \\frac{1}{n} \\sum\\limits_{i=1}^n u_i^p \\biggr)^{\\frac{1}{p}} \\qquad p \\geq 1$,\n", "- universal quantification (\"for all\"): \n", "the generalized mean of \"the deviations w.r.t. the truth\" (`pMeanError`) $\\mathrm{pME}(u_1,\\dots,u_n) = 1 - \\biggl( \\frac{1}{n} \\sum\\limits_{i=1}^n (1-u_i)^p \\biggr)^{\\frac{1}{p}} \\qquad p \\geq 1$,\n", "\n", "where $u_1,\\dots,u_n$ is a list of truth values in $[0,1]$. " ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2),semantics=\"forall\")\n", "Exists = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(p=5),semantics=\"exists\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The wrapper `ltn.Wrapper_Quantifier` allows to use the aggregators with LTN formulas. It takes care of selecting the tensor dimensions to aggregate, given some variables in arguments." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=[[0.39133054 0.19750305 0.07793675 0.03600257 0.3931468 ]\n", " [0.32958886 0.1688326 0.0260815 0.0207574 0.07129584]\n", " [0.1663024 0.19604874 0.01294116 0.02727364 0.04528237]\n", " [0.4060248 0.32324475 0.04142912 0.0436288 0.19317953]\n", " [0.33524016 0.09340193 0.03288566 0.0114226 0.05844909]\n", " [0.21770334 0.41895252 0.01864527 0.05260331 0.08970136]\n", " [0.07839028 0.59526646 0.009564 0.18803924 0.07536539]\n", " [0.05541972 0.27946767 0.00499568 0.09165297 0.03131543]\n", " [0.09379627 0.3161083 0.00786038 0.06133056 0.04147319]\n", " [0.20024009 0.5956605 0.02581004 0.08952475 0.17981495]], free_vars=['x', 'y'])" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = ltn.Variable('x',np.random.normal(0.,1.,(10,2))) # 10 values in R²\n", "y = ltn.Variable('y',np.random.normal(0.,2.,(5,2))) # 5 values in R²\n", "\n", "Eq = ltn.Predicate.Lambda(lambda args: tf.exp(-tf.norm(args[0]-args[1],axis=1))) # predicate measuring similarity\n", "\n", "Eq([x,y])" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=[0.21741337 0.29905307 0.02559453 0.06093419 0.11155587], free_vars=['y'])" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Forall(x,Eq([x,y]))" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.1369343400001526, free_vars=[])" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Forall((x,y),Eq([x,y]))" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.3350968658924103, free_vars=[])" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Exists((x,y),Eq([x,y]))" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.28244274854660034, free_vars=[])" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Forall(x, Exists(y, Eq([x,y])))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`pMean` can be understood as a smooth-maximum that depends on the hyper-paramer $p$:\n", "- $p \\to 1$: the operator tends to `mean`,\n", "- $p \\to +\\infty$: the operator tends to `max`.\n", "\n", "Similarly, `pMeanError` can be understood as a smooth-minimum:\n", "- $p \\to 1$: the operator tends to `mean`,\n", "- $p \\to +\\infty$: the operator tends to `min`.\n", "\n", "Therefore, $p$ offers flexibility in writing more or less strict formulas, to account for outliers in the data depending on the application. Note that this can have strong implications for training (see complementary notebook). One can set a default value for $p$ when initializing the operator, or can use different values at each call of the operator." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.3222336173057556, free_vars=[])" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Forall(x,Eq([x,c1]),p=2)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.22595995664596558, free_vars=[])" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Forall(x,Eq([x,c1]),p=10)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.40868890285491943, free_vars=[])" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Exists(x,Eq([x,c1]),p=2)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.6606776118278503, free_vars=[])" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Exists(x,Eq([x,c1]),p=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Diagonal Quantification\n", "\n", "Given 2 (or more) variables, there are scenarios where one wants to express statements about specific pairs (or tuples) only, such that the $i$-th tuple contains the $i$-th instances of the variables. We allow this using `ltn.diag`. **Note**: diagonal quantification assumes that the variables have the same number of individuals.\n", "\n", "In simplified pseudo-code, the usual quantification would compute:\n", "```python\n", "for x_i in x:\n", " for y_j in y:\n", " results.append(P(x_i,y_j))\n", "aggregate(results)\n", "```\n", "In contrast, diagonal quantification would compute:\n", "```python\n", "for x_i, y_i in zip(x,y):\n", " results.append(P(x_i,y_i))\n", "aggregate(results)\n", "```\n", "\n", "We illustrate `ltn.diag` in the following setting: \n", "- the variable $x$ denotes 100 individuals in $\\mathbb{R}^{2\\times2}$, \n", "- the variable $l$ denotes 100 one-hot labels in $\\mathbb{N}^3$ (3 possible classes),\n", "- $l$ is grounded according to $x$ such that each pair $(x_i,l_i)$, for $i=0..100$ denotes one correct example from the dataset,\n", "- the classifier $C(x,l)$ gives a confidence value in $[0,1]$ of the sample $x$ corresponding to the label $l$." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "# The values are generated at random, for the sake of illustration.\n", "# In a real scenario, they would come from a dataset.\n", "samples = np.random.rand(100,2,2) # 100 R^{2x2} values \n", "labels = np.random.randint(3, size=100) # 100 labels (class 0/1/2) that correspond to each sample \n", "onehot_labels = tf.one_hot(labels,depth=3)\n", "\n", "x = ltn.Variable(\"x\",samples) \n", "l = ltn.Variable(\"l\",onehot_labels)\n", "\n", "class ModelC(tf.keras.Model):\n", " def __init__(self):\n", " super(ModelC, self).__init__()\n", " self.flatten = tf.keras.layers.Flatten()\n", " self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.elu)\n", " self.dense2 = tf.keras.layers.Dense(3, activation=tf.nn.softmax)\n", " def call(self, inputs):\n", " x, l = inputs[0], inputs[1]\n", " x = self.flatten(x)\n", " x = self.dense1(x)\n", " x = self.dense2(x)\n", " return tf.math.reduce_sum(x*l,axis=1)\n", "\n", "C = ltn.Predicate(ModelC())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If some variables are marked using `ltn.diag`, LTN will only compute their \"zipped\" results (instead of the usual \"broadcasting\")." ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(100, 100)\n", "(100,)\n", "(100, 100)\n" ] } ], "source": [ "print(C([x,l]).tensor.shape) # Computes the 100x100 combinations\n", "ltn.diag(x,l) # sets the diag behavior for x and l\n", "print(C([x,l]).tensor.shape) # Computes the 100 zipped combinations\n", "ltn.undiag(x,l) # resets the normal behavior\n", "print(C([x,l]).tensor.shape) # Computes the 100x100 combinations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In practice, `ltn.diag` is designed to be used with quantifiers. \n", "Every quantifier automatically calls `ltn.undiag` after the aggregation is performed, so that the variables keep their normal behavior outside of the formula.\n", "Therefore, it is recommended to use `ltn.diag` only in quantified formulas as follows." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.32587695121765137, free_vars=[])" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Forall(ltn.diag(x,l), C([x,l])) # Aggregates only on the 100 \"zipped\" pairs.\n", " # Automatically calls `ltn.undiag` so the behavior of x/l is unchanged outside of this formula." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Guarded Quantifiers\n", "\n", "One may wish to quantify over a set of elements whose grounding satisfy some **boolean** condition. Let us assume $x$ is a variable from some domain and $m$ is a mask function that returns boolean values (that is, $0$ or $1$, not continuous truth degrees in $[0,1]$) for each element of the domain. \n", "In guarded quantification, one has quantifications of the form\n", "$$\n", "(\\forall x: m(x)) \\phi(x)\n", "$$\n", "which means \"every x satisfying $m(x)$ also satisfies $\\phi(x)$\", or\n", "$$\n", "(\\exists x: m(x)) \\phi(x)\n", "$$\n", "which means \"some x satisfying $m(x)$ also satisfies $\\phi(x)$\".\n", "\n", "The mask $m$ can also depend on other variables in the formula. For instance, the quantification $\\exists y (\\forall x:m(x,y)) \\phi(x,y)$ is also a valid sentence.\n", "\n", "Let us consider the following example, that states that there exists an euclidian distance $d$ below which all pairs of points $x$, $y$ should be considered as similar:\n", "$\\exists d \\ \\forall x,y : \\mathrm{dist}(x,y) < d \\ ( \\mathrm{Eq}(x,y))) $" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "Eq = ltn.Predicate.Lambda(lambda args: tf.exp(-tf.norm(args[0]-args[1],axis=1))) # predicate measuring similarity\n", "\n", "points = np.random.rand(50,2) # 50 values in [0,1]^2\n", "x = ltn.Variable(\"x\",points)\n", "y = ltn.Variable(\"y\",points)\n", "d = ltn.Variable(\"d\",[.1,.2,.3,.4,.5,.6,.7,.8,.9])" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ltn.Formula(tensor=0.7828403115272522, free_vars=[])" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "is_greater_than = ltn.Predicate.Lambda(lambda inputs: inputs[0] > inputs[1]) # boolean predicate measuring greater than\n", "eucl_dist = ltn.Function.Lambda(\n", " lambda inputs: tf.expand_dims(tf.norm(inputs[0]-inputs[1],axis=1),axis=1)\n", ") # function measuring euclidian distance\n", "\n", "\n", "Exists(d, \n", " Forall([x,y],\n", " Eq([x,y]),\n", " mask = is_greater_than([d, eucl_dist([x,y])])\n", " )\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The guarded option is particularly useful to propagate gradients (see notebook on learning) over only a subset of the domains, that verifies the condition $m$." ] } ], "metadata": { "interpreter": { "hash": "889985fd10eb245a43f2ae5f5aa0c555254f5b898fe16071f1c89d06fa8d76a2" }, "kernelspec": { "display_name": "Python 3.9.6 64-bit ('tf-py39': conda)", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.6" } }, "nbformat": 4, "nbformat_minor": 4 }