{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Grounding in Logic Tensor Networks (LTN)\n", "\n", "## Real Logic\n", "\n", "The semantics of LTN differs from the standard abstract semantics of First-order Logic (FOL) in that domains are interpreted concretely by tensors in the Real field.\n", "To emphasize the fact that LTN interprets symbols which are grounded on real-valued features, we use the term *grounding*, denoted by $\\mathcal{G}$, instead of interpretation. \n", "$\\mathcal{G}$ associates a tensor of real numbers to any term of the language, and a real number in the interval $[0,1]$ to any formula $\\phi$. \n", "In the rest of the tutorials, we commonly use \"tensor\" to designate \"tensor in the Real field\".\n", "\n", "The language consists of a non-logical part (the signature) and logical connectives and quantifiers.\n", "* **constants** denote individuals from some space of tensors $\\bigcup\\limits_{n_1 \\dots n_d \\in \\mathbb{N}^*} \\mathbb{R}^{n_1 \\times \\dots \\times n_d}$ (tensor of any rank). The individual can be pre-defined (data point) or learnable (embedding).\n", "* **variables** denote sequence of individuals.\n", "* **functions** can be any mathematical function either pre-defined or learnable. Examples of functions can be distance functions, regressors, etc. Functions can be defined using any operations in Tensorflow. They can be linear functions, Deep Neural Networks, and so forth.\n", "* **predicates** are represented as mathematical functions that map from some n-ary domain of individuals to a real from $[0,1]$ that can be interpreted as a truth degree. Examples of predicates can be similarity measures, classifiers, etc.\n", "* **connectives** -- not, and, or, implies -- are modeled using fuzzy semantics.\n", "* **quantifiers** are defined using aggregators.\n", "\n", "This tutorial explains how to ground constants, variables, functions and predicates." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Init Plugin\n", "Init Graph Optimizer\n", "Init Kernel\n" ] } ], "source": [ "import ltn\n", "import tensorflow as tf\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Constants\n", "\n", "LTN constants are grounded as some real tensor. Each constant $c$ is mapped to a point in $\\mathcal{G}(c) \\in \\bigcup\\limits_{n_1 \\dots n_d \\in \\mathbb{N}^*} \\mathbb{R}^{n_1 \\times \\dots \\times n_d}$. Notice that the objects in the domain may be tensors of any rank. A tensor of rank $0$ corresponds to a scalar, a tensor of rank $1$ to a vector, a tensor of rank $2$ to a matrix and so forth, in the usual way. \n", "Here we define $\\mathcal{G}(c_1)=(2.1,3)$ and $\\mathcal{G}(c_2)=\\begin{pmatrix}\n", "4.2 & 3 & 2.5\\\\\n", "4 & -1.3 & 1.8\n", "\\end{pmatrix}$." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2021-09-24 16:44:05.428136: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.\n", "2021-09-24 16:44:05.428252: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Metal device set to: Apple M1\n", "\n", "systemMemory: 16.00 GB\n", "maxCacheSize: 5.33 GB\n", "\n" ] } ], "source": [ "c1 = ltn.Constant([2.1,3], trainable=False)\n", "c2 = ltn.Constant([[4.2,3,2.5],[4,-1.3,1.8]], trainable=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that a constant can be set as learnable by using the keyword argument `trainable=True`. This is useful to learn embeddings for some individuals.\n", "The features of the tensor will be considered as trainable parameters (learning in LTN is explained in a further notebook)." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "c3 = ltn.Constant([0.,0.], trainable=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can access the TensorFlow value of a LTN constant or any LTN expression `x` by querying `x.tensor`." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ltn.Constant(tensor=[2.1 3. ], trainable=False, free_vars=[])\n", "tf.Tensor([2.1 3. ], shape=(2,), dtype=float32)\n", "ltn.Constant(tensor=, trainable=True, free_vars=[])\n", "\n" ] } ], "source": [ "print(c1)\n", "print(c1.tensor)\n", "print(c3)\n", "print(c3.tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Predicates\n", "\n", "LTN Predicates are grounded in mappings that assign a value between zero and one to some n-ary space of input values. Predicates in LTN can be neural networks or any other function that achieves such a mapping. \n", "\n", "There are different ways to construct a predicate in LTN:\n", "- the default constructor `ltn.Predicate(model)` takes in argument a `tf.keras.Model` instance; it can be used to ground any custom function (succession of operations, Deep Neural Network, ...) that return a value in $[0,1]$,\n", "- the lambda constructor `ltn.Predicate.Lambda(function)` takes in argument a lambda function; it is appropriate for small mathematical operations with **no trainable weights** (non-trainable function) that return a value in $[0,1]$.\n", "\n", "The following defines a predicate $P_1$ using the similarity to the point $\\vec{\\mu}=\\left<2,3\\right>$ with $\\mathcal{G}(P_1):\\vec{x}\\mapsto \\exp(-\\|\\vec{x}-\\vec{\\mu} \\|^2)$, and a predicate $P_2$ defined using a Tensorflow model." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "mu = tf.constant([2.,3.])\n", "P1 = ltn.Predicate.Lambda(lambda x: tf.exp(-tf.norm(x-mu,axis=1)))\n", "\n", "class ModelP2(tf.keras.Model):\n", " \"\"\"For more info on how to use tf.keras.Model:\n", " https://www.tensorflow.org/api_docs/python/tf/keras/Model\"\"\"\n", " def __init__(self):\n", " super(ModelP2, self).__init__()\n", " self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.elu)\n", " self.dense2 = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid) # returns one value in [0,1]\n", " def call(self, x):\n", " x = self.dense1(x)\n", " return self.dense2(x)\n", "\n", "modelP2 = ModelP2()\n", "P2 = ltn.Predicate(modelP2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can easily query predicates using LTN constants and LTN variables (see further in this notebook to query using variables)." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ltn.Formula(tensor=0.904837429523468, free_vars=[])\n" ] } ], "source": [ "c1 = ltn.Constant([2.1,3],trainable=False)\n", "c2 = ltn.Constant([4.5,0.8],trainable=False)\n", "\n", "print(P1(c1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NOTE: \n", "- If an LTN predicate (or an LTN function) takes several inputs, e.g. $P_3(x_1,x_2)$, the arguments must be passed as a list (cf Tensorflow's conventions).\n", "- LTN converts inputs such that there is a \"batch\" dimension on the first axis. Therefore, most operations should work with `axis=1`." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ltn.Formula(tensor=0.999293863773346, free_vars=[])\n" ] } ], "source": [ "class ModelP3(tf.keras.Model):\n", " def __init__(self):\n", " super(ModelP3, self).__init__()\n", " self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.elu)\n", " self.dense2 = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid) # returns one value in [0,1]\n", " \n", " def call(self, inputs):\n", " x1, x2 = inputs[0], inputs[1] # multiple arguments are passed as a list\n", " x = tf.concat([x1,x2],axis=1) # axis=0 is the batch axis\n", " x = self.dense1(x)\n", " return self.dense2(x)\n", " \n", "P3 = ltn.Predicate(ModelP3())\n", "print(P3([c1,c2])) # multiple arguments are passed as a list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can define trainable or non trainable 0-ary predicates (propositional variables) using `ltn.Proposition`. They are grounded as a mathematical constant in $[0,1]$." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ltn.Proposition(tensor=0.30000001192092896, trainable=False, free_vars=[])\n" ] } ], "source": [ "# Declaring a trainable 0-ary predicate with initial truth value 0.3\n", "A = ltn.Proposition(0.3, trainable=False)\n", "print(A)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more details and useful ways to create predicates (incl. how to integrate multiclass classifiers as binary predicates), see the complementary notebook." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Functions\n", "\n", "LTN functions are grounded as any mathematical function that maps $n$ individuals to one individual in the tensor domains. \n", "\n", "There are different ways to construct an LTN function in LTN:\n", "- the default constructor `ltn.Function(model)` takes in argument a `tf.keras.Model` instance; it can be used to ground any custom function (succession of operations, Deep Neural Network, ...),\n", "- the lambda constructor `ltn.Function.Lambda(function)` takes in argument a lambda function; it is appropriate for small mathematical operations with **no weight tracking** (non-trainable function).\n", "\n", "The following defines the grounding of a function $f_1$ that computes the difference of two inputs with $\\mathcal{G}(f_1):\\vec{u},\\vec{v}\\mapsto \\vec{u} - \\vec{v}$ and a function $f_2$ that uses a deep neural network that projects a value to $\\mathbb{R}^5$." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "f1 = ltn.Function.Lambda(lambda args: args[0]-args[1])\n", "\n", "class MyModel(tf.keras.Model):\n", " def __init__(self):\n", " super(MyModel, self).__init__()\n", " self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)\n", " self.dense2 = tf.keras.layers.Dense(5)\n", " def call(self, x):\n", " x = self.dense1(x)\n", " return self.dense2(x)\n", " \n", "model_f2 = MyModel()\n", "f2 = ltn.Function(model_f2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can easily query predicates using LTN constants and LTN variables (see further in this notebook to query using variables)." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ltn.Term(tensor=[-2.4 2.2], free_vars=[])\n", "ltn.Term(tensor=[-1.2620945 2.789203 0.17895734 -3.853095 2.1472168 ], free_vars=[])\n" ] } ], "source": [ "c1 = ltn.Constant([2.1,3], trainable=False)\n", "c2 = ltn.Constant([4.5,0.8], trainable=False)\n", "print(f1([c1,c2])) # multiple arguments are passed as a list\n", "print(f2(c1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Variables\n", "\n", "LTN variables are sequences of individuals/constants from a domain. Variables are useful to write quantified statements, e.g. $\\forall x\\ P(x)$. Notice that a variable is a sequence and not a set; the same value can occur twice in the sequence.\n", "\n", "The following defines two variables $x$ and $y$ with respectively 10 and 5 individuals, sampled from normal distributions in $\\mathbb{R}^2$. \n", "In LTN, variables need to be labelled (see the arguments `'x'` and `'y'` below)." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ltn.Variable(label=x, tensor=[[-3.3031723 -1.1864299 ]\n", " [ 0.88466865 0.91883767]\n", " [ 0.04525238 1.4658298 ]\n", " [-2.4912388 0.1417496 ]\n", " [ 1.0241538 -0.24183248]\n", " [-0.27830693 -0.75147146]\n", " [ 1.9639659 -1.0334796 ]\n", " [-0.19559991 -0.48291555]\n", " [ 0.4123148 -0.82985425]\n", " [-0.62613714 0.9860003 ]], free_vars=['x'])\n" ] } ], "source": [ "x = ltn.Variable('x',np.random.normal(0.,1.,(10,2)))\n", "y = ltn.Variable('y',np.random.normal(0.,4.,(5,2)))\n", "\n", "print(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Evaluating a term/predicate with one variable of $n$ individuals yields $n$ output values, where the $i$-th output value corresponds to the term calculated with the $i$-th individual.\n", "\n", "For example, if $x$ has 10 individuals, and $y$ has 5 individuals, $P_3(x,y)$ returns $10 \\times 5$ values.\n", "LTN keeps track of these dimensions and how they are connected to the variables using the attribute `free_vars`." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ltn.Formula(tensor=[[0.16020344 0.99925524 0.23648833 0.11888503 0.09269554]\n", " [0.37003732 0.9999889 0.46002787 0.6520825 0.3044056 ]\n", " [0.31776428 0.9999651 0.37986833 0.49643877 0.24240157]\n", " [0.2072321 0.99957293 0.24654248 0.16752328 0.12342633]\n", " [0.40251592 0.99999356 0.47012708 0.7431883 0.3429705 ]\n", " [0.32867122 0.9999753 0.37844056 0.4107827 0.23043956]\n", " [0.4734048 0.99999833 0.59228337 0.93088204 0.5602832 ]\n", " [0.33399373 0.99997556 0.377496 0.42079565 0.2359993 ]\n", " [0.37253213 0.99998915 0.42888838 0.60755545 0.29154813]\n", " [0.29656985 0.9999354 0.3264082 0.36363715 0.20904014]], free_vars=['x', 'y'])\n", "ltn.Formula(tensor=0.3177642822265625, free_vars=[])\n" ] } ], "source": [ "# Notice that the outcome is a 2 dimensional tensor where each cell\n", "# represents the satisfiability of P3 with each individual in x and in y.\n", "res1 = P3([x,y])\n", "print(res1) \n", "print(res1.take('x',2).take('y',0)) # gives the result calculated with the 3rd individual in x and the 1st individual in y" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(10, 5, 2)\n", "['x', 'y']\n", "ltn.Term(tensor=[7.6769648 2.0221434], free_vars=[])\n" ] } ], "source": [ "# This is also valid with the outputs of `ltn.Function`\n", "res2 = f1([x,y])\n", "print(res2.tensor.shape)\n", "print(res2.free_vars)\n", "print(res2.take('x',2).take('y',0)) # gives the result calculated with the 3rd individual in x and the 1st individual in y" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ltn.Formula(tensor=[0.3644095 0.9999945 0.71159357 0.83346635 0.45896447], free_vars=['y'])\n" ] } ], "source": [ "res3 = P3([c1,y])\n", "print(res3) # Notice that no axis is associated to a constant." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Variables made of trainable constants\n", "\n", "We can create a variable made of trainable constants. \n", "In this case, we need to define the variable within the scope of a `tf.GradientTape` object (used for training, see tutorial 3). \n", "The tape will track the weights between the variable and the constants." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([-0.00143811, 0.01174497], dtype=float32)" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "c1 = ltn.Constant([2.1,3], trainable=True)\n", "c2 = ltn.Constant([4.5,0.8], trainable=True)\n", "\n", "with tf.GradientTape() as tape:\n", " # Notice that the assignation must be done within a tf.GradientTape.\n", " # Tensorflow will keep track of the gradients between c1/c2 and x.\n", " # Read tutorial 3 for more details.\n", " x = ltn.Variable.from_constants(\"x\", [c1,c2], tape=tape)\n", " res = P2(x)\n", "tape.gradient(res.tensor,c1.tensor).numpy() # the tape keeps track of gradients between P2(x), x and c1" ] } ], "metadata": { "interpreter": { "hash": "889985fd10eb245a43f2ae5f5aa0c555254f5b898fe16071f1c89d06fa8d76a2" }, "kernelspec": { "display_name": "Python 3.9.6 64-bit ('tf-py39': conda)", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.6" } }, "nbformat": 4, "nbformat_minor": 4 }