{ "cells": [ { "cell_type": "markdown", "id": "95fdf1c6-20c1-4695-92c3-1a30d118da9e", "metadata": { "tags": [] }, "source": [ "# <span style=\"color:green\"><center>Aprendizaje Profundo</center></span>" ] }, { "cell_type": "markdown", "id": "9ab2d584-a384-4517-96c0-b085f6adfe24", "metadata": { "tags": [] }, "source": [ "# <span style=\"color:red\"><center>Transformers- Procesos Internos </center></span>" ] }, { "cell_type": "markdown", "id": "ac6d5c0f-edf4-4e0a-8260-4f6a251eedd3", "metadata": {}, "source": [ "<center>Behind HuggingFace</center>" ] }, { "cell_type": "markdown", "id": "2814b837-8220-4f6c-8d62-4f76d8652736", "metadata": {}, "source": [ "## <span style=\"color:blue\">Autores</span>" ] }, { "cell_type": "markdown", "id": "f9f2eeab-4acc-4a4f-bb79-9e2d45a810d0", "metadata": {}, "source": [ "1. Alvaro Mauricio Montenegro Díaz, ammontenegrod@unal.edu.co\n", "2. Daniel Mauricio Montenegro Reyes, dextronomo@gmail.com " ] }, { "cell_type": "markdown", "id": "b002dc78-cb0a-47d5-a17f-c6df384b96f8", "metadata": { "tags": [] }, "source": [ "## <span style=\"color:blue\">Diseño gráfico y Marketing digital</span>\n", " " ] }, { "cell_type": "markdown", "id": "05789c43-41b3-494d-afaa-a370002d9ccf", "metadata": {}, "source": [ "1. Maria del Pilar Montenegro Reyes, pmontenegro88@gmail.com " ] }, { "cell_type": "markdown", "id": "bcda43ed-5f98-4ffa-b3d6-dd268a5fdc0e", "metadata": {}, "source": [ "## <span style=\"color:blue\">Asistentes</span>" ] }, { "cell_type": "markdown", "id": "d7a21b86-13b2-4c1b-824f-96dce45f3e4c", "metadata": {}, "source": [] }, { "cell_type": "markdown", "id": "e4447d4d-a448-40d6-98cd-cbaebf381d7c", "metadata": {}, "source": [ "## <span style=\"color:blue\">Referencias</span> " ] }, { "cell_type": "markdown", "id": "d23caca7-9f44-4ca3-a8c9-2c4cfd44ee39", "metadata": {}, "source": [ "1. [HuggingFace. Transformers ](https://huggingface.co/transformers/)\n", "1. [HuggingFace. Behind pipeline](https://huggingface.co/course/chapter2/2?fw=tf)\n", "1. [Tutorial Transformer de Google](https://www.tensorflow.org/text/tutorials/transformer)\n", "1. [Transformer-chatbot-tutorial-with-tensorflow-2](https://blog.tensorflow.org/2019/05/transformer-chatbot-tutorial-with-tensorflow-2.html) \n", "1. [Transformer Architecture: The positional encoding](https://kazemnejad.com/blog/transformer_architecture_positional_encoding/)\n", "1. [Illustrated Auto-attención](https://towardsdatascience.com/illustrated-self-attention-2d627e33b20a)\n", "1. [Illustrated Attention](https://towardsdatascience.com/attn-illustrated-attention-5ec4ad276ee3#0458)\n", "1. [Neural Machine Translation by Jointly Learning to Align and Translate (Bahdanau et. al, 2015)](https://arxiv.org/pdf/1409.0473.pdf)\n", "1. [Effective Approaches to Attention-based Neural Machine Translation (Luong et. al, 2015)](https://arxiv.org/pdf/1508.04025.pdf)\n", "1. [Attention Is All You Need (Vaswani et. al, 2017)](https://arxiv.org/pdf/1706.03762.pdf)\n", "1. [Self-Attention GAN (Zhang et. al, 2018)](https://arxiv.org/pdf/1805.08318.pdf)\n", "1. [Sequence to Sequence Learning with Neural Networks (Sutskever et. al, 2014)](https://arxiv.org/pdf/1409.3215.pdf)\n", "1. [TensorFlow’s seq2seq Tutorial with Attention (Tutorial on seq2seq+attention)](https://github.com/tensorflow/nmt)\n", "1. [Lilian Weng’s Blog on Attention (Great start to attention)](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)\n", "1. 
"1. [Jay Alammar’s Blog on Seq2Seq with Attention (great illustrations and a worked example on seq2seq + attention)](https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/)\n", "1. [Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation (Wu et al., 2016)](https://arxiv.org/pdf/1609.08144.pdf)\n", "1. [Adam: A Method for Stochastic Optimization](https://arxiv.org/pdf/1412.6980.pdf)" ] },
 { "cell_type": "markdown", "id": "2a0da79b-44dc-460c-ac1e-b71ade087a9c", "metadata": {}, "source": [ "## <span style=\"color:blue\">Contents</span>" ] },
 { "cell_type": "markdown", "id": "15195d1b-1dfc-4e35-a28d-5c6dad544252", "metadata": {}, "source": [ "* [Introduction](#Introduction)\n", "* [Stages of natural language processing](#Stages-of-natural-language-processing)\n", "* [Preprocessing with a tokenizer](#Preprocessing-with-a-tokenizer)\n", "* [Passing through the model](#Passing-through-the-model)\n", "* [Postprocessing the output](#Postprocessing-the-output)" ] },
 { "cell_type": "markdown", "id": "afe895d3-c25d-49d7-9b47-ff4835ba2b95", "metadata": {}, "source": [ "## <span style=\"color:blue\">Introduction</span>" ] },
 { "cell_type": "markdown", "id": "1c2ba28d-5780-4438-9084-be9c17530b1b", "metadata": {}, "source": [ "Modern natural language processing tasks essentially fall into the following groups:\n", "\n", "1. Text classification. For example, sentiment analysis.\n", "1. Automatic text generation from an initial phrase.\n", "1. Text generation in context, filling in gaps hidden with masks (`masked text`).\n", "1. Classification of each word in a sentence. For example, as noun, verb or adjective; or `ner` (named entity recognition): city, person name, location, organization.\n", "1. Generation of an answer to a question.\n", "1. Translation from one language to another." ] },
 { "cell_type": "markdown", "id": "47815890-5b20-446c-95a4-af3eac8bc5a4", "metadata": {}, "source": [ "## <span style=\"color:blue\">Stages of natural language processing</span>" ] },
 { "cell_type": "markdown", "id": "97ed58d5-2229-4d92-93cf-acbfac654b48", "metadata": {}, "source": [ "This lesson introduces the processes that the HuggingFace `pipeline` runs internally. Essentially, *pipeline* groups three steps:\n", "\n", "* Preprocessing.\n", "* Passing the inputs through the model.\n", "* Postprocessing." ] },
 { "cell_type": "markdown", "id": "52fbf622-8e20-437f-aea7-04d15837b430", "metadata": {}, "source": [ "<figure>\n", "<center>\n", "<img src=\"../Imagenes/full_nlp_pipeline.png\" width=\"800\" height=\"600\" align=\"center\"/>\n", "</center>\n", "<figcaption>\n", "<p style=\"text-align:center\">Stages of natural language processing</p>\n", "</figcaption>\n", "</figure>\n", "\n", "Source: [HuggingFace Transformers course](https://huggingface.co/course/chapter2/2?fw=tf)" ] },
 { "cell_type": "markdown", "id": "fc0ef327-6d72-4b13-9606-aeb0c3baa893", "metadata": {}, "source": [ "To run HuggingFace's original TensorFlow notebook in Colab, go to the [HuggingFace notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter2/section2_tf.ipynb); a [PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter2/section2_pt.ipynb) version is also available. In this lesson we use the TensorFlow version."
] }, { "cell_type": "markdown", "id": "a791a8ca-c0ed-4e60-bcf8-eabb7c29b3ce", "metadata": {}, "source": [ "### Clasificación de textos de textos" ] }, { "cell_type": "raw", "id": "e2d8ca0d-b529-432a-a424-5e0e0037bc25", "metadata": {}, "source": [ "!conda install -c huggingface transformers\n", "!conda install -c conda-forge sentencepiece" ] }, { "cell_type": "code", "execution_count": 4, "id": "7f2ae66c-ff6e-473c-9050-c26bb8c7b394", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'label': 'NEGATIVE', 'score': 0.9950999617576599}]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from transformers import pipeline\n", "\n", "classifier = pipeline(\"sentiment-analysis\")\n", "classifier(\"I've been waiting for a HuggingFace course my hole life\")" ] }, { "cell_type": "code", "execution_count": 3, "id": "e1d7b84b-3d4e-409a-8cba-9f2b621c12cc", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'label': 'POSITIVE', 'score': 0.9598047137260437},\n", " {'label': 'NEGATIVE', 'score': 0.9994558095932007}]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "classifier([\n", " \"I've been waiting for a HuggingFace course my whole life.\", \n", " \"I hate this so much!\"\n", "])" ] }, { "cell_type": "markdown", "id": "40165737-b736-4408-8353-7abfe922b485", "metadata": {}, "source": [ "## <span style=\"color:blue\">Preprocesamiento con un tokenizador</span>" ] }, { "cell_type": "markdown", "id": "6e24f7b7-b682-4e7d-90d3-ae0a17460ee3", "metadata": {}, "source": [ "Como cualquier red neuronal, los transformers no pueden procesar el texto crudo directamente. El primer paso es entonces convertir los textos en números que tengan sentido para la red. Este paso es hecho por un tokenizador. El tokenizador hace lo siguiente:\n", "\n", "1. Divide la entrada en palabras, subpalabras y en genral en unidades mínimas textuales como los signos de puntuación. Estas unidades mínimas se denominan tokens.\n", "1. Convierte (*map*) cada token único en un número entero, que será el indice (ID) con el cual se identificará el token a lo largo del proceso.\n", "1. Addiciona símbolos adicionales que será útiles para el modelo.\n", "\n", "Para usar un modelo preentrenado, es necesario hacer este proceso exáctamente de la misma manera como se hizo cuando se pre-entrenó al modelo, por lo que es necesario bajar esta información del [Hub del modelo](https://huggingface.co/models).\n", "\n", "Esto se hace con la clase AutoTokenizer y su método *from_pretrained*. Se usa el `checkpoint` del modelo, que es el conjunto de pesos del último modelo entrenado.\n", "\n", "En HugginFace el checkpoint por defecto para el *pipeline* **sentiment-analysis** es **distilbert-base-uncased-finetuned-sst-2-english**. La tarjeta de presentación del modelo (`card`) se puede consultar [aquí](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). Veámos:" ] }, { "cell_type": "code", "execution_count": 5, "id": "14203455-3b2f-4ed1-87d7-c3a8b2a883c4", "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "tokenizer = AutoTokenizer.from_pretrained(checkpoint)" ] }, { "cell_type": "markdown", "id": "8919782f-bd97-4170-99d3-b2a27e95cc75", "metadata": {}, "source": [ "Ya disponemos del tokenizador adecuado y podemos pasarle nuestras frases para conseguir de vuelta un diccionario que está listo para alimentar al modelo. 
,
 { "cell_type": "markdown", "id": "8919782f-bd97-4170-99d3-b2a27e95cc75", "metadata": {}, "source": [ "We now have the appropriate tokenizer and can pass it our sentences to get back a dictionary that is ready to feed the model. The only thing left to do is to convert the lists of IDs into tensors. The tensors can be PyTorch, TensorFlow or, more recently, Flax. In any case, it is important to keep in mind that HuggingFace transformers only accept tensors." ] },
 { "cell_type": "code", "execution_count": 6, "id": "d7b61839-4abd-4b4c-a200-01cfa9f0e9eb", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'input_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=\n", "array([[ 101, 2023, 2607, 2003, 10392, 1012, 102, 0],\n", " [ 101, 1045, 5223, 2023, 2061, 2172, 999, 102]],\n", " dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=\n", "array([[1, 1, 1, 1, 1, 1, 1, 0],\n", " [1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>}\n" ] } ], "source": [ "raw_inputs = [\n", "    'This course is fantastic.',\n", "    'I hate this so much!'\n", "]\n", "inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors='tf') # return_tensors='pt' for PyTorch tensors\n", "print(inputs)" ] },
 { "cell_type": "markdown", "id": "e7519d2f-e446-404e-a625-46d0414270f2", "metadata": {}, "source": [ "### Notes" ] },
 { "cell_type": "markdown", "id": "9653d9ca-569d-4a5b-891d-92cdc7a8f04f", "metadata": {}, "source": [ "1. The start-of-sentence token (101) and the end-of-sentence token (102) have been added. The first sentence has padding, to which the code 0 is assigned.\n", "1. The attention mask marks the padding tokens with zero. The transformer network uses it to mask the padding tokens, so that the softmax that finally encodes the self-attention gives no weight to those tokens." ] }
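,
 { "cell_type": "markdown", "id": "added-decode-ids-md", "metadata": {}, "source": [ "To make the notes above concrete, here is a small sketch added for illustration (not in the original notebook); it assumes `tokenizer` and `inputs` from the cells above and simply maps the IDs back to their tokens:" ] },
 { "cell_type": "code", "execution_count": null, "id": "added-decode-ids-code", "metadata": {}, "outputs": [], "source": [ "# Sketch: convert the IDs back into tokens to see the special [CLS] and [SEP]\n", "# symbols (IDs 101 and 102) and the [PAD] filler (ID 0) in the first sentence.\n", "for ids in inputs[\"input_ids\"].numpy().tolist():\n", "    print(tokenizer.convert_ids_to_tokens(ids))" ] }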
,
 { "cell_type": "markdown", "id": "56847e5a-4a84-4e0f-a2b5-a5b5240418a1", "metadata": {}, "source": [ "## <span style=\"color:blue\">Passing through the model</span>" ] },
 { "cell_type": "markdown", "id": "05916055-936f-4d0b-89b5-c59eff6ce396", "metadata": {}, "source": [ "We can download the pretrained model using the *TFAutoModel* class, which also has the *from_pretrained* method." ] },
 { "cell_type": "code", "execution_count": 8, "id": "3e9f6526-4eff-4208-86de-91d2ae3191d4", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Some layers from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english were not used when initializing TFDistilBertModel: ['pre_classifier', 'classifier', 'dropout_19']\n", "- This IS expected if you are initializing TFDistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing TFDistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", "All the layers of TFDistilBertModel were initialized from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertModel for predictions without further training.\n" ] } ], "source": [ "from transformers import TFAutoModel # AutoModel in PyTorch\n", "\n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "model = TFAutoModel.from_pretrained(checkpoint)" ] },
 { "cell_type": "raw", "id": "edc20437-2112-4786-aafd-a6a27b6e53b7", "metadata": {}, "source": [ "# torch\n", "from transformers import AutoModel \n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "model = AutoModel.from_pretrained(checkpoint)" ] },
 { "cell_type": "code", "execution_count": 9, "id": "c2f7e209-b943-4ba8-b21f-d41609036575", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model: \"tf_distil_bert_model_1\"\n", "_________________________________________________________________\n", "Layer (type) Output Shape Param # \n", "=================================================================\n", "distilbert (TFDistilBertMain multiple 66362880 \n", "=================================================================\n", "Total params: 66,362,880\n", "Trainable params: 66,362,880\n", "Non-trainable params: 0\n", "_________________________________________________________________\n" ] } ], "source": [ "model.summary()" ] }
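,
 { "cell_type": "markdown", "id": "added-config-md", "metadata": {}, "source": [ "The summary only shows the total number of parameters. As an added aside (not in the original lesson), the model configuration exposes the architecture sizes that determine the shape of the outputs discussed next; this is a sketch assuming the DistilBERT checkpoint loaded above:" ] },
 { "cell_type": "code", "execution_count": null, "id": "added-config-code", "metadata": {}, "outputs": [], "source": [ "# Sketch: the configuration lists the architecture sizes (among them the\n", "# hidden/embedding dimension) that determine the shape of the model's outputs.\n", "print(model.config)" ] }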
,
 { "cell_type": "markdown", "id": "b116f348-2ccc-433b-88f5-ceea45d5121b", "metadata": {}, "source": [ "### Output vector" ] },
 { "cell_type": "markdown", "id": "4edae72c-ec00-4eb7-bfbd-301210ee1f73", "metadata": {}, "source": [ "The output is usually a large tensor with three dimensions:\n", "\n", "* **Batch size**: the number of sequences processed at a time.\n", "* **Sequence length**: the length of each sequence.\n", "* **Hidden size**: the embedding dimension of the model. Usually 768 for small models, and up to 3072 or more for large models." ] },
 { "cell_type": "code", "execution_count": 10, "id": "26987d3a-c49b-45dc-bf8e-42da1eea4b48", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(2, 8, 768)\n" ] } ], "source": [ "outputs = model(inputs)\n", "print(outputs.last_hidden_state.shape)" ] },
 { "cell_type": "code", "execution_count": 7, "id": "6feaaf5e-51cf-417e-bc3b-08799ec745a7", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "TFBaseModelOutput(last_hidden_state=<tf.Tensor: shape=(2, 8, 768), dtype=float32, numpy=\n", "array([[[ 0.71443266, 0.16590782, 0.22164305, ..., 0.4517184 ,\n", " 0.8582398 , -0.4670435 ],\n", " [ 0.833101 , 0.1852059 , 0.14624734, ..., 0.42707193,\n", " 0.8372486 , -0.39773327],\n", " [ 0.9286826 , 0.1862842 , 0.2808665 , ..., 0.38525626,\n", " 0.7030744 , -0.4284996 ],\n", " ...,\n", " [ 0.8273146 , 0.05676208, 0.45793867, ..., 0.43878704,\n", " 0.83444375, -0.67980033],\n", " [ 1.0110863 , 0.14425977, 0.80684364, ..., 0.4608204 ,\n", " 0.61149776, -0.76684564],\n", " [ 0.6806943 , 0.1321691 , 0.23668604, ..., 0.40009487,\n", " 0.85364753, -0.45703 ]],\n", "\n", " [[-0.29370627, 0.7282562 , -0.14972652, ..., -0.11868126,\n", " -1.0226722 , -0.04215673],\n", " [-0.22063597, 0.9383844 , -0.09512459, ..., -0.36431718,\n", " -0.66052145, 0.24069718],\n", " [-0.15360779, 0.89875007, -0.07276374, ..., -0.21891779,\n", " -0.85275996, 0.07099426],\n", " ...,\n", " [-0.24425443, 0.7034866 , -0.11993726, ..., -0.3340568 ,\n", " -0.9157687 , 0.17108764],\n", " [-0.02888484, 0.90061086, -0.21417457, ..., -0.21345952,\n", " -0.92661613, -0.13903481],\n", " [ 0.04716622, 0.36029524, -0.17284958, ..., -0.31908724,\n", " -0.92990047, -0.10469045]]], dtype=float32)>, hidden_states=None, attentions=None)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "outputs" ] },
 { "cell_type": "markdown", "id": "8261d5df-dd4d-4080-b96e-62b23dc1a388", "metadata": {}, "source": [ "The *outputs* object contains the output of the transformer network, that is, the output of the encoder; in this case it will be used to classify sentences as positive or negative." ] },
 { "cell_type": "markdown", "id": "a00e64d1-f222-4c7d-ad4b-2283d293b788", "metadata": {}, "source": [ "### Model heads" ] },
 { "cell_type": "markdown", "id": "9883321d-02fd-4389-901f-4ca9f87f0498", "metadata": {}, "source": [ "In HuggingFace models, the head is the final set of linear layers that adapts the network to the task at hand. The image below illustrates the concept. The whole self-attention mechanism is contained in the transformer network. In other words, the head is the part that is added to the pretrained model, after removing its final layer, in order to then perform fine-tuning." ] },
 { "cell_type": "markdown", "id": "1f0ad346-4ef9-4656-8289-2cf50ff2842a", "metadata": {}, "source": [ "<figure>\n", "<center>\n", "<img src=\"../Imagenes/transformer_and_head.png\" width=\"800\" height=\"600\" align=\"center\"/>\n", "</center>\n", "<figcaption>\n", "<p style=\"text-align:center\">Transformer model in HuggingFace</p>\n", "</figcaption>\n", "</figure>\n", "\n", "Source: [HuggingFace Transformers course](https://huggingface.co/course/chapter2/2?fw=tf)" ] }
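,
 { "cell_type": "markdown", "id": "added-cls-vector-md", "metadata": {}, "source": [ "As an added illustration (not in the original notebook): for sequence classification, DistilBERT-style heads typically look only at the hidden state of the first token ([CLS]). The sketch below extracts that slice from the base model's output obtained above:" ] },
 { "cell_type": "code", "execution_count": null, "id": "added-cls-vector-code", "metadata": {}, "outputs": [], "source": [ "# Sketch: the per-sentence vector a classification head typically consumes is the\n", "# hidden state at the first position ([CLS]) of last_hidden_state.\n", "cls_vectors = outputs.last_hidden_state[:, 0, :]\n", "print(cls_vectors.shape)  # expected: (2, 768) for our two sentences" ] }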
] }, { "cell_type": "markdown", "id": "1f0ad346-4ef9-4656-8289-2cf50ff2842a", "metadata": {}, "source": [ "<figure>\n", "<center>\n", "<img src=\"../Imagenes/transformer_and_head.png\" width=\"800\" height=\"600\" align=\"center\"/>\n", "</center>\n", "<figcaption>\n", "<p style=\"text-align:center\">Modelo transformer en huggingface</p>\n", "</figcaption>\n", "</figure>\n", "\n", "Fuente: [HuggingFace Transformers course](https://huggingface.co/course/chapter2/2?fw=tf)" ] }, { "cell_type": "markdown", "id": "b2be31b4-8945-4aef-9900-44785bc14e1b", "metadata": {}, "source": [ "Hay diferentes arquitecturas disponibles en los transformers de HuggingFace, cada diseñada para atacar una tarea específica. Algunas de estas tareas son:\n", " \n", "* Model: para recupera los estados ocultos al final de la capa transformer(retrieve the hidden states)\n", "* ForCausalLM\n", "* ForMaskedLM\n", "* ForMultipleChoice\n", "* ForQuestionAnswering\n", "* ForSequenceClassification\n", "* ForTokenClassification\n", "* Otras\n", "\n", "Por ejemplo para la tarea de clasificación no usaremos Model (TFAutoModel) que mostramos arriba, sino que usaremos *TFAutoModelForSequenceClassification*." ] }, { "cell_type": "code", "execution_count": 12, "id": "b7bd0bdc-3c4c-4df6-9411-35daab81dbb9", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Some layers from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english were not used when initializing TFDistilBertForSequenceClassification: ['dropout_19']\n", "- This IS expected if you are initializing TFDistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing TFDistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", "Some layers of TFDistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english and are newly initialized: ['dropout_57']\n", "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "(2, 2)\n" ] } ], "source": [ "from transformers import TFAutoModelForSequenceClassification\n", "\n", "checkpoint = \"distilbert-base-uncased-finetuned-sst-2-english\"\n", "model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)\n", "\n", "outputs = model(inputs)\n", "\n", "print(outputs.logits.shape)\n" ] }, { "cell_type": "raw", "id": "2f814f80-853a-43c5-9993-c9b5206d6b03", "metadata": {}, "source": [ "# torch\n", "from transformers import AutoModelForSequenceClassification\n", "checkpoint = \"distilbert-base-uncased-finetuned-sst-2-english\"\n", "model = AutoModelForSequenceClassification.from_pretrained(checkpoint)" ] }, { "cell_type": "markdown", "id": "56725d07-21a2-48a9-b15d-5db834cb2ca2", "metadata": {}, "source": [ "## <span style=\"color:blue\">Pos-procesamiento de la salida</span>" ] }, { "cell_type": "markdown", "id": "a85560bc-8a54-4efa-b985-2b49dd68e2d2", "metadata": {}, "source": [ "La salida en nuestro ejmplo son logits:" ] }, { "cell_type": "code", "execution_count": 15, "id": "bb15481b-447a-40af-b6f0-c261a75d1b65", "metadata": {}, "outputs": [ { "name": "stdout", 
"output_type": "stream", "text": [ "tf.Tensor(\n", "[[-4.358263 4.692012 ]\n", " [ 4.169232 -3.3464477]], shape=(2, 2), dtype=float32)\n" ] } ], "source": [ "print(outputs.logits)" ] }, { "cell_type": "markdown", "id": "705e2aa4-e519-4e22-977d-cf91c734c82a", "metadata": {}, "source": [ "El modelo predice [-4.358263, 4.692012 ] para la primera sentencia y [ 4.169232, -3.3464477]. Estos valores no son probabilidades sino logits. Para convertirlas a probabilidades se usa la función *softmax*." ] }, { "cell_type": "code", "execution_count": 17, "id": "7c8f46fb-ff27-44de-a2f0-e7453e4c0f25", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[1.17345015e-04 9.99882698e-01]\n", " [9.99455869e-01 5.44183713e-04]], shape=(2, 2), dtype=float32)\n" ] } ], "source": [ "import tensorflow as tf\n", "\n", "predictions = tf.math.softmax(outputs.logits)\n", "print(predictions)" ] }, { "cell_type": "raw", "id": "1fa4b43d-9b88-44fb-b0e6-1bb11383ed89", "metadata": {}, "source": [ "# torch\n", "import torch\n", "predictions = torch.nn.functional.softmax(outputs.logits)" ] }, { "cell_type": "markdown", "id": "018d2871-53f3-4be9-968c-7a681f44af93", "metadata": {}, "source": [ "Para obtener las etiquetas (labels) asociadas a cada posición se puede utilizar *model.config.id2label*." ] }, { "cell_type": "code", "execution_count": 16, "id": "63d623dc-f70d-4008-bc86-3644a55667d9", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{0: 'NEGATIVE', 1: 'POSITIVE'}" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.config.id2label" ] }, { "cell_type": "markdown", "id": "fdcfc602-9d89-4554-92fa-4f5196ba703c", "metadata": {}, "source": [ "Por lo que la primera sentencia es calificada como positiva y la segunda como negativa." ] }, { "cell_type": "markdown", "id": "2794f261-c952-4419-817f-f03a7922c43f", "metadata": {}, "source": [ "</span>" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }