{ "cells": [ { "cell_type": "markdown", "id": "95fdf1c6-20c1-4695-92c3-1a30d118da9e", "metadata": { "tags": [] }, "source": [ "# <span style=\"color:green\"><center>Aprendizaje Profundo</center></span>" ] }, { "cell_type": "markdown", "id": "9ab2d584-a384-4517-96c0-b085f6adfe24", "metadata": { "tags": [] }, "source": [ "# <span style=\"color:red\"><center>Transformers- Modelos</center></span>" ] }, { "cell_type": "markdown", "id": "ac6d5c0f-edf4-4e0a-8260-4f6a251eedd3", "metadata": {}, "source": [ "<center>Modelos HuggingFace</center>" ] }, { "cell_type": "markdown", "id": "2814b837-8220-4f6c-8d62-4f76d8652736", "metadata": {}, "source": [ "## <span style=\"color:blue\">Autores</span>" ] }, { "cell_type": "markdown", "id": "f9f2eeab-4acc-4a4f-bb79-9e2d45a810d0", "metadata": {}, "source": [ "1. Alvaro Mauricio Montenegro Díaz, ammontenegrod@unal.edu.co\n", "2. Daniel Mauricio Montenegro Reyes, dextronomo@gmail.com " ] }, { "cell_type": "markdown", "id": "b002dc78-cb0a-47d5-a17f-c6df384b96f8", "metadata": {}, "source": [ "## <span style=\"color:blue\">Diseño gráfico y Marketing digital</span>\n", " " ] }, { "cell_type": "markdown", "id": "05789c43-41b3-494d-afaa-a370002d9ccf", "metadata": {}, "source": [ "1. Maria del Pilar Montenegro Reyes, pmontenegro88@gmail.com " ] }, { "cell_type": "markdown", "id": "bcda43ed-5f98-4ffa-b3d6-dd268a5fdc0e", "metadata": {}, "source": [ "## <span style=\"color:blue\">Asistentes</span>" ] }, { "cell_type": "markdown", "id": "d7a21b86-13b2-4c1b-824f-96dce45f3e4c", "metadata": {}, "source": [] }, { "cell_type": "markdown", "id": "e4447d4d-a448-40d6-98cd-cbaebf381d7c", "metadata": {}, "source": [ "## <span style=\"color:blue\">Referencias</span> " ] }, { "cell_type": "markdown", "id": "d23caca7-9f44-4ca3-a8c9-2c4cfd44ee39", "metadata": {}, "source": [ "1. [HuggingFace. Transformers ](https://huggingface.co/transformers/)\n", "1. [HuggingFace. Models](https://huggingface.co/course/chapter2/3?fw=tf)\n", "1. [Tutorial Transformer de Google](https://www.tensorflow.org/text/tutorials/transformer)\n", "1. [Transformer-chatbot-tutorial-with-tensorflow-2](https://blog.tensorflow.org/2019/05/transformer-chatbot-tutorial-with-tensorflow-2.html) \n", "1. [Transformer Architecture: The positional encoding](https://kazemnejad.com/blog/transformer_architecture_positional_encoding/)\n", "1. [Illustrated Auto-attención](https://towardsdatascience.com/illustrated-self-attention-2d627e33b20a)\n", "1. [Illustrated Attention](https://towardsdatascience.com/attn-illustrated-attention-5ec4ad276ee3#0458)\n", "1. [Neural Machine Translation by Jointly Learning to Align and Translate (Bahdanau et. al, 2015)](https://arxiv.org/pdf/1409.0473.pdf)\n", "1. [Effective Approaches to Attention-based Neural Machine Translation (Luong et. al, 2015)](https://arxiv.org/pdf/1508.04025.pdf)\n", "1. [Attention Is All You Need (Vaswani et. al, 2017)](https://arxiv.org/pdf/1706.03762.pdf)\n", "1. [Self-Attention GAN (Zhang et. al, 2018)](https://arxiv.org/pdf/1805.08318.pdf)\n", "1. [Sequence to Sequence Learning with Neural Networks (Sutskever et. al, 2014)](https://arxiv.org/pdf/1409.3215.pdf)\n", "1. [TensorFlow’s seq2seq Tutorial with Attention (Tutorial on seq2seq+attention)](https://github.com/tensorflow/nmt)\n", "1. [Lilian Weng’s Blog on Attention (Great start to attention)](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)\n", "1. [Jay Alammar’s Blog on Seq2Seq with Attention (Great illustrations and worked example on seq2seq+attention)](https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/)\n", "1. [Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation (Wu et. al, 2016)](https://arxiv.org/pdf/1609.08144.pdf)\n", "1. [Adam: A method for stochastic optimization](https://arxiv.org/pdf/1412.6980.pdf)\n", "1. [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/pdf/1810.04805.pdf)" ] }, { "cell_type": "markdown", "id": "2a0da79b-44dc-460c-ac1e-b71ade087a9c", "metadata": {}, "source": [ "## <span style=\"color:blue\">Contenido</span>" ] }, { "cell_type": "markdown", "id": "15195d1b-1dfc-4e35-a28d-5c6dad544252", "metadata": {}, "source": [ "* [Introducción](#Introducción)\n", "* [Creación-de-un-Transformer](#Creación-de-un-Transformer)" ] }, { "cell_type": "markdown", "id": "afe895d3-c25d-49d7-9b47-ff4835ba2b95", "metadata": {}, "source": [ "## <span style=\"color:blue\">Introducción</span>" ] }, { "cell_type": "markdown", "id": "1c2ba28d-5780-4438-9084-be9c17530b1b", "metadata": {}, "source": [ "En esta lección hacemos un acercamiento a la creación y uso de un modelo transformer en HuggingFace. Usaremos la clase *TFAutoModel* en Tensoflow, *AutoModel* en Torch.\n", "\n", "*TFAutoModel* y clases cercanas son envoltorios (wrappers) de una gran variedad de modelos disponibles en la librería transformers." ] }, { "cell_type": "markdown", "id": "47815890-5b20-446c-95a4-af3eac8bc5a4", "metadata": {}, "source": [ "## <span style=\"color:blue\">Creación de un Transformer</span>" ] }, { "cell_type": "markdown", "id": "fda461a6-bc48-498c-8949-eb525b84b403", "metadata": {}, "source": [ "Primero inicializaremos un modelo BERT, cargandolo desde su archivo de configuración." ] }, { "cell_type": "markdown", "id": "fc0ef327-6d72-4b13-9606-aeb0c3baa893", "metadata": {}, "source": [ "Para correr el cuaderno original de HuggingFace para Tensorflow en Colab vaya a [Huggingface notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter2/section3_tf.ipynb). Para Torch [aquí](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter2/section3_pt.ipynb)." ] }, { "cell_type": "raw", "id": "e76fd28f-ef5a-452b-b562-ffd48a9fb361", "metadata": {}, "source": [ "!conda install -c huggingface transformers\n", "!conda install -c conda-forge sentencepiece" ] }, { "cell_type": "code", "execution_count": 1, "id": "7f2ae66c-ff6e-473c-9050-c26bb8c7b394", "metadata": {}, "outputs": [], "source": [ "from transformers import BertConfig, TFBertModel\n", "\n", "# Instancia configuración\n", "config = BertConfig()\n", "\n", "# Instancia el modelo desde la configuración\n", "model = TFBertModel(config)\n", "\n", "# El modelo fue inicializado aleatoriamente por defecto##" ] }, { "cell_type": "raw", "id": "b7de20b1-b856-4234-a416-7700f2849908", "metadata": {}, "source": [ "# torch\n", "from transformers import BertConfig, BertModel\n", "\n", "# Building the config\n", "config = BertConfig()\n", "\n", "# Building the model from the config\n", "model = BertModel(config)" ] }, { "cell_type": "code", "execution_count": 2, "id": "7a3e2d9c-8f1f-4e07-9ed8-30942f1b6caf", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "BertConfig {\n", " \"attention_probs_dropout_prob\": 0.1,\n", " \"gradient_checkpointing\": false,\n", " \"hidden_act\": \"gelu\",\n", " \"hidden_dropout_prob\": 0.1,\n", " \"hidden_size\": 768,\n", " \"initializer_range\": 0.02,\n", " \"intermediate_size\": 3072,\n", " \"layer_norm_eps\": 1e-12,\n", " \"max_position_embeddings\": 512,\n", " \"model_type\": \"bert\",\n", " \"num_attention_heads\": 12,\n", " \"num_hidden_layers\": 12,\n", " \"pad_token_id\": 0,\n", " \"position_embedding_type\": \"absolute\",\n", " \"transformers_version\": \"4.8.1\",\n", " \"type_vocab_size\": 2,\n", " \"use_cache\": true,\n", " \"vocab_size\": 30522\n", "}\n", "\n" ] } ], "source": [ "print(config)" ] }, { "cell_type": "markdown", "id": "66966487-1bff-4252-a5d7-b93b805f365d", "metadata": {}, "source": [ "El modelo está listo para ser entrenado para una tarea específica. Sin embargo esto requiere tiempo dinero posiblemente y una gran cantidad de datos. Por estas razones, lo razonable actualmente es empezar con un modelo preentrenado." ] }, { "cell_type": "markdown", "id": "c58dab01-e655-4f99-9619-49ff894408f8", "metadata": {}, "source": [ "### Carga de un modelo preentrado" ] }, { "cell_type": "code", "execution_count": null, "id": "69b142db-7bab-49ed-a8f3-40da2f642028", "metadata": {}, "outputs": [], "source": [ "from transformers import TFBertModel\n", "\n", "model = TFBertModel.from_pretrained('bert-base-cased')" ] }, { "cell_type": "raw", "id": "edefac11-467f-47c6-b4dc-3733f2004658", "metadata": {}, "source": [ "# torch\n", "from transformers import BertModel\n", "\n", "model = BertModel.from_pretrained('bert-base-cased')" ] }, { "cell_type": "markdown", "id": "d8368cd7-d7ba-4917-95a3-cf42d7e80e8d", "metadata": {}, "source": [ "Aquí el uso de *TFBertModel* es equivalente a *TFAutoModel*. Para los detalles del modelo entrenado revise su [model card](https://huggingface.co/bert-base-cased). Este modelo fue inicializado con los pesos del checkpoint de pesos actualmente disponible en la fuente (Hugging Face). Ya se puede usar para inferencia en tareas para las cuales fue entrenado. en este caso predecir palabras o sentencias enmascaradas. Para una tarea diferente se puede hacer un ajuste fino (*fine tunning*) que reentrena el modelo con muy pocos pasos.\n", "\n", "Los pesos fueron cargados y colocados en una carpeta caché. que por defecto es ~/.cache/huggingface/transformers. Se puede modificar eata direccción de cache con la variable ambiente **HF_HOME**. La lista completa de checkpoints disponibles para BERT se encuentra [aquí](https://huggingface.co/models?filter=bert)." ] }, { "cell_type": "markdown", "id": "81d49b4f-0c2a-4f4b-8127-21bf9bef9a46", "metadata": {}, "source": [ "### Métodos para guardar modelos preentrenados." ] }, { "cell_type": "markdown", "id": "f933cd58-5ecc-4fd4-b058-f204ce93a92e", "metadata": {}, "source": [ "Se usa el método **save_pretrained**, el cual es análogo al métdos **from_pretrained**" ] }, { "cell_type": "raw", "id": "77f0ae3b-5e3d-46d3-b221-764fbe5eb7ec", "metadata": {}, "source": [ "model.save_pretrained('directorio/en/mi/computador')." ] }, { "cell_type": "markdown", "id": "4ebe9ca3-64b3-49c4-bbeb-70ea9c244a11", "metadata": {}, "source": [ "Se almacena dos archivos en el disco:" ] }, { "cell_type": "raw", "id": "840546bf-a81d-4e2b-85d5-0e01e7487252", "metadata": {}, "source": [ "ls directorio/en/mi/computador\n", "config.json tf_model.h5" ] }, { "cell_type": "markdown", "id": "0d5621fa-d1f3-47ef-a5bd-a511546babc3", "metadata": {}, "source": [ "El archivo *config.json* contiene la configuración del modelo preentrenado en formato json. El archivo *tf_model.h5* es conocido como diccionario de estado (state dictionary) y contiene todos los pesos del modelo para este checkpoint. El formato h5 corresponde a archivos de tipo [hdf5](https://abrirarchivos.info/extension/h5#:~:text=Un%20archivo%20H5%20es%20un,de%20Datos%20Jer%C3%A1rquicos%20(HDF).&text=Los%20archivos%20H5%20se%20usan,instrumentos%20electr%C3%B3nicos%20y%20campos%20m%C3%A9dicos.). Los dos archivos van de la mano. " ] }, { "cell_type": "markdown", "id": "582a19a5-b469-4a5b-9bef-36b5c5c7992b", "metadata": {}, "source": [ "### Uso de un modelo Transformer para inferencia" ] }, { "cell_type": "markdown", "id": "1414cc63-0ceb-4836-99d6-0989f3cd275d", "metadata": {}, "source": [ "Consideremos el siguiente conjunto de sentencias." ] }, { "cell_type": "code", "execution_count": 4, "id": "8679c9d3-2c03-414d-a9a1-0c06117076c7", "metadata": {}, "outputs": [], "source": [ "sequences = [\n", " 'Hello',\n", " 'Cool',\n", " 'Nice!'\n", "]" ] }, { "cell_type": "markdown", "id": "6ee93d80-e51e-446b-b50c-0c820e0591b7", "metadata": {}, "source": [ "El tokenizer convierte esta secuencias en índices del respectivo vocabulario." ] }, { "cell_type": "code", "execution_count": 5, "id": "5bbbcfd4-9a93-4cd5-980f-5b3a57ccb1d9", "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "checkpoint = 'bert-base-cased'\n", "tokenizer = AutoTokenizer.from_pretrained(checkpoint)" ] }, { "cell_type": "markdown", "id": "0be2be52-8799-4592-9a6a-6f0b682707d9", "metadata": {}, "source": [ "#### Tokeniza la secuencia" ] }, { "cell_type": "code", "execution_count": 6, "id": "41c82615-02f0-4206-89c5-bcd220539f5b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 101 8667 102 0]\n", " [ 101 13297 102 0]\n", " [ 101 8835 106 102]]\n" ] } ], "source": [ "encoded_secuences = tokenizer(sequences, padding=True, truncation=True, return_tensors='tf') # return ='pt' para tensores pytorch\n", "\n", "print(encoded_secuences.input_ids.numpy())" ] }, { "cell_type": "code", "execution_count": 7, "id": "489c0520-7ba4-4b95-91bb-63ebd013ec74", "metadata": {}, "outputs": [], "source": [ "encoded_secuences = encoded_secuences.input_ids.numpy()" ] }, { "cell_type": "markdown", "id": "b8657299-6c4e-4f46-80c9-5d06e92b6849", "metadata": {}, "source": [ "Esta es una lista de secuencias codificadas. Si vamos a comenzar nuestro trabajo con estas secuencias codificadas, las convertimos en tensores" ] }, { "cell_type": "code", "execution_count": 8, "id": "f9c2fe36-c8ba-44b4-ac77-4945587e360c", "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "model_inputs = tf.constant(encoded_secuences)" ] }, { "cell_type": "raw", "id": "778b9d62-ba44-4bee-8a37-445d6f945536", "metadata": {}, "source": [ "# torch\n", "import torch\n", "\n", "model_inputs = torch.tensor(encoded_sequences)" ] }, { "cell_type": "markdown", "id": "2c792392-f72f-43b5-aff5-320ab14dc892", "metadata": {}, "source": [ "### Uso de los tensores como entradas al modelo" ] }, { "cell_type": "code", "execution_count": 9, "id": "63047f7c-c9eb-429f-88c3-55027d38b525", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TFBaseModelOutputWithPooling(last_hidden_state=<tf.Tensor: shape=(3, 4, 768), dtype=float32, numpy=\n", "array([[[-0.90261275, 0.46482232, -1.0680807 , ..., -0.699818 ,\n", " 0.0272803 , 0.8792781 ],\n", " [-0.2942962 , 0.43228415, -0.28740913, ..., 0.3589468 ,\n", " -1.4553845 , -0.7646338 ],\n", " [ 0.6877541 , 0.7636266 , -0.20870464, ..., 0.9895973 ,\n", " -0.38200825, 0.54974 ],\n", " [ 0.07113194, 0.6081523 , -0.17413221, ..., 0.5636239 ,\n", " -0.80637443, 0.35920945]],\n", "\n", " [[-0.9534947 , 0.51484615, -0.96658885, ..., -0.759168 ,\n", " 0.02149527, 0.84670496],\n", " [ 0.20863365, 1.0134907 , -0.2225053 , ..., 0.81137526,\n", " -0.75574476, -0.21008678],\n", " [ 0.6778561 , 0.77220726, 0.02414589, ..., 1.0647261 ,\n", " -0.39467815, 0.5474687 ],\n", " [ 0.05586064, 0.6965945 , -0.03238491, ..., 0.5867801 ,\n", " -0.73757684, 0.2639028 ]],\n", "\n", " [[-0.3660255 , 0.6517344 , -1.5038463 , ..., -0.7160782 ,\n", " 0.1564751 , 0.63766223],\n", " [ 0.00262259, -0.05316934, -0.38248605, ..., -0.36637494,\n", " -0.73271465, -0.23060991],\n", " [ 2.1378229 , 0.26028013, 0.1468118 , ..., 0.69814235,\n", " -2.1301665 , -0.93348044],\n", " [-0.2108147 , 0.9443397 , -0.9976817 , ..., 0.7214987 ,\n", " 0.40497112, 0.87027246]]], dtype=float32)>, pooler_output=<tf.Tensor: shape=(3, 768), dtype=float32, numpy=\n", "array([[ 0.30539322, 0.86028224, 0.6725495 , ..., 0.6405281 ,\n", " 0.17553997, -0.12844808],\n", " [ 0.15537432, 0.8595787 , 0.66877705, ..., 0.6579369 ,\n", " 0.23296279, -0.21999657],\n", " [ 0.17398041, 0.8622835 , 0.6576183 , ..., 0.60278434,\n", " 0.21288712, -0.03698413]], dtype=float32)>, hidden_states=None, attentions=None)\n" ] } ], "source": [ "output = model(model_inputs)\n", "print(output)" ] }, { "cell_type": "code", "execution_count": 10, "id": "f52970be-aab7-4094-8dfc-e5155e0a2bbc", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Help on TFBaseModelOutputWithPooling in module transformers.modeling_tf_outputs object:\n", "\n", "class TFBaseModelOutputWithPooling(transformers.file_utils.ModelOutput)\n", " | TFBaseModelOutputWithPooling(last_hidden_state: tensorflow.python.framework.ops.Tensor = None, pooler_output: tensorflow.python.framework.ops.Tensor = None, hidden_states: Union[Tuple[tensorflow.python.framework.ops.Tensor], NoneType] = None, attentions: Union[Tuple[tensorflow.python.framework.ops.Tensor], NoneType] = None) -> None\n", " | \n", " | Base class for model's outputs that also contains a pooling of the last hidden states.\n", " | \n", " | Args:\n", " | last_hidden_state (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`):\n", " | Sequence of hidden-states at the output of the last layer of the model.\n", " | pooler_output (:obj:`tf.Tensor` of shape :obj:`(batch_size, hidden_size)`):\n", " | Last layer hidden-state of the first token of the sequence (classification token) further processed by a\n", " | Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence\n", " | prediction (classification) objective during pretraining.\n", " | \n", " | This output is usually *not* a good summary of the semantic content of the input, you're often better with\n", " | averaging or pooling the sequence of hidden-states for the whole input sequence.\n", " | hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):\n", " | Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of\n", " | shape :obj:`(batch_size, sequence_length, hidden_size)`.\n", " | \n", " | Hidden-states of the model at the output of each layer plus the initial embedding outputs.\n", " | attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):\n", " | Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length,\n", " | sequence_length)`.\n", " | \n", " | Attentions weights after the attention softmax, used to compute the weighted average in the self-attention\n", " | heads.\n", " | \n", " | Method resolution order:\n", " | TFBaseModelOutputWithPooling\n", " | transformers.file_utils.ModelOutput\n", " | collections.OrderedDict\n", " | builtins.dict\n", " | builtins.object\n", " | \n", " | Methods defined here:\n", " | \n", " | __eq__(self, other)\n", " | \n", " | __init__(self, last_hidden_state: tensorflow.python.framework.ops.Tensor = None, pooler_output: tensorflow.python.framework.ops.Tensor = None, hidden_states: Union[Tuple[tensorflow.python.framework.ops.Tensor], NoneType] = None, attentions: Union[Tuple[tensorflow.python.framework.ops.Tensor], NoneType] = None) -> None\n", " | \n", " | __repr__(self)\n", " | \n", " | ----------------------------------------------------------------------\n", " | Data and other attributes defined here:\n", " | \n", " | __annotations__ = {'attentions': typing.Union[typing.Tuple[tensorflow....\n", " | \n", " | __dataclass_fields__ = {'attentions': Field(name='attentions',type=typ...\n", " | \n", " | __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,or...\n", " | \n", " | __hash__ = None\n", " | \n", " | attentions = None\n", " | \n", " | hidden_states = None\n", " | \n", " | last_hidden_state = None\n", " | \n", " | pooler_output = None\n", " | \n", " | ----------------------------------------------------------------------\n", " | Methods inherited from transformers.file_utils.ModelOutput:\n", " | \n", " | __delitem__(self, *args, **kwargs)\n", " | Delete self[key].\n", " | \n", " | __getitem__(self, k)\n", " | x.__getitem__(y) <==> x[y]\n", " | \n", " | __post_init__(self)\n", " | \n", " | __setattr__(self, name, value)\n", " | Implement setattr(self, name, value).\n", " | \n", " | __setitem__(self, key, value)\n", " | Set self[key] to value.\n", " | \n", " | pop(self, *args, **kwargs)\n", " | od.pop(k[,d]) -> v, remove specified key and return the corresponding\n", " | value. If key is not found, d is returned if given, otherwise KeyError\n", " | is raised.\n", " | \n", " | setdefault(self, *args, **kwargs)\n", " | Insert key with a value of default if key is not in the dictionary.\n", " | \n", " | Return the value for key if key is in the dictionary, else default.\n", " | \n", " | to_tuple(self) -> Tuple[Any]\n", " | Convert self to a tuple containing all the attributes/keys that are not ``None``.\n", " | \n", " | update(self, *args, **kwargs)\n", " | D.update([E, ]**F) -> None. Update D from dict/iterable E and F.\n", " | If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\n", " | If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\n", " | In either case, this is followed by: for k in F: D[k] = F[k]\n", " | \n", " | ----------------------------------------------------------------------\n", " | Methods inherited from collections.OrderedDict:\n", " | \n", " | __ge__(self, value, /)\n", " | Return self>=value.\n", " | \n", " | __gt__(self, value, /)\n", " | Return self>value.\n", " | \n", " | __iter__(self, /)\n", " | Implement iter(self).\n", " | \n", " | __le__(self, value, /)\n", " | Return self<=value.\n", " | \n", " | __lt__(self, value, /)\n", " | Return self<value.\n", " | \n", " | __ne__(self, value, /)\n", " | Return self!=value.\n", " | \n", " | __reduce__(...)\n", " | Return state information for pickling\n", " | \n", " | __reversed__(...)\n", " | od.__reversed__() <==> reversed(od)\n", " | \n", " | __sizeof__(...)\n", " | D.__sizeof__() -> size of D in memory, in bytes\n", " | \n", " | clear(...)\n", " | od.clear() -> None. Remove all items from od.\n", " | \n", " | copy(...)\n", " | od.copy() -> a shallow copy of od\n", " | \n", " | items(...)\n", " | D.items() -> a set-like object providing a view on D's items\n", " | \n", " | keys(...)\n", " | D.keys() -> a set-like object providing a view on D's keys\n", " | \n", " | move_to_end(self, /, key, last=True)\n", " | Move an existing element to the end (or beginning if last is false).\n", " | \n", " | Raise KeyError if the element does not exist.\n", " | \n", " | popitem(self, /, last=True)\n", " | Remove and return a (key, value) pair from the dictionary.\n", " | \n", " | Pairs are returned in LIFO order if last is true or FIFO order if false.\n", " | \n", " | values(...)\n", " | D.values() -> an object providing a view on D's values\n", " | \n", " | ----------------------------------------------------------------------\n", " | Class methods inherited from collections.OrderedDict:\n", " | \n", " | fromkeys(iterable, value=None) from builtins.type\n", " | Create a new ordered dictionary with keys from iterable and values set to value.\n", " | \n", " | ----------------------------------------------------------------------\n", " | Data descriptors inherited from collections.OrderedDict:\n", " | \n", " | __dict__\n", " | \n", " | ----------------------------------------------------------------------\n", " | Methods inherited from builtins.dict:\n", " | \n", " | __contains__(self, key, /)\n", " | True if the dictionary has the specified key, else False.\n", " | \n", " | __getattribute__(self, name, /)\n", " | Return getattr(self, name).\n", " | \n", " | __len__(self, /)\n", " | Return len(self).\n", " | \n", " | get(self, key, default=None, /)\n", " | Return the value for key if key is in the dictionary, else default.\n", " | \n", " | ----------------------------------------------------------------------\n", " | Static methods inherited from builtins.dict:\n", " | \n", " | __new__(*args, **kwargs) from builtins.type\n", " | Create and return a new object. See help(type) for accurate signature.\n", "\n" ] } ], "source": [ "help(output)" ] }, { "cell_type": "code", "execution_count": null, "id": "02a762a4-6068-4fff-b07d-42424728cfaa", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }