{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Transformers installation\n", "! pip install transformers datasets\n", "# To install from source instead of the last release, comment the command above and uncomment the following one.\n", "# ! pip install git+https://github.com/huggingface/transformers.git" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Multi-lingual models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Most of the models available in this library are mono-lingual models (English, Chinese and German). A few multi-lingual\n", "models are available and have a different mechanisms than mono-lingual models. This page details the usage of these\n", "models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## XLM" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "XLM has a total of 10 different checkpoints, only one of which is mono-lingual. The 9 remaining model checkpoints can\n", "be split in two categories: the checkpoints that make use of language embeddings, and those that don't" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### XLM & Language Embeddings" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This section concerns the following checkpoints:\n", "\n", "- `xlm-mlm-ende-1024` (Masked language modeling, English-German)\n", "- `xlm-mlm-enfr-1024` (Masked language modeling, English-French)\n", "- `xlm-mlm-enro-1024` (Masked language modeling, English-Romanian)\n", "- `xlm-mlm-xnli15-1024` (Masked language modeling, XNLI languages)\n", "- `xlm-mlm-tlm-xnli15-1024` (Masked language modeling + Translation, XNLI languages)\n", "- `xlm-clm-enfr-1024` (Causal language modeling, English-French)\n", "- `xlm-clm-ende-1024` (Causal language modeling, English-German)\n", "\n", "These checkpoints require language embeddings that will specify the language used at inference time. These language\n", "embeddings are represented as a tensor that is of the same shape as the input ids passed to the model. The values in\n", "these tensors depend on the language used and are identifiable using the `lang2id` and `id2lang` attributes from\n", "the tokenizer.\n", "\n", "Here is an example using the `xlm-clm-enfr-1024` checkpoint (Causal language modeling, English-French):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "from transformers import XLMTokenizer, XLMWithLMHeadModel\n", "\n", "tokenizer = XLMTokenizer.from_pretrained(\"xlm-clm-enfr-1024\")\n", "model = XLMWithLMHeadModel.from_pretrained(\"xlm-clm-enfr-1024\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The different languages this model/tokenizer handles, as well as the ids of these languages are visible using the\n", "`lang2id` attribute:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'en': 0, 'fr': 1}" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(tokenizer.lang2id)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These ids should be used when passing a language parameter during a model pass. Let's define our inputs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "input_ids = torch.tensor([tokenizer.encode(\"Wikipedia was used to\")]) # batch size of 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We should now define the language embedding by using the previously defined language id. We want to create a tensor\n", "filled with the appropriate language ids, of the same size as input_ids. For english, the id is 0:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "language_id = tokenizer.lang2id[\"en\"] # 0\n", "langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0])\n", "\n", "# We reshape it to be of size (batch_size, sequence_length)\n", "langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can then feed it all as input to your model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "outputs = model(input_ids, langs=langs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The example [run_generation.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-generation/run_generation.py) can generate text\n", "using the CLM checkpoints from XLM, using the language embeddings." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### XLM without Language Embeddings" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This section concerns the following checkpoints:\n", "\n", "- `xlm-mlm-17-1280` (Masked language modeling, 17 languages)\n", "- `xlm-mlm-100-1280` (Masked language modeling, 100 languages)\n", "\n", "These checkpoints do not require language embeddings at inference time. These models are used to have generic sentence\n", "representations, differently from previously-mentioned XLM checkpoints." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## BERT" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "BERT has two checkpoints that can be used for multi-lingual tasks:\n", "\n", "- `bert-base-multilingual-uncased` (Masked language modeling + Next sentence prediction, 102 languages)\n", "- `bert-base-multilingual-cased` (Masked language modeling + Next sentence prediction, 104 languages)\n", "\n", "These checkpoints do not require language embeddings at inference time. They should identify the language used in the\n", "context and infer accordingly." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## XLM-RoBERTa" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "XLM-RoBERTa was trained on 2.5TB of newly created clean CommonCrawl data in 100 languages. It provides strong gains\n", "over previously released multi-lingual models like mBERT or XLM on downstream tasks like classification, sequence\n", "labeling and question answering.\n", "\n", "Two XLM-RoBERTa checkpoints can be used for multi-lingual tasks:\n", "\n", "- `xlm-roberta-base` (Masked language modeling, 100 languages)\n", "- `xlm-roberta-large` (Masked language modeling, 100 languages)" ] } ], "metadata": {}, "nbformat": 4, "nbformat_minor": 4 }