{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Finetuner un modèle de langage masqué (TensorFlow)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Install the Transformers, Datasets, and Evaluate libraries to run this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install datasets evaluate transformers[sentencepiece]\n", "!apt install git-lfs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You will need to setup git, adapt your email and name in the following cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!git config --global user.email \"you@example.com\"\n", "!git config --global user.name \"Your Name\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You will also need to be logged in to the Hugging Face Hub. Execute the following and enter your credentials." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from huggingface_hub import notebook_login\n", "\n", "notebook_login()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import TFAutoModelForMaskedLM\n", "\n", "model_checkpoint = \"distilbert-base-uncased\"\n", "model = TFAutoModelForMaskedLM.from_pretrained(model_checkpoint)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Model: \"tf_distil_bert_for_masked_lm\"\n", "_________________________________________________________________\n", "Layer (type) Output Shape Param # \n", "=================================================================\n", "distilbert (TFDistilBertMain multiple 66362880 \n", "_________________________________________________________________\n", "vocab_transform (Dense) multiple 590592 \n", "_________________________________________________________________\n", "vocab_layer_norm (LayerNorma multiple 1536 \n", "_________________________________________________________________\n", "vocab_projector (TFDistilBer multiple 23866170 \n", "=================================================================\n", "Total params: 66,985,530\n", "Trainable params: 66,985,530\n", "Non-trainable params: 0\n", "_________________________________________________________________" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model(model.dummy_inputs) # Construire le modèle\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "text = \"This is a great [MASK].\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'>>> This is a great deal.' # C'est une bonne affaire\n", "'>>> This is a great success.' # C'est un grand succès\n", "'>>> This is a great adventure.' # C'est une grande aventure\n", "'>>> This is a great idea.' # C'est une bonne idée\n", "'>>> This is a great feat.' 
# C'est un grand exploit" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "import tensorflow as tf\n", "\n", "inputs = tokenizer(text, return_tensors=\"np\")\n", "token_logits = model(**inputs).logits\n", "# Trouve l'emplacement de [MASK] et extrait ses logits\n", "mask_token_index = np.argwhere(inputs[\"input_ids\"] == tokenizer.mask_token_id)[0, 1]\n", "mask_token_logits = token_logits[0, mask_token_index, :]\n", "# On choisit les candidats [MASK] avec les logits les plus élevés\n", "# Nous annulons le tableau avant argsort pour obtenir le plus grand, et non le plus petit, logits\n", "top_5_tokens = np.argsort(-mask_token_logits)[:5].tolist()\n", "\n", "for token in top_5_tokens:\n", " print(f\">>> {text.replace(tokenizer.mask_token, tokenizer.decode([token]))}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DatasetDict({\n", " train: Dataset({\n", " features: ['text', 'label'],\n", " num_rows: 25000\n", " })\n", " test: Dataset({\n", " features: ['text', 'label'],\n", " num_rows: 25000\n", " })\n", " unsupervised: Dataset({\n", " features: ['text', 'label'],\n", " num_rows: 50000\n", " })\n", "})" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from datasets import load_dataset\n", "\n", "imdb_dataset = load_dataset(\"imdb\")\n", "imdb_dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\n", "'>>> Review: This is your typical Priyadarshan movie--a bunch of loony characters out on some silly mission. His signature climax has the entire cast of the film coming together and fighting each other in some crazy moshpit over hidden money. Whether it is a winning lottery ticket in Malamaal Weekly, black money in Hera Pheri, \"kodokoo\" in Phir Hera Pheri, etc., etc., the director is becoming ridiculously predictable. Don\\'t get me wrong; as clichéd and preposterous his movies may be, I usually end up enjoying the comedy. However, in most his previous movies there has actually been some good humor, (Hungama and Hera Pheri being noteworthy ones). Now, the hilarity of his films is fading as he is using the same formula over and over again.

Songs are good. Tanushree Datta looks awesome. Rajpal Yadav is irritating, and Tusshar is not a whole lot better. Kunal Khemu is OK, and Sharman Joshi is the best.'\n", "'>>> Label: 0'\n", "\n", "'>>> Review: Okay, the story makes no sense, the characters lack any dimensionally, the best dialogue is ad-libs about the low quality of movie, the cinematography is dismal, and only editing saves a bit of the muddle, but Sam\" Peckinpah directed the film. Somehow, his direction is not enough. For those who appreciate Peckinpah and his great work, this movie is a disappointment. Even a great cast cannot redeem the time the viewer wastes with this minimal effort.

The proper response to the movie is the contempt that the director San Peckinpah, James Caan, Robert Duvall, Burt Young, Bo Hopkins, Arthur Hill, and even Gig Young bring to their work. Watch the great Peckinpah films. Skip this mess.'\n", "'>>> Label: 0'\n", "\n", "'>>> Review: I saw this movie at the theaters when I was about 6 or 7 years old. I loved it then, and have recently come to own a VHS version.

My 4 and 6 year old children love this movie and have been asking again and again to watch it.

I have enjoyed watching it again too. Though I have to admit it is not as good on a little TV.

I do not have older children so I do not know what they would think of it.

The songs are very cute. My daughter keeps singing them over and over.

Hope this helps.'\n", "'>>> Label: 1'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sample = imdb_dataset[\"train\"].shuffle(seed=42).select(range(3))\n", "\n", "for row in sample:\n", " print(f\"\\n'>>> Review: {row['text']}'\")\n", " print(f\"'>>> Label: {row['label']}'\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DatasetDict({\n", " train: Dataset({\n", " features: ['attention_mask', 'input_ids', 'word_ids'],\n", " num_rows: 25000\n", " })\n", " test: Dataset({\n", " features: ['attention_mask', 'input_ids', 'word_ids'],\n", " num_rows: 25000\n", " })\n", " unsupervised: Dataset({\n", " features: ['attention_mask', 'input_ids', 'word_ids'],\n", " num_rows: 50000\n", " })\n", "})" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def tokenize_function(examples):\n", " result = tokenizer(examples[\"text\"])\n", " if tokenizer.is_fast:\n", " result[\"word_ids\"] = [result.word_ids(i) for i in range(len(result[\"input_ids\"]))]\n", " return result\n", "\n", "\n", "# Utilisation de batched=True pour activer le multithreading rapide !\n", "tokenized_datasets = imdb_dataset.map(\n", " tokenize_function, batched=True, remove_columns=[\"text\", \"label\"]\n", ")\n", "tokenized_datasets" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "512" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.model_max_length" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "chunk_size = 128" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'>>> Review 0 length: 200'\n", "'>>> Review 1 length: 559'\n", "'>>> Review 2 length: 192'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Le découpage produit une liste de listes pour chaque caractéristique\n", "tokenized_samples = tokenized_datasets[\"train\"][:3]\n", "\n", "for idx, sample in enumerate(tokenized_samples[\"input_ids\"]):\n", " print(f\"'>>> Review {idx} length: {len(sample)}'\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'>>> Longueur des critiques concaténées : 951'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "concatenated_examples = {\n", " k: sum(tokenized_samples[k], []) for k in tokenized_samples.keys()\n", "}\n", "total_length = len(concatenated_examples[\"input_ids\"])\n", "print(f\"'>>> Longueur des critiques concaténées : {total_length}'\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'>>> Chunk length: 128'\n", "'>>> Chunk length: 128'\n", "'>>> Chunk length: 128'\n", "'>>> Chunk length: 128'\n", "'>>> Chunk length: 128'\n", "'>>> Chunk length: 128'\n", "'>>> Chunk length: 128'\n", "'>>> Chunk length: 55'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chunks = {\n", " k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]\n", " for k, t in concatenated_examples.items()\n", "}\n", "\n", "for chunk in chunks[\"input_ids\"]:\n", " print(f\"'>>> Chunk length: {len(chunk)}'\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": 
[ "def group_texts(examples):\n", " # Concaténation de tous les textes\n", " concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\n", " # Calcule la longueur des textes concaténés\n", " total_length = len(concatenated_examples[list(examples.keys())[0]])\n", " # Nous laissons tomber le dernier morceau s'il est plus petit que chunk_size\n", " total_length = (total_length // chunk_size) * chunk_size\n", " # Fractionnement par chunk de max_len\n", " result = {\n", " k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]\n", " for k, t in concatenated_examples.items()\n", " }\n", " # Créer une nouvelle colonne d'étiquettes\n", " result[\"labels\"] = result[\"input_ids\"].copy()\n", " return result" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DatasetDict({\n", " train: Dataset({\n", " features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n", " num_rows: 61289\n", " })\n", " test: Dataset({\n", " features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n", " num_rows: 59905\n", " })\n", " unsupervised: Dataset({\n", " features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n", " num_rows: 122963\n", " })\n", "})" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "lm_datasets = tokenized_datasets.map(group_texts, batched=True)\n", "lm_datasets" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\".... at.......... high. a classic line : inspector : i'm here to sack one of your teachers. student : welcome to bromwell high. i expect that many adults of my age think that bromwell high is far fetched. what a pity that it isn't! [SEP] [CLS] homelessness ( or houselessness as george carlin stated ) has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school, work, or vote for the matter. 
most people think of the homeless\"" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.decode(lm_datasets[\"train\"][1][\"input_ids\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import DataCollatorForLanguageModeling\n", "\n", "data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "samples = [lm_datasets[\"train\"][i] for i in range(2)]\n", "for sample in samples:\n", " _ = sample.pop(\"word_ids\")\n", "\n", "for chunk in data_collator(samples)[\"input_ids\"]:\n", " print(f\"\\n'>>> {tokenizer.decode(chunk)}'\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import collections\n", "import numpy as np\n", "\n", "from transformers.data.data_collator import tf_default_data_collator\n", "\n", "wwm_probability = 0.2\n", "\n", "\n", "def whole_word_masking_data_collator(features):\n", " for feature in features:\n", " word_ids = feature.pop(\"word_ids\")\n", "\n", " # Création d'une correspondance entre les mots et les indices des tokens correspondants\n", " mapping = collections.defaultdict(list)\n", " current_word_index = -1\n", " current_word = None\n", " for idx, word_id in enumerate(word_ids):\n", " if word_id is not None:\n", " if word_id != current_word:\n", " current_word = word_id\n", " current_word_index += 1\n", " mapping[current_word_index].append(idx)\n", "\n", " # Masquer des mots de façon aléatoire\n", " mask = np.random.binomial(1, wwm_probability, (len(mapping),))\n", " input_ids = feature[\"input_ids\"]\n", " labels = feature[\"labels\"]\n", " new_labels = [-100] * len(labels)\n", " for word_id in np.where(mask)[0]:\n", " word_id = word_id.item()\n", " for idx in mapping[word_id]:\n", " new_labels[idx] = labels[idx]\n", " input_ids[idx] = tokenizer.mask_token_id\n", "\n", " return tf_default_data_collator(features)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'>>> [CLS] bromwell high is a cartoon comedy [MASK] it ran at the same time as some other programs about school life, such as \" teachers \". my 35 years in the teaching profession lead me to believe that bromwell high\\'s satire is much closer to reality than is \" teachers \". the scramble to survive financially, the insightful students who can see right through their pathetic teachers\\'pomp, the pettiness of the whole situation, all remind me of the schools i knew and their students. when i saw the episode in which a student repeatedly tried to burn down the school, i immediately recalled.....'\n", "\n", "'>>> .... [MASK] [MASK] [MASK] [MASK]....... high. a classic line : inspector : i\\'m here to sack one of your teachers. student : welcome to bromwell high. i expect that many adults of my age think that bromwell high is far fetched. what a pity that it isn\\'t! [SEP] [CLS] homelessness ( or houselessness as george carlin stated ) has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school, work, or vote for the matter. 
most people think of the homeless'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "samples = [lm_datasets[\"train\"][i] for i in range(2)]\n", "batch = whole_word_masking_data_collator(samples)\n", "\n", "for chunk in batch[\"input_ids\"]:\n", " print(f\"\\n'>>> {tokenizer.decode(chunk)}'\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DatasetDict({\n", " train: Dataset({\n", " features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n", " num_rows: 10000\n", " })\n", " test: Dataset({\n", " features: ['attention_mask', 'input_ids', 'labels', 'word_ids'],\n", " num_rows: 1000\n", " })\n", "})" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_size = 10_000\n", "test_size = int(0.1 * train_size)\n", "\n", "downsampled_dataset = lm_datasets[\"train\"].train_test_split(\n", " train_size=train_size, test_size=test_size, seed=42\n", ")\n", "downsampled_dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from huggingface_hub import notebook_login\n", "\n", "notebook_login()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf_train_dataset = downsampled_dataset[\"train\"].to_tf_dataset(\n", " columns=[\"input_ids\", \"attention_mask\", \"labels\"],\n", " collate_fn=data_collator,\n", " shuffle=True,\n", " batch_size=32,\n", ")\n", "\n", "tf_eval_dataset = downsampled_dataset[\"test\"].to_tf_dataset(\n", " columns=[\"input_ids\", \"attention_mask\", \"labels\"],\n", " collate_fn=data_collator,\n", " shuffle=False,\n", " batch_size=32,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import create_optimizer\n", "from transformers.keras_callbacks import PushToHubCallback\n", "import tensorflow as tf\n", "\n", "num_train_steps = len(tf_train_dataset)\n", "optimizer, schedule = create_optimizer(\n", " init_lr=2e-5,\n", " num_warmup_steps=1_000,\n", " num_train_steps=num_train_steps,\n", " weight_decay_rate=0.01,\n", ")\n", "model.compile(optimizer=optimizer)\n", "\n", "# Entraîner en mixed-precision float16\n", "tf.keras.mixed_precision.set_global_policy(\"mixed_float16\")\n", "\n", "callback = PushToHubCallback(\n", " output_dir=f\"{model_name}-finetuned-imdb\", tokenizer=tokenizer\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ ">>> Perplexité : 21.75" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import math\n", "\n", "eval_loss = model.evaluate(tf_eval_dataset)\n", "print(f\"Perplexité : {math.exp(eval_loss):.2f}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.fit(tf_train_dataset, validation_data=tf_eval_dataset, callbacks=[callback])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ ">>> Perplexité : 11.32" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "eval_loss = model.evaluate(tf_eval_dataset)\n", "print(f\"Perplexité : {math.exp(eval_loss):.2f}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import pipeline\n", "\n", "mask_filler = pipeline(\n", " \"fill-mask\", 
model=\"huggingface-course/distilbert-base-uncased-finetuned-imdb\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'>>> this is a great movie.'\n", "'>>> this is a great film.'\n", "'>>> this is a great story.'\n", "'>>> this is a great movies.'\n", "'>>> this is a great character.'" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "preds = mask_filler(text)\n", "\n", "for pred in preds:\n", " print(f\">>> {pred['sequence']}\")" ] } ], "metadata": { "colab": { "name": "Finetuner un modèle de langage masqué (TensorFlow)", "provenance": [] } }, "nbformat": 4, "nbformat_minor": 4 }