{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "*Licensed under the MIT License.*\n", "\n", "# Text Classification of MultiNLI Sentences using BERT" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Before You Start\n", "\n", "> **Tip**: If you want to run through the notebook quickly, you can set the **`QUICK_RUN`** flag in the cell below to **`True`**. This will run the notebook on a small subset of the data and a use a smaller number of epochs. \n", "\n", "If you run into CUDA out-of-memory error or the jupyter kernel dies constantly, try reducing the `BATCH_SIZE` and `MAX_LEN`, but note that model performance will be compromised. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Set QUICK_RUN = True to run the notebook on a small subset of data and a smaller number of epochs.\n", "QUICK_RUN = True" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sys\n", "sys.path.append(\"../../\")\n", "import os\n", "import json\n", "import pandas as pd\n", "import numpy as np\n", "import scrapbook as sb\n", "from sklearn.metrics import classification_report, accuracy_score\n", "from sklearn.preprocessing import LabelEncoder\n", "from sklearn.model_selection import train_test_split\n", "import torch\n", "import torch.nn as nn\n", "\n", "from interpret_text.experimental.common.utils_bert import Language, Tokenizer, BERTSequenceClassifier\n", "from interpret_text.experimental.common.timer import Timer\n", "\n", "from notebooks.test_utils.utils_mnli import load_mnli_pandas_df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from interpret_text.experimental.unified_information import UnifiedInformationExplainer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "In this notebook, we fine-tune and evaluate a pretrained [BERT](https://arxiv.org/abs/1810.04805) model on a subset of the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) dataset.\n", "\n", "We use a [sequence classifier](https://github.com/microsoft/nlp/blob/master/utils_nlp/models/bert/sequence_classification.py) that wraps [Hugging Face's PyTorch implementation](https://github.com/huggingface/pytorch-pretrained-BERT) of Google's [BERT](https://github.com/google-research/bert)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Set parameters\n", "Here we set some parameters that we use for our modeling task." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "TRAIN_DATA_FRACTION = 1\n", "TEST_DATA_FRACTION = 1\n", "NUM_EPOCHS = 1\n", "\n", "if QUICK_RUN:\n", " TRAIN_DATA_FRACTION = 0.001\n", " TEST_DATA_FRACTION = 0.001\n", " NUM_EPOCHS = 1\n", "\n", "if torch.cuda.is_available():\n", " BATCH_SIZE = 1\n", "else:\n", " BATCH_SIZE = 8\n", "\n", "DATA_FOLDER = \"./temp\"\n", "BERT_CACHE_DIR = \"./temp\"\n", "LANGUAGE = Language.ENGLISH\n", "TO_LOWER = True\n", "MAX_LEN = 150\n", "BATCH_SIZE_PRED = 512\n", "TRAIN_SIZE = 0.6\n", "LABEL_COL = \"genre\"\n", "TEXT_COL = \"sentence1\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Read Dataset\n", "We start by loading a subset of the data. 
The following function also downloads and extracts the data files if they don't already exist in the data folder.\n", "\n", "The MultiNLI dataset is mainly used for natural language inference (NLI) tasks, where the inputs are sentence pairs and the labels are entailment indicators. The sentence pairs are also classified into *genres* that allow for more coverage and better evaluation of NLI models.\n", "\n", "For our classification task, we use only the first sentence as the text input and the corresponding genre as the label. To avoid duplicate rows, we select the examples corresponding to one of the entailment labels (*neutral* in this case), since the sentences are not unique, whereas the sentence pairs are." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = load_mnli_pandas_df(DATA_FOLDER, \"train\")\n", "df = df[df[\"gold_label\"] == \"neutral\"]  # keep unique sentences" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at a few examples, and then at the number of examples in each of the five genres:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[[LABEL_COL, TEXT_COL]].head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[LABEL_COL].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we split the data into training and test sets, and then we encode the class labels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# split\n", "df_train, df_test = train_test_split(df, train_size=TRAIN_SIZE, random_state=0)\n", "df_train = df_train.reset_index(drop=True)\n", "df_test = df_test.reset_index(drop=True)\n", "\n", "if QUICK_RUN:\n", "    df_train = df_train.sample(frac=TRAIN_DATA_FRACTION).reset_index(drop=True)\n", "    df_test = df_test.sample(frac=TEST_DATA_FRACTION).reset_index(drop=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# encode labels\n", "label_encoder = LabelEncoder()\n", "labels_train = label_encoder.fit_transform(df_train[LABEL_COL])\n", "labels_test = label_encoder.transform(df_test[LABEL_COL])\n", "\n", "num_labels = len(np.unique(labels_train))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Number of unique labels: {}\".format(num_labels))\n", "print(\"Number of training examples: {}\".format(df_train.shape[0]))\n", "print(\"Number of testing examples: {}\".format(df_test.shape[0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tokenize and Preprocess" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we start training, we tokenize the text documents and convert them to lists of tokens. The following cell instantiates a BERT `Tokenizer` for the given language and tokenizes the text of the training and testing sets."
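, "\n", "\n", "For intuition, BERT's WordPiece tokenizer splits out-of-vocabulary words into subword units. Below is a minimal sketch of what such tokenization produces; it assumes the `pytorch-pretrained-BERT` package that the `Tokenizer` wrapper builds on, and the example output is only indicative:\n", "\n", "```python\n", "# Illustrative sketch -- the Tokenizer wrapper above handles this internally.\n", "from pytorch_pretrained_bert import BertTokenizer\n", "\n", "# 'bert-base-uncased' corresponds to LANGUAGE = Language.ENGLISH with TO_LOWER = True\n", "bert_tok = BertTokenizer.from_pretrained(\"bert-base-uncased\", do_lower_case=True)\n", "print(bert_tok.tokenize(\"BERT uses wordpiece embeddings\"))\n", "# e.g. ['bert', 'uses', 'word', '##piece', 'em', '##bed', '##ding', '##s']\n", "```"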
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tokenizer = Tokenizer(LANGUAGE, to_lower=TO_LOWER, cache_dir=BERT_CACHE_DIR)\n", "\n", "tokens_train = tokenizer.tokenize(list(df_train[TEXT_COL]))\n", "tokens_test = tokenizer.tokenize(list(df_test[TEXT_COL]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition, we perform the following preprocessing steps in the cell below:\n", "- Convert the tokens into token indices corresponding to the BERT tokenizer's vocabulary\n", "- Add the special tokens [CLS] and [SEP] to mark the beginning and end of a sentence, respectively\n", "- Pad or truncate the token lists to the specified max length. In this case, `MAX_LEN = 150`\n", "- Return mask lists that indicate the paddings' positions\n", "- Return token type id lists that indicate which sentence the tokens belong to (not needed for one-sequence classification)\n", "\n", "*See the original [implementation](https://github.com/google-research/bert/blob/master/run_classifier.py) for more information on BERT's input format.*" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tokens_train, mask_train, _ = tokenizer.preprocess_classification_tokens(tokens_train, MAX_LEN)\n", "tokens_test, mask_test, _ = tokenizer.preprocess_classification_tokens(tokens_test, MAX_LEN)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sequence Classifier Model\n", "Next, we use a sequence classifier that loads a pre-trained BERT model, given the language and number of labels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "classifier = BERTSequenceClassifier(language=LANGUAGE, num_labels=num_labels, cache_dir=BERT_CACHE_DIR)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train Model\n", "We train the classifier using the training set. This involves fine-tuning the BERT Transformer and learning a linear classification layer on top of that:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with Timer() as t:\n", " classifier.fit(token_ids=tokens_train,\n", " input_mask=mask_train,\n", " labels=labels_train, \n", " num_epochs=NUM_EPOCHS,\n", " batch_size=BATCH_SIZE, \n", " verbose=True) \n", "print(\"[Training time: {:.3f} hrs]\".format(t.interval / 3600))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Score Model\n", "We score the test set using the trained classifier:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "preds = classifier.predict(token_ids=tokens_test, \n", " input_mask=mask_test, \n", " batch_size=BATCH_SIZE_PRED)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluate Model\n", "Finally, we compute the overall accuracy, precision, recall, and F1 metrics on the test set. We also look at the metrics for eact of the genres in the the dataset. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "report = classification_report(labels_test, preds, target_names=label_encoder.classes_, output_dict=True) \n", "accuracy = accuracy_score(labels_test, preds)\n", "print(\"accuracy: {}\".format(accuracy))\n", "print(json.dumps(report, indent=4, sort_keys=True))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# for testing\n", "sb.glue(\"accuracy\", accuracy)\n", "sb.glue(\"precision\", report[\"macro avg\"][\"precision\"])\n", "sb.glue(\"recall\", report[\"macro avg\"][\"recall\"])\n", "sb.glue(\"f1\", report[\"macro avg\"][\"f1-score\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Explain Model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "device = torch.device(\"cpu\" if not torch.cuda.is_available() else \"cuda\")\n", "\n", "classifier.model.to(device)\n", "for param in classifier.model.parameters():\n", " param.requires_grad = False\n", "classifier.model.eval()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "interpreter_unified = UnifiedInformationExplainer(model=classifier.model, \n", " train_dataset=list(df_train[TEXT_COL]), \n", " device=device, \n", " target_layer=14, \n", " classes=label_encoder.classes_)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "idx = 7\n", "text = df_test[TEXT_COL][idx]\n", "true_label = df_test[LABEL_COL][idx]\n", "predicted_label = label_encoder.inverse_transform([preds[idx]])\n", "print(text, true_label, predicted_label)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "explanation_unified = interpreter_unified.explain_local(text, true_label)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualize Explanation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from interpret_text.experimental.widget import ExplanationDashboard" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ExplanationDashboard(explanation_unified)" ] } ], "metadata": { "kernelspec": { "display_name": "Python (interpret_cpu)", "language": "python", "name": "interpret_cpu" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" } }, "nbformat": 4, "nbformat_minor": 2 }