{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "IDdZSPcLtKx4" }, "source": [ "##### Copyright 2019 The TensorFlow Hub Authors.\n", "\n", "Licensed under the Apache License, Version 2.0 (the \"License\");" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-g5By3P4tavy" }, "outputs": [], "source": [ "# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# http://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n", "# ==============================================================================" ] }, { "cell_type": "markdown", "metadata": { "id": "vpaLrN0mteAS" }, "source": [ "# Bangla Article Classification With TF-Hub" ] }, { "cell_type": "markdown", "metadata": { "id": "MfBg1C5NB3X0" }, "source": [ "View on TensorFlow.org | Run in Google Colab | View on GitHub | Download notebook" ] }, { "cell_type": "markdown", "metadata": { "id": "GhN2WtIrBQ4y" }, "source": [ "Caution: In addition to installing Python packages with pip, this notebook uses\n", "`sudo apt install` to install system packages: `unzip`.\n", "\n", "This Colab is a demonstration of using [TensorFlow Hub](https://www.tensorflow.org/hub/) for text classification in non-English/local languages. Here we choose [Bangla](https://en.wikipedia.org/wiki/Bengali_language) as the local language and use pretrained word embeddings to solve a multiclass classification task where we classify Bangla news articles into 5 categories. The pretrained embeddings for Bangla come from [fastText](https://fasttext.cc/docs/en/crawl-vectors.html), a library by Facebook that has released pretrained word vectors for 157 languages.\n", "\n", "We'll first use TF-Hub's pretrained embedding exporter to convert the word embeddings to a text embedding module, and then use the module to train a classifier with [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras), TensorFlow's high-level, user-friendly API for building deep learning models. Even though we are using fastText embeddings here, it's possible to export any other embeddings pretrained on other tasks and quickly get results with TensorFlow Hub." ] }, { "cell_type": "markdown", "metadata": { "id": "Q4DN769E2O_R" }, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9Vt-StAAZguA" }, "outputs": [], "source": [ "%%bash\n", "# https://github.com/pypa/setuptools/issues/1694#issuecomment-466010982\n", "pip install gdown --no-use-pep517" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "WcBA19FlDPZO" }, "outputs": [], "source": [ "%%bash\n", "sudo apt-get install -y unzip" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zSeyZMq-BYsu" }, "outputs": [], "source": [ "import os\n", "\n", "import tensorflow as tf\n", "import tensorflow_hub as hub\n", "\n", "import gdown\n", "import numpy as np\n", "from sklearn.metrics import classification_report\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns" ] }, { "cell_type": "markdown", "metadata": { "id": "9FB7gLU4F54l" }, "source": [ "# Dataset\n", "\n", "We will use [BARD](https://www.researchgate.net/publication/328214545_BARD_Bangla_Article_Classification_Using_a_New_Comprehensive_Dataset) (Bangla Article Dataset), which has 376,226 articles collected from different Bangla news portals and labelled with 5 categories: economy, state, international, sports, and entertainment. "
, "We download the dataset from Google Drive using the [bit.ly/BARD_DATASET](https://bit.ly/BARD_DATASET) link shared in [this](https://github.com/tanvirfahim15/BARD-Bangla-Article-Classifier) GitHub repository.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zdQrL_rwa-1K" }, "outputs": [], "source": [ "gdown.download(\n", " url='https://drive.google.com/uc?id=1Ag0jd21oRwJhVFIBohmX_ogeojVtapLy',\n", " output='bard.zip',\n", " quiet=True\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "P2YW4GGa9Y5o" }, "outputs": [], "source": [ "%%bash\n", "unzip -qo bard.zip" ] }, { "cell_type": "markdown", "metadata": { "id": "js75OARBF_B8" }, "source": [ "# Export pretrained word vectors to TF-Hub module" ] }, { "cell_type": "markdown", "metadata": { "id": "-uAicYA6vLsf" }, "source": [ "TF-Hub provides some useful scripts for converting word embeddings to TF-Hub text embedding modules [here](https://github.com/tensorflow/hub/tree/master/examples/text_embeddings_v2). To make the module for Bangla or any other language, we simply have to download the word embedding `.txt` or `.vec` file to the same directory as `export_v2.py` and run the script.\n", "\n", "The exporter reads the embedding vectors and exports them to a TensorFlow [SavedModel](https://www.tensorflow.org/guide/saved_model). A SavedModel contains a complete TensorFlow program including weights and graph. TF-Hub can load the SavedModel as a [module](https://www.tensorflow.org/hub/api_docs/python/hub/Module), which we will use to build the model for text classification. Since we are using `tf.keras` to build the model, we will use [hub.KerasLayer](https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer), which provides a wrapper for a TF-Hub module to use as a Keras layer.\n", "\n", "First, we will get our word embeddings from fastText and the embedding exporter from the TF-Hub [repo](https://github.com/tensorflow/hub).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5DY5Ze6pO1G5" }, "outputs": [], "source": [ "%%bash\n", "curl -O https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.bn.300.vec.gz\n", "curl -O https://raw.githubusercontent.com/tensorflow/hub/master/examples/text_embeddings_v2/export_v2.py\n", "gunzip -kqf cc.bn.300.vec.gz" ] }, { "cell_type": "markdown", "metadata": { "id": "PAzdNZaHmdl1" }, "source": [ "Then, we will run the exporter script on our embedding file. Since fastText embeddings have a header line and are pretty large (around 3.3 GB for Bangla after converting to a module), we ignore the first line and export only the first 100,000 tokens to the text embedding module." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Tkv5acr_Q9UU" }, "outputs": [], "source": [ "%%bash\n", "python export_v2.py --embedding_file=cc.bn.300.vec --export_path=text_module --num_lines_to_ignore=1 --num_lines_to_use=100000" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "k9WEpmedF_3_" }, "outputs": [], "source": [ "module_path = \"text_module\"\n", "embedding_layer = hub.KerasLayer(module_path, trainable=False)" ] }, { "cell_type": "markdown", "metadata": { "id": "fQHbmS_D4YIo" }, "source": [ "The text embedding module takes a batch of sentences in a 1D tensor of strings as input and outputs the embedding vectors of shape (batch_size, embedding_dim) corresponding to the sentences. It preprocesses the input by splitting on spaces. Word embeddings are combined to sentence embeddings with the `sqrtn` combiner (see [here](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup_sparse)).\n"
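, "\n", "As an optional sanity check of the combiner (a sketch that is not part of the original tutorial; it assumes both words fall within the exported 100,000-token vocabulary), the embedding of a two-word string should match the sum of the two word embeddings divided by the square root of 2:\n", "\n", "```python\n", "# Sketch: check the `sqrtn` combining of word vectors into a sentence vector.\n", "words = embedding_layer(['বাস', 'ট্রেন'])    # individual word vectors, shape (2, 300)\n", "sentence = embedding_layer(['বাস ট্রেন'])    # a single two-word \"sentence\", shape (1, 300)\n", "manual = words.numpy().sum(axis=0) / np.sqrt(2)\n", "print(np.allclose(sentence.numpy()[0], manual, atol=1e-5))  # True if both words are in-vocabulary\n", "```\n"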
, "\n", "For demonstration, we pass a list of Bangla words as input and get the corresponding embedding vectors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Z1MBnaBUihWn" }, "outputs": [], "source": [ "embedding_layer(['বাস', 'বসবাস', 'ট্রেন', 'যাত্রী', 'ট্রাক'])" ] }, { "cell_type": "markdown", "metadata": { "id": "4KY8LiFOHmcd" }, "source": [ "# Convert to TensorFlow Dataset\n" ] }, { "cell_type": "markdown", "metadata": { "id": "pNguCDNe6bvz" }, "source": [ "Since the dataset is really large, instead of loading it entirely into memory we will use the [TensorFlow Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) API to read the articles from disk in batches at training time. The dataset is also very imbalanced, so, before splitting it into training and validation sets, we will shuffle it.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bYv6LqlEChO1" }, "outputs": [], "source": [ "dir_names = ['economy', 'sports', 'entertainment', 'state', 'international']\n", "\n", "file_paths = []\n", "labels = []\n", "for i, dir in enumerate(dir_names):\n", " file_names = [\"/\".join([dir, name]) for name in os.listdir(dir)]\n", " file_paths += file_names\n", " labels += [i] * len(os.listdir(dir))\n", "\n", "np.random.seed(42)\n", "permutation = np.random.permutation(len(file_paths))\n", "\n", "file_paths = np.array(file_paths)[permutation]\n", "labels = np.array(labels)[permutation]" ] }, { "cell_type": "markdown", "metadata": { "id": "8b-UtAP5TL-W" }, "source": [ "We can check the distribution of labels in the training and validation examples after shuffling." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mimhWVSzzAmS" }, "outputs": [], "source": [ "train_frac = 0.8\n", "train_size = int(len(file_paths) * train_frac)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4BNXFrkotAYu" }, "outputs": [], "source": [ "# plot training vs validation distribution\n", "plt.subplot(1, 2, 1)\n", "plt.hist(labels[0:train_size])\n", "plt.title(\"Train labels\")\n", "plt.subplot(1, 2, 2)\n", "plt.hist(labels[train_size:])\n", "plt.title(\"Validation labels\")\n", "plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": { "id": "RVbHb2I3TUNA" }, "source": [ "To create a [Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), we first write a `load_file` function that reads an article from its path with `tf.io.read_file` and returns it together with its label. We then build the training and validation datasets from `file_paths` and `labels` with [`tf.data.Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices), map `load_file` over them, and batch and prefetch the results; the training pipeline is additionally shuffled. Each training example is a tuple containing an article of `tf.string` data type and an integer label. We use a train-validation split of 80-20 by slicing the shuffled `file_paths` and `labels` arrays at `train_size`."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "eZRGTzEhUi7Q" }, "outputs": [], "source": [ "def load_file(path, label):\n", " return tf.io.read_file(path), label" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2g4nRflB7fbF" }, "outputs": [], "source": [ "def make_datasets(train_size):\n", " batch_size = 256\n", "\n", " train_files = file_paths[:train_size]\n", " train_labels = labels[:train_size]\n", " train_ds = tf.data.Dataset.from_tensor_slices((train_files, train_labels))\n", " train_ds = train_ds.map(load_file).shuffle(5000)\n", " train_ds = train_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)\n", "\n", " test_files = file_paths[train_size:]\n", " test_labels = labels[train_size:]\n", " test_ds = tf.data.Dataset.from_tensor_slices((test_files, test_labels))\n", " test_ds = test_ds.map(load_file)\n", " test_ds = test_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)\n", "\n", " return train_ds, test_ds" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8PuuN6el8tv9" }, "outputs": [], "source": [ "train_data, validation_data = make_datasets(train_size)" ] }, { "cell_type": "markdown", "metadata": { "id": "MrdZI6FqPJNP" }, "source": [ "# Model Training and Evaluation" ] }, { "cell_type": "markdown", "metadata": { "id": "jgr7YScGVS58" }, "source": [ "Since we have already added a wrapper around our module to use it like any other layer in Keras, we can create a small [Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model, which is a linear stack of layers. We can add our text embedding module to it just like any other layer. We compile the model by specifying the loss and optimizer and train it for 5 epochs. The `tf.keras` API can handle TensorFlow Datasets as input, so we can pass a Dataset instance to the fit method for model training. Since the input pipeline is built with `tf.data`, it will handle loading the samples, batching them, and feeding them to the model."
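, "\n", "Before training, an optional way to see exactly what the model will receive is to pull a single batch from `train_data` (a sketch, not part of the original tutorial):\n", "\n", "```python\n", "# Sketch: each batch is a tuple of raw article strings and integer labels.\n", "text_batch, label_batch = next(iter(train_data))\n", "print(text_batch.shape, text_batch.dtype)    # (256,) tf.string\n", "print(label_batch.shape, label_batch.dtype)  # (256,) integer labels in [0, 4]\n", "```\n"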
] }, { "cell_type": "markdown", "metadata": { "id": "WhCqbDK2uUV5" }, "source": [ "## Model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nHUw807XPPM9" }, "outputs": [], "source": [ "def create_model():\n", " model = tf.keras.Sequential([\n", " tf.keras.layers.Input(shape=[], dtype=tf.string),\n", " embedding_layer,\n", " tf.keras.layers.Dense(64, activation=\"relu\"),\n", " tf.keras.layers.Dense(16, activation=\"relu\"),\n", " tf.keras.layers.Dense(5),\n", " ])\n", " model.compile(loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),\n", " optimizer=\"adam\", metrics=['accuracy'])\n", " return model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5J4EXJUmPVNG" }, "outputs": [], "source": [ "model = create_model()\n", "# Create early stopping callback\n", "early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3)" ] }, { "cell_type": "markdown", "metadata": { "id": "ZZ7XJLg2u2No" }, "source": [ "## Training" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OoBkN2tAaXWD" }, "outputs": [], "source": [ "history = model.fit(train_data,\n", " validation_data=validation_data,\n", " epochs=5,\n", " callbacks=[early_stopping_callback])" ] }, { "cell_type": "markdown", "metadata": { "id": "XoDk8otmMoT7" }, "source": [ "## Evaluation" ] }, { "cell_type": "markdown", "metadata": { "id": "G5ZRKGOsXEh4" }, "source": [ "We can visualize the accuracy and loss curves for the training and validation data using the `tf.keras.callbacks.History` object returned by the `tf.keras.Model.fit` method, which contains the loss and accuracy values for each epoch." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "V6tOnByIOeGn" }, "outputs": [], "source": [ "# Plot training & validation accuracy values\n", "plt.plot(history.history['accuracy'])\n", "plt.plot(history.history['val_accuracy'])\n", "plt.title('Model accuracy')\n", "plt.ylabel('Accuracy')\n", "plt.xlabel('Epoch')\n", "plt.legend(['Train', 'Validation'], loc='upper left')\n", "plt.show()\n", "\n", "# Plot training & validation loss values\n", "plt.plot(history.history['loss'])\n", "plt.plot(history.history['val_loss'])\n", "plt.title('Model loss')\n", "plt.ylabel('Loss')\n", "plt.xlabel('Epoch')\n", "plt.legend(['Train', 'Validation'], loc='upper left')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "id": "D54IXLqcG8Cq" }, "source": [ "## Prediction\n", "\n", "We can get the predictions for the validation data and check the confusion matrix to see the model's performance for each of the 5 classes. Because the `tf.keras.Model.predict` method returns an n-d array of class scores (logits here, since the final `Dense` layer has no activation), they can be converted to class labels using `np.argmax`."
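, "\n", "Beyond the classification report printed later, a confusion-matrix heatmap can be drawn once `y_pred` and `y_true` are computed in the cells below. This is a sketch, not part of the original flow; it uses the otherwise-unused `seaborn` import from the setup:\n", "\n", "```python\n", "# Sketch: visualize per-class confusions with sklearn + seaborn.\n", "from sklearn.metrics import confusion_matrix\n", "\n", "cm = confusion_matrix(y_true, y_pred)\n", "sns.heatmap(cm, annot=True, fmt='d', xticklabels=dir_names, yticklabels=dir_names)\n", "plt.xlabel('Predicted')\n", "plt.ylabel('True')\n", "plt.show()\n", "```\n"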
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dptEywzZJk4l" }, "outputs": [], "source": [ "y_pred = model.predict(validation_data)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7Dzeml6Pk0ub" }, "outputs": [], "source": [ "y_pred = np.argmax(y_pred, axis=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "T4M3Lzg8jHcB" }, "outputs": [], "source": [ "# y_pred is aligned with the validation examples, i.e. file_paths[train_size:]\n", "samples = file_paths[train_size:train_size + 3]\n", "for i, sample in enumerate(samples):\n", " f = open(sample)\n", " text = f.read()\n", " print(text[0:100])\n", " print(\"True Class: \", sample.split(\"/\")[0])\n", " print(\"Predicted Class: \", dir_names[y_pred[i]])\n", " f.close()" ] }, { "cell_type": "markdown", "metadata": { "id": "PlDTIpMBu6h-" }, "source": [ "## Compare Performance\n", "\n", "Now we can take the correct labels for the validation data from `labels` and compare them with our predictions to get a [classification_report](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mqrERUCS1Xn7" }, "outputs": [], "source": [ "y_true = np.array(labels[train_size:])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NX5w-NuTKuVP" }, "outputs": [], "source": [ "print(classification_report(y_true, y_pred, target_names=dir_names))" ] }, { "cell_type": "markdown", "metadata": { "id": "p5e9m3bV6oXK" }, "source": [ "We can also compare our model's performance with the published results obtained in the original [paper](https://www.researchgate.net/publication/328214545_BARD_Bangla_Article_Classification_Using_a_New_Comprehensive_Dataset), which reported a precision of 0.96. The original authors described many preprocessing steps performed on the dataset, such as dropping punctuation and digits and removing the 25 most frequent stop words. As we can see in the `classification_report`, we also manage to obtain 0.96 precision and accuracy after training for only 5 epochs without any preprocessing!\n", "\n", "In this example, when we created the Keras layer from our embedding module, we set the parameter `trainable=False`, which means the embedding weights will not be updated during training. Try setting it to `True` to reach around 97% accuracy on this dataset after only 2 epochs." ] } ], "metadata": { "colab": { "collapsed_sections": [ "IDdZSPcLtKx4" ], "name": "bangla_article_classifier.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }