{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "%reload_ext autoreload\n", "%autoreload 2\n", "%matplotlib inline\n", "import os\n", "os.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\";\n", "os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"\" # Enforce CPU usage\n", "from psutil import cpu_count\n", "import tensorflow as tf\n", "import numpy as np\n", "\n", "# Constants from the performance optimization available in onnxruntime\n", "# It needs to be done before importing onnxruntime\n", "os.environ[\"OMP_NUM_THREADS\"] = str(cpu_count(logical=True))\n", "os.environ[\"OMP_WAIT_POLICY\"] = 'ACTIVE'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ONNX and TensorFlow Lite Support in `ktrain`\n", "\n", "As of v0.24.x, `predictors` in **ktrain** provide built-in support for exports to [ONNX](https://github.com/onnx/onnx) and [TensorFlow Lite](https://www.tensorflow.org/lite) formats. This allows you to more easily take a **ktrain**-trained model and use it to make predictions *outside* of **ktrain** (or even TensorFlow) in deployment scenarios. In this notebook, we will show a text classification example of this.\n", "\n", "Let us begin by loading a previously trained `Predictor` instance, which consists of both the **DistilBert** model and its associated `Preprocessor` instance. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n" ] } ], "source": [ "import ktrain\n", "predictor = ktrain.load_predictor('/tmp/my_distilbert_predictor')\n", "print(predictor.model)\n", "print(predictor.preproc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The cell above assumes that the model was previously trained on the 20 Newsgroup corpus using a GPU (e.g., on Google Colab). The files in question can be easily created with **ktrain**:\n", "\n", "```python\n", "# install ktrain\n", "!pip install ktrain\n", "\n", "# load text data\n", "categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']\n", "from sklearn.datasets import fetch_20newsgroups\n", "train_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)\n", "test_b = fetch_20newsgroups(subset='test',categories=categories, shuffle=True)\n", "(x_train, y_train) = (train_b.data, train_b.target)\n", "(x_test, y_test) = (test_b.data, test_b.target)\n", "\n", "# build, train, and validate model (Transformer is wrapper around transformers library)\n", "import ktrain\n", "from ktrain import text\n", "MODEL_NAME = 'distilbert-base-uncased'\n", "t = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)\n", "trn = t.preprocess_train(x_train, y_train)\n", "val = t.preprocess_test(x_test, y_test)\n", "model = t.get_classifier()\n", "learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)\n", "learner.fit_onecycle(5e-5, 1)\n", "\n", "# save predictor\n", "predictor = ktrain.get_predictor(learner.model, t)\n", "predictor.save('/tmp/my_distilbert_predictor')\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## TensorFlow Lite Inferences\n", "\n", "Here, we export our model to TensorFlow LITE and use it to make predictions *without* **ktrain**." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "converting to TFLite format ... 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## TensorFlow Lite Inferences\n", "\n", "Here, we export our model to TensorFlow Lite and use it to make predictions *without* **ktrain**." ] },
{ "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "converting to TFLite format ... this may take a few moments...\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Using experimental converter: If you encountered a problem please file a bug. You can opt-out by setting experimental_new_converter=False\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "done.\n", "\n", "text input: My computer monitor is blurry.\n", "\n", "predicted logits: [[-1.137866 2.7797258 -0.87084955 -1.243239 ]]\n", "\n", "predicted class: comp.graphics\n" ] } ], "source": [ "# export the TensorFlow Lite model\n", "tflite_model_path = '/tmp/model.tflite'\n", "tflite_model_path = predictor.export_model_to_tflite(tflite_model_path)\n", "\n", "# load the TFLite interpreter\n", "interpreter = tf.lite.Interpreter(model_path=tflite_model_path)\n", "interpreter.allocate_tensors()\n", "input_details = interpreter.get_input_details()\n", "output_details = interpreter.get_output_details()\n", "\n", "# preprocess and predict outside of ktrain\n", "doc = 'My computer monitor is blurry.'\n", "maxlen = predictor.preproc.maxlen\n", "tokenizer = predictor.preproc.get_tokenizer()\n", "inputs = tokenizer(doc, max_length=maxlen, padding='max_length', return_tensors=\"tf\")\n", "interpreter.set_tensor(input_details[0]['index'], inputs['attention_mask'])\n", "interpreter.set_tensor(input_details[1]['index'], inputs['input_ids'])\n", "interpreter.invoke()\n", "output_tflite = interpreter.get_tensor(output_details[0]['index'])\n", "print()\n", "print('text input: %s' % (doc))\n", "print()\n", "print('predicted logits: %s' % (output_tflite))\n", "print()\n", "print(\"predicted class: %s\" % (predictor.get_classes()[np.argmax(output_tflite[0])]))" ] },
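{ "cell_type": "markdown", "metadata": {}, "source": [ "Note that the cell above assumes a particular ordering of the interpreter's inputs (`attention_mask` first, `input_ids` second). That ordering happens to match this converted model, but it is not guaranteed in general. A more defensive sketch (our addition, using only the standard `tf.lite.Interpreter` API, and assuming the converted input names contain the substrings `input_ids` and `attention_mask`) matches inputs by name instead:\n", "\n", "```python\n", "# set each input tensor by matching on the name reported by the interpreter\n", "for detail in interpreter.get_input_details():\n", "    if 'input_ids' in detail['name']:\n", "        interpreter.set_tensor(detail['index'], inputs['input_ids'])\n", "    elif 'attention_mask' in detail['name']:\n", "        interpreter.set_tensor(detail['index'], inputs['attention_mask'])\n", "interpreter.invoke()\n", "output_tflite = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])\n", "```" ] },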
{ "cell_type": "markdown", "metadata": {}, "source": [ "## ONNX Inferences\n", "\n", "Here, we will export our trained model to ONNX and make predictions *outside* of both **ktrain** and **TensorFlow** using the ONNX Runtime." ] },
{ "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "tf executing eager_mode: True\n", "INFO:keras2onnx:tf executing eager_mode: True\n", "tf.keras model eager_mode: False\n", "INFO:keras2onnx:tf.keras model eager_mode: False\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "converting to ONNX format ... this may take a few moments...\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "The ONNX operator number change on the optimization: 1317 -> 844\n", "INFO:keras2onnx:The ONNX operator number change on the optimization: 1317 -> 844\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "done.\n", "/tmp/model.onnx\n", "\n", "text input: I received a chest x-ray at the hospital.\n", "\n", "predicted logits: [array([[-1.557031 , -0.78585184, 3.1943865 , -1.13119 ]],\n", " dtype=float32)]\n", "\n", "predicted class: sci.med\n" ] } ], "source": [ "# export the ONNX model\n", "onnx_model_path = '/tmp/model.onnx'\n", "onnx_model_path = predictor.export_model_to_onnx(onnx_model_path)\n", "print(onnx_model_path)\n", "\n", "# create an ONNX inference session (this can also be done manually instead of using create_onnx_session)\n", "sess = predictor.create_onnx_session(onnx_model_path)\n", "\n", "# preprocess and predict outside of ktrain and TensorFlow\n", "doc = 'I received a chest x-ray at the hospital.'\n", "maxlen = predictor.preproc.maxlen\n", "tokenizer = predictor.preproc.get_tokenizer()\n", "input_dict = tokenizer(doc, max_length=maxlen, padding='max_length')\n", "feed = {}\n", "feed['input_ids'] = np.array(input_dict['input_ids']).astype('int32')[None, :]  # add batch dimension\n", "feed['attention_mask'] = np.array(input_dict['attention_mask']).astype('int32')[None, :]\n", "output_onnx = sess.run(None, feed)\n", "print()\n", "print('text input: %s' % (doc))\n", "print()\n", "print('predicted logits: %s' % (output_onnx))\n", "print()\n", "print(\"predicted class: %s\" % (predictor.get_classes()[np.argmax(output_onnx[0][0])]))" ] }
], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 4 }