{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# djl.ai BERT Inference Demo\n", "\n", "## Introduction\n", "\n", "In this tutorial, you'll walk through the [BERT](https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270) QA model trained with MXNet. \n", "You can provide a question and a paragraph containing the answer to the model. The model is then able to find the best answer from the answer paragraph.\n", "\n", "Example:\n", "```text\n", "Q: When did BBC Japan start broadcasting?\n", "```\n", "\n", "Answer paragraph:\n", "```text\n", "BBC Japan was a general entertainment channel, which operated between December 2004 and April 2006.\n", "It ceased operations after its Japanese distributor folded.\n", "```\n", "The model picks the right answer:\n", "```text\n", "A: December 2004\n", "```\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 1 Configure the maven repository\n", "The following command defines the repository that the djl.ai packages will be fetched from:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%mavenRepo s3 https://djl-ai.s3.amazonaws.com/dev" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2 Import the required library\n", "Run the following commands to load the djl.ai packages and their dependencies:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%maven ai.djl:api:0.1.0\n", "%maven ai.djl.mxnet:mxnet-engine:0.1.0\n", "%maven ai.djl:repository:0.1.0\n", "%maven ai.djl.mxnet:mxnet-model-zoo:0.1.0\n", "%maven org.slf4j:slf4j-api:1.7.26\n", "%maven org.slf4j:slf4j-simple:1.7.26\n", "%maven net.java.dev.jna:jna:5.3.0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Due to a bug in the Java kernel's **%maven** macro, it fails to resolve dependencies with an explicit classifier. You need to use **%%loadFromPOM** to load the MXNet native package.\n", "\n", "Specify the MXNet package you would like to use by changing the `<classifier>` tag. 
The following are the options for Mac and Linux:\n", "\n", "#### Mac OS\n", "```\n", "osx-x86_64\n", "```\n", "\n", "#### Ubuntu 16.04/CentOS 7/Amazon Linux\n", "```\n", "linux-x86_64\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%loadFromPOM\n", "<repositories>\n", "    <repository>\n", "        <id>djl.ai</id>\n", "        <url>https://djl-ai.s3.amazonaws.com/dev</url>\n", "    </repository>\n", "</repositories>\n", "\n", "<dependencies>\n", "    <dependency>\n", "        <groupId>ai.djl.mxnet</groupId>\n", "        <artifactId>mxnet-native-mkl</artifactId>\n", "        <version>1.6.0</version>\n", "        <classifier>osx-x86_64</classifier>\n", "    </dependency>\n", "</dependencies>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3 Import the java packages\n", "Import the java packages used in this tutorial by running the following:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import java.io.*;\n", "import java.nio.charset.*;\n", "import java.nio.file.*;\n", "import java.util.*;\n", "import java.util.concurrent.*;\n", "import com.google.gson.*;\n", "import com.google.gson.annotations.*;\n", "import ai.djl.*;\n", "import ai.djl.inference.*;\n", "import ai.djl.metric.*;\n", "import ai.djl.mxnet.zoo.*;\n", "import ai.djl.mxnet.zoo.nlp.bertqa.*;\n", "import ai.djl.repository.zoo.*;\n", "import ai.djl.ndarray.*;\n", "import ai.djl.ndarray.types.*;\n", "import ai.djl.translate.*;\n", "import ai.djl.util.*;" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that all of the prerequisites are complete, start writing code to run inference with this example.\n", "\n", "### Step 4 Prepare the model and input\n", "\n", "The model requires three inputs:\n", "\n", "- word indices: The index of each word in a sentence\n", "- word types: The type index of each word. All Question tokens are labelled 0 and all Answer tokens are labelled 1.\n", "- valid length: The number of question and answer tokens\n", "\n", "In addition, the input is padded to a fixed sequence length. In this case, the length is 384.\n", "\n", "**First, load the input**\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "var question = \"When did BBC Japan start broadcasting?\";\n", "var resourceDocument = \"BBC Japan was a general entertainment Channel.\\n\" +\n", "    \"Which operated between December 2004 and April 2006.\\n\" +\n", "    \"It ceased operations after its Japanese distributor folded.\";\n", "\n", "QAInput input = new QAInput(question, resourceDocument, 384);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then load the model and vocabulary. Create a variable `model` by using `MxModelZoo.BERT_QA.loadModel(criteria)` to load the model from the model zoo.\n", "\n", "After that, use the `getArtifact(\"fileName\", function)` method to load the vocabulary and create a `BertDataFormatter` instance to prepare for pre-processing." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Map<String, String> criteria = new ConcurrentHashMap<>();\n", "criteria.put(\"backbone\", \"bert\");\n", "criteria.put(\"dataset\", \"book_corpus_wiki_en_uncased\");\n", "ZooModel<QAInput, String> model = MxModelZoo.BERT_QA.loadModel(criteria);\n", "\n", "// Load the vocabulary and create the BertDataFormatter used for pre-processing.\n", "// NOTE: the artifact name \"vocab.json\" and the BertDataFormatter::parse factory are assumptions;\n", "// adjust them to match the vocabulary artifact shipped with your model.\n", "BertDataFormatter parser = model.getArtifact(\"vocab.json\", BertDataFormatter::parse);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Predictor<QAInput, String> predictor = model.newPredictor();\n", "String answer = predictor.predict(input);\n", "answer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 5 Do Preprocessing\n", "\n", "Inference in Deep Learning is the process of predicting the output for a given input based on a pre-defined model. \n", "djl.ai abstracts the whole process away from you. It can load the model, perform inference on the input, and provide \n", "output. 
djl.ai also allows you to provide user-defined inputs. The workflow looks like the following:\n", "\n", "![image](https://djl-ai.s3.amazonaws.com/other+resources/BertFlow.png)\n", "\n", "The red block in the workflow is the input that djl.ai expects from you; in this example, it is the question and the \n", "answer paragraph. The green block is the output that you expect: the predicted answer. Since djl.ai does not know what input to expect and what format of output you prefer, djl.ai provides the `Translator` interface so you can define your own \n", "input and output. \n", "\n", "The `Translator` interface encompasses the two white blocks: Pre-processing and Post-processing. The pre-processing \n", "component converts the user-defined input objects into an NDList, so that the `Predictor` in djl.ai can understand the \n", "input and make its prediction. Similarly, the post-processing block receives an NDList as the output from the \n", "`Predictor`. The post-processing block allows you to convert the output from the `Predictor` to the desired output \n", "format. \n", "\n", "#### Pre-processing\n", "\n", "Now, you need to convert the sentences into tokens. You can use `BertDataFormatter.tokenizer` to convert questions and answers into tokens. Then, use `BertDataFormatter.formTokens` to create BERT-formatted tokens. Once you have properly formatted tokens, use `parser.token2idx` to create the indices. \n", "\n", "The following code block converts the question and answer defined earlier into BERT-formatted tokens and creates word types for the tokens. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// Create token lists for question and answer\n", "List<String> tokenQ = BertDataFormatter.tokenizer(question);\n", "List<String> tokenA = BertDataFormatter.tokenizer(resourceDocument);\n", "System.out.println(\"Question Token: \" + tokenQ);\n", "System.out.println(\"Answer Token: \" + tokenA);\n", "System.out.println(\"Valid length: \" + (tokenQ.size() + tokenA.size()));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Normally, words/sentences are represented as indices instead of Strings for training. They typically work like a vector in an n-dimensional space. In this case, you need to map them into indices. The `formTokens` step also pads the sentence to the required length." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// Create BERT-formatted tokens\n", "List<String> tokens = BertDataFormatter.formTokens(tokenQ, tokenA, 384);\n", "// Convert tokens into indices in the vocabulary\n", "List<Integer> indices = parser.token2idx(tokens);\n", "\n", "System.out.println(\"The indices of tokens: \" + indices);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, the model needs to understand which part is the Question and which part is the Answer. 
Mask the tokens as follows:\n", "```\n", "[Question tokens...AnswerTokens...padding tokens] => [000000...11111....0000]\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// Get token types\n", "List<Float> tokenTypes = BertDataFormatter.getTokenTypes(tokenQ, tokenA, 384);\n", "\n", "System.out.println(\"The type mask for tokens: \" + tokenTypes);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To convert these lists into `float[]` for `NDArray` creation, use the following helper function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "/**\n", " * Convert a List of Number to float array.\n", " *\n", " * @param list the list to be converted\n", " * @return float array\n", " */\n", "public static float[] toFloatArray(List<? extends Number> list) {\n", "    float[] ret = new float[list.size()];\n", "    int idx = 0;\n", "    for (Number n : list) {\n", "        ret[idx++] = n.floatValue();\n", "    }\n", "    return ret;\n", "}\n", "\n", "float[] indicesFloat = toFloatArray(indices);\n", "float[] types = toFloatArray(tokenTypes);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that you have everything you need, you can create an NDList and populate all of the inputs you formatted earlier. You're done with pre-processing! \n", "\n", "### Step 6 Construct `Translator`\n", "\n", "You need to do this processing within an implementation of the `Translator` interface. `Translator` is designed to do pre-processing and post-processing. You define the input and output objects. The interface contains the following two methods to override:\n", "- `public NDList processInput(TranslatorContext ctx, I input)`\n", "- `public O processOutput(TranslatorContext ctx, NDList list)`\n", "\n", "Every translator takes in input and returns output in the form of generic objects. In this case, the translator takes input in the form of `QAInput` (I) and returns output as a `String` (O). `QAInput` is just an object that holds the question and answer; we have prepared the input class for you." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "public class QAInput {\n", "    private String question;\n", "    private String answer;\n", "\n", "    QAInput(String question, String answer) {\n", "        this.question = question;\n", "        this.answer = answer;\n", "    }\n", "\n", "    public String getQuestion() {\n", "        return question;\n", "    }\n", "\n", "    public String getAnswer() {\n", "        return answer;\n", "    }\n", "}\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below is an implementation of the translator. Complete the TODO sections in the `processInput` method below (HINT: use the code snippets in the previous cells to help guide you). You can find the usage for `NDManager` in the [Javadoc](https://djl-ai.s3.amazonaws.com/java-api/0.1.0/ai/djl/ndarray/NDManager.html).\n", "\n", "```\n", "manager.create(float[] data, Shape shape)\n", "manager.create(float[] data)\n", "```\n", "\n", "The `Shape` for `data0` and `data1` is (number_of_batches, sequence_length). The `Shape` for `data2` is just (1)."
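 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a hint for the TODOs, the following cell sketches how the three `NDArray`s could be created from the arrays prepared earlier. It is illustrative only: it creates its own `NDManager` (the static factory `NDManager.newBaseManager()` is assumed to be available in this version), whereas inside `processInput` you should obtain the manager from `ctx.getNDManager()` instead." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// Sketch only: inside processInput, obtain the manager from ctx.getNDManager().\n", "// NDManager.newBaseManager() is assumed here purely for demonstration.\n", "try (NDManager demoManager = NDManager.newBaseManager()) {\n", "    NDArray demoIndices = demoManager.create(indicesFloat, new Shape(1, 384));\n", "    NDArray demoTypes = demoManager.create(types, new Shape(1, 384));\n", "    NDArray demoValidLength = demoManager.create(new float[]{tokenQ.size() + tokenA.size()});\n", "    System.out.println(demoIndices.getShape() + \" \" + demoTypes.getShape() + \" \" + demoValidLength.getShape());\n", "}"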
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\n", "public class BertTranslator implements Translator<QAInput, String> {\n", "    private BertDataFormatter parser;\n", "    private List<String> tokens;\n", "    private int seqLength;\n", "\n", "    BertTranslator(BertDataFormatter parser) {\n", "        this.parser = parser;\n", "        this.seqLength = 384;\n", "    }\n", "\n", "    @Override\n", "    public Batchifier getBatchifier() {\n", "        // No batching is needed for this example, so return null\n", "        return null;\n", "    }\n", "\n", "    @Override\n", "    public NDList processInput(TranslatorContext ctx, QAInput input) {\n", "        // Pre-processing - tokenize sentence\n", "        // TODO: Create token lists for question and answer\n", "        List<String> tokenQ = BertDataFormatter.tokenizer(input.getQuestion());\n", "        List<String> tokenA = BertDataFormatter.tokenizer(input.getAnswer());\n", "\n", "        // TODO: Calculate valid length (length(question tokens) + length(answer tokens))\n", "        var validLength = tokenQ.size() + tokenA.size();\n", "\n", "        // TODO: Create BERT-formatted tokens\n", "        tokens = BertDataFormatter.formTokens(tokenQ, tokenA, seqLength);\n", "\n", "        if (tokens == null) {\n", "            throw new IllegalStateException(\"tokens is not defined\");\n", "        }\n", "\n", "        // TODO: Convert tokens into indices in the vocabulary\n", "        List<Integer> indices = parser.token2idx(tokens);\n", "        // TODO: Get token types\n", "        List<Float> tokenTypes = BertDataFormatter.getTokenTypes(tokenQ, tokenA, seqLength);\n", "\n", "        NDManager manager = ctx.getNDManager();\n", "\n", "        // TODO: Using the manager created above, create NDArrays for the indices, types, and valid length,\n", "        // in that order. The data type of the NDArrays should all be float\n", "        NDArray indicesNd = manager.create(toFloatArray(indices), new Shape(1, seqLength));\n", "        NDArray typesNd = manager.create(toFloatArray(tokenTypes), new Shape(1, seqLength));\n", "        NDArray validLengthNd = manager.create(new float[]{validLength});\n", "\n", "        NDList list = new NDList(3);\n", "        list.add(\"data0\", indicesNd);\n", "        list.add(\"data1\", typesNd);\n", "        list.add(\"data2\", validLengthNd);\n", "\n", "        return list;\n", "    }\n", "\n", "    @Override\n", "    public String processOutput(TranslatorContext ctx, NDList list) {\n", "        NDArray array = list.head();\n", "        // Split the output into start and end logits\n", "        NDList output = array.split(2, 2);\n", "        // Get the formatted logits result\n", "        NDArray startLogits = output.get(0).reshape(new Shape(1, -1));\n", "        NDArray endLogits = output.get(1).reshape(new Shape(1, -1));\n", "        // Get probability distributions\n", "        NDArray startProb = startLogits.softmax(-1);\n", "        NDArray endProb = endLogits.softmax(-1);\n", "        int startIdx = (int) startProb.argmax(1, true).getFloat(0);\n", "        int endIdx = (int) endProb.argmax(1, true).getFloat(0);\n", "        return tokens.subList(startIdx, endIdx + 1).toString();\n", "    }\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Congrats! You have created your first Translator! We have pre-filled `processOutput()`, which processes the `NDList` and returns the result in the desired format. `processInput()` and `processOutput()` offer the flexibility to get the predictions from the model in any format you desire. \n", "\n", "\n", "With the Translator implemented, you need to bring up the predictor to start making predictions. You can find the usage for `Predictor` in the [Javadoc](https://djl-ai.s3.amazonaws.com/java-api/0.1.0/ai/djl/inference/Predictor.html). Create a translator and use the `question` and `resourceDocument` provided previously." 
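 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a hint, the general pattern for running inference with a custom `Translator` is sketched below. Creating the `Predictor` in a try-with-resources block ensures its resources are freed when you are done. Fill in the actual code in the next cell using the `translator` and `input` you create there:\n", "\n", "```\n", "try (Predictor<QAInput, String> predictor = model.newPredictor(translator)) {\n", "    predictResult = predictor.predict(input);\n", "}\n", "```"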
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "String predictResult = null;\n", "\n", "QAInput input = new QAInput(question, resourceDocument);\n", "BertTranslator translator = new BertTranslator(parser);\n", "\n", "// TODO: Create a Predictor and predict the output using the predictor\n", "try (Predictor<QAInput, String> predictor = model.newPredictor(translator)) {\n", "    predictResult = predictor.predict(input);\n", "}\n", "\n", "System.out.println(predictResult);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Based on the input, the following result will be shown:\n", "```\n", "[december, 2004]\n", "```\n", "That's it! \n", "\n", "You can try with more questions and answers. Here are some samples:\n", "\n", "**Answer Material**\n", "\n", "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse (\"Norman\" comes from \"Norseman\") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.\n", "\n", "\n", "**Question**\n", "\n", "Q: When were the Normans in Normandy?\n", "A: 10th and 11th centuries\n", "\n", "Q: In what country is Normandy located?\n", "A: france" ] } ], "metadata": { "kernelspec": { "display_name": "Java", "language": "java", "name": "java" }, "language_info": { "codemirror_mode": "java", "file_extension": ".jshell", "mimetype": "text/x-java-source", "name": "Java", "pygments_lexer": "java", "version": "12.0.2+10" } }, "nbformat": 4, "nbformat_minor": 2 }