{ "changeLog": "", "cpu": 0, "description": "This model was open sourced by ONNX. The model predicts the answer to a question based on some paragraph context. A common use case is in KYC or processing financial reports where a common question can be applied to a standard set of documents. It is based on the state-of-the-art BERT (Bidirectional Encoder Representations from Transformers). The model applies Transformers, a popular attention model, to language modelling to produce an encoding of the input and then trains on the task of question answering. The implementation is open sourced by the license here: https://github.com/onnx/models/blob/master/LICENSE", "displayName": "QuestionAnswering", "gpu": 0, "inputDescription": "The input is a paragraph and question relating to that paragraph. For example:\n```\n{\n \"paragraph\": \"Abraham Lincoln was an American statesman and lawyer who served as the 16th president of the United States from March 1861 until his assassination in April 1865. Lincoln led the nation through the American Civil War, its bloodiest war and its greatest moral, constitutional, and political crisis.[3][4] He preserved the Union, abolished slavery, strengthened the federal government, and modernized the U.S. economy.\",\n \"question\": \"Which year did Lincoln pass away?\"\n}```", "inputType": "JSON", "memory": 0, "mlPackageLanguage": "PYTHON36", "name": "QuestionAnswering", "outputDescription": "Answer to the questions asked in input mapped to ids\n```\n{\n \"answer\": \"1865\"\n}\n```", "processorType": "GPU", "projectId": "[project-id]", "retrainable": false, "stagingUri": "[staging-uri]", "projectName": "Language Comprehension", "projectDescription": "Models performing cognitively challenging tasks such as text summarization and question answering.", "tenantName": "Open-Source Packages", "imagePath": "registry.replicated.com/aif-core/questionanswering:1" }