{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# Chapter 7: Building Chat Applications\n", "## OpenAI API Quickstart\n", "\n", "This notebook is adapted from the [Azure OpenAI Samples Repository](https://github.com/Azure/azure-openai-samples?WT.mc_id=academic-105485-koreyst) that includes notebooks that access [Azure OpenAI](notebook-azure-openai.ipynb) services.\n", "\n", "The Python OpenAI API works with Azure OpenAI Models as well, with a few modifications. Learn more about the differences here: [How to switch between OpenAI and Azure OpenAI endpoints with Python](https://learn.microsoft.com/azure/ai-services/openai/how-to/switching-endpoints?WT.mc_id=academic-109527-jasmineg)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# Overview \n", "\"Large language models are functions that map text to text. Given an input string of text, a large language model tries to predict the text that will come next\"(1). This \"quickstart\" notebook will introduce users to high-level LLM concepts, core package requirements for getting started with AML, a soft introduction to prompt design, and several short examples of different use cases. " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Table of Contents \n", "\n", "[Overview](#overview) \n", "[How to use OpenAI Service](#how-to-use-openai-service) \n", "[1. Creating your OpenAI Service](#1.-creating-your-openai-service) \n", "[2. Installation](#2.-installation) \n", "[3. Credentials](#3.-credentials) \n", "\n", "[Use Cases](#use-cases) \n", "[1. Summarize Text](#1.-summarize-text) \n", "[2. Classify Text](#2.-classify-text) \n", "[3. Generate New Product Names](#3.-generate-new-product-names) \n", "[4. Fine Tune a Classifier](#4.fine-tune-a-classifier) \n", "\n", "[References](#references)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Build your first prompt \n", "This short exercise will provide a basic introduction for submitting prompts to an OpenAI model for a simple task \"summarization\".\n", "\n", "\n", "**Steps**: \n", "1. Install OpenAI library in your python environment \n", "2. Load standard helper libraries and set your typical OpenAI security credentials for the OpenAI Service that you've created \n", "3. Choose a model for your task \n", "4. Create a simple prompt for the model \n", "5. Submit your request to the model API!" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### 1. Install OpenAI" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674254990318 }, "jupyter": { "outputs_hidden": true, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%pip install openai python-dotenv" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### 2. 
{ "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### 2. Import helper libraries and instantiate credentials" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674829434433 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "import os\n", "from openai import OpenAI\n", "from dotenv import load_dotenv\n", "\n", "# Load environment variables (including OPENAI_API_KEY) from a local .env file\n", "load_dotenv()\n", "\n", "API_KEY = os.getenv(\"OPENAI_API_KEY\", \"\")\n", "assert API_KEY, \"ERROR: OpenAI Key is missing\"\n", "\n", "client = OpenAI(api_key=API_KEY)\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### 3. Finding the right model \n", "The GPT-3.5-turbo and GPT-4 models can understand and generate natural language." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674742720788 }, "jupyter": { "outputs_hidden": true, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Select a general-purpose chat model for text\n", "model = \"gpt-3.5-turbo\"" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### 4. Prompt Design \n", "\n", "\"The magic of large language models is that by being trained to minimize this prediction error over vast quantities of text, the models end up learning concepts useful for these predictions. For example, they learn concepts like\"(1):\n", "\n", "* how to spell\n", "* how grammar works\n", "* how to paraphrase\n", "* how to answer questions\n", "* how to hold a conversation\n", "* how to write in many languages\n", "* how to code\n", "* etc.\n", "\n", "#### How to control a large language model \n", "\"Of all the inputs to a large language model, by far the most influential is the text prompt\"(1).\n", "\n", "Large language models can be prompted to produce output in a few ways(1):\n", "\n", "* **Instruction**: Tell the model what you want\n", "* **Completion**: Induce the model to complete the beginning of what you want\n", "* **Demonstration**: Show the model what you want, with either:\n", "  * A few examples in the prompt\n", "  * Many hundreds or thousands of examples in a fine-tuning training dataset\n", "\n", "#### There are three basic guidelines for creating prompts:\n", "\n", "**Show and tell.** Make it clear what you want, either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that's what you want.\n", "\n", "**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure there are enough examples. Be sure to proofread your examples; the model is usually smart enough to see through basic spelling mistakes and give you a response, but it might also assume the mistakes are intentional, which can affect the response.\n", "\n", "**Check your settings.** The temperature and top_p settings control how deterministic the model is when generating a response. If you're asking for a response where there's only one right answer, set these lower. If you're looking for more diverse responses, set them higher. The number one mistake people make with these settings is assuming they're \"cleverness\" or \"creativity\" controls. The short sketch after this section compares a low and a high temperature on the same prompt.\n", "\n", "Source: https://github.com/Azure/OpenAI/blob/main/How%20to/Completions.md" ] },
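{ "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "To make the temperature guidance concrete, here is a small sketch that sends the same prompt twice, once at a low temperature and once at a high one. It reuses the `client` and `model` defined above; the prompt wording is illustrative. Expect the low-temperature answers to repeat across runs and the high-temperature answers to vary." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Compare a near-deterministic setting with a more diverse one on the same prompt.\n", "# Assumes the `client` and `model` objects created in the cells above.\n", "for temp in (0.0, 1.0):\n", "    response = client.chat.completions.create(\n", "        model=model,\n", "        temperature=temp,\n", "        messages=[{\"role\": \"user\", \"content\": \"Name one color.\"}],\n", "    )\n", "    print(f\"temperature={temp}: {response.choices[0].message.content}\")" ] },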
{ "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### 5. Submit!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674494935186 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Create your first prompt\n", "text_prompt = \"Should Oxford commas always be used?\"\n", "\n", "response = client.chat.completions.create(\n", "    model=model,\n", "    messages=[\n", "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", "        {\"role\": \"user\", \"content\": text_prompt},\n", "    ],\n", ")\n", "\n", "response.choices[0].message.content" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Repeat the same call. How do the results compare?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674494940872 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Submit the identical request a second time\n", "response = client.chat.completions.create(\n", "    model=model,\n", "    messages=[\n", "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", "        {\"role\": \"user\", \"content\": text_prompt},\n", "    ],\n", ")\n", "\n", "response.choices[0].message.content" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Summarize Text \n", "#### Challenge \n", "Summarize text by adding a 'tl;dr:' to the end of a text passage. Notice how the model understands how to perform a number of tasks with no additional instructions. You can experiment with more descriptive prompts than tl;dr to modify the model’s behavior and customize the summarization you receive(3). \n", "\n", "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something that current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. \n", "\n", "Tl;dr" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# Exercises for several use cases \n", "1. Summarize Text \n", "2. Classify Text \n", "3. Generate New Product Names" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674495198534 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "prompt = \"Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something that current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches.\\n\\nTl;dr\"\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674495201868 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Setting a few additional, typical parameters for the API call\n", "# (the temperature and max_tokens values here are illustrative)\n", "\n", "response = client.chat.completions.create(\n", "    model=model,\n", "    temperature=0.3,\n", "    max_tokens=60,\n", "    messages=[\n", "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", "        {\"role\": \"user\", \"content\": prompt},\n", "    ],\n", ")\n", "\n", "response.choices[0].message.content" ] },
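{ "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "As suggested above, you can swap the bare \"Tl;dr\" suffix for a more descriptive instruction. Here is one hedged variant: the instruction wording below is hypothetical, and the call reuses the `prompt`, `client`, and `model` defined above." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Replace the bare \"Tl;dr\" suffix with a more descriptive (hypothetical) instruction.\n", "descriptive_prompt = prompt.replace(\n", "    \"Tl;dr\",\n", "    \"Summarize the passage above in one sentence for a non-technical reader:\",\n", ")\n", "\n", "response = client.chat.completions.create(\n", "    model=model,\n", "    messages=[{\"role\": \"user\", \"content\": descriptive_prompt}],\n", ")\n", "print(response.choices[0].message.content)" ] },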
{ "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Classify Text \n", "#### Challenge \n", "Classify items into categories provided at inference time. In the following example, we provide both the categories and the text to classify in the prompt. \n", "\n", "Customer Inquiry: Hello, one of the keys on my laptop keyboard broke recently and I'll need a replacement:\n", "\n", "Classified category:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674499424645 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "prompt = \"Classify the following inquiry into one of the following categories: [Pricing, Hardware Support, Software Support]\\n\\ninquiry: Hello, one of the keys on my laptop keyboard broke recently and I'll need a replacement:\\n\\nClassified category:\"\n", "print(prompt)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674499378518 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Setting a few additional, typical parameters for the API call\n", "# (temperature=0 keeps the classification deterministic; an illustrative choice)\n", "\n", "response = client.chat.completions.create(\n", "    model=model,\n", "    temperature=0,\n", "    messages=[\n", "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", "        {\"role\": \"user\", \"content\": prompt},\n", "    ],\n", ")\n", "\n", "response.choices[0].message.content" ] },
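{ "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "As a follow-up sketch (same `client` and `model`; the prompt wording is hypothetical), you can also ask the model to reply with the category name only, which makes the output easier to parse programmatically." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Stricter variant: deterministic sampling and a label-only reply.\n", "strict_prompt = (\n", "    \"Classify the following inquiry into exactly one of these categories: \"\n", "    \"[Pricing, Hardware Support, Software Support]. \"\n", "    \"Reply with the category name only.\\n\\n\"\n", "    \"inquiry: Hello, one of the keys on my laptop keyboard broke recently \"\n", "    \"and I'll need a replacement.\"\n", ")\n", "\n", "response = client.chat.completions.create(\n", "    model=model,\n", "    temperature=0,\n", "    messages=[{\"role\": \"user\", \"content\": strict_prompt}],\n", ")\n", "print(response.choices[0].message.content)" ] },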
] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1674257087279 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "prompt = \"Product description: A home milkshake maker\\nSeed words: fast, healthy, compact.\\nProduct names: HomeShaker, Fit Shaker, QuickShake, Shake Maker\\n\\nProduct description: A pair of shoes that can fit any foot size.\\nSeed words: adaptable, fit, omni-fit.\"\n", "\n", "print(prompt)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "#Setting a few additional, typical parameters during API Call\n", "\n", "response = client.chat.completions.create(\n", " model=model,\n", " messages = [{\"role\":\"system\", \"content\":\"You are a helpful assistant.\"},\n", " {\"role\":\"user\",\"content\":prompt}])\n", "\n", "response.choices[0].message.content" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# References \n", "- [Openai Cookbook](https://github.com/openai/openai-cookbook?WT.mc_id=academic-105485-koreyst) \n", "- [OpenAI Studio Examples](https://oai.azure.com/portal?WT.mc_id=academic-105485-koreyst) \n", "- [Best practices for fine-tuning GPT-3 to classify text](https://docs.google.com/document/d/1rqj7dkuvl7Byd5KQPUJRxc19BJt8wo0yHNwK84KfU3Q/edit#?WT.mc_id=academic-105485-koreyst)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# For More Help \n", "[OpenAI Commercialization Team](AzureOpenAITeam@microsoft.com) " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# Contributors\n", "* Louis Li \n" ] } ], "metadata": { "kernel_info": { "name": "python310-sdkv2" }, "kernelspec": { "display_name": "base", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" }, "microsoft": { "host": { "AzureML": { "notebookHasBeenCompleted": true } } }, "nteract": { "version": "nteract-front-end@1.0.0" } }, "nbformat": 4, "nbformat_minor": 2 }