{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "3c93ac5b", "metadata": {}, "source": [ "# Running Prompt Functions Inline\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "40201641", "metadata": {}, "source": [ "The [previous notebook](./02-running-prompts-from-file.ipynb)\n", "showed how to define a semantic function using a prompt template stored on a file.\n", "\n", "In this notebook, we'll show how to use the Semantic Kernel to define functions inline with your python code. This can be useful in a few scenarios:\n", "\n", "- Dynamically generating the prompt using complex rules at runtime\n", "- Writing prompts by editing Python code instead of TXT files.\n", "- Easily creating demos, like this document\n", "\n", "Prompt templates are defined using the SK template language, which allows to reference variables and functions. Read [this doc](https://aka.ms/sk/howto/configurefunction) to learn more about the design decisions for prompt templating.\n", "\n", "For now we'll use only the `{{$input}}` variable, and see more complex templates later.\n", "\n", "Almost all semantic function prompts have a reference to `{{$input}}`, which is the default way\n", "a user can import content from the context variables.\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "d90b0c13", "metadata": {}, "source": [ "Prepare a semantic kernel instance first, loading also the AI service settings defined in the [Setup notebook](00-getting-started.ipynb):\n" ] }, { "cell_type": "code", "execution_count": null, "id": "1da651d4", "metadata": {}, "outputs": [], "source": [ "!python -m pip install semantic-kernel==1.0.3" ] }, { "cell_type": "code", "execution_count": null, "id": "68b770df", "metadata": {}, "outputs": [], "source": [ "from services import Service\n", "\n", "# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)\n", "selectedService = Service.OpenAI" ] }, { "cell_type": "code", "execution_count": null, "id": "3712b7c3", "metadata": {}, "outputs": [], "source": [ "import semantic_kernel as sk\n", "\n", "kernel = sk.Kernel()\n", "\n", "service_id = None\n", "if selectedService == Service.OpenAI:\n", " from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n", "\n", " service_id = \"oai_chat_completion\"\n", " kernel.add_service(\n", " OpenAIChatCompletion(service_id=service_id, ai_model_id=\"gpt-3.5-turbo-instruct\"),\n", " )\n", "elif selectedService == Service.AzureOpenAI:\n", " from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n", "\n", " service_id = \"aoai_chat_completion\"\n", " kernel.add_service(\n", " AzureChatCompletion(service_id=service_id),\n", " )" ] }, { "attachments": {}, "cell_type": "markdown", "id": "589733c5", "metadata": {}, "source": [ "Let's use a prompt to create a semantic function used to summarize content, allowing for some creativity and a sufficient number of tokens.\n", "\n", "The function will take in input the text to summarize.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "ae29c207", "metadata": {}, "outputs": [], "source": [ "from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings\n", "from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig\n", "\n", "prompt = \"\"\"{{$input}}\n", "Summarize the content above.\n", "\"\"\"\n", "\n", "if selectedService == Service.OpenAI:\n", " execution_settings = OpenAIChatPromptExecutionSettings(\n", " service_id=service_id,\n", " ai_model_id=\"gpt-3.5-turbo\",\n", " max_tokens=2000,\n", " temperature=0.7,\n", " )\n", "elif selectedService == Service.AzureOpenAI:\n", " execution_settings = OpenAIChatPromptExecutionSettings(\n", " service_id=service_id,\n", " ai_model_id=\"gpt-35-turbo\",\n", " max_tokens=2000,\n", " temperature=0.7,\n", " )\n", "\n", "prompt_template_config = PromptTemplateConfig(\n", " template=prompt,\n", " name=\"summarize\",\n", " template_format=\"semantic-kernel\",\n", " input_variables=[\n", " InputVariable(name=\"input\", description=\"The user input\", is_required=True),\n", " ],\n", " execution_settings=execution_settings,\n", ")\n", "\n", "summarize = kernel.add_function(\n", " function_name=\"summarizeFunc\",\n", " plugin_name=\"summarizePlugin\",\n", " prompt_template_config=prompt_template_config,\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f26b90c4", "metadata": {}, "source": [ "Set up some content to summarize, here's an extract about Demo, an ancient Greek poet, taken from Wikipedia (https://en.wikipedia.org/wiki/Demo_(ancient_Greek_poet)).\n" ] }, { "cell_type": "code", "execution_count": null, "id": "314557fb", "metadata": {}, "outputs": [], "source": [ "input_text = \"\"\"\n", "Demo (ancient Greek poet)\n", "From Wikipedia, the free encyclopedia\n", "Demo or Damo (Greek: Δεμώ, Δαμώ; fl. c. AD 200) was a Greek woman of the Roman period, known for a single epigram, engraved upon the Colossus of Memnon, which bears her name. She speaks of herself therein as a lyric poetess dedicated to the Muses, but nothing is known of her life.[1]\n", "Identity\n", "Demo was evidently Greek, as her name, a traditional epithet of Demeter, signifies. The name was relatively common in the Hellenistic world, in Egypt and elsewhere, and she cannot be further identified. The date of her visit to the Colossus of Memnon cannot be established with certainty, but internal evidence on the left leg suggests her poem was inscribed there at some point in or after AD 196.[2]\n", "Epigram\n", "There are a number of graffiti inscriptions on the Colossus of Memnon. Following three epigrams by Julia Balbilla, a fourth epigram, in elegiac couplets, entitled and presumably authored by \"Demo\" or \"Damo\" (the Greek inscription is difficult to read), is a dedication to the Muses.[2] The poem is traditionally published with the works of Balbilla, though the internal evidence suggests a different author.[1]\n", "In the poem, Demo explains that Memnon has shown her special respect. In return, Demo offers the gift for poetry, as a gift to the hero. At the end of this epigram, she addresses Memnon, highlighting his divine status by recalling his strength and holiness.[2]\n", "Demo, like Julia Balbilla, writes in the artificial and poetic Aeolic dialect. The language indicates she was knowledgeable in Homeric poetry—'bearing a pleasant gift', for example, alludes to the use of that phrase throughout the Iliad and Odyssey.[a][2] \n", "\"\"\"" ] }, { "attachments": {}, "cell_type": "markdown", "id": "bf0f2330", "metadata": {}, "source": [ "...and run the summary function:\n" ] }, { "cell_type": "code", "execution_count": null, "id": "7b0e3b0c", "metadata": {}, "outputs": [], "source": [ "from semantic_kernel.functions import KernelArguments\n", "\n", "summary = await kernel.invoke(summarize, KernelArguments(input=input_text))\n", "\n", "print(summary)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "1c2c1262", "metadata": {}, "source": [ "# Using ChatCompletion for Semantic Plugins\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "29b59b28", "metadata": {}, "source": [ "You can also use chat completion models (like `gpt-35-turbo` and `gpt4`) for creating plugins. Normally you would have to tweak the API to accommodate for a system and user role, but SK abstracts that away for you by using `kernel.add_service` and `AzureChatCompletion` or `OpenAIChatCompletion`\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4777f447", "metadata": {}, "source": [ "Here's one more example of how to write an inline Semantic Function that gives a TLDR for a piece of text using a ChatCompletion model\n" ] }, { "cell_type": "code", "execution_count": null, "id": "c5886aeb", "metadata": {}, "outputs": [], "source": [ "kernel = sk.Kernel()\n", "\n", "service_id = None\n", "if selectedService == Service.OpenAI:\n", " from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n", "\n", " service_id = \"oai_chat_gpt\"\n", " kernel.add_service(\n", " OpenAIChatCompletion(service_id=service_id, ai_model_id=\"gpt-3.5-turbo-1106\"),\n", " )\n", "elif selectedService == Service.AzureOpenAI:\n", " from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n", "\n", " service_id = \"aoai_chat_completion\"\n", " kernel.add_service(\n", " AzureChatCompletion(service_id=service_id),\n", " )" ] }, { "cell_type": "code", "execution_count": null, "id": "ea8128c8", "metadata": {}, "outputs": [], "source": [ "prompt = \"\"\"\n", "{{$input}}\n", "\n", "Give me the TLDR in 5 words or less.\n", "\"\"\"\n", "\n", "text = \"\"\"\n", " 1) A robot may not injure a human being or, through inaction,\n", " allow a human being to come to harm.\n", "\n", " 2) A robot must obey orders given it by human beings except where\n", " such orders would conflict with the First Law.\n", "\n", " 3) A robot must protect its own existence as long as such protection\n", " does not conflict with the First or Second Law.\n", "\"\"\"\n", "\n", "from semantic_kernel.connectors.ai.open_ai.prompt_execution_settings.open_ai_prompt_execution_settings import (\n", " OpenAIChatPromptExecutionSettings,\n", ")\n", "\n", "if selectedService == Service.OpenAI:\n", " execution_settings = OpenAIChatPromptExecutionSettings(\n", " service_id=service_id,\n", " ai_model_id=\"gpt-3.5-turbo-1106\",\n", " max_tokens=2000,\n", " temperature=0.7,\n", " )\n", "elif selectedService == Service.AzureOpenAI:\n", " execution_settings = OpenAIChatPromptExecutionSettings(\n", " service_id=service_id,\n", " ai_model_id=\"gpt-35-turbo\",\n", " max_tokens=2000,\n", " temperature=0.7,\n", " )\n", "\n", "prompt_template_config = PromptTemplateConfig(\n", " template=prompt,\n", " name=\"tldr\",\n", " template_format=\"semantic-kernel\",\n", " input_variables=[\n", " InputVariable(name=\"input\", description=\"The user input\", is_required=True),\n", " ],\n", " execution_settings=execution_settings,\n", ")\n", "\n", "tldr_function = kernel.add_function(\n", " function_name=\"tldrFunction\",\n", " plugin_name=\"tldrPlugin\",\n", " prompt_template_config=prompt_template_config,\n", ")\n", "\n", "summary = await kernel.invoke(tldr_function, KernelArguments(input=text))\n", "\n", "print(f\"Output: {summary}\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 5 }