{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "3c93ac5b", "metadata": {}, "source": [ "# Running Native Functions\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "40201641", "metadata": {}, "source": [ "Two of the previous notebooks showed how to [execute semantic functions inline](./03-semantic-function-inline.ipynb) and how to [run prompts from a file](./02-running-prompts-from-file.ipynb).\n", "\n", "In this notebook, we'll show how to use native functions from a file. We will also show how to call semantic functions from native functions.\n", "\n", "This can be useful in a few scenarios:\n", "\n", "- Writing logic around how to run a prompt that changes the prompt's outcome.\n", "- Using external data sources to gather data to concatenate into your prompt.\n", "- Validating user input data prior to sending it to the LLM prompt.\n", "\n", "Native functions are defined using standard Python code. The structure is simple, but not well documented at this point.\n", "\n", "The following examples are intended to help guide new users towards successful native & semantic function use with the SK Python framework.\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "d90b0c13", "metadata": {}, "source": [ "Prepare a semantic kernel instance first, loading also the AI service settings defined in the [Setup notebook](00-getting-started.ipynb):\n" ] }, { "cell_type": "markdown", "id": "f39125a5", "metadata": {}, "source": [ "Import Semantic Kernel SDK from pypi.org" ] }, { "cell_type": "code", "execution_count": null, "id": "1da651d4", "metadata": {}, "outputs": [], "source": [ "# Note: if using a virtual environment, do not run this cell\n", "%pip install -U semantic-kernel\n", "from semantic_kernel import __version__\n", "\n", "__version__" ] }, { "cell_type": "markdown", "id": "5f726252", "metadata": {}, "source": [ "Initial configuration for the notebook to run properly." ] }, { "cell_type": "code", "execution_count": null, "id": "ecfe74be", "metadata": {}, "outputs": [], "source": [ "# Make sure paths are correct for the imports\n", "\n", "import os\n", "import sys\n", "\n", "notebook_dir = os.path.abspath(\"\")\n", "parent_dir = os.path.dirname(notebook_dir)\n", "grandparent_dir = os.path.dirname(parent_dir)\n", "\n", "\n", "sys.path.append(grandparent_dir)" ] }, { "cell_type": "markdown", "id": "73a7fd96", "metadata": {}, "source": [ "### Configuring the Kernel\n", "\n", "Let's get started with the necessary configuration to run Semantic Kernel. For Notebooks, we require a `.env` file with the proper settings for the model you use. Create a new file named `.env` and place it in this directory. Copy the contents of the `.env.example` file from this directory and paste it into the `.env` file that you just created.\n", "\n", "**NOTE: Please make sure to include `GLOBAL_LLM_SERVICE` set to either OpenAI, AzureOpenAI, or HuggingFace in your .env file. 
If this setting is not included, the Service will default to AzureOpenAI.**\n", "\n", "#### Option 1: using OpenAI\n", "\n", "Add your [OpenAI key](https://openai.com/product/) to your `.env` file (the org ID is only needed if you belong to multiple orgs):\n", "\n", "```\n", "GLOBAL_LLM_SERVICE=\"OpenAI\"\n", "OPENAI_API_KEY=\"sk-...\"\n", "OPENAI_ORG_ID=\"\"\n", "OPENAI_CHAT_MODEL_ID=\"\"\n", "OPENAI_TEXT_MODEL_ID=\"\"\n", "OPENAI_EMBEDDING_MODEL_ID=\"\"\n", "```\n", "Use the exact variable names shown above in your `.env` file.\n", "\n", "#### Option 2: using Azure OpenAI\n", "\n", "Add your [Azure OpenAI Service key](https://learn.microsoft.com/azure/cognitive-services/openai/quickstart?pivots=programming-language-studio) settings to the `.env` file in the same folder:\n", "\n", "```\n", "GLOBAL_LLM_SERVICE=\"AzureOpenAI\"\n", "AZURE_OPENAI_API_KEY=\"...\"\n", "AZURE_OPENAI_ENDPOINT=\"https://...\"\n", "AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=\"...\"\n", "AZURE_OPENAI_TEXT_DEPLOYMENT_NAME=\"...\"\n", "AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME=\"...\"\n", "AZURE_OPENAI_API_VERSION=\"...\"\n", "```\n", "Use the exact variable names shown above in your `.env` file.\n", "\n", "For more advanced configuration, please follow the steps outlined in the [setup guide](./CONFIGURING_THE_KERNEL.md)." ] }, { "cell_type": "markdown", "id": "9a888bb7", "metadata": {}, "source": [ "We will load our settings and get the LLM service to use for the notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "fddb5403", "metadata": {}, "outputs": [], "source": [ "from services import Service\n", "\n", "from samples.service_settings import ServiceSettings\n", "\n", "service_settings = ServiceSettings.create()\n", "\n", "# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)\n", "selectedService = (\n", "    Service.AzureOpenAI\n", "    if service_settings.global_llm_service is None\n", "    else Service(service_settings.global_llm_service.lower())\n", ")\n", "print(f\"Using service type: {selectedService}\")" ] }, { "cell_type": "markdown", "id": "fcee3dc1", "metadata": {}, "source": [ "We now configure our Chat Completion service on the kernel." ] }, { "cell_type": "code", "execution_count": null, "id": "dd150646", "metadata": {}, "outputs": [], "source": [ "from semantic_kernel import Kernel\n", "\n", "kernel = Kernel()\n", "\n", "service_id = None\n", "if selectedService == Service.OpenAI:\n", "    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n", "\n", "    service_id = \"default\"\n", "    kernel.add_service(\n", "        OpenAIChatCompletion(\n", "            service_id=service_id,\n", "        ),\n", "    )\n", "elif selectedService == Service.AzureOpenAI:\n", "    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n", "\n", "    service_id = \"default\"\n", "    kernel.add_service(\n", "        AzureChatCompletion(\n", "            service_id=service_id,\n", "        ),\n", "    )" ] }, { "attachments": {}, "cell_type": "markdown", "id": "186767f8", "metadata": {}, "source": [ "Let's create a **native** function that gives us a random number between 3 and a user-provided upper limit.
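For example, an input of \"6\" can produce any integer from 3 to 6.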
We'll then pass this number to a semantic function that writes that many paragraphs of text.\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "589733c5", "metadata": {}, "source": [ "First, let's create our native function.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "ae29c207", "metadata": {}, "outputs": [], "source": [ "import random\n", "\n", "from semantic_kernel.functions import kernel_function\n", "\n", "\n", "class GenerateNumberPlugin:\n", "    \"\"\"\n", "    Description: Generate a number between 3-x.\n", "    \"\"\"\n", "\n", "    @kernel_function(\n", "        description=\"Generate a random number between 3-x\",\n", "        name=\"GenerateNumberThreeOrHigher\",\n", "    )\n", "    def generate_number_three_or_higher(self, input: str) -> str:\n", "        \"\"\"\n", "        Generate a number between 3 and <input>\n", "        Example:\n", "            \"8\" => rand(3,8)\n", "        Args:\n", "            input -- The upper limit for the random number generation\n", "        Returns:\n", "            str value\n", "        \"\"\"\n", "        try:\n", "            return str(random.randint(3, int(input)))\n", "        except ValueError as e:\n", "            print(f\"Invalid input {input}\")\n", "            raise e" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f26b90c4", "metadata": {}, "source": [ "Next, let's create a semantic function that accepts a number as `{{$input}}` and generates that number of paragraphs about two Corgis on an adventure. `$input` is a default variable semantic functions can use.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "7890943f", "metadata": {}, "outputs": [], "source": [ "from semantic_kernel.connectors.ai.open_ai import AzureChatPromptExecutionSettings, OpenAIChatPromptExecutionSettings\n", "from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig\n", "\n", "prompt = \"\"\"\n", "Write a short story about two Corgis on an adventure.\n", "The story must be:\n", "- G rated\n", "- Have a positive message\n", "- No sexism, racism or other bias/bigotry\n", "- Be exactly {{$input}} paragraphs long.
It must be this length.\n", "\"\"\"\n", "\n", "if selectedService == Service.OpenAI:\n", "    execution_settings = OpenAIChatPromptExecutionSettings(\n", "        service_id=service_id,\n", "        ai_model_id=\"gpt-3.5-turbo\",\n", "        max_tokens=2000,\n", "        temperature=0.7,\n", "    )\n", "elif selectedService == Service.AzureOpenAI:\n", "    execution_settings = AzureChatPromptExecutionSettings(\n", "        service_id=service_id,\n", "        ai_model_id=\"gpt-35-turbo\",\n", "        max_tokens=2000,\n", "        temperature=0.7,\n", "    )\n", "\n", "prompt_template_config = PromptTemplateConfig(\n", "    template=prompt,\n", "    name=\"story\",\n", "    template_format=\"semantic-kernel\",\n", "    input_variables=[\n", "        InputVariable(name=\"input\", description=\"The user input\", is_required=True),\n", "    ],\n", "    execution_settings=execution_settings,\n", ")\n", "\n", "corgi_story = kernel.add_function(\n", "    function_name=\"CorgiStory\",\n", "    plugin_name=\"CorgiPlugin\",\n", "    prompt_template_config=prompt_template_config,\n", ")\n", "\n", "generate_number_plugin = kernel.add_plugin(GenerateNumberPlugin(), \"GenerateNumberPlugin\")" ] }, { "cell_type": "code", "execution_count": null, "id": "2471c2ab", "metadata": {}, "outputs": [], "source": [ "# Run the number generator\n", "generate_number_three_or_higher = generate_number_plugin[\"GenerateNumberThreeOrHigher\"]\n", "number_result = await generate_number_three_or_higher(kernel, input=6)\n", "print(number_result)" ] }, { "cell_type": "code", "execution_count": null, "id": "f043a299", "metadata": {}, "outputs": [], "source": [ "story = await corgi_story.invoke(kernel, input=number_result.value)" ] }, { "cell_type": "markdown", "id": "7245e7a2", "metadata": {}, "source": [ "_Note: depending on which model you're using, it may not respond with the proper number of paragraphs._\n" ] }, { "cell_type": "code", "execution_count": null, "id": "59a60e2a", "metadata": {}, "outputs": [], "source": [ "print(f\"Generating a corgi story exactly {number_result.value} paragraphs long.\")\n", "print(\"=====================================================\")\n", "print(story)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "8ef29d16", "metadata": {}, "source": [ "## Kernel Functions with Annotated Parameters\n", "\n", "That works! But let's expand on our example to make it more generic.\n", "\n", "For the native function, we'll introduce a lower limit variable. This means that a user will input two numbers and the number generator function will pick a number between the first and second input.\n", "\n", "We'll make use of Python's `Annotated` type to describe these parameters.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "d54983d8", "metadata": {}, "outputs": [], "source": [ "kernel.remove_all_services()\n", "\n", "service_id = None\n", "if selectedService == Service.OpenAI:\n", "    from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n", "\n", "    service_id = \"default\"\n", "    kernel.add_service(\n", "        OpenAIChatCompletion(\n", "            service_id=service_id,\n", "        ),\n", "    )\n", "elif selectedService == Service.AzureOpenAI:\n", "    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n", "\n", "    service_id = \"default\"\n", "    kernel.add_service(\n", "        AzureChatCompletion(\n", "            service_id=service_id,\n", "        ),\n", "    )" ] }, { "attachments": {}, "cell_type": "markdown", "id": "091f45e4", "metadata": {}, "source": [ "Let's start with the native function.
Notice that we add the `@kernel_function` decorator, which holds the name of the function as well as an optional description. The input parameters are configured as part of the function's signature, and we use the `Annotated` type to specify the required input arguments.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "4ea462c2", "metadata": {}, "outputs": [], "source": [ "import random\n", "\n", "from typing import Annotated\n", "\n", "from semantic_kernel.functions import kernel_function\n", "\n", "\n", "class GenerateNumberPlugin:\n", "    \"\"\"\n", "    Description: Generate a number between a min and a max.\n", "    \"\"\"\n", "\n", "    @kernel_function(\n", "        name=\"GenerateNumber\",\n", "        description=\"Generate a random number between min and max\",\n", "    )\n", "    def generate_number(\n", "        self,\n", "        min: Annotated[int, \"the minimum number of paragraphs\"],\n", "        max: Annotated[int, \"the maximum number of paragraphs\"] = 10,\n", "    ) -> Annotated[int, \"the output is a number\"]:\n", "        \"\"\"\n", "        Generate a number between min and max\n", "        Example:\n", "            min=4 max=10 => rand(4,10)\n", "        Args:\n", "            min -- The lower limit for the random number generation\n", "            max -- The upper limit for the random number generation\n", "        Returns:\n", "            int value\n", "        \"\"\"\n", "        try:\n", "            return random.randint(min, max)\n", "        except ValueError as e:\n", "            print(f\"Invalid input {min} and {max}\")\n", "            raise e" ] }, { "cell_type": "code", "execution_count": null, "id": "48bcdf9e", "metadata": {}, "outputs": [], "source": [ "generate_number_plugin = kernel.add_plugin(GenerateNumberPlugin(), \"GenerateNumberPlugin\")\n", "generate_number = generate_number_plugin[\"GenerateNumber\"]" ] },
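{ "cell_type": "markdown", "id": "a3f1c2d9", "metadata": {}, "source": [ "Before wiring this into a prompt, we can sanity-check the function on its own. The cell below is a minimal sketch, assuming the `KernelArguments` container from `semantic_kernel.functions` (the mechanism the kernel uses to carry named parameters into functions):\n" ] }, { "cell_type": "code", "execution_count": null, "id": "b7e4d5f2", "metadata": {}, "outputs": [], "source": [ "from semantic_kernel.functions import KernelArguments\n", "\n", "# Invoke the native function directly. The argument names (min, max) match\n", "# the Annotated parameters declared in the function signature above.\n", "test_result = await generate_number.invoke(kernel, KernelArguments(min=3, max=8))\n", "print(test_result)" ] },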
{ "attachments": {}, "cell_type": "markdown", "id": "6ad068d6", "metadata": {}, "source": [ "Now let's also allow the semantic function to take in additional arguments. In this case, we're going to allow our CorgiStory function to be written in a specified language. We'll need to provide a `paragraph_count` and a `language`.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "8b8286fb", "metadata": {}, "outputs": [], "source": [ "prompt = \"\"\"\n", "Write a short story about two Corgis on an adventure.\n", "The story must be:\n", "- G rated\n", "- Have a positive message\n", "- No sexism, racism or other bias/bigotry\n", "- Be exactly {{$paragraph_count}} paragraphs long\n", "- Be written in this language: {{$language}}\n", "\"\"\"\n", "\n", "if selectedService == Service.OpenAI:\n", "    execution_settings = OpenAIChatPromptExecutionSettings(\n", "        service_id=service_id,\n", "        ai_model_id=\"gpt-3.5-turbo\",\n", "        max_tokens=2000,\n", "        temperature=0.7,\n", "    )\n", "elif selectedService == Service.AzureOpenAI:\n", "    execution_settings = AzureChatPromptExecutionSettings(\n", "        service_id=service_id,\n", "        ai_model_id=\"gpt-35-turbo\",\n", "        max_tokens=2000,\n", "        temperature=0.7,\n", "    )\n", "\n", "prompt_template_config = PromptTemplateConfig(\n", "    template=prompt,\n", "    name=\"story\",\n", "    template_format=\"semantic-kernel\",\n", "    input_variables=[\n", "        InputVariable(name=\"paragraph_count\", description=\"The number of paragraphs\", is_required=True),\n", "        InputVariable(name=\"language\", description=\"The language of the story\", is_required=True),\n", "    ],\n", "    execution_settings=execution_settings,\n", ")\n", "\n", "corgi_story = kernel.add_function(\n", "    function_name=\"CorgiStory\",\n", "    plugin_name=\"CorgiPlugin\",\n", "    prompt_template_config=prompt_template_config,\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "c8778bad", "metadata": {}, "source": [ "Let's generate a paragraph count.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "28820d9d", "metadata": {}, "outputs": [], "source": [ "result = await generate_number.invoke(kernel, min=1, max=5)\n", "num_paragraphs = result.value\n", "print(f\"Generating a corgi story {num_paragraphs} paragraphs long.\")" ] }, { "cell_type": "markdown", "id": "225a9147", "metadata": {}, "source": [ "We can now invoke our corgi_story function using the `kernel` and the keyword arguments `paragraph_count` and `language`.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "dbe07c4d", "metadata": {}, "outputs": [], "source": [ "# Pass the output to the semantic story function\n", "desired_language = \"Spanish\"\n", "story = await corgi_story.invoke(kernel, paragraph_count=num_paragraphs, language=desired_language)" ] }, { "cell_type": "code", "execution_count": null, "id": "6732a30b", "metadata": {}, "outputs": [], "source": [ "print(f\"Generating a corgi story {num_paragraphs} paragraphs long in {desired_language}.\")\n", "print(\"=====================================================\")\n", "print(story)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "fb786c54", "metadata": {}, "source": [ "## Calling Native Functions within a Semantic Function\n", "\n", "One neat thing about Semantic Kernel is that you can also call native functions from within semantic (prompt) functions!\n", "\n", "We will make our CorgiStory semantic function call a native function `GenerateNames`, which will return names for our Corgi characters.\n", "\n", "We do this using the syntax `{{plugin_name.function_name}}`.
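For example, once the plugin below is registered, `{{GenerateNames.generate_names}}` in a prompt template is replaced with the function's return value when the template is rendered.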
You can read more about our prompt templating syntax [here](../../../docs/PROMPT_TEMPLATE_LANGUAGE.md).\n" ] }, { "cell_type": "code", "execution_count": null, "id": "d84c7d84", "metadata": {}, "outputs": [], "source": [ "import random\n", "\n", "from semantic_kernel.functions import kernel_function\n", "\n", "\n", "class GenerateNamesPlugin:\n", "    \"\"\"\n", "    Description: Generate character names.\n", "    \"\"\"\n", "\n", "    # The default function name will be the name of the function itself; however, you can override this\n", "    # by setting name=<name override> in the @kernel_function decorator. In this case, we're using\n", "    # the same name as the function name for simplicity.\n", "    @kernel_function(description=\"Generate character names\", name=\"generate_names\")\n", "    def generate_names(self) -> str:\n", "        \"\"\"\n", "        Generate two names.\n", "        Returns:\n", "            str\n", "        \"\"\"\n", "        names = {\"Hoagie\", \"Hamilton\", \"Bacon\", \"Pizza\", \"Boots\", \"Shorts\", \"Tuna\"}\n", "        first_name = random.choice(list(names))\n", "        names.remove(first_name)\n", "        second_name = random.choice(list(names))\n", "        return f\"{first_name}, {second_name}\"" ] }, { "cell_type": "code", "execution_count": null, "id": "2ab7d65f", "metadata": {}, "outputs": [], "source": [ "generate_names_plugin = kernel.add_plugin(GenerateNamesPlugin(), plugin_name=\"GenerateNames\")\n", "generate_names = generate_names_plugin[\"generate_names\"]" ] }, { "cell_type": "code", "execution_count": null, "id": "94decd3e", "metadata": {}, "outputs": [], "source": [ "prompt = \"\"\"\n", "Write a short story about two Corgis on an adventure.\n", "The story must be:\n", "- G rated\n", "- Have a positive message\n", "- No sexism, racism or other bias/bigotry\n", "- Be exactly {{$paragraph_count}} paragraphs long\n", "- Be written in this language: {{$language}}\n", "- The two names of the corgis are {{GenerateNames.generate_names}}\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "be72a503", "metadata": {}, "outputs": [], "source": [ "if selectedService == Service.OpenAI:\n", "    execution_settings = OpenAIChatPromptExecutionSettings(\n", "        service_id=service_id,\n", "        ai_model_id=\"gpt-3.5-turbo\",\n", "        max_tokens=2000,\n", "        temperature=0.7,\n", "    )\n", "elif selectedService == Service.AzureOpenAI:\n", "    execution_settings = AzureChatPromptExecutionSettings(\n", "        service_id=service_id,\n", "        ai_model_id=\"gpt-35-turbo\",\n", "        max_tokens=2000,\n", "        temperature=0.7,\n", "    )\n", "\n", "prompt_template_config = PromptTemplateConfig(\n", "    template=prompt,\n", "    name=\"corgi-new\",\n", "    template_format=\"semantic-kernel\",\n", "    input_variables=[\n", "        InputVariable(name=\"paragraph_count\", description=\"The number of paragraphs\", is_required=True),\n", "        InputVariable(name=\"language\", description=\"The language of the story\", is_required=True),\n", "    ],\n", "    execution_settings=execution_settings,\n", ")\n", "\n", "corgi_story = kernel.add_function(\n", "    function_name=\"CorgiStoryUpdated\",\n", "    plugin_name=\"CorgiPluginUpdated\",\n", "    prompt_template_config=prompt_template_config,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "56e6cf0f", "metadata": {}, "outputs": [], "source": [ "result = await generate_number.invoke(kernel, min=1, max=5)\n", "num_paragraphs = result.value" ] }, { "cell_type": "code", "execution_count": null, "id": "7e980348", "metadata": {}, "outputs": [], "source": [ "desired_language = \"French\"\n", "story = await corgi_story.invoke(kernel, paragraph_count=num_paragraphs, language=desired_language)" ] },
{ "cell_type": "code", "execution_count": null, "id": "c4ade048", "metadata": {}, "outputs": [], "source": [ "print(f\"Generating a corgi story {num_paragraphs} paragraphs long in {desired_language}.\")\n", "print(\"=====================================================\")\n", "print(story)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "42f0c472", "metadata": {}, "source": [ "### Recap\n", "\n", "A quick review of what we've learned here:\n", "\n", "- We've learned how to create native and prompt functions and register them with the kernel\n", "- We've seen how we can use kernel arguments to pass custom variables into our prompt\n", "- We've seen how we can call native functions within a prompt\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.3" } }, "nbformat": 4, "nbformat_minor": 5 }