{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "68e1c158",
   "metadata": {},
   "source": [
    "# Using Hugging Face With Plugins\n",
    "\n",
    "In this notebook, we demonstrate using Hugging Face models for Plugins using both SemanticMemory and text completions.\n",
    "\n",
    "SK supports downloading models from the Hugging Face that can perform the following tasks: text-generation, text2text-generation, summarization, and sentence-similarity. You can search for models by task at https://huggingface.co/models.\n"
   ]
  },
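  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "7f3a91b2",
   "metadata": {},
   "source": [
    "You can also query the Hub for models by task programmatically with the huggingface_hub package (pulled in with the Hugging Face dependencies installed in the next cell). The sketch below is optional; the task and limit arguments are assumptions that may vary between huggingface_hub versions.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b4c02d3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sketch: list a few Hub models for a given task.\n",
    "# Assumes huggingface_hub is installed; argument names may vary by version.\n",
    "from huggingface_hub import HfApi\n",
    "\n",
    "api = HfApi()\n",
    "for model in api.list_models(task=\"text2text-generation\", limit=5):\n",
    "    print(model.id)"
   ]
  },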
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a77bdf89",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Note: if using a virtual environment, do not run this cell\n",
    "%pip install -U semantic-kernel\n",
    "from semantic_kernel import __version__\n",
    "\n",
    "__version__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "753ab756",
   "metadata": {},
   "outputs": [],
   "source": [
    "from services import Service\n",
    "\n",
    "# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)\n",
    "selectedService = Service.HuggingFace\n",
    "print(f\"Using service type: {selectedService}\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "d8ddffc1",
   "metadata": {},
   "source": [
    "First, we will create a kernel and add both text completion and embedding services.\n",
    "\n",
    "For text completion, we are choosing GPT2. This is a text-generation model. (Note: text-generation will repeat the input in the output, text2text-generation will not.)\n",
    "For embeddings, we are using sentence-transformers/all-MiniLM-L6-v2. Vectors generated for this model are of length 384 (compared to a length of 1536 from OpenAI ADA).\n",
    "\n",
    "The following step may take a few minutes when run for the first time as the models will be downloaded to your local machine.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8f8dcbc6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from semantic_kernel import Kernel\n",
    "from semantic_kernel.connectors.ai.hugging_face import HuggingFaceTextCompletion, HuggingFaceTextEmbedding\n",
    "from semantic_kernel.core_plugins import TextMemoryPlugin\n",
    "from semantic_kernel.memory import SemanticTextMemory, VolatileMemoryStore\n",
    "\n",
    "kernel = Kernel()\n",
    "\n",
    "# Configure LLM service\n",
    "if selectedService == Service.HuggingFace:\n",
    "    # Feel free to update this model to any other model available on Hugging Face\n",
    "    text_service_id = \"HuggingFaceM4/tiny-random-LlamaForCausalLM\"\n",
    "    kernel.add_service(\n",
    "        service=HuggingFaceTextCompletion(\n",
    "            service_id=text_service_id, ai_model_id=text_service_id, task=\"text-generation\"\n",
    "        ),\n",
    "    )\n",
    "    embed_service_id = \"sentence-transformers/all-MiniLM-L6-v2\"\n",
    "    embedding_svc = HuggingFaceTextEmbedding(service_id=embed_service_id, ai_model_id=embed_service_id)\n",
    "    kernel.add_service(\n",
    "        service=embedding_svc,\n",
    "    )\n",
    "    memory = SemanticTextMemory(storage=VolatileMemoryStore(), embeddings_generator=embedding_svc)\n",
    "    kernel.add_plugin(TextMemoryPlugin(memory), \"TextMemoryPlugin\")"
   ]
  },
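  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "9c5d13e4",
   "metadata": {},
   "source": [
    "Before moving on, we can sanity-check the embedding length mentioned above by generating a single embedding and inspecting it. This is a minimal sketch: it assumes the Hugging Face service was selected and that generate_embeddings accepts a list of strings and returns an array-like result.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d6e24f5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check (sketch): all-MiniLM-L6-v2 should produce 384-dimensional vectors.\n",
    "if selectedService == Service.HuggingFace:\n",
    "    vectors = await embedding_svc.generate_embeddings([\"Whales are mammals.\"])\n",
    "    print(f\"Embedding length: {len(vectors[0])}\")"
   ]
  },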
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "2a7e7ca4",
   "metadata": {},
   "source": [
    "### Add Memories and Define a plugin to use them\n",
    "\n",
    "Most models available on huggingface.co are not as powerful as OpenAI GPT-3+. Your plugins will likely need to be simpler to accommodate this.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d096504c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from semantic_kernel.connectors.ai.hugging_face import HuggingFacePromptExecutionSettings\n",
    "from semantic_kernel.prompt_template import PromptTemplateConfig\n",
    "\n",
    "collection_id = \"generic\"\n",
    "\n",
    "await memory.save_information(collection=collection_id, id=\"info1\", text=\"Sharks are fish.\")\n",
    "await memory.save_information(collection=collection_id, id=\"info2\", text=\"Whales are mammals.\")\n",
    "await memory.save_information(collection=collection_id, id=\"info3\", text=\"Penguins are birds.\")\n",
    "await memory.save_information(collection=collection_id, id=\"info4\", text=\"Dolphins are mammals.\")\n",
    "await memory.save_information(collection=collection_id, id=\"info5\", text=\"Flies are insects.\")\n",
    "\n",
    "# Define prompt function using SK prompt template language\n",
    "my_prompt = \"\"\"I know these animal facts: \n",
    "- {{recall 'fact about sharks'}}\n",
    "- {{recall 'fact about whales'}} \n",
    "- {{recall 'fact about penguins'}} \n",
    "- {{recall 'fact about dolphins'}} \n",
    "- {{recall 'fact about flies'}}\n",
    "Now, tell me something about: {{$request}}\"\"\"\n",
    "\n",
    "execution_settings = HuggingFacePromptExecutionSettings(\n",
    "    service_id=text_service_id,\n",
    "    ai_model_id=text_service_id,\n",
    "    max_tokens=45,\n",
    "    temperature=0.5,\n",
    "    top_p=0.5,\n",
    ")\n",
    "\n",
    "prompt_template_config = PromptTemplateConfig(\n",
    "    template=my_prompt,\n",
    "    name=\"text_complete\",\n",
    "    template_format=\"semantic-kernel\",\n",
    "    execution_settings=execution_settings,\n",
    ")\n",
    "\n",
    "my_function = kernel.add_function(\n",
    "    function_name=\"text_complete\",\n",
    "    plugin_name=\"TextCompletionPlugin\",\n",
    "    prompt_template_config=prompt_template_config,\n",
    ")"
   ]
  },
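  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1e7f35a6",
   "metadata": {},
   "source": [
    "Each {{recall ...}} call in the template above invokes the recall function of the TextMemoryPlugin we registered earlier. As a sketch, you can also call it directly; the parameter names ask and collection are assumptions about the plugin's signature, so check the TextMemoryPlugin source if the call fails.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2f8a46b7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: invoke TextMemoryPlugin.recall directly, mirroring {{recall ...}} in the template.\n",
    "# The ask/collection parameter names are assumptions about the plugin's signature.\n",
    "recall_result = await kernel.invoke(\n",
    "    function_name=\"recall\",\n",
    "    plugin_name=\"TextMemoryPlugin\",\n",
    "    ask=\"fact about whales\",\n",
    "    collection=collection_id,\n",
    ")\n",
    "print(f\"Direct recall returned: {recall_result}\")"
   ]
  },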
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "2calf857",
   "metadata": {},
   "source": [
    "Let's now see what the completion looks like! Remember, \"gpt2\" is nowhere near as large as ChatGPT, so expect a much simpler answer.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "628c843e",
   "metadata": {},
   "outputs": [],
   "source": [
    "output = await kernel.invoke(\n",
    "    my_function,\n",
    "    request=\"What are whales?\",\n",
    ")\n",
    "\n",
    "output = str(output).strip()\n",
    "\n",
    "query_result1 = await memory.search(\n",
    "    collection=collection_id, query=\"What are sharks?\", limit=1, min_relevance_score=0.3\n",
    ")\n",
    "\n",
    "print(f\"The queried result for 'What are sharks?' is {query_result1[0].text}\")\n",
    "\n",
    "print(f\"{text_service_id} completed prompt with: '{output}'\")"
   ]
  },
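  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "3a9b57c8",
   "metadata": {},
   "source": [
    "Finally, to see the difference noted earlier between text-generation and text2text-generation, we can swap in a text2text model, which returns only newly generated text instead of repeating the prompt. This is a sketch: google/flan-t5-small is an illustrative choice, not part of the original setup, and it will be downloaded on first run.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4b0c68d9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: contrast with a text2text-generation model (output does not repeat the prompt).\n",
    "# google/flan-t5-small is an illustrative choice, not part of the original notebook.\n",
    "t2t_service_id = \"google/flan-t5-small\"\n",
    "kernel.add_service(\n",
    "    HuggingFaceTextCompletion(service_id=t2t_service_id, ai_model_id=t2t_service_id, task=\"text2text-generation\")\n",
    ")\n",
    "\n",
    "t2t_function = kernel.add_function(\n",
    "    function_name=\"t2t_complete\",\n",
    "    plugin_name=\"TextCompletionPlugin\",\n",
    "    prompt_template_config=PromptTemplateConfig(\n",
    "        template=\"Answer briefly: {{$request}}\",\n",
    "        name=\"t2t_complete\",\n",
    "        template_format=\"semantic-kernel\",\n",
    "        execution_settings=HuggingFacePromptExecutionSettings(\n",
    "            service_id=t2t_service_id, ai_model_id=t2t_service_id, max_tokens=45, temperature=0.5, top_p=0.5\n",
    "        ),\n",
    "    ),\n",
    ")\n",
    "\n",
    "t2t_output = await kernel.invoke(t2t_function, request=\"What are whales?\")\n",
    "print(f\"{t2t_service_id} completed prompt with: '{str(t2t_output).strip()}'\")"
   ]
  }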
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}