{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "1629a153-4494-48f9-b32f-7a1258475f03", "metadata": {}, "source": [ "# LLaMa 2: A Chatbot LLM on IPUs - Inference" ] }, { "attachments": {}, "cell_type": "markdown", "id": "5fb6a43b-7017-458a-a288-394c2811c97a", "metadata": {}, "source": [ "| Domain | Tasks | Model | Datasets | Workflow | Number of IPUs | Execution time |\n", "|---------|-------|-------|----------|----------|--------------|--------------|\n", "| NLP | Chat Fine-tuned Text Generation | LLaMa 2 7B/13B | N/A | Inference | recommended: 16 (minimum 4) | 20 min |\n", "\n", "[LLaMa 2](https://about.fb.com/news/2023/07/llama-2/) is the next generation of the [LLaMa](https://ai.meta.com/blog/large-language-model-llama-meta-ai/) model by [Meta](https://ai.meta.com/), released as a series of multi-billion parameter models fine-tuned for dialogue. LLaMa 2 was pre-trained on 2 trillion tokens (40% more than the original LLaMa) and shows better performance on benchmarks for equivalent parameter sizes than other SOTA LLMs such as Falcon and MPT.\n", "\n", "This notebook will show you how to run LLaMa 2 7B and LLaMa 2 13B models on Graphcore IPUs. In this notebook, we describe how to create and configure the LLaMa inference pipeline, then run live inference as well as batched inference using your own prompts. \n", "\n", "You will need a minimum of 4 IPUs to run this notebook. You can also use 16 IPUs for faster inference using some extra tensor parallelism.\n", "\n", "[![Join our Slack Community](https://img.shields.io/badge/Slack-Join%20Graphcore's%20Community-blue?style=flat-square&logo=slack)](https://www.graphcore.ai/join-community)\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "69147caa-ef19-4892-8c45-fb01370ce0c2", "metadata": {}, "source": [ "## Environment setup\n", "\n", "If you are running this notebook on Paperspace Gradient, the environment will already be set up for you.\n", "\n", "To run the demo using other IPU hardware, you need to have the Poplar SDK enabled. Refer to the [Getting Started guide](https://docs.graphcore.ai/en/latest/getting-started.html#getting-started) for your system for details on how to enable the Poplar SDK. Also refer to the [Jupyter Quick Start guide](https://docs.graphcore.ai/projects/jupyter-notebook-quick-start/en/latest/index.html) for how to set up Jupyter to be able to run this notebook on a remote IPU machine.\n", "\n", "In order to improve usability and support for future users, Graphcore would like to collect information about the applications and code being run in this notebook. The following information will be anonymised before being sent to Graphcore:\n", "\n", "* User progression through the notebook\n", "* Notebook details: number of cells, code being run and the output of the cells\n", "* Environment details\n", "\n", "You can disable logging at any time by running %unload_ext graphcore_cloud_tools.notebook_logging.gc_logger from any cell.\n", "\n", "Run the next cell to install extra requirements for this notebook and load the logger." 
] }, { "cell_type": "code", "execution_count": null, "id": "cd539f63-4d58-4610-a973-346661c187ae", "metadata": { "scrolled": true }, "outputs": [], "source": [ "%pip install -r requirements.txt\n", "%load_ext graphcore_cloud_tools.notebook_logging.gc_logger" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f915e58b-5761-41bb-90f9-cc6abfb53be8", "metadata": {}, "source": [ "LLaMa 2 is open source and available to use as a Hugging Face checkpoint, but requires access to be granted by Meta. If you do not yet have permission to access the checkpoint and want to use this notebook, [please request access from the LLaMa 2 Hugging Face Model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).\n", "\n", "Once you have access, you must be logged onto your Hugging Face Hub account from the Hugging Face CLI in order to load the pre-trained model:" ] }, { "cell_type": "code", "execution_count": null, "id": "8c3ed15f-9498-47d4-9083-099129770ae8", "metadata": {}, "outputs": [], "source": [ "import huggingface_hub\n", "huggingface_hub.notebook_login()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "93fd4907-2025-4634-9454-5e3df4305fea", "metadata": {}, "source": [ "Next, we define the number of IPUs for your instance as well as the cache directory for the generated executable:" ] }, { "cell_type": "code", "execution_count": null, "id": "3b8d64eb-1a15-4934-ae6b-6f7bf343c9ec", "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "number_of_ipus = int(os.getenv(\"NUM_AVAILABLE_IPU\", 16))\n", "executable_cache_dir = os.path.join(os.getenv(\"POPLAR_EXECUTABLE_CACHE_DIR\", \"./exe_cache\"), \"llama2\")\n", "os.environ[\"POPXL_CACHE_DIR\"] = executable_cache_dir" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4902e970-2b3c-45d8-901d-cecde86fe004", "metadata": {}, "source": [ "## LLaMa 2 inference" ] }, { "attachments": {}, "cell_type": "markdown", "id": "27a7e0b5-bdb0-4d78-89fd-7c0d69df6f0e", "metadata": {}, "source": [ "First, load the inference configuration for the model. There are a few configurations made available that you can use in the `config/inference.yml` file. There are different configurations based on model size and the number of available IPUs. You can edit the next cell to choose your model size. It must be one of `7b` or `13b` as both of these are supported." ] }, { "cell_type": "code", "execution_count": null, "id": "fab61103", "metadata": {}, "outputs": [], "source": [ "model_size = '7b' #or '13b'" ] }, { "cell_type": "code", "execution_count": null, "id": "d9d5c513-c582-4ba5-b38f-ec4e642d7783", "metadata": {}, "outputs": [], "source": [ "from utils.setup import llama_config_setup\n", "\n", "checkpoint_name = f\"meta-llama/Llama-2-{model_size}-chat-hf\" \n", "config, *_ = llama_config_setup(\n", " \"config/inference.yml\", \n", " \"release\", \n", " f\"llama2_{model_size}_pod4\" if number_of_ipus == 4 else f\"llama2_{model_size}_pod16\"\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "220dc7c8-a7c1-41f1-865b-9626c1249e03", "metadata": {}, "source": [ "These names are then used to load the configuration - the function will automatically select and load a suitable configuration for your instance. It will also set the name of the Hugging Face checkpoint to load the model weights and tokenizer." 
] }, { "cell_type": "code", "execution_count": null, "id": "15454fc7-a51e-4368-815a-f8d49dfa205b", "metadata": {}, "outputs": [], "source": [ "config" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f2e69b02-df88-4d21-ba71-e6e3a449cd92", "metadata": {}, "source": [ "Next, instantiate the inference pipeline for the model. Here, you simply need to define the maximum sequence length and maximum micro batch size. When executing a model on IPUs, the model is compiled into an executable format with frozen parameters. If these parameters are changed, a recompilation will be triggered.\n", "\n", "Selecting longer sequence lengths or batch sizes uses more IPU memory. This means increasing one may require you to decrease the other." ] }, { "cell_type": "code", "execution_count": null, "id": "3625b4da-18cc-4e5e-8834-8d96f379d4d4", "metadata": {}, "outputs": [], "source": [ "import api\n", "import time\n", "\n", "sequence_length = 1024\n", "micro_batch_size = 2\n", "\n", "start = time.time()\n", "\n", "llama_pipeline = api.LlamaPipeline(\n", " config, \n", " sequence_length=sequence_length, \n", " micro_batch_size=micro_batch_size,\n", " hf_llama_checkpoint=checkpoint_name\n", ")\n", "\n", "print(f\"Model preparation time: {time.time() - start}s\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "efd482b5-1f94-4956-a3f4-a21377f295b2", "metadata": {}, "source": [ "Now you can simply run the pipeline:" ] }, { "cell_type": "code", "execution_count": null, "id": "546d54b0-1f23-4480-bf0a-f830ee54b952", "metadata": {}, "outputs": [], "source": [ "answer = llama_pipeline(\"Hi, can you tell me something interesting about cats? List it as bullet points.\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "c16a1b4e-46e5-4749-9d2c-993e8503cfe3", "metadata": {}, "source": [ "\n", "Be warned, you may find the model occasionally hallucinates or provides logically incoherent answers. This is expected from a model of this size and you should try to provide prompts which are as informative as possible. Spend some time tweaking the parameters and the prompt phrasing to get the best results you can!\n", "\n", "There are a few sampling parameters we can use to control the behaviour of the generation:\n", "\n", "- `temperature` – Indicates whether you want more creative or more factual outputs. A value of `1.0` corresponds to the model's default behaviour, with relatively higher values generating more creative outputs and lower values generating more factual answers. `temperature` must be at least `0.0` which means the model picks the most probable output at each step. If the model starts repeating itself, try increasing the temperature. If the model starts producing off-topic or nonsensical answers, try decreasing the temperature.\n", "- `k` – Indicates that only the highest `k` probable tokens can be sampled. Set it to 0 to sample across all possible tokens, which means that top k sampling is disabled. The value for `k` must be between a minimum of 0 and a maximum of `config.model.embedding.vocab_size` which is 32000.\n", "- `output_length` – Indicates the number of tokens to sample before stopping. Sampling can stop early if the model outputs `2` (the end of sequence (EOS) token).\n", "- `print_final` – If `True`, prints the total time and throughput.\n", "- `print_live` – If `True`, the tokens will be printed as they are being sampled. 
 "- `prompt` – A string containing the question you wish to generate an answer for.\n", "\n", "You can freely change and experiment with these parameters in the next cell to produce the behaviour you want from the LLaMa 2 model." ] }, { "cell_type": "code", "execution_count": null, "id": "a7a18d04-fe4b-4ac6-a768-3e703ba61337", "metadata": {}, "outputs": [], "source": [ "answer = llama_pipeline(\n", "    \"How do I get help if I am stuck on a deserted island?\",\n", "    temperature=0.2,\n", "    k=20,\n", "    output_length=None,\n", "    print_live=True,\n", "    print_final=True,\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "a8855267-bddb-4a62-9b38-6894f51f4d41", "metadata": {}, "source": [ "You can set the `micro_batch_size` parameter to a higher value during pipeline creation and then use the pipeline on a batch of prompts. Simply pass the list of prompts to the pipeline, ensuring the number of prompts is less than or equal to the micro batch size." ] }, { "cell_type": "code", "execution_count": null, "id": "1ff774ed-55fd-4388-9fc5-979a67131975", "metadata": {}, "outputs": [], "source": [ "prompt = [\n", "    \"What came first, the chicken or the egg?\",\n", "    \"How do I make an omelette with cheese, onions and spinach?\",\n", "]\n", "answer = llama_pipeline(\n", "    prompt,\n", "    temperature=0.6,\n", "    k=5,\n", "    output_length=None,\n", "    print_live=False,\n", "    print_final=True,\n", ")\n", "\n", "for p, a in zip(prompt, answer):\n", "    print(f\"Instruction: {p}\")\n", "    print(f\"Response: {a}\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "a4b634e3-be64-404a-b83e-50a528f600d9", "metadata": {}, "source": [ "LLaMa 2 was trained with a specific prompt format and system prompt to guide model behaviour. This is common with instruction and dialogue fine-tuned models, and using the correct format is essential for getting sensible outputs from the model. To see the full system prompt and format, you can inspect the `prompt_format` attribute on the pipeline, as shown in the next cell.\n", "\n", "This is the default prompt format described in this [Hugging Face blog post](https://huggingface.co/blog/llama2#how-to-prompt-llama-2):" ] }, { "cell_type": "code", "execution_count": null, "id": "0512a636-9707-486d-a22f-59c192a47aca", "metadata": {}, "outputs": [], "source": [ "print(llama_pipeline.prompt_format)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "facc99a9-0a89-4f42-ba1d-4112c779af42", "metadata": {}, "source": [ "Remember to detach your pipeline when you are finished to free up resources:" ] }, { "cell_type": "code", "execution_count": null, "id": "c5d240fd-5d0d-400b-ad1e-e037babf5547", "metadata": {}, "outputs": [], "source": [ "llama_pipeline.detach()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "faea2996-46a4-4d6a-9580-8bb5d5aacf33", "metadata": {}, "source": [ "That's it! You have now successfully run LLaMa 2 for inference on IPUs." ] }, { "attachments": {}, "cell_type": "markdown", "id": "c1a5d3c1-b8c0-411c-9d30-ab8d51949a77", "metadata": {}, "source": [ "## Next steps\n", "\n", "Check out the full list of [IPU-powered Jupyter Notebooks](https://www.graphcore.ai/ipu-jupyter-notebooks) to see how IPUs perform on other tasks."
] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }