{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["# IPEX-LLM\n", "> [IPEX-LLM](https://github.com/intel-analytics/ipex-llm/) 是一个用于在英特尔CPU和GPU(例如带有iGPU的本地PC、Arc、Flex和Max等独立GPU)上运行LLM的PyTorch库,具有非常低的延迟。\n", "\n", "本示例介绍了如何使用LlamaIndex与[`ipex-llm`](https://github.com/intel-analytics/ipex-llm/)进行文本生成和在英特尔GPU上进行聊天。\n", "\n", "> **注意**\n", ">\n", "> 您可以参考[此处](https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/llms/llama-index-llms-ipex-llm/examples)获得`IpexLLM`的完整示例。请注意,如果要在Intel GPU上运行,请在运行示例时在命令参数中指定`-d 'xpu'`。\n", "\n", "## 安装先决条件\n", "为了在Intel GPU上受益于IPEX-LLM,有几个先决步骤来安装工具和准备环境。\n", "\n", "如果您是Windows用户,请访问[在带有Intel GPU的Windows上安装IPEX-LLM指南](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html),按照[**安装先决条件**](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_windows_gpu.html#install-prerequisites)更新GPU驱动程序(可选)并安装Conda。\n", "\n", "如果您是Linux用户,请访问[在带有Intel GPU的Linux上安装IPEX-LLM](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html),按照[**安装先决条件**](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/install_linux_gpu.html#install-prerequisites)安装GPU驱动程序、Intel® oneAPI Base Toolkit 2024.0和Conda。\n", "\n", "## 安装`llama-index-llms-ipex-llm`\n", "\n", "完成先决条件安装后,您应该已创建一个包含所有先决条件的conda环境,激活您的conda环境并按如下方式安装`llama-index-llms-ipex-llm`:\n", "\n", "```bash\n", "conda activate \n", "\n", "pip install llama-index-llms-ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/\n", "```\n", "此步骤还将安装`ipex-llm`及其依赖项。\n", "\n", "> **注意**\n", ">\n", "> 您还可以使用`https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/`作为`extra-index-url`。\n", "\n", "## 运行时配置\n", "\n", "为了获得最佳性能,建议根据您的设备设置一些环境变量:\n", "\n", "### 针对带有Intel Core Ultra集成GPU的Windows用户\n", "\n", "在Anaconda Prompt中:\n", "\n", "```\n", "set SYCL_CACHE_PERSISTENT=1\n", "set BIGDL_LLM_XMX_DISABLED=1\n", "```\n", "\n", "### 针对带有Intel Arc A-Series GPU的Linux用户\n", "\n", "```bash\n", "# 配置oneAPI环境变量。对于APT或离线安装的oneAPI,这是必需步骤。\n", "# 对于PIP安装的oneAPI,请跳过此步骤,因为环境已在LD_LIBRARY_PATH中配置。\n", "source /opt/intel/oneapi/setvars.sh\n", "\n", "# 建议的环境变量,以获得最佳性能\n", "export USE_XETLA=OFF\n", "export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1\n", "export SYCL_CACHE_PERSISTENT=1\n", "```\n", "\n", "> **注意**\n", ">\n", "> 第一次在Intel iGPU/Intel Arc A300系列或Pro A60上运行每个模型时,可能需要几分钟来编译。\n", ">\n", "> 对于其他类型的GPU,请参阅[此处](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#runtime-configuration)以获取Windows用户信息,参阅[此处](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#id5)以获取Linux用户信息。\n", "\n", "## `IpexLLM`\n", "\n", "在初始化`IpexLLM`时设置`device_map=\"xpu\"`将把LLM模型放在英特尔GPU上,并受益于IPEX-LLM的优化。\n", "\n", "在加载Zephyr模型之前,您需要定义`completion_to_prompt`和`messages_to_prompt`来格式化提示。根据[模型卡](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)为zephyr-7b-alpha遵循正确的提示格式。这对于准备模型可以准确解释的输入至关重要。使用IpexLLM本地加载Zephyr模型时,请使用`IpexLLM.from_model_id`。它将直接以Huggingface格式加载模型,并自动将其转换为低比特格式以进行推理。\n", "\n", "```python\n", "# 将字符串转换为zephyr特定的输入\n", "def completion_to_prompt(completion):\n", " return f\"<|system|>\\n\\n<|user|>\\n{completion}\\n<|assistant|>\\n\"\n", "\n", "\n", "# 将聊天消息列表转换为zephyr特定的输入\n", "def messages_to_prompt(messages):\n", " prompt = \"\"\n", " for message in messages:\n", " if message.role == \"system\":\n", " prompt += f\"<|system|>\\n{message.content}\\n\"\n", " elif message.role == \"user\":\n", " prompt += f\"<|user|>\\n{message.content}\\n\"\n", " elif message.role == 
\"assistant\":\n", " prompt += f\"<|assistant|>\\n{message.content}\\n\"\n", "\n", " # 确保我们以系统提示开头,如果需要插入空白\n", " if not prompt.startswith(\"<|system|>\\n\"):\n", " prompt = \"<|system|>\\n\\n\" + prompt\n", "\n", " # 添加最终的助手提示\n", " prompt = prompt + \"<|assistant|>\\n\"\n", "\n", " return prompt\n", "\n", "from llama_index.llms.ipex_llm import IpexLLM\n", "\n", "llm = IpexLLM.from_model_id(\n", " model_name=\"HuggingFaceH4/zephyr-7b-alpha\",\n", " tokenizer_name=\"HuggingFaceH4/zephyr-7b-alpha\",\n", " context_window=512,\n", " max_new_tokens=128,\n", " generate_kwargs={\"do_sample\": False},\n", " completion_to_prompt=completion_to_prompt,\n", " messages_to_prompt=messages_to_prompt,\n", " device_map=\"xpu\",\n", ")\n", "```\n", "\n", "> 请注意,在此示例中我们将使用[HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)模型进行演示。它需要更新`transformers`和`tokenizers`包。\n", "> ```bash\n", "> pip install -U transformers==4.37.0 tokenizers==0.15.2\n", "> ```\n", "\n", "然后您可以像往常一样进行完成任务或聊天任务:\n", "\n", "```python\n", "print(\"----------------- 完成 ------------------\")\n", "completion_response = llm.complete(\"很久很久以前,\")\n", "print(completion_response.text)\n", "print(\"----------------- 流式完成 ------------------\")\n", "response_iter = llm.stream_complete(\"很久很久以前,有一个小女孩\")\n", "for response in response_iter:\n", " print(response.delta, end=\"\", flush=True)\n", "print(\"----------------- 聊天 ------------------\")\n", "from llama_index.core.llms import ChatMessage\n", "\n", "message = ChatMessage(role=\"user\", content=\"简要解释一下大爆炸理论\")\n", "resp = llm.chat([message])\n", "print(resp)\n", "print(\"----------------- 流式聊天 ------------------\")\n", "message = ChatMessage(role=\"user\", content=\"什么是人工智能?\")\n", "resp = llm.stream_chat([message], max_tokens=256)\n", "for r in resp:\n", " print(r.delta, end=\"\")\n", "```\n", "\n", "另外,您也可以将低比特模型保存到磁盘上,然后使用`from_model_id_low_bit`而不是`from_model_id`重新加载它以供后续使用,甚至可以在不同的机器之间使用。这种方法在空间上更高效,因为低比特模型所需的磁盘空间显著比原始模型少。而且`from_model_id_low_bit`在速度和内存使用方面也比`from_model_id`更高效,因为它跳过了模型转换步骤。\n", "\n", "要保存低比特模型,请按如下所示使用`save_low_bit`。然后从保存的低比特模型路径加载模型。还使用`device_map`将模型加载到xpu。\n", "> 请注意,低比特模型的保存路径仅包含模型本身,而不包含令牌化器。如果您希望将所有内容放在一个地方,您需要手动从原始模型的目录下载或复制令牌化器文件到低比特模型的保存位置。\n", "\n", "尝试使用加载的低比特模型进行流式完成。\n", "```python\n", "saved_lowbit_model_path = (\n", " \"./zephyr-7b-alpha-low-bit\" # 保存低比特模型的路径\n", ")\n", "\n", "llm._model.save_low_bit(saved_lowbit_model_path)\n", "del llm\n", "\n", "llm_lowbit = IpexLLM.from_model_id_low_bit(\n", " model_name=saved_lowbit_model_path,\n", " tokenizer_name=\"HuggingFaceH4/zephyr-7b-alpha\",\n", " # tokenizer_name=saved_lowbit_model_path, # 如果你想这样使用,将令牌化器复制到保存路径\n", " context_window=512,\n", " max_new_tokens=64,\n", " completion_to_prompt=completion_to_prompt,\n", " generate_kwargs={\"do_sample\": False},\n", " device_map=\"xpu\",\n", ")\n", "\n", "response_iter = llm_lowbit.stream_complete(\"什么是大型语言模型?\")\n", "for response in response_iter:\n", " print(response.delta, end=\"\", flush=True)\n", "```\n"]}], "metadata": {"language_info": {"name": "python"}}, "nbformat": 4, "nbformat_minor": 2}