{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "The [Langchain](https://python.langchain.com/docs/get_started/introduction) `PanelCallbackHandler` is useful for rendering and streaming the *chain of thought* from Langchain objects like Tools, Agents, and Chains. It inherits from Langchain's [BaseCallbackHandler](https://python.langchain.com/docs/modules/callbacks/).\n", "\n", "Check out the [panel-chat-examples](https://holoviz-topics.github.io/panel-chat-examples/langchain/) docs to see more examples of how to use `PanelCallbackHandler`. If you have an example to demo, we'd love to add it to the panel-chat-examples gallery!\n", "\n", "#### Parameters:\n", "\n", "##### Core\n", "\n", "* **`instance`** (ChatFeed | ChatInterface): The ChatFeed or ChatInterface instance to render or stream to.\n", "* **`user`** (str): Name of the user who sent the message.\n", "* **`avatar`** (str | BinaryIO): The avatar to use for the user. Can be a single character of text, an emoji, or anything supported by `pn.pane.Image`. If not set, uses the first character of the name.\n", "\n", "___" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Basics\n", "\n", "To get started:\n", "\n", "1. Pass an instance of a `ChatFeed` or `ChatInterface` to `PanelCallbackHandler`.\n", "2. Pass `callback_handler`, wrapped in a list, to `callbacks` when constructing or using Langchain objects.\n", "\n", "\n", "```python\n", "import panel as pn\n", "from langchain.llms import OpenAI\n", "\n", "pn.extension()\n", "\n", "llm = OpenAI(temperature=0)\n", "\n", "def callback(contents, user, instance):\n", "    callback_handler = pn.chat.langchain.PanelCallbackHandler(instance)\n", "    return llm.predict(contents, callbacks=[callback_handler])\n", "\n", "pn.chat.ChatInterface(callback=callback).servable()\n", "```\n", "\n", "\"Panel\n", "\n", "This example shows only the response from the `llm`. An LLM by itself does not produce any *chain of thought*. 
Later we will build an agent that uses tools; the agent's *chain of thought* will then be displayed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Async\n", "\n", "`async` can also be used. This makes your app more *responsive* and enables it to support more users.\n", "\n", "```python\n", "import panel as pn\n", "from langchain.llms import OpenAI\n", "\n", "pn.extension()\n", "\n", "llm = OpenAI(temperature=0)\n", "\n", "async def callback(contents, user, instance):\n", "    callback_handler = pn.chat.langchain.PanelCallbackHandler(instance)\n", "    return await llm.apredict(contents, callbacks=[callback_handler])\n", "\n", "pn.chat.ChatInterface(callback=callback).servable()\n", "```\n", "\n", "\"Panel" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Streaming\n", "\n", "To stream tokens from the LLM, set `streaming=True` when constructing the LLM.\n", "\n", "Note that `async` is not required to stream, but it is more efficient.\n", "\n", "```python\n", "import panel as pn\n", "from langchain.llms import OpenAI\n", "\n", "pn.extension()\n", "\n", "llm = OpenAI(temperature=0, streaming=True)\n", "\n", "async def callback(contents, user, instance):\n", "    callback_handler = pn.chat.langchain.PanelCallbackHandler(instance)\n", "    return await llm.apredict(contents, callbacks=[callback_handler])\n", "\n", "pn.chat.ChatInterface(callback=callback).servable()\n", "```\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Agents with Tools\n", "\n", "Agents with tools can also be used; their tool usage is rendered and streamed by passing `callback_handler` to `callbacks`.\n", "\n", "```python\n", "from langchain.llms import OpenAI\n", "from langchain.agents import AgentType, initialize_agent, load_tools\n", "import panel as pn\n", "\n", "pn.extension()\n", "\n", "llm = OpenAI(temperature=0, streaming=True)\n", "tools = load_tools([\"ddg-search\"])\n", "agent = initialize_agent(\n", "    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,\n", ")\n", "\n", "def 
callback(contents, user, instance):\n", "    callback_handler = pn.chat.langchain.PanelCallbackHandler(instance)\n", "    return agent.run(contents, callbacks=[callback_handler])\n", "\n", "pn.chat.ChatInterface(callback=callback).servable()\n", "```\n", "\n", "\"PanelCallbackHandler" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Chains with Retrievers\n", "\n", "Again, `async` is not required, but it is more efficient.\n", "\n", "```python\n", "from uuid import uuid4\n", "\n", "import requests\n", "\n", "from langchain.chat_models import ChatOpenAI\n", "from langchain.embeddings import OpenAIEmbeddings\n", "from langchain.prompts import ChatPromptTemplate\n", "from langchain.schema.runnable import RunnablePassthrough\n", "from langchain.text_splitter import CharacterTextSplitter\n", "from langchain.vectorstores import Chroma\n", "\n", "import panel as pn\n", "\n", "TEXT = \"https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/docs/modules/state_of_the_union.txt\"\n", "\n", "TEMPLATE = \"\"\"Answer the question based only on the following context:\n", "\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "\n", "pn.extension(design=\"material\")\n", "\n", "prompt = ChatPromptTemplate.from_template(TEMPLATE)\n", "\n", "\n", "@pn.cache\n", "def get_vector_store():\n", "    full_text = requests.get(TEXT).text\n", "    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n", "    texts = text_splitter.split_text(full_text)\n", "    embeddings = OpenAIEmbeddings()\n", "    db = Chroma.from_texts(texts, embeddings)\n", "    return db\n", "\n", "\n", "db = get_vector_store()\n", "\n", "\n", "def get_chain(callbacks):\n", "    retriever = db.as_retriever(callbacks=callbacks)\n", "    model = ChatOpenAI(callbacks=callbacks)\n", "\n", "    def format_docs(docs):\n", "        text = \"\\n\\n\".join([d.page_content for d in docs])\n", "        return text\n", "\n", "    def hack(docs):\n", "        # https://github.com/langchain-ai/langchain/issues/7290\n", "        for callback 
in callbacks:\n", "            callback.on_retriever_end(docs, run_id=uuid4())\n", "        return docs\n", "\n", "    return (\n", "        {\"context\": retriever | hack | format_docs, \"question\": RunnablePassthrough()}\n", "        | prompt\n", "        | model\n", "    )\n", "\n", "\n", "async def callback(contents, user, instance):\n", "    callback_handler = pn.chat.langchain.PanelCallbackHandler(instance)\n", "    chain = get_chain(callbacks=[callback_handler])\n", "    response = await chain.ainvoke(contents)\n", "    return response.content\n", "\n", "\n", "pn.chat.ChatInterface(callback=callback).servable()\n", "```\n", "\n", "Note that the `hack` is needed because retrievers currently do not call any `CallbackHandler`s.\n", "See [LangChain Issue #7290](https://github.com/langchain-ai/langchain/issues/7290).\n", "\n", "\"PanelCallbackHandler" ] } ], "metadata": { "language_info": { "name": "python", "pygments_lexer": "ipython3" } }, "nbformat": 4, "nbformat_minor": 4 }