{"cells": [{"attachments": {}, "cell_type": "markdown", "id": "91c998a5", "metadata": {}, "source": ["\"在\n"]}, {"cell_type": "markdown", "id": "43497beb-817d-4366-9156-f4d7f0d44942", "metadata": {}, "source": ["# 多文档代理(V1)\n", "\n", "在本指南中,您将学习如何在 LlamaIndex 文档上设置一个多文档代理。\n", "\n", "这是 V0 多文档代理的扩展,具有以下附加功能:\n", "- 在文档(工具)检索过程中重新排序\n", "- 查询规划工具,代理可以用来规划\n", "\n", "我们使用以下架构实现这一点:\n", "\n", "- 在每个文档上设置一个“文档代理”:每个文档代理可以在其文档内进行问答/总结\n", "- 在这组文档代理上设置一个顶层代理。进行工具检索,然后在工具集上进行协同训练以回答问题。\n"]}, {"attachments": {}, "cell_type": "markdown", "id": "77ac7184", "metadata": {}, "source": ["如果您在colab上打开这个笔记本,您可能需要安装LlamaIndex 🦙。\n"]}, {"cell_type": "code", "execution_count": null, "id": "034a7661", "metadata": {}, "outputs": [], "source": ["%pip install llama-index-core\n", "%pip install llama-index-agent-openai\n", "%pip install llama-index-readers-file\n", "%pip install llama-index-postprocessor-cohere-rerank\n", "%pip install llama-index-llms-openai\n", "%pip install llama-index-embeddings-openai\n", "%pip install unstructured[html]"]}, {"cell_type": "code", "execution_count": null, "id": "1f0e47ac-ec6d-48eb-93a3-0e1fcab22112", "metadata": {}, "outputs": [], "source": ["%load_ext autoreload\n", "%autoreload 2"]}, {"cell_type": "markdown", "id": "9be00aba-b6c5-4940-9825-81c5d2cd2f0b", "metadata": {}, "source": ["## 设置和下载数据\n", "\n", "在这一部分,我们将加载LlamaIndex文档。\n"]}, {"cell_type": "code", "execution_count": null, "id": "49893d69-c106-4169-92c3-6b5b751066e9", "metadata": {}, "outputs": [], "source": ["domain = \"docs.llamaindex.ai\"\n", "docs_url = \"https://docs.llamaindex.ai/en/latest/\"\n", "!wget -e robots=off --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains {domain} --no-parent {docs_url}"]}, {"cell_type": "code", "execution_count": null, "id": "c661cb62-1e18-410c-bc2e-e707b66596a3", "metadata": {}, "outputs": [], "source": ["from llama_index.readers.file import UnstructuredReader\n", "\n", "reader = UnstructuredReader()"]}, {"cell_type": "code", "execution_count": null, "id": "44feebd5-0430-4d73-9cb1-a3de73c1f13e", "metadata": {}, "outputs": [], "source": ["from pathlib import Path\n", "\n", "all_files_gen = Path(\"./docs.llamaindex.ai/\").rglob(\"*\")\n", "all_files = [f.resolve() for f in all_files_gen]"]}, {"cell_type": "code", "execution_count": null, "id": "3d837b4b-130c-493c-b62e-6662904c20ca", "metadata": {}, "outputs": [], "source": ["all_html_files = [f for f in all_files if f.suffix.lower() == \".html\"]"]}, {"cell_type": "code", "execution_count": null, "id": "3cddf0f5-3c5f-4d42-868d-54bedb12d02b", "metadata": {}, "outputs": [{"data": {"text/plain": ["1219"]}, "execution_count": null, "metadata": {}, "output_type": "execute_result"}], "source": ["len(all_html_files)"]}, {"cell_type": "code", "execution_count": null, "id": "1a1dd0cf-5da2-4ac0-bfd1-8f48921518c5", "metadata": {}, "outputs": [], "source": ["from llama_index.core import Document", "", "# TODO: 如果您想要更多的文档,请将其设置为更高的值", "doc_limit = 100", "", "docs = []", "for idx, f in enumerate(all_html_files):", " if idx > doc_limit:", " break", " print(f\"索引 {idx}/{len(all_html_files)}\")", " loaded_docs = reader.load_data(file=f, split_documents=True)", " # 硬编码索引。这之前的所有内容都是所有页面的目录", " start_idx = 72", " loaded_doc = Document(", " text=\"\\n\\n\".join([d.get_content() for d in loaded_docs[72:]]),", " metadata={\"path\": str(f)},", " )", " print(loaded_doc.metadata[\"path\"])", " docs.append(loaded_doc)"]}, {"cell_type": "markdown", "id": "6189aaf4-2eb7-40bc-9e83-79ce4f221b4b", 
"metadata": {}, "source": ["# 定义全局LLM + 嵌入\n", "\n", "在这个notebook中,我们将定义一个全局LLM(全局线性语言模型)和嵌入层。全局LLM是一种用于自然语言处理任务的模型,它可以学习单词之间的关系并将它们映射到一个连续的向量空间中。嵌入层用于将单词转换为密集的向量表示,这些向量可以作为模型的输入。\n"]}, {"cell_type": "code", "execution_count": null, "id": "4e56afdc", "metadata": {}, "outputs": [], "source": ["import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\n", "\n", "import nest_asyncio\n", "\n", "nest_asyncio.apply()"]}, {"cell_type": "code", "execution_count": null, "id": "dd6e5e48-91b9-4701-a85d-d98c92323350", "metadata": {}, "outputs": [], "source": ["from llama_index.llms.openai import OpenAI\n", "from llama_index.embeddings.openai import OpenAIEmbedding\n", "from llama_index.core import Settings\n", "\n", "llm = OpenAI(model=\"gpt-3.5-turbo\")\n", "Settings.llm = llm\n", "Settings.embed_model = OpenAIEmbedding(\n", " model=\"text-embedding-3-small\", embed_batch_size=256\n", ")"]}, {"cell_type": "markdown", "id": "4eeef31a-fc25-4367-a5ba-945f81d04cf9", "metadata": {}, "source": ["## 构建多文档代理\n", "\n", "在本节中,我们将向您展示如何构建多文档代理。我们首先为每个文档构建一个文档代理,然后使用对象索引定义顶层父代理。\n"]}, {"cell_type": "markdown", "id": "976cd798-2e8d-474c-922a-51b12c5c6f36", "metadata": {}, "source": ["### 为每个文档构建文档代理\n", "\n", "在这一部分,我们为每个文档定义\"文档代理\"。\n", "\n", "我们为每个文档定义了一个向量索引(用于语义搜索)和摘要索引(用于摘要生成)。然后,这两个查询引擎被转换为工具,传递给一个调用OpenAI函数的代理。\n", "\n", "这个文档代理可以动态选择在给定文档中执行语义搜索或摘要生成。\n", "\n", "我们为每个城市创建一个单独的文档代理。\n"]}, {"cell_type": "code", "execution_count": null, "id": "eacdf3a7-cfe3-4c2b-9037-b28a065ed148", "metadata": {}, "outputs": [], "source": ["from llama_index.agent.openai import OpenAIAgent", "from llama_index.core import (", " load_index_from_storage,", " StorageContext,", " VectorStoreIndex,", ")", "from llama_index.core import SummaryIndex", "from llama_index.core.tools import QueryEngineTool, ToolMetadata", "from llama_index.core.node_parser import SentenceSplitter", "import os", "from tqdm.notebook import tqdm", "import pickle", "", "", "async def build_agent_per_doc(nodes, file_base):", " print(file_base)", "", " vi_out_path = f\"./data/llamaindex_docs/{file_base}\"", " summary_out_path = f\"./data/llamaindex_docs/{file_base}_summary.pkl\"", " if not os.path.exists(vi_out_path):", " Path(\"./data/llamaindex_docs/\").mkdir(parents=True, exist_ok=True)", " # 构建向量索引", " vector_index = VectorStoreIndex(nodes)", " vector_index.storage_context.persist(persist_dir=vi_out_path)", " else:", " vector_index = load_index_from_storage(", " StorageContext.from_defaults(persist_dir=vi_out_path),", " )", "", " # 构建摘要索引", " summary_index = SummaryIndex(nodes)", "", " # 定义查询引擎", " vector_query_engine = vector_index.as_query_engine(llm=llm)", " summary_query_engine = summary_index.as_query_engine(", " response_mode=\"tree_summarize\", llm=llm", " )", "", " # 提取摘要", " if not os.path.exists(summary_out_path):", " Path(summary_out_path).parent.mkdir(parents=True, exist_ok=True)", " summary = str(", " await summary_query_engine.aquery(", " \"提取该文档的简洁1-2行摘要\"", " )", " )", " pickle.dump(summary, open(summary_out_path, \"wb\"))", " else:", " summary = pickle.load(open(summary_out_path, \"rb\"))", "", " # 定义工具", " query_engine_tools = [", " QueryEngineTool(", " query_engine=vector_query_engine,", " metadata=ToolMetadata(", " name=f\"vector_tool_{file_base}\",", " description=f\"用于与特定事实相关的问题\",", " ),", " ),", " QueryEngineTool(", " query_engine=summary_query_engine,", " metadata=ToolMetadata(", " name=f\"summary_tool_{file_base}\",", " description=f\"用于摘要问题\",", " ),", " ),", " ]", "", " # 构建代理", " function_llm = OpenAI(model=\"gpt-4\")", 
" agent = OpenAIAgent.from_tools(", " query_engine_tools,", " llm=function_llm,", " verbose=True,", " system_prompt=f\"\"\"\\", "您是一名专门设计用于回答关于“{file_base}.html”部分LlamaIndex文档的查询的代理。", "在回答问题时,您必须始终使用提供的工具之一;不要依赖先前的知识。\\", "\"\"\",", " )", "", " return agent, summary", "", "", "async def build_agents(docs):", " node_parser = SentenceSplitter()", "", " # 构建代理字典", " agents_dict = {}", " extra_info_dict = {}", "", " # # 这是为了基准线", " # all_nodes = []", "", " for idx, doc in enumerate(tqdm(docs)):", " nodes = node_parser.get_nodes_from_documents([doc])", " # all_nodes.extend(nodes)", "", " # ID将是基础+父级", " file_path = Path(doc.metadata[\"path\"])", " file_base = str(file_path.parent.stem) + \"_\" + str(file_path.stem)", " agent, summary = await build_agent_per_doc(nodes, file_base)", "", " agents_dict[file_base] = agent", " extra_info_dict[file_base] = {\"summary\": summary, \"nodes\": nodes}", "", " return agents_dict, extra_info_dict"]}, {"cell_type": "code", "execution_count": null, "id": "44748b46-dd6b-4d4f-bc70-7022ae96413f", "metadata": {}, "outputs": [], "source": ["agents_dict, extra_info_dict = await build_agents(docs)"]}, {"cell_type": "markdown", "id": "899ca55b-0c02-429b-a765-8e4f806d503f", "metadata": {}, "source": ["### 构建Retriever-Enabled OpenAI Agent\n", "\n", "我们构建了一个顶层代理,可以协调不同的文档代理来回答任何用户查询。\n", "\n", "这个`RetrieverOpenAIAgent`在使用工具之前执行工具检索(与默认代理不同,后者试图将所有工具放入提示中)。\n", "\n", "**与V0版本相比的改进**:与V0版本中的“基础”版本相比,我们进行了以下改进。\n", "\n", "- 添加重新排序功能:我们使用Cohere重新排序器来更好地过滤候选文档集。\n", "- 添加查询规划工具:我们添加了一个显式的查询规划工具,它是根据检索到的工具集动态创建的。\n"]}, {"cell_type": "code", "execution_count": null, "id": "6884ff15-bf40-4bdd-a1e3-58cbd056a12a", "metadata": {}, "outputs": [], "source": ["# 为每个文档代理定义工具", "all_tools = []", "for file_base, agent in agents_dict.items():", " summary = extra_info_dict[file_base][\"summary\"]", " doc_tool = QueryEngineTool(", " query_engine=agent,", " metadata=ToolMetadata(", " name=f\"tool_{file_base}\",", " description=summary,", " ),", " )", " all_tools.append(doc_tool)"]}, {"cell_type": "code", "execution_count": null, "id": "346ed0e1-b96f-446b-a768-4f11a9a1a7f6", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["ToolMetadata(description='This document provides examples and documentation for an agent on the llama index platform.', name='tool_latest_index', fn_schema=)\n"]}], "source": ["print(all_tools[0].metadata)"]}, {"cell_type": "code", "execution_count": null, "id": "b266ad43-c3fd-41cb-9e3b-4cb2bb2c2e5f", "metadata": {}, "outputs": [], "source": ["# 定义一个“对象”索引和检索器", "from llama_index.core import VectorStoreIndex", "from llama_index.core.objects import (", " ObjectIndex,", " ObjectRetriever,", ")", "from llama_index.postprocessor.cohere_rerank import CohereRerank", "from llama_index.core.query_engine import SubQuestionQueryEngine", "from llama_index.core.schema import QueryBundle", "from llama_index.llms.openai import OpenAI", "", "", "llm = OpenAI(model_name=\"gpt-4-0613\")", "", "obj_index = ObjectIndex.from_objects(", " all_tools,", " index_cls=VectorStoreIndex,", ")", "vector_node_retriever = obj_index.as_node_retriever(", " similarity_top_k=10,", ")", "", "", "# 定义一个自定义对象检索器,添加一个查询规划工具", "class CustomObjectRetriever(ObjectRetriever):", " def __init__(", " self,", " retriever,", " object_node_mapping,", " node_postprocessors=None,", " llm=None,", " ):", " self._retriever = retriever", " self._object_node_mapping = object_node_mapping", " self._llm = llm or OpenAI(\"gpt-4-0613\")", " self._node_postprocessors = node_postprocessors or []", "", 
" def retrieve(self, query_bundle):", " if isinstance(query_bundle, str):", " query_bundle = QueryBundle(query_str=query_bundle)", "", " nodes = self._retriever.retrieve(query_bundle)", " for processor in self._node_postprocessors:", " nodes = processor.postprocess_nodes(", " nodes, query_bundle=query_bundle", " )", " tools = [self._object_node_mapping.from_node(n.node) for n in nodes]", "", " sub_question_engine = SubQuestionQueryEngine.from_defaults(", " query_engine_tools=tools, llm=self._llm", " )", " sub_question_description = f\"\"\"\\", "用于涉及比较多个文档的任何查询。始终使用此工具进行比较查询 - 确保使用原始查询调用此工具。不要对涉及多个文档的任何查询使用其他工具。", "\"\"\"", " sub_question_tool = QueryEngineTool(", " query_engine=sub_question_engine,", " metadata=ToolMetadata(", " name=\"compare_tool\", description=sub_question_description", " ),", " )", "", " return tools + [sub_question_tool]"]}, {"cell_type": "code", "execution_count": null, "id": "0ba0d1a6-e324-4faa-b72b-d340904e65b2", "metadata": {}, "outputs": [], "source": ["# 用ObjectRetriever包装它以返回对象", "custom_obj_retriever = CustomObjectRetriever(", " vector_node_retriever,", " obj_index.object_node_mapping,", " node_postprocessors=[CohereRerank(top_n=5)],", " llm=llm,", ")"]}, {"cell_type": "code", "execution_count": null, "id": "8654ce2a-cce7-44fc-8445-8bbcfdf7ee91", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["6\n"]}], "source": ["tmps = custom_obj_retriever.retrieve(\"hello\")", "", "# 应该是 5 + 1 -- 5 来自 reranker,1 来自子问题", "print(len(tmps))"]}, {"cell_type": "code", "execution_count": null, "id": "fed38942-1e37-4c61-89fa-d2ef41151831", "metadata": {}, "outputs": [], "source": ["from llama_index.agent.openai import OpenAIAgent", "from llama_index.core.agent import ReActAgent", "", "top_agent = OpenAIAgent.from_tools(", " tool_retriever=custom_obj_retriever,", " system_prompt=\"\"\" \\", "您是一个专门用于回答关于文档的查询的代理。", "请始终使用提供的工具来回答问题。不要依赖先前的知识。\\", "", "\"\"\",", " llm=llm,", " verbose=True,", ")", "", "# top_agent = ReActAgent.from_tools(", "# tool_retriever=custom_obj_retriever,", "# system_prompt=\"\"\" \\", "# 您是一个专门用于回答关于文档的查询的代理。", "# 请始终使用提供的工具来回答问题。不要依赖先前的知识。\\", "", "# \"\"\",", "# llm=llm,", "# verbose=True,", "# )"]}, {"cell_type": "markdown", "id": "aa32b97c-6779-4b60-823d-6ca3be6f358a", "metadata": {}, "source": ["### 定义基准向量存储索引\n", "\n", "作为比较的基准,我们定义一个“简单”的RAG管道,将所有文档都存储在单个向量索引集合中。\n", "\n", "我们设置top_k = 4\n"]}, {"cell_type": "code", "execution_count": null, "id": "f2f54834-1597-46ce-b0d3-0456bfa0d368", "metadata": {}, "outputs": [], "source": ["all_nodes = [\n", " n for extra_info in extra_info_dict.values() for n in extra_info[\"nodes\"]\n", "]"]}, {"cell_type": "code", "execution_count": null, "id": "60dfc88f-6f47-4ef2-9ae6-74abde06a485", "metadata": {}, "outputs": [], "source": ["base_index = VectorStoreIndex(all_nodes)\n", "base_query_engine = base_index.as_query_engine(similarity_top_k=4)"]}, {"cell_type": "markdown", "id": "8dedb927-a992-4f21-a0fb-4ce4361adcb3", "metadata": {}, "source": ["## 运行示例查询\n", "\n", "让我们运行一些示例查询,涵盖从针对单个文档的问答/摘要到针对多个文档的问答/摘要。\n"]}, {"cell_type": "code", "execution_count": null, "id": "8e743c62-7dd8-4ac9-85a5-f1cbc112a79c", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Added user message to memory: What types of agents are available in LlamaIndex?\n", "=== Calling Function ===\n", "Calling function: tool_agents_index with args: {\"input\":\"types of agents\"}\n", "Added user message to memory: types of agents\n", "=== Calling Function ===\n", "Calling function: 
vector_tool_agents_index with args: {\n", " \"input\": \"types of agents\"\n", "}\n", "Got output: The types of agents mentioned in the provided context are ReActAgent, Native OpenAIAgent, OpenAIAgent with Query Engine Tools, OpenAIAgent Query Planning, OpenAI Assistant, OpenAI Assistant Cookbook, Forced Function Calling, Parallel Function Calling, and Context Retrieval.\n", "========================\n", "\n", "Got output: The types of agents mentioned in the `agents_index.html` part of the LlamaIndex docs are:\n", "\n", "1. ReActAgent\n", "2. Native OpenAIAgent\n", "3. OpenAIAgent with Query Engine Tools\n", "4. OpenAIAgent Query Planning\n", "5. OpenAI Assistant\n", "6. OpenAI Assistant Cookbook\n", "7. Forced Function Calling\n", "8. Parallel Function Calling\n", "9. Context Retrieval\n", "========================\n", "\n"]}], "source": ["response = top_agent.query(\n", "    \"What types of agents are available in LlamaIndex?\",\n", ")"]}, {"cell_type": "code", "execution_count": null, "id": "a4ce2a76-5779-4acf-9337-69109dae7fd6", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["The types of agents available in LlamaIndex include ReActAgent, Native OpenAIAgent, OpenAIAgent with Query Engine Tools, OpenAIAgent Query Planning, OpenAI Assistant, OpenAI Assistant Cookbook, Forced Function Calling, Parallel Function Calling, and Context Retrieval.\n"]}], "source": ["print(response)"]}, {"cell_type": "code", "execution_count": null, "id": "af28b422-fb73-4b59-9e77-3ba3afa87795", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["The types of agents available in LlamaIndex are ReActAgent, Native OpenAIAgent, and OpenAIAgent.\n"]}], "source": ["# baseline", "response = base_query_engine.query(", "    \"What types of agents are available in LlamaIndex?\",", ")", "print(str(response))"]}, {"cell_type": "code", "execution_count": null, "id": "ee6ef20c-3ccc-46c3-ad87-667138d78d5d", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Added user message to memory: Compare the content in the agents page vs. tools page.\n", "=== Calling Function ===\n", "Calling function: compare_tool with args: {\"input\":\"agents vs tools\"}\n", "Generated 2 sub questions.\n", "\u001b[1;3;38;2;237;90;200m[tool_understanding_index] Q: What are the functionalities of agents in the Llama Index platform?\n", "\u001b[0mAdded user message to memory: What are the functionalities of agents in the Llama Index platform?\n", "\u001b[1;3;38;2;90;149;237m[tool_understanding_index] Q: How do agents differ from tools in the Llama Index platform?\n", "\u001b[0mAdded user message to memory: How do agents differ from tools in the Llama Index platform?\n", "=== Calling Function ===\n", "Calling function: vector_tool_understanding_index with args: {\n", " \"input\": \"difference between agents and tools\"\n", "}\n", "=== Calling Function ===\n", "Calling function: vector_tool_understanding_index with args: {\n", " \"input\": \"functionalities of agents\"\n", "}\n", "Got output: Agents are typically individuals or entities that act on behalf of others, making decisions and taking actions based on predefined rules or instructions. On the other hand, tools are instruments or devices used to carry out specific functions or tasks, often under the control or direction of an agent.\n", "========================\n", "\n", "Got output: Agents typically have a range of functionalities that allow them to perform tasks autonomously or semi-autonomously. 
These functionalities may include data collection, analysis, decision-making, communication with other systems or users, and executing specific actions based on predefined rules or algorithms.\n", "========================\n", "\n", "\u001b[1;3;38;2;90;149;237m[tool_understanding_index] A: In the context of the Llama Index platform, agents are entities that make decisions and take actions based on predefined rules or instructions. They are designed to interact with users, understand their queries, and provide appropriate responses. \n", "\n", "On the other hand, tools are instruments or devices that are used to perform specific functions or tasks. They are typically controlled or directed by an agent and do not make decisions on their own. They are used to assist the agents in providing accurate and relevant responses to user queries.\n", "\u001b[0m\u001b[1;3;38;2;237;90;200m[tool_understanding_index] A: In the Llama Index platform, agents have a variety of functionalities. They can perform tasks autonomously or semi-autonomously. These tasks include data collection and analysis, making decisions, communicating with other systems or users, and executing specific actions. These actions are based on predefined rules or algorithms.\n", "\u001b[0mGot output: Agents in the Llama Index platform are responsible for making decisions and taking actions based on predefined rules or instructions. They interact with users, understand queries, and provide appropriate responses. On the other hand, tools in the platform are instruments or devices used to perform specific functions or tasks. Unlike agents, tools are typically controlled or directed by an agent and do not make decisions independently. Their role is to assist agents in delivering accurate and relevant responses to user queries.\n", "========================\n", "\n"]}], "source": ["response = top_agent.query(\n", " \"Compare the content in the agents page vs. tools page.\"\n", ")"]}, {"cell_type": "code", "execution_count": null, "id": "cfe1dd4c-8bfd-43d0-99bc-ca60861dc418", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["The comparison between the content in the agents page and the tools page highlights the difference in their roles and functionalities. 
Agents on the Llama Index platform are responsible for decision-making and interacting with users, while tools are instruments used to perform specific functions or tasks, controlled by agents to assist in providing responses.\n"]}], "source": ["print(response)"]}, {"cell_type": "code", "execution_count": null, "id": "a8d97266-8e22-43a8-adfe-b9a7f833c06d", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["Added user message to memory: Can you compare the compact and tree_summarize response synthesizer response modes at a very high-level?\n", "=== Calling Function ===\n", "Calling function: compare_tool with args: {\"input\":\"Compare the compact and tree_summarize response synthesizer response modes at a very high-level.\"}\n", "Generated 4 sub questions.\n", "\u001b[1;3;38;2;237;90;200m[tool_querying_index] Q: What are the key differences between the compact and tree_summarize response synthesizer response modes?\n", "\u001b[0mAdded user message to memory: What are the key differences between the compact and tree_summarize response synthesizer response modes?\n", "\u001b[1;3;38;2;90;149;237m[tool_querying_index] Q: How does the compact response synthesizer response mode optimize query logic and response quality?\n", "\u001b[0mAdded user message to memory: How does the compact response synthesizer response mode optimize query logic and response quality?\n", "\u001b[1;3;38;2;11;159;203m[tool_querying_index] Q: How does the tree_summarize response synthesizer response mode optimize query logic and response quality?\n", "\u001b[0mAdded user message to memory: How does the tree_summarize response synthesizer response mode optimize query logic and response quality?\n", "\u001b[1;3;38;2;155;135;227m[tool_evaluating_index] Q: What are the guidelines for evaluating retrievals in the context of response synthesizer response modes?\n", "\u001b[0mAdded user message to memory: What are the guidelines for evaluating retrievals in the context of response synthesizer response modes?\n", "=== Calling Function ===\n", "Calling function: vector_tool_querying_index with args: {\n", " \"input\": \"compact response synthesizer response mode\"\n", "}\n", "=== Calling Function ===\n", "Calling function: summary_tool_querying_index with args: {\n", " \"input\": \"tree_summarize response synthesizer response mode\"\n", "}\n", "=== Calling Function ===\n", "Calling function: vector_tool_querying_index with args: {\n", " \"input\": \"compact vs tree_summarize response synthesizer response modes\"\n", "}\n", "=== Calling Function ===\n", "Calling function: vector_tool_evaluating_index with args: {\n", " \"input\": \"evaluating retrievals response synthesizer response modes\"\n", "}\n", "Got output: The response modes for the response synthesizer include \"compact\" and \"tree_summarize\".\n", "========================\n", "\n", "Got output: The response mode \"tree_summarize\" in the response synthesizer configures the system to recursively construct a tree from a set of Node objects and the query, returning the root node as the final response. This mode is particularly useful for summarization purposes.\n", "========================\n", "\n", "Got output: \"compact\" the prompt during each LLM call by stuffing as many Node text chunks that can fit within the maximum prompt size. 
If there are too many chunks to stuff in one prompt, \"create and refine\" an answer by going through multiple prompts.\n", "========================\n", "\n", "=== Calling Function ===\n", "Calling function: summary_tool_querying_index with args: {\n", " \"input\": \"compact vs tree_summarize response synthesizer response modes\"\n", "}\n", "Got output: Response synthesizer response modes can be evaluated by comparing what was retrieved for a query to a set of nodes that were expected to be retrieved. This evaluation process typically involves analyzing metrics such as Mean Reciprocal Rank (MRR) and Hit Rate. It is important to evaluate a batch of retrievals to get a comprehensive understanding of the performance. If you are making calls to a hosted, remote LLM, you may also want to consider analyzing the cost implications of your application.\n", "========================\n", "\n", "Got output: The response modes for the response synthesizer include \"compact\" and \"tree_summarize\".\n", "========================\n", "\n", "\u001b[1;3;38;2;90;149;237m[tool_querying_index] A: The compact response synthesizer response mode optimizes query logic and response quality by compacting the prompt during each LLM call. It does this by stuffing as many Node text chunks that can fit within the maximum prompt size. If there are too many chunks to fit in one prompt, it will \"create and refine\" an answer by going through multiple prompts. This approach allows for a more efficient use of the prompt space and can lead to more refined and accurate responses.\n", "\u001b[0m\u001b[1;3;38;2;11;159;203m[tool_querying_index] A: The \"tree_summarize\" response synthesizer response mode optimizes query logic and response quality by recursively constructing a tree from a set of Node objects and the query. This approach allows the system to handle complex queries and generate comprehensive responses. The root node, which is returned as the final response, contains a summarized version of the information, making it easier for users to understand the response. This mode is particularly useful for summarization purposes, where the goal is to provide a concise yet comprehensive answer to a query.\n", "\u001b[0m\u001b[1;3;38;2;155;135;227m[tool_evaluating_index] A: When evaluating retrievals in the context of response synthesizer response modes, you should compare what was retrieved for a query to a set of nodes that were expected to be retrieved. This evaluation process typically involves analyzing metrics such as Mean Reciprocal Rank (MRR) and Hit Rate. It's crucial to evaluate a batch of retrievals to get a comprehensive understanding of the performance. If you are making calls to a hosted, remote LLM, you may also want to consider analyzing the cost implications of your application.\n", "\u001b[0m\u001b[1;3;38;2;237;90;200m[tool_querying_index] A: The \"compact\" and \"tree_summarize\" are two different response modes for the response synthesizer in LlamaIndex. \n", "\n", "The \"compact\" mode provides a more concise response, focusing on delivering the most relevant information in a compact format. This mode is useful when you want a brief and direct answer to your query.\n", "\n", "On the other hand, the \"tree_summarize\" mode provides a more detailed and structured response. It breaks down the information into a tree-like structure, making it easier to understand the relationships and hierarchy of the information. 
This mode is useful when you want a comprehensive understanding of the query topic.\n", "\u001b[0mGot output: The \"compact\" response synthesizer mode focuses on providing a concise and direct response, while the \"tree_summarize\" mode offers a more detailed and structured response by breaking down information into a tree-like structure. The compact mode aims to deliver the most relevant information in a compact format, suitable for brief answers, whereas the tree_summarize mode is designed to provide a comprehensive understanding of the query topic by presenting information in a hierarchical manner.\n", "========================\n", "\n"]}], "source": ["response = top_agent.query(\n", " \"Can you compare the compact and tree_summarize response synthesizer response modes at a very high-level?\"\n", ")"]}, {"cell_type": "code", "execution_count": null, "id": "7401a80c-3cc7-4c72-9c45-82ffc1bd6816", "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["The \"compact\" response synthesizer mode provides concise and direct responses, while the \"tree_summarize\" mode offers detailed and structured responses in a tree-like format. The compact mode is suitable for brief answers, while the tree_summarize mode presents information hierarchically for a comprehensive understanding of the query topic.\n"]}], "source": ["print(str(response))"]}], "metadata": {"kernelspec": {"display_name": ".venv", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3"}}, "nbformat": 4, "nbformat_minor": 5}