{"cells": [{"cell_type": "markdown", "id": "9ad59b9d", "metadata": {}, "source": ["\"在\n"]}, {"cell_type": "markdown", "id": "c693f512-0033-4bca-9824-261701e4a4d4", "metadata": {}, "source": ["# 答案相关性和上下文相关性评估\n"]}, {"cell_type": "markdown", "id": "c69792a3-24a1-424b-afc0-c28b13f2cb42", "metadata": {}, "source": ["在这个笔记本中,我们演示了如何利用`AnswerRelevancyEvaluator`和`ContextRelevancyEvaluator`类来衡量生成的答案和检索到的上下文与给定用户查询的相关性。这两个评估器都会返回一个介于0和1之间的`score`,以及一个解释分数的生成`feedback`。需要注意的是,得分越高表示相关性越高。特别地,我们要求评判LLM以逐步的方式提供相关性评分,要求它回答以下两个关于查询答案相关性的问题(对于上下文相关性,这些问题会稍作调整):\n", "\n", "1. 提供的回应是否与用户查询的主题相关?\n", "2. 提供的回应是否试图解决用户查询所采用的主题的焦点或观点?\n", "\n", "每个问题值1分,因此完美的评估将得到2/2分。\n"]}, {"cell_type": "code", "execution_count": null, "id": "45fd6fcb", "metadata": {}, "outputs": [], "source": ["%pip install llama-index-llms-openai"]}, {"cell_type": "code", "execution_count": null, "id": "00f5d108-6cad-4b5f-848d-6f4edeef2c61", "metadata": {}, "outputs": [], "source": ["import nest_asyncio\n", "from tqdm.asyncio import tqdm_asyncio\n", "\n", "nest_asyncio.apply()"]}, {"cell_type": "code", "execution_count": null, "id": "4a7436d4-8cf2-444f-a776-4544f666ef3c", "metadata": {}, "outputs": [], "source": ["def displayify_df(df):", " \"\"\"在笔记本中漂亮地显示DataFrame。\"\"\"", " display_df = df.style.set_properties(", " **{", " \"inline-size\": \"300px\",", " \"overflow-wrap\": \"break-word\",", " }", " )", " display(display_df)"]}, {"cell_type": "markdown", "id": "55957d59-44ab-45d3-9716-fb1aeb69633e", "metadata": {}, "source": ["### 下载数据集(`LabelledRagDataset`)\n"]}, {"cell_type": "markdown", "id": "a228c9ab-96d5-4e4c-8b8c-febe8375b649", "metadata": {}, "source": ["对于这个演示,我们将使用通过我们的[llama-hub](https://llamahub.ai)提供的羊驼数据集。\n"]}, {"cell_type": "code", "execution_count": null, "id": "df1997b3-d3fc-4a95-b139-09c1fb256021", "metadata": {}, "outputs": [], "source": ["from llama_index.core.llama_dataset import download_llama_dataset", "from llama_index.core.llama_pack import download_llama_pack", "from llama_index.core import VectorStoreIndex", "", "# 下载并安装基准数据集的依赖项", "rag_dataset, documents = download_llama_dataset(", " \"EvaluatingLlmSurveyPaperDataset\", \"./data\"", ")"]}, {"cell_type": "code", "execution_count": null, "id": "39e019d4-90ae-4e8c-91b6-7313493b4983", "metadata": {}, "outputs": [{"data": {"text/html": ["
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
queryreference_contextsreference_answerreference_answer_byquery_by
0What are the potential risks associated with l...[Evaluating Large Language Models: A\\nComprehe...According to the context information, the pote...ai (gpt-3.5-turbo)ai (gpt-3.5-turbo)
1How does the survey categorize the evaluation ...[Evaluating Large Language Models: A\\nComprehe...The survey categorizes the evaluation of LLMs ...ai (gpt-3.5-turbo)ai (gpt-3.5-turbo)
2What are the different types of reasoning disc...[Contents\\n1 Introduction 4\\n2 Taxonomy and Ro...The different types of reasoning discussed in ...ai (gpt-3.5-turbo)ai (gpt-3.5-turbo)
3How is toxicity evaluated in language models a...[Contents\\n1 Introduction 4\\n2 Taxonomy and Ro...Toxicity is evaluated in language models accor...ai (gpt-3.5-turbo)ai (gpt-3.5-turbo)
4In the context of specialized LLMs evaluation,...[5.1.3 Alignment Robustness . . . . . . . . . ...In the context of specialized LLMs evaluation,...ai (gpt-3.5-turbo)ai (gpt-3.5-turbo)
\n", "
"], "text/plain": [" query \\\n", "0 What are the potential risks associated with l... \n", "1 How does the survey categorize the evaluation ... \n", "2 What are the different types of reasoning disc... \n", "3 How is toxicity evaluated in language models a... \n", "4 In the context of specialized LLMs evaluation,... \n", "\n", " reference_contexts \\\n", "0 [Evaluating Large Language Models: A\\nComprehe... \n", "1 [Evaluating Large Language Models: A\\nComprehe... \n", "2 [Contents\\n1 Introduction 4\\n2 Taxonomy and Ro... \n", "3 [Contents\\n1 Introduction 4\\n2 Taxonomy and Ro... \n", "4 [5.1.3 Alignment Robustness . . . . . . . . . ... \n", "\n", " reference_answer reference_answer_by \\\n", "0 According to the context information, the pote... ai (gpt-3.5-turbo) \n", "1 The survey categorizes the evaluation of LLMs ... ai (gpt-3.5-turbo) \n", "2 The different types of reasoning discussed in ... ai (gpt-3.5-turbo) \n", "3 Toxicity is evaluated in language models accor... ai (gpt-3.5-turbo) \n", "4 In the context of specialized LLMs evaluation,... ai (gpt-3.5-turbo) \n", "\n", " query_by \n", "0 ai (gpt-3.5-turbo) \n", "1 ai (gpt-3.5-turbo) \n", "2 ai (gpt-3.5-turbo) \n", "3 ai (gpt-3.5-turbo) \n", "4 ai (gpt-3.5-turbo) "]}, "execution_count": null, "metadata": {}, "output_type": "execute_result"}], "source": ["rag_dataset.to_pandas()[:5]"]}, {"cell_type": "markdown", "id": "3a3ae70a-d1a2-4f86-909a-e9c6fb063930", "metadata": {}, "source": ["接下来,我们将在与创建`rag_dataset`时使用的相同源文档上构建一个RAG。\n"]}, {"cell_type": "code", "execution_count": null, "id": "a4e6d215-1644-4da9-8f15-5c1c77ab35cf", "metadata": {}, "outputs": [], "source": ["index = VectorStoreIndex.from_documents(documents=documents)\n", "query_engine = index.as_query_engine()"]}, {"cell_type": "markdown", "id": "88027888-102f-40df-a581-3047b35d4f11", "metadata": {}, "source": ["有了我们定义的RAG(即`query_engine`),我们可以利用它在`rag_dataset`上进行预测(即生成对查询的响应)。\n"]}, {"cell_type": "code", "execution_count": null, "id": "d609c502-e2a7-4dfe-ba1d-5ee54fe59691", "metadata": {}, "outputs": [{"name": "stderr", "output_type": "stream", "text": ["Batch processing of predictions: 100%|████████████████████| 100/100 [00:08<00:00, 12.12it/s]\n", "Batch processing of predictions: 100%|████████████████████| 100/100 [00:08<00:00, 12.37it/s]\n", "Batch processing of predictions: 100%|██████████████████████| 76/76 [00:06<00:00, 10.93it/s]\n"]}], "source": ["prediction_dataset = await rag_dataset.amake_predictions_with(\n", " predictor=query_engine, batch_size=100, show_progress=True\n", ")"]}, {"cell_type": "markdown", "id": "db779e26-dad1-4380-a131-b906339a936e", "metadata": {}, "source": ["### 分别评估答案和上下文相关性\n", "\n", "在问答系统中,评估答案的质量是非常重要的。通常情况下,我们需要分别评估答案的准确性以及答案与上下文的相关性。这两个方面的评估可以帮助我们确定一个答案是否是正确的,以及它是否与提出的问题和上下文相关。\n", "\n", "在实际应用中,我们可以使用不同的指标和技术来分别评估答案的准确性和上下文的相关性。这些指标可能包括词向量相似度、语义匹配模型、逻辑推理等。通过综合考虑这些指标,我们可以更全面地评估答案的质量。\n", "\n", "因此,在设计问答系统时,我们需要考虑如何分别评估答案的准确性和上下文的相关性,以提供更准确和相关的答案。\n"]}, {"cell_type": "markdown", "id": "96344bde-30c7-416f-a650-58e79b5abaff", "metadata": {}, "source": ["我们首先需要定义我们的评估器(即`AnswerRelevancyEvaluator`和`ContextRelevancyEvaluator`):\n"]}, {"cell_type": "code", "execution_count": null, "id": "b8fb6c86-b88a-4344-b390-a4decfa71061", "metadata": {}, "outputs": [], "source": ["# 实例化gpt-4评估器", "from llama_index.llms.openai import OpenAI", "from llama_index.core.evaluation import (", " AnswerRelevancyEvaluator,", " ContextRelevancyEvaluator,", ")", "", "judges = {}", "", "judges[\"answer_relevancy\"] = AnswerRelevancyEvaluator(", " 
{"cell_type": "markdown", "id": "650e5262-1fa0-411f-9f09-e2557f96116a", "metadata": {}, "source": ["Now, we can use our evaluators to make evaluations by looping through all of the <example, prediction> pairs.\n"]}, {"cell_type": "code", "execution_count": null, "id": "2c5d02ce-c7f3-49b7-b397-b94b0aa3fc48", "metadata": {}, "outputs": [], "source": ["eval_tasks = []\n", "for example, prediction in zip(\n", "    rag_dataset.examples, prediction_dataset.predictions\n", "):\n", "    eval_tasks.append(\n", "        judges[\"answer_relevancy\"].aevaluate(\n", "            query=example.query,\n", "            response=prediction.response,\n", "            sleep_time_in_seconds=1.0,\n", "        )\n", "    )\n", "    eval_tasks.append(\n", "        judges[\"context_relevancy\"].aevaluate(\n", "            query=example.query,\n", "            contexts=prediction.contexts,\n", "            sleep_time_in_seconds=1.0,\n", "        )\n", "    )"]}, {"cell_type": "code", "execution_count": null, "id": "9155634d-58c7-4dac-ac16-93a8537ddef5", "metadata": {}, "outputs": [{"name": "stderr", "output_type": "stream", "text": ["100%|█████████████████████████████████████████████████████| 250/250 [00:28<00:00,  8.85it/s]\n"]}], "source": ["eval_results1 = await tqdm_asyncio.gather(*eval_tasks[:250])"]}, {"cell_type": "code", "execution_count": null, "id": "a481eb08-1922-4e1c-bed1-459398c2e69c", "metadata": {}, "outputs": [{"name": "stderr", "output_type": "stream", "text": ["100%|█████████████████████████████████████████████████████| 302/302 [00:31<00:00,  9.62it/s]\n"]}], "source": ["eval_results2 = await tqdm_asyncio.gather(*eval_tasks[250:])"]}, {"cell_type": "code", "execution_count": null, "id": "7211e3f5-c52d-48da-98b0-7b4e5b9ce002", "metadata": {}, "outputs": [], "source": ["eval_results = eval_results1 + eval_results2"]}, {"cell_type": "code", "execution_count": null, "id": "f56052e8-957f-4945-b910-24f70116ea79", "metadata": {}, "outputs": [], "source": ["evals = {\n", "    \"answer_relevancy\": eval_results[::2],\n", "    \"context_relevancy\": eval_results[1::2],\n", "}"]}, {"cell_type": "markdown", "id": "9c31a4ac-5871-4e78-856d-5055f249598f", "metadata": {}, "source": ["### Taking a look at the evaluation results\n"]}, {"cell_type": "markdown", "id": "6d037782-b7fa-43f6-ad83-b80faad3606b", "metadata": {}, "source": ["Here we use a utility function to convert the list of `EvaluationResult` objects into something more notebook friendly. This utility provides two DataFrames: a deep one that contains all of the evaluation results, and an aggregated one obtained by taking the mean of all the scores, per evaluation method.\n"]}, {"cell_type": "code", "execution_count": null, "id": "ffa305d6-bba1-45ef-87a1-c4a2415fa54b", "metadata": {}, "outputs": [], "source": ["from llama_index.core.evaluation.notebook_utils import get_eval_results_df\n", "import pandas as pd\n", "\n", "deep_dfs = {}\n", "mean_dfs = {}\n", "for metric in evals.keys():\n", "    deep_df, mean_df = get_eval_results_df(\n", "        names=[\"baseline\"] * len(evals[metric]),\n", "        results_arr=evals[metric],\n", "        metric=metric,\n", "    )\n", "    deep_dfs[metric] = deep_df\n", "    mean_dfs[metric] = mean_df"]}, {"cell_type": "code", "execution_count": null, "id": "5bf8b125-6406-41e9-afad-eb3786645576", "metadata": {}, "outputs": [{"data": {"text/html": ["
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ragbaseline
metrics
mean_answer_relevancy_score0.914855
mean_context_relevancy_score0.572273
\n", "
"], "text/plain": ["rag baseline\n", "metrics \n", "mean_answer_relevancy_score 0.914855\n", "mean_context_relevancy_score 0.572273"]}, "execution_count": null, "metadata": {}, "output_type": "execute_result"}], "source": ["mean_scores_df = pd.concat(\n", " [mdf.reset_index() for _, mdf in mean_dfs.items()],\n", " axis=0,\n", " ignore_index=True,\n", ")\n", "mean_scores_df = mean_scores_df.set_index(\"index\")\n", "mean_scores_df.index = mean_scores_df.index.set_names([\"metrics\"])\n", "mean_scores_df"]}, {"cell_type": "markdown", "id": "cc452d5a-a2f5-4fb6-b41e-ff61cc7cb810", "metadata": {}, "source": ["上述实用程序还提供了在`mean_df`中对所有评估进行平均得分。\n"]}, {"cell_type": "markdown", "id": "52f8b67c-8501-4ad2-ba71-d20ed24dc041", "metadata": {}, "source": ["我们可以通过在`deep_df`上调用`value_counts()`来查看分数的原始分布。\n"]}, {"cell_type": "code", "execution_count": null, "id": "72768281-4fd2-480c-a1f2-5432606b0dfb", "metadata": {}, "outputs": [{"data": {"text/plain": ["scores\n", "1.0 250\n", "0.0 21\n", "0.5 5\n", "Name: count, dtype: int64"]}, "execution_count": null, "metadata": {}, "output_type": "execute_result"}], "source": ["deep_dfs[\"answer_relevancy\"][\"scores\"].value_counts()"]}, {"cell_type": "code", "execution_count": null, "id": "43110a59-feba-42e7-aa77-d765a5844642", "metadata": {}, "outputs": [{"data": {"text/plain": ["scores\n", "1.000 89\n", "0.000 70\n", "0.750 49\n", "0.250 23\n", "0.625 14\n", "0.500 11\n", "0.375 10\n", "0.875 9\n", "Name: count, dtype: int64"]}, "execution_count": null, "metadata": {}, "output_type": "execute_result"}], "source": ["deep_dfs[\"context_relevancy\"][\"scores\"].value_counts()"]}, {"cell_type": "markdown", "id": "4324ddbd-0b59-430a-89d5-c706bd55a184", "metadata": {}, "source": ["似乎大部分情况下,默认的RAG在生成与查询相关的答案方面表现相当不错。通过查看任何`deep_df`的记录,可以更仔细地了解情况。\n"]}, {"cell_type": "code", "execution_count": null, "id": "a617540f-9eb7-47d7-8b96-7618236f1c69", "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
 ragqueryanswercontextsscoresfeedbacks
0baselineWhat are the potential risks associated with large language models (LLMs) according to the context information?None['Evaluating Large Language Models: A\\nComprehensive Survey\\nZishan Guo∗, Renren Jin∗, Chuang Liu∗, Yufei Huang, Dan Shi, Supryadi\\nLinhao Yu, Yan Liu, Jiaxuan Li, Bojian Xiong, Deyi Xiong†\\nTianjin University\\n{guozishan, rrjin, liuc_09, yuki_731, shidan, supryadi}@tju.edu.cn\\n{linhaoyu, yan_liu, jiaxuanlee, xbj1355, dyxiong}@tju.edu.cn\\nAbstract\\nLarge language models (LLMs) have demonstrated remarkable capabilities\\nacross a broad spectrum of tasks. They have attracted significant attention\\nand been deployed in numerous downstream applications. Nevertheless, akin\\nto a double-edged sword, LLMs also present potential risks. They could\\nsuffer from private data leaks or yield inappropriate, harmful, or misleading\\ncontent. Additionally, the rapid progress of LLMs raises concerns about the\\npotential emergence of superintelligent systems without adequate safeguards.\\nTo effectively capitalize on LLM capacities as well as ensure their safe and\\nbeneficial development, it is critical to conduct a rigorous and comprehensive\\nevaluation of LLMs.\\nThis survey endeavors to offer a panoramic perspective on the evaluation\\nof LLMs. We categorize the evaluation of LLMs into three major groups:\\nknowledgeandcapabilityevaluation, alignmentevaluationandsafetyevaluation.\\nIn addition to the comprehensive review on the evaluation methodologies and\\nbenchmarks on these three aspects, we collate a compendium of evaluations\\npertaining to LLMs’ performance in specialized domains, and discuss the\\nconstruction of comprehensive evaluation platforms that cover LLM evaluations\\non capabilities, alignment, safety, and applicability.\\nWe hope that this comprehensive overview will stimulate further research\\ninterests in the evaluation of LLMs, with the ultimate goal of making evaluation\\nserve as a cornerstone in guiding the responsible development of LLMs. We\\nenvision that this will channel their evolution into a direction that maximizes\\nsocietal benefit while minimizing potential risks. A curated list of related\\npapers has been publicly available at a GitHub repository.1\\n∗Equal contribution\\n†Corresponding author.\\n1https://github.com/tjunlp-lab/Awesome-LLMs-Evaluation-Papers\\n1arXiv:2310.19736v3 [cs.CL] 25 Nov 2023', 'criteria. Multilingual Holistic Bias (Costa-jussà et al., 2023) extends the HolisticBias dataset\\nto 50 languages, achieving the largest scale of English template-based text expansion.\\nWhether using automatic or manual evaluations, both approaches inevitably carry human\\nsubjectivity and cannot establish a comprehensive and fair evaluation standard. Unqover\\n(Li et al., 2020) is the first to transform the task of evaluating biases generated by models\\ninto a multiple-choice question, covering gender, nationality, race, and religion categories.\\nThey provide models with ambiguous and disambiguous contexts and ask them to choose\\nbetween options with and without stereotypes, evaluating both PLMs and models fine-tuned\\non multiple-choice question answering datasets. BBQ (Parrish et al., 2022) adopts this\\napproach but extends the types of biases to nine categories. 
All sentence templates are\\nmanually created, and in addition to the two contrasting group answers, the model is also\\nprovided with correct answers like “I don’t know” and “I’m not sure”, and a statistical bias\\nscore metric is proposed to evaluate multiple question answering models. CBBQ (Huang\\n& Xiong, 2023) extends BBQ to Chinese. Based on Chinese socio-cultural factors, CBBQ\\nadds four categories: disease, educational qualification, household registration, and region.\\nThey manually rewrite ambiguous text templates and use GPT-4 to generate disambiguous\\ntemplates, greatly increasing the dataset’s diversity and extensibility. Additionally, they\\nimprove the experimental setup for LLMs and evaluate existing Chinese open-source LLMs,\\nfinding that current Chinese LLMs not only have higher bias scores but also exhibit behavioral\\ninconsistencies, revealing a significant gap compared to GPT-3.5-Turbo.\\nIn addition to these aforementioned evaluation methods, we could also use advanced LLMs for\\nscoring bias, such as GPT-4, or employ models that perform best in training bias detection\\ntasks to detect the level of bias in answers. Such models can be used not only in the evaluation\\nphase but also for identifying biases in data for pre-training LLMs, facilitating debiasing in\\ntraining data.\\nAs the development of multilingual LLMs and domain-specific LLMs progresses, studies on\\nthe fairness of these models become increasingly important. Zhao et al. (2020) create datasets\\nto study gender bias in multilingual embeddings and cross-lingual tasks, revealing gender\\nbias from both internal and external perspectives. Moreover, FairLex (Chalkidis et al., 2022)\\nproposes a multilingual legal dataset as fairness benchmark, covering four judicial jurisdictions\\n(European Commission, United States, Swiss Federation, and People’s Republic of China), five\\nlanguages (English, German, French, Italian, and Chinese), and various sensitive attributes\\n(gender, age, region, etc.). As LLMs have been applied and deployed in the finance and legal\\nsectors, these studies deserve high attention.\\n4.3 Toxicity\\nLLMs are usually trained on a huge amount of online data which may contain toxic behavior\\nand unsafe content. These include hate speech, offensive/abusive language, pornographic\\ncontent, etc. It is hence very desirable to evaluate how well trained LLMs deal with toxicity.\\nConsidering the proficiency of LLMs in understanding and generating sentences, we categorize\\nthe evaluation of toxicity into two tasks: toxicity identification and classification evaluation,\\nand the evaluation of toxicity in generated sentences.\\n29']1.0000001. The retrieved context does match the subject matter of the user's query. It discusses the potential risks associated with large language models (LLMs), including private data leaks, inappropriate or harmful content, and the emergence of superintelligent systems without adequate safeguards. It also discusses the potential for bias in LLMs, and the risk of toxicity in the content generated by LLMs. Therefore, it is relevant to the user's query about the potential risks associated with LLMs. (2/2)\n", "2. The retrieved context can be used to provide a full answer to the user's query. It provides a comprehensive overview of the potential risks associated with LLMs, including data privacy, inappropriate content, superintelligence, bias, and toxicity. It also discusses the importance of evaluating these risks and the methodologies for doing so. 
Therefore, it provides a complete answer to the user's query. (2/2)\n", "\n", "[RESULT] 4/4
1baselineHow does the survey categorize the evaluation of LLMs and what are the three major groups mentioned?None['Question \\nAnsweringTool \\nLearning\\nReasoning\\nKnowledge \\nCompletionEthics \\nand \\nMorality Bias\\nToxicity\\nTruthfulnessRobustnessEvaluation\\nRisk \\nEvaluation\\nBiology and \\nMedicine\\nEducationLegislationComputer \\nScienceFinance\\nBenchmarks for\\nHolistic Evaluation\\nBenchmarks \\nforKnowledge and Reasoning\\nBenchmarks \\nforNLU and NLGKnowledge and Capability\\nLarge Language \\nModel EvaluationAlignment Evaluation\\nSafety\\nSpecialized LLMs\\nEvaluation Organization\\n…Figure 1: Our proposed taxonomy of major categories and sub-categories of LLM evaluation.\\nOur survey expands the scope to synthesize findings from both capability and alignment\\nevaluations of LLMs. By complementing these previous surveys through an integrated\\nperspective and expanded scope, our work provides a comprehensive overview of the current\\nstate of LLM evaluation research. The distinctions between our survey and these two related\\nworks further highlight the novel contributions of our study to the literature.\\n2 Taxonomy and Roadmap\\nThe primary objective of this survey is to meticulously categorize the evaluation of LLMs,\\nfurnishing readers with a well-structured taxonomy framework. Through this framework,\\nreaders can gain a nuanced understanding of LLMs’ performance and the attendant challenges\\nacross diverse and pivotal domains.\\nNumerous studies posit that the bedrock of LLMs’ capabilities resides in knowledge and\\nreasoning, serving as the underpinning for their exceptional performance across a myriad of\\ntasks. Nonetheless, the effective application of these capabilities necessitates a meticulous\\nexamination of alignment concerns to ensure that the model’s outputs remain consistent with\\nuser expectations. Moreover, the vulnerability of LLMs to malicious exploits or inadvertent\\nmisuse underscores the imperative nature of safety considerations. Once alignment and safety\\nconcerns have been addressed, LLMs can be judiciously deployed within specialized domains,\\ncatalyzing task automation and facilitating intelligent decision-making. Thus, our overarching\\n6', 'This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58']0.3750001. The retrieved context does match the subject matter of the user's query. The user's query is about how a survey categorizes the evaluation of Large Language Models (LLMs) and the three major groups mentioned. 
The context provided discusses the categorization of LLMs evaluation in the survey, mentioning aspects like knowledge and reasoning, alignment evaluation, safety evaluation, and potential applications across diverse domains. \n", "\n", "2. However, the context does not provide a full answer to the user's query. While it does discuss the categorization of LLMs evaluation, it does not clearly mention the three major groups. The context mentions several aspects of LLMs evaluation, but it is not clear which of these are considered the three major groups. \n", "\n", "[RESULT] 1.5
\n"], "text/plain": [""]}, "metadata": {}, "output_type": "display_data"}], "source": ["displayify_df(deep_dfs[\"context_relevancy\"].head(2))"]}, {"cell_type": "markdown", "id": "5d2a0d5f-253d-433b-85b5-abbd7d95bbe5", "metadata": {}, "source": ["当然,您可以根据需要应用任何筛选器。例如,如果您想查看产生不完美结果的示例。\n"]}, {"cell_type": "code", "execution_count": null, "id": "b8cfaa1e-720b-4753-b8cc-4c76b6f21e65", "metadata": {}, "outputs": [{"data": {"text/html": ["\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
 ragqueryanswercontextsscoresfeedbacks
1baselineHow does the survey categorize the evaluation of LLMs and what are the three major groups mentioned?None['Question \\nAnsweringTool \\nLearning\\nReasoning\\nKnowledge \\nCompletionEthics \\nand \\nMorality Bias\\nToxicity\\nTruthfulnessRobustnessEvaluation\\nRisk \\nEvaluation\\nBiology and \\nMedicine\\nEducationLegislationComputer \\nScienceFinance\\nBenchmarks for\\nHolistic Evaluation\\nBenchmarks \\nforKnowledge and Reasoning\\nBenchmarks \\nforNLU and NLGKnowledge and Capability\\nLarge Language \\nModel EvaluationAlignment Evaluation\\nSafety\\nSpecialized LLMs\\nEvaluation Organization\\n…Figure 1: Our proposed taxonomy of major categories and sub-categories of LLM evaluation.\\nOur survey expands the scope to synthesize findings from both capability and alignment\\nevaluations of LLMs. By complementing these previous surveys through an integrated\\nperspective and expanded scope, our work provides a comprehensive overview of the current\\nstate of LLM evaluation research. The distinctions between our survey and these two related\\nworks further highlight the novel contributions of our study to the literature.\\n2 Taxonomy and Roadmap\\nThe primary objective of this survey is to meticulously categorize the evaluation of LLMs,\\nfurnishing readers with a well-structured taxonomy framework. Through this framework,\\nreaders can gain a nuanced understanding of LLMs’ performance and the attendant challenges\\nacross diverse and pivotal domains.\\nNumerous studies posit that the bedrock of LLMs’ capabilities resides in knowledge and\\nreasoning, serving as the underpinning for their exceptional performance across a myriad of\\ntasks. Nonetheless, the effective application of these capabilities necessitates a meticulous\\nexamination of alignment concerns to ensure that the model’s outputs remain consistent with\\nuser expectations. Moreover, the vulnerability of LLMs to malicious exploits or inadvertent\\nmisuse underscores the imperative nature of safety considerations. Once alignment and safety\\nconcerns have been addressed, LLMs can be judiciously deployed within specialized domains,\\ncatalyzing task automation and facilitating intelligent decision-making. Thus, our overarching\\n6', 'This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58']0.3750001. The retrieved context does match the subject matter of the user's query. The user's query is about how a survey categorizes the evaluation of Large Language Models (LLMs) and the three major groups mentioned. 
The context provided discusses the categorization of LLMs evaluation in the survey, mentioning aspects like knowledge and reasoning, alignment evaluation, safety evaluation, and potential applications across diverse domains. \n", "\n", "2. However, the context does not provide a full answer to the user's query. While it does discuss the categorization of LLMs evaluation, it does not clearly mention the three major groups. The context mentions several aspects of LLMs evaluation, but it is not clear which of these are considered the three major groups. \n", "\n", "[RESULT] 1.5
9baselineHow does this survey on LLM evaluation differ from previous reviews conducted by Chang et al. (2023) and Liu et al. (2023i)?None['This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58', '(2021)\\nBEGIN (Dziri et al., 2022b)\\nConsisTest (Lotfi et al., 2022)\\nSummarizationXSumFaith (Maynez et al., 2020)\\nFactCC (Kryscinski et al., 2020)\\nSummEval (Fabbri et al., 2021)\\nFRANK (Pagnoni et al., 2021)\\nSummaC (Laban et al., 2022)\\nWang et al. (2020)\\nGoyal & Durrett (2021)\\nCao et al. (2022)\\nCLIFF (Cao & Wang, 2021)\\nAggreFact (Tang et al., 2023a)\\nPolyTope (Huang et al., 2020)\\nMethodsNLI-based MethodsWelleck et al. (2019)\\nLotfi et al. (2022)\\nFalke et al. (2019)\\nLaban et al. (2022)\\nMaynez et al. (2020)\\nAharoni et al. (2022)\\nUtama et al. (2022)\\nRoit et al. (2023)\\nQAQG-based MethodsFEQA (Durmus et al., 2020)\\nQAGS (Wang et al., 2020)\\nQuestEval (Scialom et al., 2021)\\nQAFactEval (Fabbri et al., 2022)\\nQ2 (Honovich et al., 2021)\\nFaithDial (Dziri et al., 2022a)\\nDeng et al. (2023b)\\nLLMs-based MethodsFIB (Tam et al., 2023)\\nFacTool (Chern et al., 2023)\\nFActScore (Min et al., 2023)\\nSelfCheckGPT (Manakul et al., 2023)\\nSAPLMA (Azaria & Mitchell, 2023)\\nLin et al. (2022b)\\nKadavath et al. (2022)\\nFigure 3: Overview of alignment evaluations.\\n4 Alignment Evaluation\\nAlthough instruction-tuned LLMs exhibit impressive capabilities, these aligned LLMs are\\nstill suffering from annotators’ biases, catering to humans, hallucination, etc. To provide a\\ncomprehensive view of LLMs’ alignment evaluation, in this section, we discuss those of ethics,\\nbias, toxicity, and truthfulness, as illustrated in Figure 3.\\n21']0.0000001. The retrieved context does not match the subject matter of the user's query. The user's query is asking for a comparison between the current survey on LLM evaluation and previous reviews conducted by Chang et al. (2023) and Liu et al. (2023i). However, the context does not mention these previous reviews at all, making it impossible to draw any comparisons. Therefore, the context does not match the subject matter of the user's query. (0/2)\n", "2. The retrieved context cannot be used exclusively to provide a full answer to the user's query. As mentioned above, the context does not mention the previous reviews by Chang et al. and Liu et al., which are the main focus of the user's query. Therefore, it cannot provide a full answer to the user's query. (0/2)\n", "\n", "[RESULT] 0.0
11baselineAccording to the document, what are the two main concerns that need to be addressed before deploying LLMs within specialized domains?None['This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58', 'objective is to delve into evaluations encompassing these five fundamental domains and their\\nrespective subdomains, as illustrated in Figure 1.\\nSection 3, titled “Knowledge and Capability Evaluation”, centers on the comprehensive\\nassessment of the fundamental knowledge and reasoning capabilities exhibited by LLMs. This\\nsection is meticulously divided into four distinct subsections: Question-Answering, Knowledge\\nCompletion, Reasoning, and Tool Learning. Question-answering and knowledge completion\\ntasks stand as quintessential assessments for gauging the practical application of knowledge,\\nwhile the various reasoning tasks serve as a litmus test for probing the meta-reasoning and\\nintricate reasoning competencies of LLMs. Furthermore, the recently emphasized special\\nability of tool learning is spotlighted, showcasing its significance in empowering models to\\nadeptly handle and generate domain-specific content.\\nSection 4, designated as “Alignment Evaluation”, hones in on the scrutiny of LLMs’ perfor-\\nmance across critical dimensions, encompassing ethical considerations, moral implications,\\nbias detection, toxicity assessment, and truthfulness evaluation. The pivotal aim here is to\\nscrutinize and mitigate the potential risks that may emerge in the realms of ethics, bias,\\nand toxicity, as LLMs can inadvertently generate discriminatory, biased, or offensive content.\\nFurthermore, this section acknowledges the phenomenon of hallucinations within LLMs, which\\ncan lead to the inadvertent dissemination of false information. As such, an indispensable\\nfacet of this evaluation involves the rigorous assessment of truthfulness, underscoring its\\nsignificance as an essential aspect to evaluate and rectify.\\nSection 5, titled “Safety Evaluation”, embarks on a comprehensive exploration of two funda-\\nmental dimensions: the robustness of LLMs and their evaluation in the context of Artificial\\nGeneral Intelligence (AGI). LLMs are routinely deployed in real-world scenarios, where their\\nrobustness becomes paramount. Robustness equips them to navigate disturbances stemming\\nfrom users and the environment, while also shielding against malicious attacks and deception,\\nthereby ensuring consistent high-level performance. 
Furthermore, as LLMs inexorably ad-\\nvance toward human-level capabilities, the evaluation expands its purview to encompass more\\nprofound security concerns. These include but are not limited to power-seeking behaviors\\nand the development of situational awareness, factors that necessitate meticulous evaluation\\nto safeguard against unforeseen challenges.\\nSection 6, titled “Specialized LLMs Evaluation”, serves as an extension of LLMs evaluation\\nparadigm into diverse specialized domains. Within this section, we turn our attention to the\\nevaluation of LLMs specifically tailored for application in distinct domains. Our selection\\nencompasses currently prominent specialized LLMs spanning fields such as biology, education,\\nlaw, computer science, and finance. The objective here is to systematically assess their\\naptitude and limitations when confronted with domain-specific challenges and intricacies.\\nSection 7, denominated “Evaluation Organization”, serves as a comprehensive introduction\\nto the prevalent benchmarks and methodologies employed in the evaluation of LLMs. In light\\nof the rapid proliferation of LLMs, users are confronted with the challenge of identifying the\\nmost apt models to meet their specific requirements while minimizing the scope of evaluations.\\nIn this context, we present an overview of well-established and widely recognized benchmark\\n7']0.750000The retrieved context does match the subject matter of the user's query. It discusses the concerns that need to be addressed before deploying LLMs within specialized domains. The two main concerns mentioned are the alignment evaluation, which includes ethical considerations, moral implications, bias detection, toxicity assessment, and truthfulness evaluation, and the safety evaluation, which includes the robustness of LLMs and their evaluation in the context of Artificial General Intelligence (AGI). \n", "\n", "However, the context does not provide a full answer to the user's query. While it does mention the two main concerns, it does not go into detail about why these concerns need to be addressed before deploying LLMs within specialized domains. The context provides a general overview of the concerns, but it does not specifically tie these concerns to the deployment of LLMs within specialized domains. \n", "\n", "[RESULT] 3.0
12baselineIn the \"Alignment Evaluation\" section, what are some of the dimensions that are assessed to mitigate potential risks associated with LLMs?None['This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58', 'Question \\nAnsweringTool \\nLearning\\nReasoning\\nKnowledge \\nCompletionEthics \\nand \\nMorality Bias\\nToxicity\\nTruthfulnessRobustnessEvaluation\\nRisk \\nEvaluation\\nBiology and \\nMedicine\\nEducationLegislationComputer \\nScienceFinance\\nBenchmarks for\\nHolistic Evaluation\\nBenchmarks \\nforKnowledge and Reasoning\\nBenchmarks \\nforNLU and NLGKnowledge and Capability\\nLarge Language \\nModel EvaluationAlignment Evaluation\\nSafety\\nSpecialized LLMs\\nEvaluation Organization\\n…Figure 1: Our proposed taxonomy of major categories and sub-categories of LLM evaluation.\\nOur survey expands the scope to synthesize findings from both capability and alignment\\nevaluations of LLMs. By complementing these previous surveys through an integrated\\nperspective and expanded scope, our work provides a comprehensive overview of the current\\nstate of LLM evaluation research. The distinctions between our survey and these two related\\nworks further highlight the novel contributions of our study to the literature.\\n2 Taxonomy and Roadmap\\nThe primary objective of this survey is to meticulously categorize the evaluation of LLMs,\\nfurnishing readers with a well-structured taxonomy framework. Through this framework,\\nreaders can gain a nuanced understanding of LLMs’ performance and the attendant challenges\\nacross diverse and pivotal domains.\\nNumerous studies posit that the bedrock of LLMs’ capabilities resides in knowledge and\\nreasoning, serving as the underpinning for their exceptional performance across a myriad of\\ntasks. Nonetheless, the effective application of these capabilities necessitates a meticulous\\nexamination of alignment concerns to ensure that the model’s outputs remain consistent with\\nuser expectations. Moreover, the vulnerability of LLMs to malicious exploits or inadvertent\\nmisuse underscores the imperative nature of safety considerations. Once alignment and safety\\nconcerns have been addressed, LLMs can be judiciously deployed within specialized domains,\\ncatalyzing task automation and facilitating intelligent decision-making. Thus, our overarching\\n6']0.7500001. The retrieved context does match the subject matter of the user's query. 
The user's query is about the dimensions assessed in the \"Alignment Evaluation\" section to mitigate potential risks associated with LLMs (Large Language Models). The context talks about the evaluation of LLMs, including alignment evaluation and safety evaluation. It mentions aspects like knowledge and reasoning, ethical concerns, biases, toxicity, and truthfulness. These are some of the dimensions that could be assessed to mitigate potential risks associated with LLMs. So, the context is relevant to the query. (2/2)\n", "\n", "2. However, the retrieved context does not provide a full answer to the user's query. While it mentions some dimensions that could be assessed in alignment evaluation (like knowledge and reasoning, ethical concerns, biases, toxicity, and truthfulness), it does not explicitly state that these are the dimensions assessed to mitigate potential risks associated with LLMs. The context does not provide a comprehensive list of dimensions or explain how these dimensions help mitigate risks. Therefore, the context cannot be used exclusively to provide a full answer to the user's query. (1/2)\n", "\n", "[RESULT] 3.0
14baselineWhat is the purpose of evaluating the knowledge and capability of LLMs?None['objective is to delve into evaluations encompassing these five fundamental domains and their\\nrespective subdomains, as illustrated in Figure 1.\\nSection 3, titled “Knowledge and Capability Evaluation”, centers on the comprehensive\\nassessment of the fundamental knowledge and reasoning capabilities exhibited by LLMs. This\\nsection is meticulously divided into four distinct subsections: Question-Answering, Knowledge\\nCompletion, Reasoning, and Tool Learning. Question-answering and knowledge completion\\ntasks stand as quintessential assessments for gauging the practical application of knowledge,\\nwhile the various reasoning tasks serve as a litmus test for probing the meta-reasoning and\\nintricate reasoning competencies of LLMs. Furthermore, the recently emphasized special\\nability of tool learning is spotlighted, showcasing its significance in empowering models to\\nadeptly handle and generate domain-specific content.\\nSection 4, designated as “Alignment Evaluation”, hones in on the scrutiny of LLMs’ perfor-\\nmance across critical dimensions, encompassing ethical considerations, moral implications,\\nbias detection, toxicity assessment, and truthfulness evaluation. The pivotal aim here is to\\nscrutinize and mitigate the potential risks that may emerge in the realms of ethics, bias,\\nand toxicity, as LLMs can inadvertently generate discriminatory, biased, or offensive content.\\nFurthermore, this section acknowledges the phenomenon of hallucinations within LLMs, which\\ncan lead to the inadvertent dissemination of false information. As such, an indispensable\\nfacet of this evaluation involves the rigorous assessment of truthfulness, underscoring its\\nsignificance as an essential aspect to evaluate and rectify.\\nSection 5, titled “Safety Evaluation”, embarks on a comprehensive exploration of two funda-\\nmental dimensions: the robustness of LLMs and their evaluation in the context of Artificial\\nGeneral Intelligence (AGI). LLMs are routinely deployed in real-world scenarios, where their\\nrobustness becomes paramount. Robustness equips them to navigate disturbances stemming\\nfrom users and the environment, while also shielding against malicious attacks and deception,\\nthereby ensuring consistent high-level performance. Furthermore, as LLMs inexorably ad-\\nvance toward human-level capabilities, the evaluation expands its purview to encompass more\\nprofound security concerns. These include but are not limited to power-seeking behaviors\\nand the development of situational awareness, factors that necessitate meticulous evaluation\\nto safeguard against unforeseen challenges.\\nSection 6, titled “Specialized LLMs Evaluation”, serves as an extension of LLMs evaluation\\nparadigm into diverse specialized domains. Within this section, we turn our attention to the\\nevaluation of LLMs specifically tailored for application in distinct domains. Our selection\\nencompasses currently prominent specialized LLMs spanning fields such as biology, education,\\nlaw, computer science, and finance. The objective here is to systematically assess their\\naptitude and limitations when confronted with domain-specific challenges and intricacies.\\nSection 7, denominated “Evaluation Organization”, serves as a comprehensive introduction\\nto the prevalent benchmarks and methodologies employed in the evaluation of LLMs. 
In light\\nof the rapid proliferation of LLMs, users are confronted with the challenge of identifying the\\nmost apt models to meet their specific requirements while minimizing the scope of evaluations.\\nIn this context, we present an overview of well-established and widely recognized benchmark\\n7', 'evaluations. This serves the purpose of aiding users in making judicious and well-informed\\ndecisions when selecting an appropriate LLM for their particular needs.\\nPleasebeawarethatourtaxonomyframeworkdoesnotpurporttocomprehensivelyencompass\\nthe entirety of the evaluation landscape. In essence, our aim is to address the following\\nfundamental questions:\\n•What are the capabilities of LLMs?\\n•What factors must be taken into account when deploying LLMs?\\n•In which domains can LLMs find practical applications?\\n•How do LLMs perform in these diverse domains?\\nWe will now embark on an in-depth exploration of each category within the LLM evaluation\\ntaxonomy, sequentially addressing capabilities, concerns, applications, and performance.\\n3 Knowledge and Capability Evaluation\\nEvaluating the knowledge and capability of LLMs has become an important research area as\\nthese models grow in scale and capability. As LLMs are deployed in more applications, it is\\ncrucial to rigorously assess their strengths and limitations across a diverse range of tasks and\\ndatasets. In this section, we aim to offer a comprehensive overview of the evaluation methods\\nand benchmarks pertinent to LLMs, spanning various capabilities such as question answering,\\nknowledge completion, reasoning, and tool use. Our objective is to provide an exhaustive\\nsynthesis of the current advancements in the systematic evaluation and benchmarking of\\nLLMs’ knowledge and capabilities, as illustrated in Figure 2.\\n3.1 Question Answering\\nQuestionansweringisaveryimportantmeansforLLMsevaluation, andthequestionanswering\\nability of LLMs directly determines whether the final output can meet the expectation. At\\nthe same time, however, since any form of LLMs evaluation can be regarded as question\\nanswering or transfer to question answering form, there are rare datasets and works that\\npurely evaluate question answering ability of LLMs. Most of the datasets are curated to\\nevaluate other capabilities of LLMs.\\nTherefore, we believe that the datasets simply used to evaluate the question answering ability\\nof LLMs must be from a wide range of sources, preferably covering all fields rather than\\naiming at some fields, and the questions do not need to be very professional but general.\\nAccording to the above criteria for datasets focusing on question answering capability, we can\\nfind that many datasets are qualified, e.g., SQuAD (Rajpurkar et al., 2016), NarrativeQA\\n(Kociský et al., 2018), HotpotQA (Yang et al., 2018), CoQA (Reddy et al., 2019). Although\\nthese datasets predate LLMs, they can still be used to evaluate the question answering ability\\nof LLMs. Kwiatkowski et al. (2019) present the Natural Questions corpus. The questions\\n8']0.750000The retrieved context is relevant to the user's query as it discusses the purpose of evaluating the knowledge and capability of LLMs (Large Language Models). It explains that the evaluation is important to assess their strengths and limitations across a diverse range of tasks and datasets. The context also mentions the different aspects of LLMs that are evaluated, such as question answering, knowledge completion, reasoning, and tool use. 
\n", "\n", "However, the context does not fully answer the user's query. While it does provide a general idea of why LLMs are evaluated, it does not delve into the specific purpose of these evaluations. For instance, it does not explain how these evaluations can help improve the performance of LLMs, or how they can be used to identify areas where LLMs may need further development or training.\n", "\n", "[RESULT] 3.0
\n"], "text/plain": [""]}, "metadata": {}, "output_type": "display_data"}], "source": ["cond = deep_dfs[\"context_relevancy\"][\"scores\"] < 1\n", "displayify_df(deep_dfs[\"context_relevancy\"][cond].head(5))"]}], "metadata": {"kernelspec": {"display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3"}}, "nbformat": 4, "nbformat_minor": 5}