{ "cells": [ { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "ecb7ce06-97cb-432c-8fa7-f4cf709d50ff", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "# MLflow 3 and DSPy — Develop and evaluate a *clickbait agent*.\n", "\n", "This notebook demonstrates how to implement and evaluate a clickbait detector and rewriter agent using DSPy and MLflow 3's GenAI tools. It was developed as part of the talk \"What Comes After Coding: Evaluating Agentic Behaviour\" presented at the Madrid Databricks User Group Meetup.\n", "\n", "The focus is on practical agent development and rigorous evaluation. Core topics include agent development with DSPy, MLflow instrumentation and the use of custom scorers and built-in judges to assess agentic behavior directly from execution traces." ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "a6aa73c6-4371-4c17-92bf-edf642820c56", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "## Environment Setup and Configuration\n", "\n", "Ensure that the configuration settings in this section are reviewed and updated before proceeding.\n", "\n", "These values determine where artifacts such as models and volumes will be stored, tokens to allow interaction with external tools, and addresses to volumes with datasets." ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "db253b14-0030-46bf-b7ca-626678ec5ccd", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "%pip install -U -qqqq dspy mlflow[databricks]>=3.1.0\n", "%restart_python" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "500e555e-103e-414f-8387-a088f7185315", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Global variables\n", "\n", "This notebook uses Unity Catalog to register models and other resources, and relies on a model serving endpoint for inference. You need to specify:\n", "\n", "- `UC_CATALOG`: the Unity Catalog catalog where resources will be stored.\n", "- `UC_SCHEMA`: the schema within the catalog to use.\n", "- `model_serving_endpoint`: the name of the Databricks model serving endpoint that will handle inference requests. You can use a default one or [create your own](https://docs.databricks.com/aws/en/machine-learning/model-serving/create-manage-serving-endpoints)." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "9086a8a6-37c8-4280-b231-e3f44c788dee", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "UC_CATALOG = \"\"\n", "UC_SCHEMA = \"\"\n", "\n", "AGENT_PARAMETERS = {\n", " \"model_serving_endpoint\": \"gemini-2-0-flash-lite\"\n", "}" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "66ac904e-fd4e-4420-abcf-8daf4e19c186", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "#### Jina AI token\n", "\n", "The agent will have access to an external tool to fetch website contents from urls. This tool is implemented using *Jina AI Reader API*, a service for extracting content from web pages and converting it into clean LLM-ready markdown text.\n", "\n", "A free Jina AI API key can be obtained by following [this tutorial](https://www.youtube.com/watch?v=SLv6tSEKYOg).\n", "\n", "The token is managed using [databricks secrets](https://docs.databricks.com/aws/en/security/secrets/). For testing purposes you could replace this definition by inlining the token string." ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "f16acaed-53ff-4f7b-aec1-6fa08c97505d", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "import base64\n", "from databricks.sdk import WorkspaceClient\n", "\n", "JINA_AI_TOKEN = base64.b64decode(\n", " WorkspaceClient().secrets.get_secret(\"credentials\", \"jinaai\").value\n", ").decode()\n", "\n", "# JINA_AI_TOKEN = \"YOUR_JINA_AI_TOKEN\" # Replace with your Jina AI token" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "f220c99a-4f53-495c-bb9b-11141e037499", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Clickbait dataset" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "d5ed767f-4cb4-4231-8920-c352f165c67c", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "A corpus of clickbait and non-clickbait title examples will prove useful to evaluate our agent implementation. You can download a dataset with headlines and binary (clickbait or not) classifications from this [Kaggle dataset](https://www.kaggle.com/datasets/amananandrai/clickbait-dataset/data).\n", "\n", "The `CORPUS_FILE` variable stores the path to the dataset in csv format. In this example, the dataset is stored in a unity catalog volume." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "f7e4a12f-ff0a-43ad-b39e-e0aef7fa93c8", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "CORPUS_FILE = \"\"" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "99df8387-5425-43ea-ab85-954de7a87b16", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "assert UC_CATALOG != \"\", \"Please set UC_CATALOG to your catalog name\"\n", "assert UC_SCHEMA != \"\", \"Please set UC_SCHEMA to your schema name\"\n", "assert \"model_serving_endpoint\" in AGENT_PARAMETERS and AGENT_PARAMETERS[\"model_serving_endpoint\"] != \"\", \"Please set AGENT_PARAMETERS['model_serving_endpoint'] to your model serving endpoint name\"\n", "assert JINA_AI_TOKEN != \"\", \"Please set JINA_AI_TOKEN to your Jina AI token\"\n", "assert CORPUS_FILE != \"\", \"Please set CORPUS_FILE to the path of your corpus file\"" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "762d05d9-226b-43b5-8995-bbdfd6db9f60", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "## Implement the agent" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "2c847800-fbf8-44fd-9c4c-abfdfab7b8fe", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Tool definitions\n", "\n", "The agent will have available a tool for fetching website content:\n", "\n", "- `fetch_url_title_and_content`: fetches title and content from the given url, using *Jina AI reader API*." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "08e655ca-5e0e-4cc6-8674-cf8b4b2ff186", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "import requests\n", "import json\n", "from urllib.parse import urlparse\n", "\n", "def fetch_url_title_and_content(url: str) -> dict[str, str]:\n", " \"\"\"\n", " Gets title and content from an url\n", "\n", " Arguments:\n", " url: The url to fetch data from\n", " \n", " Returns:\n", " dict: Dictionary with title and content\n", " \"\"\"\n", " \n", " parsed = urlparse(url)\n", " if parsed.scheme not in (\"http\", \"https\"):\n", " url = f\"https://{url}\"\n", " parsed = urlparse(url)\n", " if len(parsed.netloc) == 0:\n", " raise ValueError(f\"Invalid URL: {url}\")\n", " \n", " response = requests.get(f\"https://r.jina.ai/{url}\", headers={\n", " \"Accept\": \"application/json\",\n", " \"Authorization\": f\"Bearer {JINA_AI_TOKEN}\",\n", " \"X-Md-Heading-Style\": \"setext\",\n", " \"X-Base\": \"final\",\n", " \"X-Retain-Images\": \"none\",\n", " \"X-Md-Link-Style\": \"discarded\",\n", " \"X-Timeout\": \"10\",\n", " }).json()\n", "\n", " return {\n", " \"title\": response[\"data\"].get(\"title\", \"\"),\n", " \"content\": response[\"data\"].get(\"content\", \"\"),\n", " }" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "1ce34236-f353-47a0-bbc2-2354c8b2aa62", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### DSPy agent\n", "\n", "DSPy agents are implemented by defining signatures a modules.\n", "\n", "- `Signature`: In DSPy, signatures declaratively define the input and output behavior of your language model tasks. They abstract away the complexities of prompt engineering, allowing DSPy's compiler to optimize how your LM interacts with your data, leading to more robust and performant applications.\n", "- `Module`: DSPy modules are the fundamental building blocks for constructing LM-powered programs. They encapsulate specific prompting techniques (like `ChainOfThought`) and can be easily composed. 
You can leverage predefined modules for common patterns or define your own custom modules for more specialized tasks, enabling flexible and modular agent design.\n", "\n", "Activating logging (*traces*) executions of our agent with MLflow 3, is as simple as executing `mlflow.dspy.autolog()`.\n" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "7bf2da9b-6f9c-487a-a743-5749d5392a16", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "import dspy\n", "from typing import Any, Generator, Optional\n", "from dataclasses import dataclass\n", "from pprint import pprint\n", "import mlflow\n", "from mlflow.entities import SpanType\n", "from mlflow.pyfunc.model import ChatAgent\n", "import uuid\n", "from mlflow.types.agent import (\n", " ChatAgentMessage,\n", " ChatAgentResponse,\n", " ChatContext,\n", ")\n", "\n", "mlflow.dspy.autolog()\n", "\n", "dspy.configure(lm=dspy.LM(\n", " model=f\"databricks/{AGENT_PARAMETERS[\"model_serving_endpoint\"]}\",\n", " max_tokens=16384\n", "))\n", "dspy.settings.configure(track_usage=True)\n", "\n", "class TextHandler(dspy.Signature):\n", " \"You are a text handler. You are given a text and you have to return the text splitted in the title contained in the text and content. If the text tells is an URL or tells you to use an URL you can use tools to grab the content and its already splitted in title and content\"\n", " text: str = dspy.InputField(desc=\"The text to be splitted.\")\n", " title: str = dspy.OutputField(desc=\"The title of the text.\")\n", " content: str = dspy.OutputField(desc=\"The content of the text.\")\n", "\n", "class ClickBaitDetector(dspy.Signature):\n", " \"\"\"Clasify if the text provided is the title of a clickbait article.\"\"\"\n", " title: str = dspy.InputField(desc=\"The title of a article.\")\n", " is_clickbait: bool = dspy.OutputField(desc=\"Marks if the title is a clickbait.\")\n", " reason: str = dspy.OutputField(desc=\"The reason why the title is a clickbait. Only the bullet points.\")\n", "\n", "class ClickBaitExtractor(dspy.Signature):\n", " \"\"\"Obtain an alternative version of the title without clickbait, use the content of the article to answer any question the reader of the original title could have. Make the response short, concise and in the same language as the provided title.\"\"\"\n", " title: str = dspy.InputField(desc=\"the title of a article\")\n", " content: str = dspy.InputField(desc=\"the content of a article\")\n", " response: str = dspy.OutputField(desc=\"the response to the clickbait in the language provided\")\n", "\n", "class ClickBaitTextAnalyzer(dspy.Module):\n", " def __init__(self, callbacks=None):\n", " super().__init__(callbacks)\n", " self.urlTool = dspy.Tool(fetch_url_title_and_content, name=\"urlTool\", desc=\"A tool to fetch the title and content of an url.\")\n", " self.textHandler = dspy.ReAct(TextHandler, tools = [self.urlTool], max_iters=1)\n", " self.clickbait = dspy.ChainOfThought(ClickBaitDetector)\n", " self.extr = dspy.Predict(ClickBaitExtractor)\n", "\n", " def forward(self, message: str):\n", " \"\"\"Detect if the title is a clickbait and if it is, extract the response to the clickbait, if its not, returns the reason why its not clickbait. 
If the tool fails and you cant continue to do your task, return the error in the title it and put the reason in the content.\"\"\"\n", " \n", " titleAndContent = self.textHandler(text=message)\n", " clickbait = self.clickbait(title=titleAndContent.title)\n", " if clickbait.is_clickbait:\n", " response = self.extr(title=titleAndContent.title, content=titleAndContent.content)\n", " response.original_title = titleAndContent.title\n", " return response\n", " else:\n", " clickbait.original_title = titleAndContent.title\n", " return clickbait" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "abf08bcd-5c45-4a2f-8a45-bb89a27b6e90", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Logging a DSPy agent into MLflow\n", "\n", "- `set_active_model` is used to set the active model with the specified name. This model will be linked to traces generated from now on. The model name can include the version. In this notebook the version is not being associated to a versioning system. [This page](https://mlflow.org/docs/latest/genai/prompt-version-mgmt/version-tracking/track-application-versions-with-mlflow#step-3-link-traces-to-the-application-version) of MLflow documentation gives some insights about agent versioning.\n", "- Model hyperparameters can be associated to the active model using `log_model_params`.\n", "- Finally the agent graph is instantiated. It's important to leave this step as the last one, after enabling logging and registering the active model. Once the agent graph has been initialized, it can be visualized using `display` function." ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "ac527b9f-947d-40a3-93ea-b680f0a3629f", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "from time import time\n", "\n", "mlflow.langchain.autolog()\n", "active_model_info = mlflow.set_active_model(name=f\"clickbait-dspy-{int(time())}\")\n", "mlflow.log_model_params(model_id=active_model_info.model_id, params=AGENT_PARAMETERS)\n", "agent = ClickBaitTextAnalyzer()" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "5ab848b6-08be-452f-b2df-9561f5d321cd", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "## Call the agent\n", "\n", "With the agent instantiated, it is now possible to call our agent as a function by passing it an user message.\n", "\n", "We can confirm the autologging is working by checking the produced trace in the block results." 
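, "\n", "After the call below runs, the resulting trace can also be fetched programmatically. A minimal sketch, assuming MLflow 3's tracing helpers:\n", "\n", "```python\n", "# Retrieve and inspect the most recent trace produced in this session.\n", "trace_id = mlflow.get_last_active_trace_id()\n", "trace = mlflow.get_trace(trace_id)\n", "print(trace.info.state)  # e.g. OK when the agent call completed successfully\n", "```"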
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "38497d93-6889-4cdb-ac1e-31209ead39d8", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "agent(\"is this article clickbait https://www.lavanguardia.com/cribeo/fast-news/20250527/10723991/popular-nombre-masculino-espana-jonathan-coeficiente-intelectual-bajo-mmn.amp.html f?\")" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "dd592a61-33d2-4c9b-887e-b537933c1778", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "It is also possible to call just a component of the agent." ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "d8f17206-69ed-477e-bd35-e804ae49f6c8", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "agent.clickbait.predict(\n", " title=\"El actor mejor pagado tiene su última pelicula en nuestra plataforma de streaming\"\n", ")" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "879508e6-953d-4b8c-922c-83917c55b383", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "## Agent evaluation\n", "\n", "Once an agent is implemented it is essential to define a set of evaluation scores that enable both quantitative and qualitative assessments of its behavior. This enables the gathering of meaningful insights about how the model—or specific components of it—are performing.\n", "\n", "MLflow 3's new GenAI API introduces the `evaluate`, which supports executing scorer functions on traces or custom function outputs, via the `predict_fn` parameter. This `predict_fn` function allows wrapping the entire agent agent or individual components making it possible to reproduce classic testing paradigms such as *integration tests* and *unit tests* within the agent evaluation workflow.\n", "\n", "Evaluation metrics are defined using the `@scorer` decorator. Custom scorer functions can access inputs, outputs, traces, and expectations, giving them full context to evaluate agent behavior.\n", "\n", "Custom scorers can tipically be written in an implementation-agnostic way, allowing the agent logic to be replaced—potentially even with a different framework, such as LangGraph—without requiring changes to the scorers themselves. The only likely adjustment needed when switching implementations is updating the `predict_fn`, particularly if the new implementation introduces a different function signature.\n", "\n", "Inside a scorer, it's possible to make custom calls to language models—for example, by passing them the output to assess specific properties. This approach resembles *property-based testing*, where evaluations are derived from general behavioral rules rather than hardcoded expected outputs. Scorers that rely on language models for evaluation are known as judges. Databricks provides several [built-in judges](https://docs.databricks.com/aws/en/mlflow3/genai/eval-monitor/predefined-judge-scorers) for common evaluation scenarios, streamlining the process for standard use cases." 
] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "4e378244-a5ff-40c0-b23c-ad8ba1094336", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Useful datasets\n", "\n", "In the following block some datasets are imported from the *kaggle clickbait dataset* and the `urls_data.json`." ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "1369c1bc-e919-4424-9388-349934512d58", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "from mlflow.entities import Feedback, Trace\n", "from mlflow.genai.scorers import scorer\n", "import pandas as pd\n", "import random\n", "\n", "# Kaggle dataset\n", "corpus = pd.read_csv(CORPUS_FILE)\n", "SAMPLE_SIZE = 10\n", "RANDOM_SEED = 42 # fix the random seed to get the same sample every time\n", "clickbait_sample = corpus[corpus[\"clickbait\"] == 1].sample(n=SAMPLE_SIZE//2, random_state=RANDOM_SEED)\n", "non_clickbait_sample = corpus[corpus[\"clickbait\"] == 0].sample(n=SAMPLE_SIZE//2, random_state=RANDOM_SEED)\n", "corpus_sample = pd.concat([clickbait_sample, non_clickbait_sample], ignore_index=True)\n", "\n", "# dataset with url, title and content (20 rows)\n", "url_articles = pd.read_json(\"./urls_data.json\")" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "adfd0a27-3f41-4e29-b6d9-c9e11aa9d4db", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Classification correctness\n", "\n", "The `check_clickbait` scorer evaluates whether the agent correctly classifies input titles as clickbait or not.\n", "- The dataset consists of inputs with a title field and expectations containing the expected clickbait classification label.\n", "- To isolate and evaluate just the classification logic, the predict_fn wraps the invoke method of the `classify` of the agent.\n", "- The scorer compares the agent’s output to the expected label, marking incorrect predictions as either false positives or false negatives." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "6e342b43-032d-4765-b9c7-90c127fae6cb", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "titles_dataset = (\n", " corpus_sample\n", " .apply(\n", " lambda row: {\n", " \"inputs\": {\"title\": row[\"headline\"]},\n", " \"expectations\": {\"is_clickbait\": (row[\"clickbait\"] == 1)},\n", " },\n", " axis=1,\n", " result_type=\"expand\",\n", " )\n", " .sample(frac=1, random_state=42)\n", ")\n", "\n", "def run_clickbait(title: str):\n", " return agent.clickbait(title=title).toDict()\n", "\n", "@scorer\n", "def check_clickbait(outputs, expectations):\n", "\n", " value = outputs[\"is_clickbait\"] == expectations[\"is_clickbait\"] \n", " rationale = (\n", " \"The model predicted the wrong clickbait status\" if value else \"The model predicted the correct clickbait status\"\n", " )\n", " \n", " feedback = [Feedback(\n", " name = \"success\",\n", " value = value,\n", " rationale=rationale\n", " )] \n", "\n", " if not value:\n", " if expectations[\"is_clickbait\"]:\n", " feedback.append(Feedback(\n", " name = \"false negative\",\n", " value = True))\n", " feedback.append(Feedback(\n", " name = \"false positive\",\n", " value = False))\n", " else :\n", " feedback.append(Feedback(\n", " name = \"false negative\",\n", " value = False))\n", " feedback.append(Feedback(\n", " name = \"false positive\",\n", " value = True))\n", "\n", " return feedback\n", "\n", "evaluation = mlflow.genai.evaluate(\n", " data=titles_dataset,\n", " scorers=[check_clickbait],\n", " predict_fn=run_clickbait\n", ")" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "d505cad3-dba7-4834-a9be-5e520a8de04d", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Clickbait response correctness\n", "\n", "The `check_clickbait_response` scorer evaluates whether the agent successfully rewrites titles originally classified as clickbait, eliminating the clickbait aspect in the revised version.\n", "\n", "- The dataset includes title and content fields as inputs.\n", "- To isolate and evaluate the rewriting logic, the `predict_fn` wraps the invoke method of the `clickbait_response` node.\n", "- The scorer then uses the `classify` node as a judge, applying it to the rewritten output to determine whether it would still be classified as clickbait." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "2fcf15a4-1cae-4911-a498-870a36239817", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "clickbait_titles_dataset = (\n", " url_articles\n", " .apply(\n", " lambda row: {\n", " \"inputs\": {\n", " \"title\": row[\"title\"],\n", " \"content\": row[\"content\"]\n", " }\n", " },\n", " axis=1,\n", " result_type=\"expand\",\n", " )\n", ")\n", "\n", "def run_extr(title: str, content: str):\n", " return agent.extr(title=title, content=content).toDict()\n", "\n", "@scorer\n", "def check_clickbait_response(inputs, outputs):\n", " output_title = outputs[\"messages\"][-1].content\n", " judge_output = agent.clickbait(title=output_title)\n", " \n", " return Feedback(\n", " name=\"response is no clickbait\",\n", " value=not judge_output[\"is_clickbait\"],\n", " rationale=judge_output[\"classification_reason\"],\n", " )\n", "\n", "evaluation = mlflow.genai.evaluate(\n", " data=clickbait_titles_dataset,\n", " scorers=[check_clickbait_response],\n", " predict_fn=run_extr\n", ")" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "22e6391a-ca71-48ed-a68e-63325522264c", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Tool call correctness\n", "\n", "The `check_tool_use` scorer verifies whether the agent invoked the expected tools during execution.\n", "\n", "- The dataset consists of user messages (in this case, URLs) as inputs, with expectations specifying that the agent should call the `fetch_url_title_and_content` tool.\n", "- The `predict_fn` wraps the full agent invocation, simulating a user message being processed end-to-end.\n", "- The scorer inspects the execution trace and collects all tool invocations by searching for spans of type `\"TOOL\"`. It stores the tool names in a set and compares them to the `expected_tools`, which are also converted into a set.\n", "\n", "Note: Tool names are stored in sets to ignore the order of invocation. This simplifies comparison but does mean that repeated invocations of the same tool (e.g., retries or multiple uses) are ignored. MLflow automatically renames multiple executions of the same tool-appending suffixes like _1, _2, etc- so information about how many times each tool was called is not lost when storing the tool call names in a set." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "3128e56a-2376-49bd-bdc3-51d678640a70", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "tool_use_dataset = url_articles.apply(\n", " lambda row: {\n", " \"inputs\": {\"user_message\": row[\"url\"]},\n", " \"expectations\": {\"expected_tools\": [\"fetch_url_title_and_content\"]},\n", " },\n", " axis=1,\n", " result_type=\"expand\",\n", ")\n", "\n", "def run_agent(user_message: str):\n", " return agent(user_message).toDict()\n", "\n", "@scorer\n", "def check_tool_use(trace, expectations):\n", " tool_calls = {span.name for span in trace.search_spans(span_type=\"TOOL\")}\n", "\n", " if not tool_calls:\n", " return Feedback(value=False, rationale=\"No tool calls found\")\n", " \n", " expected_tools = set(expectations[\"expected_tools\"])\n", "\n", " if expected_tools != tool_calls:\n", " return Feedback(\n", " value=False,\n", " rationale=(\n", " \"Tool calls did not match expectations.\\n\"\n", " f\"Expected {expected_tools} but got {tool_calls}.\"\n", " )\n", " )\n", " return Feedback(value=True)\n", "\n", "check_tool_use_eval_result = mlflow.genai.evaluate(\n", " data=tool_use_dataset, scorers=[check_tool_use], predict_fn=run_agent\n", ")" ] }, { "cell_type": "markdown", "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "8d6e9121-9a55-45ea-9c86-b8629b7c981e", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "source": [ "### Predefined judges\n", "\n", "MLflow GenAI provides several built-in judges, that can be used to evaluate common behavioral expectations without writing custom logic. Two such predefined judges are `Safety` and `Guidelines`.\n", "\n", "- The `Safety` scorer evaluates content—whether generated by the application or provided by a user—for harmful, unethical, or inappropriate material.\n", "- The `Guidelines` scorer enables fast and flexible evaluation based on natural language rules, framed as binary pass/fail conditions. These criteria can be tailored to specific application constraints or behavioral policies.\n", "\n", "Unlike earlier examples that require defining a `predict_fn` to execute and evaluate an agent function, this approach uses the data parameter to evaluate previously executed traces. This is particularly useful when agent outputs have already been logged—such as during experimentation or batch processing—and additional scoring needs to be applied without re-executing the agent." 
] }, { "cell_type": "code", "execution_count": 0, "metadata": { "application/vnd.databricks.v1+cell": { "cellMetadata": { "byteLimit": 2048000, "rowLimit": 10000 }, "inputWidgets": {}, "nuid": "ee15813f-8539-4507-a781-473f608748b7", "showTitle": false, "tableResultSettingsMap": {}, "title": "" } }, "outputs": [], "source": [ "from mlflow.genai.scorers import Safety, Guidelines\n", "\n", "traces = mlflow.search_traces(run_id=check_tool_use_eval_result.run_id)\n", "\n", "mlflow.genai.evaluate(\n", " data=traces,\n", " scorers=[\n", " Safety(),\n", " Guidelines(name=\"question\", guidelines=\"The response must not contain a question\")\n", " ],\n", ")" ] } ], "metadata": { "application/vnd.databricks.v1+notebook": { "computePreferences": null, "dashboards": [], "environmentMetadata": null, "inputWidgetPreferences": null, "language": "python", "notebookMetadata": { "pythonIndentUnit": 4 }, "notebookName": "Clickbait agent - DSPy", "widgets": {} }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }