{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "jWESX0tpdrE-" }, "source": [ "##### Copyright 2024 Google LLC." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "YQvTrJpxzRlJ" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "3hp_P0cDzTWp" }, "source": [ "# Gemini 2.0 - Multimodal live API: Tool use" ] }, { "cell_type": "markdown", "metadata": { "id": "OLW8VU78zZOc" }, "source": [ "\n", " \n", "
\n", " Run in Google Colab\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "y7f4kFby0E6j" }, "source": [ "This notebook provides examples of how to use tools with the multimodal live API with [Gemini 2.0](https://ai.google.dev/gemini-api/docs/models/gemini-v2).\n", "\n", "The API provides Google Search, Code Execution and Function Calling tools. The earlier Gemini models supported versions of these tools. The biggest change with Gemini 2 (in the Live API) is that, basically, all the tools are handled by Code Execution. With that change, you can use **multiple tools** in a single API call, and the model can use multiple tools in a single code execution block. \n", "\n", "This tutorial assumes you are familiar with the Live API, as described in the [this tutorial](https://github.com/google-gemini/cookbook/blob/main/gemini-2/live_api_starter.ipynb)." ] }, { "cell_type": "markdown", "metadata": { "id": "Mfk6YY3G5kqp" }, "source": [ "## Setup" ] }, { "cell_type": "markdown", "metadata": { "id": "d5027929de8f" }, "source": [ "### Install SDK\n", "\n", "The new **[Google Gen AI SDK](https://ai.google.dev/gemini-api/docs/sdks)** provides programmatic access to Gemini 2.0 (and previous models) using both the [Google AI for Developers](https://ai.google.dev/gemini-api/docs) and [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/overview) APIs. With a few exceptions, code that runs on one platform will run on both. This means that you can prototype an application using the Developer API and then migrate the application to Vertex AI without rewriting your code.\n", "\n", "More details about this new SDK on the [documentation](https://ai.google.dev/gemini-api/docs/sdks) or in the [Getting started](../gemini-2/get_started.ipynb) notebook." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "46zEFO2a9FFd" }, "outputs": [], "source": [ "!pip install -U -q google-genai" ] }, { "cell_type": "markdown", "metadata": { "id": "CTIfnvCn9HvH" }, "source": [ "### Setup your API key\n", "\n", "To run the following cell, your API key must be stored it in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or you're not sure how to create a Colab Secret, see [Authentication](../quickstarts/Authentication.ipynb) for an example." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "A1pkoyZb9Jm3" }, "outputs": [], "source": [ "from google.colab import userdata\n", "import os\n", "\n", "os.environ['GOOGLE_API_KEY']=userdata.get('GOOGLE_API_KEY')" ] }, { "cell_type": "markdown", "metadata": { "id": "Y13XaCvLY136" }, "source": [ "### Initialize SDK client\n", "\n", "The client will pickup your API key from the environment variable.\n", "To use the live API you need to set the client version to `v1alpha`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HghvVpbU0Uap" }, "outputs": [], "source": [ "from google import genai\n", "\n", "client = genai.Client(http_options= {\n", " 'api_version': 'v1alpha'\n", "})" ] }, { "cell_type": "markdown", "metadata": { "id": "QOov6dpG99rY" }, "source": [ "### Select a model\n", "\n", "Multimodal Live API are a new capability introduced with the [Gemini 2.0](https://ai.google.dev/gemini-api/docs/models/gemini-v2) model. It won't work with previous generation models." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "27Fikag0xSaB" }, "outputs": [], "source": [ "model_name = \"gemini-2.0-flash-exp\"" ] }, { "cell_type": "markdown", "metadata": { "id": "pLU9brx6p5YS" }, "source": [ "### Imports" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yMG4iLu5ZLgc" }, "outputs": [], "source": [ "import asyncio\n", "import contextlib\n", "import json\n", "import wave\n", "\n", "from IPython import display\n", "\n", "from google import genai\n", "from google.genai import types" ] }, { "cell_type": "markdown", "metadata": { "id": "yrb4aX5KqKKX" }, "source": [ "### Utilities" ] }, { "cell_type": "markdown", "metadata": { "id": "rmfQ-NvFI7Ct" }, "source": [ "You're going to use the Live API's audio output, the easiest way hear it in Colab is to write the `PCM` data out as a `WAV` file:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "p2aGpzlR-60Q" }, "outputs": [], "source": [ "@contextlib.contextmanager\n", "def wave_file(filename, channels=1, rate=24000, sample_width=2):\n", " with wave.open(filename, \"wb\") as wf:\n", " wf.setnchannels(channels)\n", " wf.setsampwidth(sample_width)\n", " wf.setframerate(rate)\n", " yield wf" ] }, { "cell_type": "markdown", "metadata": { "id": "KfdD9mVxqatm" }, "source": [ "Use a logger so it's easier to switch on/off debugging messages." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wgHJgpV9Zw4E" }, "outputs": [], "source": [ "import logging\n", "logger = logging.getLogger('Live')\n", "#logger.setLevel('DEBUG') # Switch between \"INFO\" and \"DEBUG\" to toggle debug messages.\n", "logger.setLevel('INFO')" ] }, { "cell_type": "markdown", "metadata": { "id": "4hiaxgUCZSYJ" }, "source": [ "## Get started" ] }, { "cell_type": "markdown", "metadata": { "id": "LQoca-W7ri0y" }, "source": [ "Most of the Live API setup will be similar to the [starter tutorial](../gemini-2/live_api_starter.ipynb). 
Since this tutorial doesn't focus on the realtime interactivity of the API, the code has been simplified: This code uses the Live API, but it only sends a single text prompt, and listens for a single turn of replies.\n", "\n", "You can set `modality=\"AUDIO\"` on any of the examples to get the spoken version of the output." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lwLZrmW5zR_P" }, "outputs": [], "source": [ "n = 0\n", "async def run(prompt, modality=\"TEXT\", tools=None):\n", " global n\n", " if tools is None:\n", " tools=[]\n", "\n", " config = {\n", " \"tools\": tools,\n", " \"generation_config\": {\n", " \"response_modalities\": [modality]}}\n", "\n", " async with client.aio.live.connect(model=model_name, config=config) as session:\n", " display.display(display.Markdown(prompt))\n", " display.display(display.Markdown('-------------------------------'))\n", " await session.send(prompt, end_of_turn=True)\n", "\n", " audio = False\n", " filename = f'audio_{n}.wav'\n", " with wave_file(filename) as wf:\n", " async for response in session.receive():\n", " logger.debug(str(response))\n", " if text:=response.text:\n", " display.display(display.Markdown(text))\n", " continue\n", "\n", " if data:=response.data:\n", " print('.', end='')\n", " wf.writeframes(data)\n", " audio = True\n", " continue\n", "\n", " server_content = response.server_content\n", " if server_content is not None:\n", " handle_server_content(wf, server_content)\n", " continue\n", "\n", " tool_call = response.tool_call\n", " if tool_call is not None:\n", " await handle_tool_call(session, tool_call)\n", "\n", "\n", " if audio:\n", " display.display(display.Audio(filename, autoplay=True))\n", " n = n+1" ] }, { "cell_type": "markdown", "metadata": { "id": "ngrvxzrf0ERR" }, "source": [ "Since this tutorial demonstrates several tools, you'll need more code to handle the different types of objects it returns.\n", "\n", "- The `code_execution` tool can return `executable_code` 
and `code_execution_result` parts.\n", "- The `google_search` tool may attach a `grounding_metadata` object." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "CypjqSb-0C-Q" }, "outputs": [], "source": [ "def handle_server_content(wf, server_content):\n", " model_turn = server_content.model_turn\n", " if model_turn:\n", " for part in model_turn.parts:\n", " executable_code = part.executable_code\n", " if executable_code is not None:\n", " display.display(display.Markdown('-------------------------------'))\n", " display.display(display.Markdown(f'``` python\\n{executable_code.code}\\n```'))\n", " display.display(display.Markdown('-------------------------------'))\n", "\n", " code_execution_result = part.code_execution_result\n", " if code_execution_result is not None:\n", " display.display(display.Markdown('-------------------------------'))\n", " display.display(display.Markdown(f'```\\n{code_execution_result.output}\\n```'))\n", " display.display(display.Markdown('-------------------------------'))\n", "\n", " grounding_metadata = getattr(server_content, 'grounding_metadata', None)\n", " if grounding_metadata is not None:\n", " display.display(\n", " display.HTML(grounding_metadata.search_entry_point.rendered_content))\n", "\n", " return" ] }, { "cell_type": "markdown", "metadata": { "id": "dPnXSNZ5rydM" }, "source": [ "- Finally, with the `function_declarations` tool, the API may return `tool_call` objects. To keep this code minimal, the `tool_call` handler just replies to every function call with a response of `\"ok\"`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EmTKF_DtrY4U" }, "outputs": [], "source": [ "async def handle_tool_call(session, tool_call):\n", " for fc in tool_call.function_calls:\n", " tool_response = types.LiveClientToolResponse(\n", " function_responses=[types.FunctionResponse(\n", " name=fc.name,\n", " id=fc.id,\n", " response={'result':'ok'},\n", " )]\n", " )\n", "\n", " print('\\n>>> ', tool_response)\n", " await session.send(tool_response)" ] }, { "cell_type": "markdown", "metadata": { "id": "TcNu3zUNsI_p" }, "source": [ "Try running it for a first time:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ss9I0MRdHbP2" }, "outputs": [ { "data": { "text/markdown": "Hello?", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "Hello", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " there! How can I help you today?\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "await run(prompt=\"Hello?\", tools=None, modality = \"TEXT\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Z_BFBLLGp-Ye" }, "source": [ "## Simple function call" ] }, { "cell_type": "markdown", "metadata": { "id": "RMq795G6t2hA" }, "source": [ "The function calling feature of the API Can handle a wide variety of functions. Support in the SDK is still under construction. So keep this simple just send a minimal function definition: Just the function's name.\n", "\n", "Note that in the live API function calls are independent of the chat turns. The conversation can continue while a function call is being processed." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8Y00qqZZt5L-" }, "outputs": [], "source": [ "turn_on_the_lights = {'name': 'turn_on_the_lights'}\n", "turn_off_the_lights = {'name': 'turn_off_the_lights'}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8dCjPmz8nEbv" }, "outputs": [ { "data": { "text/markdown": "Turn on the lights", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "``` python\nprint(default_api.turn_on_the_lights())\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", ">>> function_responses=[FunctionResponse(id='function-call-10625358607379620265', name='turn_on_the_lights', response={'result': 'ok'})]\n" ] }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "```\n{'result': 'ok'}\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "OK", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "prompt = \"Turn on the lights\"\n", "\n", "tools = [\n", " {'function_declarations': [turn_on_the_lights, 
turn_off_the_lights]}\n", "]\n", "\n", "await run(prompt, tools=tools, modality = \"TEXT\")" ] }, { "cell_type": "markdown", "metadata": { "id": "eCnCiTbhqE8q" }, "source": [ "## Code execution" ] }, { "cell_type": "markdown", "metadata": { "id": "P4ptRBNY4N8Q" }, "source": [ "The `code_execution` tool lets the model write and run Python code. Try it on a math problem the model can't solve from memory:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "k4dURhC-QoSw" }, "outputs": [ { "data": { "text/markdown": "Can you compute the largest prime palindrome under 100000.", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "Okay", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": ", I understand. You're asking me to find the largest prime number that", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " is also a palindrome (reads the same forwards and backward) and is less than", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " 100,000.\n\nHere's my plan:\n\n1. **Generate Palindromes:** I will generate a list of palind", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "romes under 100,000, starting with the largest and working downwards.\n2. **Check for Primality:** For each palindrome,", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " I'll check if it's a prime number.\n3. 
**Return the Largest Prime Palindrome:** The first palindrome I find that's also prime is the largest prime palindrome under 100,000,", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " and that's the one I will return.\n\nLet's start by generating palindromes. Since we are going from largest to smallest, I will start with the number 99999 and work my way downwards. I", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "'ll use python to help with the palindrome generation and primality testing.\n\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "``` python\ndef is_palindrome(n):\n return str(n) == str(n)[::-1]\n\ndef is_prime(n):\n if n <= 1:\n return False\n if n <= 3:\n return True\n if n % 2 == 0 or n % 3 == 0:\n return False\n i = 5\n while i * i <= n:\n if n % i == 0 or n % (i + 2) == 0:\n return False\n i += 6\n return True\n\nlargest_prime_palindrome = None\nfor i in range(99999, 1, -1):\n if is_palindrome(i):\n if is_prime(i):\n largest_prime_palindrome = i\n break\n\nprint(f'{largest_prime_palindrome=}')\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "```\nlargest_prime_palindrome=98689\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": 
"Okay", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": ", I have found the largest prime palindrome under 100,00", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "0.\n\nThe largest prime palindrome under 100000 is", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " **98689**.\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "prompt=\"Can you compute the largest prime palindrome under 100000.\"\n", "\n", "tools = [\n", " {'code_execution': {}}\n", "]\n", "\n", "await run(prompt, tools=tools, modality=\"TEXT\")" ] }, { "cell_type": "markdown", "metadata": { "id": "ueeerkpX5F-v" }, "source": [ "## Compositional Function Calling\n", "\n", "Compositional function calling refers to the ability to combine user defined functions with the `code_execution` tool. The model will write them into larger blocks of code, and then pause execution while it waits for you to send back responses for each call.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XzKyL_Rq5sG3" }, "outputs": [ { "data": { "text/markdown": "Can you turn on the lights wait 10s and then turn them off?", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "``` python\nimport time\ndefault_api.turn_on_the_lights()\ntime.sleep(10)\ndefault_api.turn_off_the_lights()\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": 
"display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", ">>> function_responses=[FunctionResponse(id='function-call-9618657932610871193', name='turn_on_the_lights', response={'result': 'ok'})]\n", "\n", ">>> function_responses=[FunctionResponse(id='function-call-3563942360527970136', name='turn_off_the_lights', response={'result': 'ok'})]\n" ] } ], "source": [ "prompt=\"Can you turn on the lights wait 10s and then turn them off?\"\n", "\n", "tools = [\n", " {'code_execution': {}},\n", " {'function_declarations': [turn_on_the_lights, turn_off_the_lights]}\n", "]\n", "\n", "await run(prompt, tools=tools, modality=\"TEXT\")" ] }, { "cell_type": "markdown", "metadata": { "id": "G78sxDEcqHyO" }, "source": [ "## Google search" ] }, { "cell_type": "markdown", "metadata": { "id": "FW10vRPN6UNp" }, "source": [ "The `google_search` tool lets the model conduct google searches. For example, try asking it about events that are too recent to be in the training data.\n", "\n", "The search will still execute in `AUDIO` mode, but you won't see the detailed results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QKvWzROJic60" }, "outputs": [ { "data": { "text/markdown": "Can you use google search tell me about the largest earthquake in california the week of Dec 5 2024?", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "``` python\nprint(google_search.search(queries=[\"largest earthquake in California week of December 5 2024\", \"California earthquakes week of December 5 2024\"]))\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": 
"-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "```\nLooking up information on Google Search.\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "The", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " largest earthquake in California during the week of December 5, 202", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "4, occurred on **December 5, 2024**, and", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " it had a magnitude of **7.0**. 
This earthquake's epicenter was located approximately 60 miles offshore of Ferndale, California, in the Pacific", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " Ocean, west of the Mendocino Triple Junction.\n\nHere's a summary of what is known about this earthquake:\n\n* **Magnitude:** ", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "7.0\n* **Date:** December 5, 2024\n* **Time:** Approximately 10:44 AM PT\n* **Location:** About 60 miles offshore of Ferndale", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": ", California, and about 100 km southwest of Ferndale, in the Pacific Ocean.\n* **Tectonic Setting:** The earthquake occurred along the Mendocino fracture zone, a transform boundary between the Pacific and Juan de", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " Fuca/Gorda plates, near the Mendocino triple junction where the Pacific, North America, and Juan de Fuca/Gorda plates meet.\n* **Impact:** The earthquake was felt across Northern California, with reports of shaking as far as Davis, northeast of San Francisco, and even a rolling", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " motion in San Francisco.\n* **Tsunami Warning:** A tsunami warning was issued for coastal areas from Davenport, California to 10 miles south of Florence, Oregon, but it was canceled shortly before 11 AM PT.\n\nThis earthquake was the strongest in the region since at least 200", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "5, when a magnitude 7.2 earthquake occurred. 
The area near the Mendocino triple junction is known to be seismically active, with five of the eleven magnitude 7 and larger earthquakes in California since 1900 occurring in this area.\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "prompt=\"Can you use google search to tell me about the largest earthquake in California the week of Dec 5 2024?\"\n", "\n", "tools = [\n", " {'google_search': {}}\n", "]\n", "\n", "await run(prompt, tools=tools, modality=\"TEXT\")" ] }, { "cell_type": "markdown", "metadata": { "id": "HM9y5rwfqKfY" }, "source": [ "## Multi-tool\n" ] }, { "cell_type": "markdown", "metadata": { "id": "qrxAQjYA6vQX" }, "source": [ "The biggest difference with the new API, however, is that you're no longer limited to one tool per request. Try combining the tasks from the previous sections:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QmB_4XPOslyA" }, "outputs": [ { "data": { "text/markdown": " Hey, I need you to do three things for me.\n\n 1. Compute the largest prime palindrome under 100000.\n 2. Then use google search to look up information about the largest earthquake in California the week of Dec 5 2024.\n 3. Turn on the lights\n\n Thanks!\n ", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "Okay", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": ", I will perform those tasks for you. 
First, let's find the", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " largest prime palindrome under 100000.\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "``` python\ndef is_palindrome(n):\n return str(n) == str(n)[::-1]\n\ndef is_prime(n):\n if n <= 1:\n return False\n if n <= 3:\n return True\n if n % 2 == 0 or n % 3 == 0:\n return False\n i = 5\n while i * i <= n:\n if n % i == 0 or n % (i + 2) == 0:\n return False\n i += 6\n return True\n\nlargest_palindrome_prime = 0\nfor i in range(99999, 1, -1):\n if is_palindrome(i) and is_prime(i):\n largest_palindrome_prime = i\n break\n\nprint(largest_palindrome_prime)\n\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "```\n98689\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "Okay", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": ", the largest prime palindrome under 100000 is 9", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "8689.\n\nNext, I will search for the largest earthquake in", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " California the week of December 5, 2024.\n", 
"text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "``` python\nconcise_search(\"largest earthquake california week of December 5 2024\", max_num_results=3)\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "```\nLooking up information on Google Search.\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "Based", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " on the search results, the largest earthquake in California the week of December 5", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": ", 2024 was a magnitude 7.0 earthquake off the", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": " coast of Cape Mendocino. 
It occurred on December 5, 2024.\n\nFinally, I will turn on the lights.\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "``` python\ndefault_api.turn_on_the_lights()\n\n```", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/markdown": "-------------------------------", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", ">>> function_responses=[FunctionResponse(id='function-call-10219056536360411067', name='turn_on_the_lights', response={'result': 'ok'})]\n" ] } ], "source": [ "prompt = \"\"\"\\\n", " Hey, I need you to do three things for me.\n", "\n", " 1. Compute the largest prime palindrome under 100000.\n", " 2. Then use google search to look up information about the largest earthquake in California the week of Dec 5 2024.\n", " 3. 
Turn on the lights\n", "\n", " Thanks!\n", " \"\"\"\n", "\n", "tools = [\n", " {'google_search': {}},\n", " {'code_execution': {}},\n", " {'function_declarations': [turn_on_the_lights, turn_off_the_lights]}\n", "]\n", "\n", "await run(prompt, tools=tools, modality=\"TEXT\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Y0OhM95KkMzl" }, "source": [ "## Next Steps\n", "\n", "- For more information about the SDK, see the [SDK docs](https://googleapis.github.io/python-genai/)\n", "- This tutorial uses the high-level SDK; if you're interested in the lower-level details, try the [Websocket version of this tutorial](../gemini-2/websocket/search_tool.ipynb)\n", "- This tutorial only covers _basic_ usage of these tools; for deeper (and more fun) examples, see the [Search tool tutorial](../gemini-2/search_tool.ipynb)\n", "\n", "Or check out the other Gemini 2.0 capabilities in the [Cookbook](https://github.com/google-gemini/cookbook/blob/main/gemini-2/), in particular this other [multi-tool](../gemini-2/plotting_and_mapping.ipynb) example and the one about Gemini's [spatial capabilities](../gemini-2/spatial_understanding.ipynb)." ] } ], "metadata": { "colab": { "name": "live_api_tool_use.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }