{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VI0xLxmm7HBa"
      },
      "source": [
        "# Challenge 4: Telemetry to the Rescue"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BUUj-wyRc5za"
      },
      "source": [
        "## Find crash in telemetry data"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VZYkXrwW-24E"
      },
      "source": [
        "You need to design a SQL query that returns the `car_number`, `driver_name`, and the average `brake` and `speed` values for the full second just before the crash. You should have the timestamp of the crash from the previous challenge.\n",
        "\n",
        "> Keep in mind that the crash timestamp is in local time, whereas the telemetry data uses UTC."
      ]
    },
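    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an example of handling the timezone note above, here is a minimal sketch using Python's standard `zoneinfo` module (the `Europe/Rome` zone and the timestamp are placeholders, not the actual race values):\n",
        "\n",
        "```python\n",
        "from datetime import datetime\n",
        "from zoneinfo import ZoneInfo\n",
        "\n",
        "# Placeholder local crash time -- substitute your own timestamp and time zone\n",
        "local_crash = datetime(2024, 4, 13, 15, 2, 37, tzinfo=ZoneInfo(\"Europe/Rome\"))\n",
        "utc_crash = local_crash.astimezone(ZoneInfo(\"UTC\"))\n",
        "print(utc_crash.isoformat())  # use this UTC value when filtering the telemetry\n",
        "```"
      ]
    },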
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6mxgerm8F6nP"
      },
      "outputs": [],
      "source": [
        "%%bigquery telemetry_during_crash\n",
        "-- Find the crash in the telemetry data that is loaded into BigQuery\n",
        "-- (the %%bigquery magic must be the first line of the cell)\n",
        "-- TODO: a SQL query that aggregates the brake and speed data for the second just before the crash"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Op9epADFsn5C"
      },
      "source": [
        "## Find the drivers"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0067gn0l_ZuH"
      },
      "source": [
        "Now that we have the filtered and aggregated data, let's ask Gemini which two drivers were involved in the crash.\n",
        "\n",
        "Design a prompt that determines **which drivers** were involved and **why the model thinks so**.\n",
        "\n",
        "> Note that we *append the result of the query from the previous cell to your prompt*, so you can refer to it."
      ]
    },
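    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a rough sketch of the shape such a prompt could take (the wording is only an illustration, not the expected answer):\n",
        "\n",
        "```python\n",
        "# Hypothetical example prompt; adapt it to your own analysis\n",
        "prompt = \"\"\"\n",
        "You are given Formula E telemetry aggregated over the second just before a crash.\n",
        "From the table appended below, name the two drivers most likely involved in the\n",
        "crash, and explain which brake and speed values make you think so.\n",
        "\"\"\"\n",
        "```"
      ]
    },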
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sTynac1B67aY"
      },
      "outputs": [],
      "source": [
        "prompt = \"\"\"\n",
        "TODO a prompt that asks Gemini to analyze the outcome of the SQL query\n",
        "\"\"\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "kc1-lRve4Qu0"
      },
      "outputs": [],
      "source": [
        "import vertexai\n",
        "from vertexai.generative_models import GenerativeModel\n",
        "\n",
        "# Look up the active Google Cloud project from the gcloud config\n",
        "shell_output = ! gcloud config list project --format 'value(core.project)' 2>/dev/null\n",
        "PROJECT_ID = shell_output[0]\n",
        "REGION = \"us-central1\"\n",
        "\n",
        "vertexai.init(project=PROJECT_ID, location=REGION)\n",
        "model = GenerativeModel(\"gemini-2.0-flash\")\n",
        "\n",
        "# Append the query result (rendered as a Markdown table) to the prompt\n",
        "response = model.generate_content([prompt, telemetry_during_crash.to_markdown()])\n",
        "\n",
        "print(response.text)\n"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "cell_execution_strategy": "setup",
      "name": "Formula_E_Challenge_2",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}