{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Try in Google Colab | Share via nbviewer | View on GitHub | Download notebook
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# FiftyOne Quickstart\n", "\n", "Hello there! This notebook provides a brief walkthrough of [FiftyOne](https://voxel51.com/docs/fiftyone), highlighting features that will help you build better datasets and computer vision models.\n", "\n", "We'll cover the following concepts:\n", "\n", "- Loading a dataset [into FiftyOne](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/index.html)\n", "- Using FiftyOne [in a notebook](https://voxel51.com/docs/fiftyone/environments/index.html#notebooks)\n", "- Using [views](https://voxel51.com/docs/fiftyone/user_guide/using_views.html) and [the App](https://voxel51.com/docs/fiftyone/user_guide/app.html) to explore different aspects of your dataset\n", "- [Evaluating](https://voxel51.com/docs/fiftyone/user_guide/evaluation.html) your model's predictions\n", "- [Finding label mistakes](https://voxel51.com/docs/fiftyone/user_guide/brain.html#label-mistakes) in your datasets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install FiftyOne\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install fiftyone" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load a dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's get started by importing the FiftyOne library:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import fiftyone as fo" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "FiftyOne provides a number of helpful data/model resources to get you up and running on your projects. 
In this example, we'll load a small detection dataset from the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_zoo/index.html).\n", "\n", "The command below downloads the dataset from the web and loads it into a [FiftyOne Dataset](https://voxel51.com/docs/fiftyone/user_guide/basics.html) that we'll use to explore the capabilities of FiftyOne:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Dataset already downloaded\n", "Loading 'quickstart'\n", " 100% |█████████████████| 200/200 [5.7s elapsed, 0s remaining, 26.4 samples/s] \n", "Dataset 'quickstart' created\n" ] } ], "source": [ "import fiftyone.zoo as foz\n", "\n", "dataset = foz.load_zoo_dataset(\"quickstart\")" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Name: quickstart\n", "Media type: image\n", "Num samples: 200\n", "Persistent: False\n", "Info: {}\n", "Tags: ['validation']\n", "Sample fields:\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.Metadata)\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " uniqueness: fiftyone.core.fields.FloatField\n", " predictions: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n" ] } ], "source": [ "print(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's launch the [FiftyOne App](https://voxel51.com/docs/fiftyone/user_guide/app.html) so we can explore the dataset visually. 
Right away you will see that because we are in a notebook, an embedded instance of the App with our dataset loaded has been rendered in the cell's output.\n", "\n", "The [Session](https://voxel51.com/docs/fiftyone/api/fiftyone.core.session.html#fiftyone.core.session.Session) object created below is a bi-directional connection between your Python kernel and the FiftyOne App, as we'll see later." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session = fo.launch_app(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Automatic screenshots as you work\n", "\n", "Notebooks are great for many reasons, one of which is the ability to share your work with others. FiftyOne is designed to help you write notebooks that capture your work on visual datasets, using a feature we call **automatic screenshotting**.\n", "\n", "Whenever you open a new App instance in a notebook cell, e.g., by updating your [Session](https://voxel51.com/docs/fiftyone/api/fiftyone.core.session.html#fiftyone.core.session.Session) object, any previous App instances will be automatically replaced with a static screenshot. In fact, that's what you're seeing below: screenshots of the Apps we opened when we created this notebook!\n", "\n", "The cell below issues a [session.show()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.session.html#fiftyone.core.session.Session.show) command, which opens a new App instance in the cell's output. When you run the cell for yourself, notice that the App instance in the previous cell is automatically replaced with a screenshot of its current state. You can reactivate old App instances by hovering over them and clicking anywhere.\n", "\n", "After running the cell below, try double-clicking on an image in the grid to expand the sample." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dataset Views\n", "\n", "The power of FiftyOne truly comes alive when using [dataset views](https://voxel51.com/docs/fiftyone/user_guide/using_views.html).\n", "\n", "Think of a [Dataset](https://voxel51.com/docs/fiftyone/api/fiftyone.core.dataset.html#fiftyone.core.dataset.Dataset) as the root view into all of your data. Creating a [DatasetView](https://voxel51.com/docs/fiftyone/api/fiftyone.core.view.html#fiftyone.core.view.DatasetView) allows you to study a specific subset of the samples and/or fields of your dataset.\n", "\n", "Dataset views can be created and modified both in Python and in the App. The active view in the App is always available via the [Session.view](https://voxel51.com/docs/fiftyone/api/fiftyone.core.session.html#fiftyone.core.session.Session.view) property of your session. This means that if you update your view in the App, its state will be captured by [Session.view](https://voxel51.com/docs/fiftyone/api/fiftyone.core.session.html#fiftyone.core.session.Session.view). Or, you can create a view programmatically in Python and open it in the App by setting the [Session.view](https://voxel51.com/docs/fiftyone/api/fiftyone.core.session.html#fiftyone.core.session.Session.view) property.\n", "\n", "Let's start by creating a view into our dataset via the App. We'll sort the dataset by the `uniqueness` field to show the most unique images first. To do this, we will click `+ add stage` in the View Bar and add a `SortBy` stage with `uniqueness` as the field and `reverse` equal to `True`." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can then access the view in Python and, for example, print the most unique sample:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ ",\n", " 'iscrowd': ,\n", " }),\n", " 'label': 'airplane',\n", " 'bounding_box': BaseList([\n", " 0.05365625,\n", " 0.34533957845433255,\n", " 0.769828125,\n", " 0.45049180327868854,\n", " ]),\n", " 'mask': None,\n", " 'confidence': None,\n", " 'index': None,\n", " }>,\n", " ]),\n", " }>,\n", " 'uniqueness': 1.0,\n", " 'predictions': ,\n", " ,\n", " ,\n", " ]),\n", " }>,\n", "}>\n" ] } ], "source": [ "print(session.view.first())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Complex views in Python\n", "\n", "Sometimes you may be interested in creating a [complex view](https://voxel51.com/docs/fiftyone/user_guide/using_views.html#view-stages) into a dataset that is specified by a series of conditions or complex filtering operations.\n", "\n", "You can achieve this in FiftyOne by [chaining view stages](https://voxel51.com/docs/fiftyone/user_guide/using_views.html#tips-tricks) together to define the view you want.\n", "\n", "As an example, let's create a view that contains only the 25 most unique samples in the dataset, and only predictions on those samples with confidence > 0.5.\n", "\n", "Remember that, because we are working in a notebook, any time we change our [Session](https://voxel51.com/docs/fiftyone/api/fiftyone.core.session.html#fiftyone.core.session.Session) object, a new App will be displayed in the cell's output." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from fiftyone import ViewField as F\n", "\n", "session.view = (\n", "    dataset\n", "    .sort_by(\"uniqueness\", reverse=True)\n", "    .limit(25)\n", "    .filter_labels(\"predictions\", F(\"confidence\") > 0.5)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Debugging your model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A primary use case of FiftyOne is easily visualizing and exploring your model predictions to find failure cases that need to be addressed to improve performance.\n", "\n", "The `quickstart` dataset already has predictions in its `predictions` field, but you can easily [add your own model predictions](https://voxel51.com/docs/fiftyone/recipes/model_inference.html) to datasets for a variety of tasks including [classification, detection, segmentation, keypoints, and more](https://voxel51.com/docs/fiftyone/user_guide/using_datasets.html#labels). Or, if you don't have your own model, you can check out the [FiftyOne Model Zoo](https://voxel51.com/docs/fiftyone/user_guide/model_zoo/index.html#fiftyone-model-zoo) to download a pre-trained model and generate predictions on your data with just a couple lines of code.\n", "\n", "Once you have predictions on your dataset, you can use FiftyOne's powerful [evaluation framework](https://voxel51.com/docs/fiftyone/user_guide/evaluation.html) to evaluate them. 
For example, let's compute the COCO-style mean average precision (mAP) of our predictions using the builtin [evaluate_detections()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.collections.html?highlight=evaluate_detections#fiftyone.core.collections.SampleCollection.evaluate_detections) method of our dataset:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Evaluating detections...\n", " 100% |█████████████████| 200/200 [6.6s elapsed, 0s remaining, 24.6 samples/s] \n", "Performing IoU sweep...\n", " 100% |█████████████████| 200/200 [12.2s elapsed, 0s remaining, 13.4 samples/s] \n", "\n", "mAP: 0.3957\n" ] } ], "source": [ "# Computes the mAP of the predictions in the `predictions` field\n", "# w.r.t. the ground truth labels in the `ground_truth` field\n", "results = dataset.evaluate_detections(\n", " \"predictions\",\n", " gt_field=\"ground_truth\",\n", " compute_mAP=True,\n", ")\n", "\n", "print(\"\\nmAP: %.4f\" % results.mAP())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's evaluate only predictions with confidence greater than 0.75:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Evaluating detections...\n", " 100% |█████████████████| 199/199 [4.5s elapsed, 0s remaining, 34.5 samples/s] \n" ] } ], "source": [ "# Create a view that only contains predictions with confidence > 0.75\n", "high_conf_view = dataset.filter_labels(\"predictions\", F(\"confidence\") > 0.75)\n", "\n", "# Evaluate the predictions in the `predictions` field w.r.t. 
the ground truth\n", "# labels in the `ground_truth` field\n", "results = high_conf_view.evaluate_detections(\n", " \"predictions\",\n", " gt_field=\"ground_truth\",\n", " eval_key=\"eval\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `results` object that is returned provides handy methods for generating various performance reports for our model.\n", "\n", "For example, let's print a classification report for the top-10 most common object classes:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " precision recall f1-score support\n", "\n", " person 0.85 0.72 0.78 412\n", " kite 0.84 0.68 0.75 91\n", " car 0.74 0.51 0.60 61\n", " bird 0.91 0.48 0.63 64\n", " carrot 0.58 0.40 0.48 47\n", " boat 0.62 0.35 0.45 37\n", " surfboard 0.63 0.40 0.49 30\n", " airplane 0.90 0.79 0.84 24\n", "traffic light 0.88 0.62 0.73 24\n", " umbrella 0.91 0.72 0.81 29\n", "\n", " micro avg 0.82 0.63 0.71 819\n", " macro avg 0.79 0.57 0.66 819\n", " weighted avg 0.81 0.63 0.71 819\n", "\n" ] } ], "source": [ "# Get the 10 most common classes in the dataset\n", "counts = dataset.count_values(\"ground_truth.detections.label\")\n", "classes = sorted(counts, key=counts.get, reverse=True)[:10]\n", "\n", "# Print a classification report for the top-10 classes\n", "results.print_report(classes=classes)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Aggregate metrics alone don’t give the full picture of a model's performance. In practice, the limiting factor of a model is often data quality issues that you need to **see** to address. 
FiftyOne is designed to make it easy to do just that.\n", "\n", "Note that the last [evaluate_detections()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.collections.html?highlight=evaluate_detections#fiftyone.core.collections.SampleCollection.evaluate_detections) call populated new fields on our dataset that count the number of true positive, false positive, and false negative objects in every sample:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Name: quickstart\n", "Media type: image\n", "Num samples: 200\n", "Persistent: False\n", "Info: {}\n", "Tags: ['validation']\n", "Sample fields:\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.Metadata)\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " uniqueness: fiftyone.core.fields.FloatField\n", " predictions: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " eval_tp: fiftyone.core.fields.IntField\n", " eval_fp: fiftyone.core.fields.IntField\n", " eval_fn: fiftyone.core.fields.IntField\n" ] } ], "source": [ "print(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use this information to visualize the samples with the most false positives in the App:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session.view = high_conf_view.sort_by(\"eval_fp\", reverse=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that the samples with the most false positives are crowded scenes, indicating that we should change our training scheme/dataset to better account for crowds of objects.\n", "\n", "This is just a taste of the evaluation that can be done with FiftyOne. Check out our [tutorials](https://voxel51.com/docs/fiftyone/tutorials/index.html) and [blog posts](https://medium.com/voxel51) for more examples of debugging different kinds of models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Finding label mistakes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another core use case of FiftyOne is to load and explore your dataset and annotations to get a feel for your data distribution and annotation quality.\n", "\n", "In addition to self-guided analysis in the App, the [FiftyOne Brain](https://voxel51.com/docs/fiftyone/user_guide/brain.html#fiftyone-brain) provides methods that can help you gather insights about your dataset automatically:\n", "\n", "- [Uniqueness](https://voxel51.com/docs/fiftyone/user_guide/brain.html#image-uniqueness) - A score comparing the content of an image or image patch with all others in the dataset\n", "- [Mistakenness](https://voxel51.com/docs/fiftyone/user_guide/brain.html#label-mistakes) - A score representing the likelihood that a given label is mistaken\n", "- [Hardness](https://voxel51.com/docs/fiftyone/user_guide/brain.html#sample-hardness) - A score representing how hard a sample is to train on, allowing you to easily mine hard samples for your training set\n", "\n", "Continuing with our `quickstart` dataset, let's compute the mistakenness of the annotations in the `ground_truth` field using the (high-confidence) model predictions in the 
`predictions` field of the dataset as a reference point:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Evaluating detections...\n", " 100% |█████████████████| 199/199 [5.0s elapsed, 0s remaining, 31.4 samples/s] \n", "Computing mistakenness...\n", " 100% |█████████████████| 199/199 [3.0s elapsed, 0s remaining, 52.5 samples/s] \n" ] } ], "source": [ "import fiftyone.brain as fob\n", "\n", "# Computes the mistakenness of the labels in the `ground_truth` field, \n", "# which scores the chance that the labels are incorrect, using the\n", "# high confidence predictions in the `predictions` field as a reference\n", "fob.compute_mistakenness(\n", " high_conf_view,\n", " \"predictions\",\n", " label_field=\"ground_truth\",\n", " use_logits=False,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's print the dataset's schema to see what happened:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Name: quickstart\n", "Media type: image\n", "Num samples: 200\n", "Persistent: False\n", "Info: {}\n", "Tags: ['validation']\n", "Sample fields:\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.Metadata)\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " uniqueness: fiftyone.core.fields.FloatField\n", " predictions: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n", " eval_tp: fiftyone.core.fields.IntField\n", " eval_fp: fiftyone.core.fields.IntField\n", " eval_fn: fiftyone.core.fields.IntField\n", " mistakenness: fiftyone.core.fields.FloatField\n", " possible_missing: fiftyone.core.fields.IntField\n", " possible_spurious: 
fiftyone.core.fields.IntField\n" ] } ], "source": [ "print(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A numeric `mistakenness` field was added to each sample in the dataset, which measures the (maximum) mistakenness of the annotations in the sample.\n", "\n", "In addition, each detection in the `ground_truth` field has been assigned a `mistakenness` value that measures its likelihood of being incorrect:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ ",\n", " 'iscrowd': ,\n", " }),\n", " 'label': 'bird',\n", " 'bounding_box': BaseList([\n", " 0.21084375,\n", " 0.0034375,\n", " 0.46190625,\n", " 0.9442083333333334,\n", " ]),\n", " 'mask': None,\n", " 'confidence': None,\n", " 'index': None,\n", " 'eval': 'tp',\n", " 'eval_id': '5f452c60ef00e6374aad9394',\n", " 'eval_iou': 0.8575063187115628,\n", " 'mistakenness': 0.01245725154876709,\n", " 'mistakenness_loc': 0.2903442955979618,\n", "}>\n" ] } ], "source": [ "# Ground truth detections now have a `mistakenness` value\n", "sample = dataset.first()\n", "print(sample.ground_truth.detections[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's view the annotations that were flagged as likely mistakes in the App to see if we should fix any of them:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "session.view = high_conf_view.filter_labels(\"ground_truth\", F(\"mistakenness\") > 0.95)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `mistakenness` scores are computed using the confidence of the model predictions you provide. The model used to generate these predictions was [this Faster-RCNN model](https://voxel51.com/docs/fiftyone/user_guide/model_zoo/models.html#faster-rcnn-resnet50-fpn-coco-torch), which is a few years old. However, trying [other models from the Model Zoo](https://voxel51.com/docs/fiftyone/user_guide/model_zoo/models.html) will result in more informative `mistakenness` scores!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sharing notebooks\n", "\n", "To make a notebook ready for sharing, you'll need to screenshot the currently active App by calling [Session.freeze()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.session.html#fiftyone.core.session.Session.freeze):" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "session.freeze()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now when you share this notebook, publish it online, etc., all of your App outputs will be available for readers to see when they first open the notebook." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Further reading\n", "\n", "This quickstart touched on only a few of the possibilities of using FiftyOne. If you'd like to learn more, check out these [tutorials](https://voxel51.com/docs/fiftyone/tutorials/index.html) and [recipes](https://voxel51.com/docs/fiftyone/recipes/index.html) to see more concrete use cases and best practices.\n", "\n", "And did we mention that FiftyOne is open source? 
Check out the project [on GitHub](https://github.com/voxel51/fiftyone) and [leave an issue](https://github.com/voxel51/fiftyone/issues/new/choose) if you think something is missing.\n", "\n", "Thanks for tuning in!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 4 }