{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "<!-- Autogenerated by `scripts/make_examples.py` -->\n", "<table align=\"left\">\n", " <td>\n", " <a target=\"_blank\" href=\"https://colab.research.google.com/github/voxel51/fiftyone-examples/blob/master/examples/zilliz_advent_of_code.ipynb\">\n", " <img src=\"https://user-images.githubusercontent.com/25985824/104791629-6e618700-5769-11eb-857f-d176b37d2496.png\" height=\"32\" width=\"32\">\n", " Try in Google Colab\n", " </a>\n", " </td>\n", " <td>\n", " <a target=\"_blank\" href=\"https://nbviewer.jupyter.org/github/voxel51/fiftyone-examples/blob/master/examples/zilliz_advent_of_code.ipynb\">\n", " <img src=\"https://user-images.githubusercontent.com/25985824/104791634-6efa1d80-5769-11eb-8a4c-71d6cb53ccf0.png\" height=\"32\" width=\"32\">\n", " Share via nbviewer\n", " </a>\n", " </td>\n", " <td>\n", " <a target=\"_blank\" href=\"https://github.com/voxel51/fiftyone-examples/blob/master/examples/zilliz_advent_of_code.ipynb\">\n", " <img src=\"https://user-images.githubusercontent.com/25985824/104791633-6efa1d80-5769-11eb-8ee3-4b2123fe4b66.png\" height=\"32\" width=\"32\">\n", " View on GitHub\n", " </a>\n", " </td>\n", " <td>\n", " <a href=\"https://github.com/voxel51/fiftyone-examples/raw/master/examples/zilliz_advent_of_code.ipynb\" download>\n", " <img src=\"https://user-images.githubusercontent.com/25985824/104792428-60f9cc00-576c-11eb-95a4-5709d803023a.png\" height=\"32\" width=\"32\">\n", " Download notebook\n", " </a>\n", " </td>\n", "</table>\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# FiftyOne <> Zilliz Advent of Open Source 2023!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Welcome to day 2 of Zilliz's [Advent of Code for Open Source](https://zilliz.com/blog/advent-of-code-for-open-source) 2023! Today we're going to be looking at the [FiftyOne](https://github.com/voxel51/fiftyone) library, which is a Python package for curation, visualization and analysis of machine learning datasets. It's a great tool for exploring datasets and debugging models, and it's also a great way to get started with machine learning if you're new to the field." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, we'll show you how to load a dataset, visualize it, and augment the dataset with generative AI. Along the way, you'll get a whirlwind tour of some of FiftyOne's features, and you'll learn how to use it to explore your own datasets." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Installation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this walkthrough, we'll be using a few libraries in addition to FiftyOne:\n", "\n", "- [torch](https://pytorch.org/) as our machine learning framework\n", "- [diffusers](https://huggingface.co/docs/diffusers/index) from Hugging Face for generative AI\n", "- [umap-learn](https://umap-learn.readthedocs.io/en/latest/) for dimensionality reduction" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install fiftyone torch diffusers==0.24.0 umap-learn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we're ready to get started!" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading a dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's import FiftyOne and some of its modules:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "import fiftyone as fo\n", "import fiftyone.zoo as foz\n", "import fiftyone.brain as fob\n", "from fiftyone import ViewField as F" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The FiftyOne Zoo contains a collection of common [datasets](https://docs.voxel51.com/user_guide/dataset_zoo/index.html) (MNIST, CIFAR, COCO, ...) and [models](https://docs.voxel51.com/user_guide/model_zoo/index.html) (YOLO, CLIP, SAM, DINO, ...) that you can load with a single line of code.\n", "\n", "💡 You can list all available datasets and models with `foz.list_zoo_datasets()` and `foz.list_zoo_models()`.\n", "\n", "The [FiftyOne Brain](https://docs.voxel51.com/user_guide/brain.html) contains machine learning methods that you can apply to better understand your data. \n", "\n", "And the `ViewField` will make it easy for us to programmatically filter our dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this walkthrough, we'll be using a subset of the Caltech-101 dataset, which contains 101 categories of objects, with 40 to 800 images per category. First, we'll load the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "caltech101 = foz.load_zoo_dataset(\"caltech101\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can print out the dataset to see what it contains:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Name: caltech101\n", "Media type: image\n", "Num samples: 9145\n", "Persistent: False\n", "Tags: []\n", "Sample fields:\n", " id: fiftyone.core.fields.ObjectIdField\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "caltech101" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can launch the FiftyOne App to explore the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session = fo.launch_app(caltech101, auto=False) # Launches the App, which you can manually open in a browser tab at http://localhost:5151\n", "## session = fo.launch_app(caltech101) # Launches the App in the cell output" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Filtering the dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the App is running, you can click on any image to see its label and metadata. You can also filter the dataset by label, view images in a grid or list, and more. Here, we'll only be interested in a subset of the Caltech-101 dataset — namely the images falling into any of the following categories: `[\"ibis\", \"flamingo\", \"emu\", \"pigeon\"]`. 
We can filter the dataset to only include these categories:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "## create a `View` of the dataset\n", "classes = [\"ibis\", \"flamingo\", \"emu\", \"pigeon\"]\n", "bird_view = caltech101.match_labels(filter=F(\"label\").is_in(classes))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can print out this `DatasetView` to see that it contains only the images we're interested in, and we can set the view of our session to see the subset:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Dataset: caltech101\n", "Media type: image\n", "Num samples: 245\n", "Sample fields:\n", " id: fiftyone.core.fields.ObjectIdField\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)\n", "View stages:\n", " 1. MatchLabels(labels=None, ids=None, tags=None, filter={'$in': ['$$FIELD.label', [...]]}, fields=None, bool=True)\n" ] } ], "source": [ "print(bird_view)\n", "session.view = bird_view" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we can see that this subset contains 245 images. We can also see that the filter we have applied is represented internally as a `ViewStage`. FiftyOne has a powerful query language that allows you to filter your dataset in many different ways. You can learn more about it in the [User Guide](https://docs.voxel51.com/user_guide/using_datasets.html#datasetviews), and in the [Views](https://docs.voxel51.com/cheat_sheets/views_cheat_sheet.html) and [Filtering](https://docs.voxel51.com/cheat_sheets/filtering_cheat_sheet.html) cheat sheets. You can also get a quick list of the view stage methods available to you by calling a `Dataset` or `DatasetView`'s `list_view_stages()` method." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've isolated the subset of interest, let's create a new dataset containing only the images in this subset using the `clone()` method:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "dataset = bird_view.shuffle().clone(name=\"caltech-birds\", persistent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we have cloned the view into a new dataset called `caltech-birds`, and we've made the dataset persistent so that changes we make from now on are persisted to disk. We also threw in a `shuffle()` call to shuffle the dataset, just for fun." 
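 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because the dataset is persistent, it will survive a Python restart. As a small aside (you don't need to run this now), here is how you could list your saved datasets and reload this one by name in a later session:" ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Persistent datasets live in FiftyOne's local database and can be reloaded by name\n", "print(fo.list_datasets())  # includes \"caltech-birds\"\n", "dataset = fo.load_dataset(\"caltech-birds\")" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "Printing the new dataset confirms what the clone contains:"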
] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Name: caltech-birds\n", "Media type: image\n", "Num samples: 245\n", "Persistent: True\n", "Tags: []\n", "Sample fields:\n", " id: fiftyone.core.fields.ObjectIdField\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)\n", " ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)\n" ] } ], "source": [ "print(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multimodal Magic" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Semantic Search" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have our dataset, we're ready to bring on the machine learning. Given that it is unlikely that a model has been trained on precisely the four classes we have selected, we'll use a multimodal foundation model, which will bring open-world knowledge to the table. In particular, we'll use the [CLIP](https://openai.com/blog/clip/) model from OpenAI. CLIP is integrated into the FiftyOne Model Zoo, so we can use it without installing anything else." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first thing we'll do is create a multimodal similarity index on the dataset using the FiftyOne Brain's `compute_similarity()` method. We will specify the model by name. This will allow us to search by text or image!\n", "\n", "💡 For datasets with a large number of samples, it is helpful to use a vector database. FiftyOne has native integrations with [Milvus](https://docs.voxel51.com/integrations/milvus.html), [Qdrant](https://docs.voxel51.com/integrations/qdrant.html), [Pinecone](https://docs.voxel51.com/integrations/pinecone.html), and [LanceDB](https://docs.voxel51.com/integrations/lancedb.html)!" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Computing embeddings...\n", " 100% |█████████████████| 245/245 [11.8s elapsed, 0s remaining, 21.6 samples/s] \n" ] }, { "data": { "text/plain": [ "<fiftyone.brain.internal.core.sklearn.SklearnSimilarityIndex at 0x2a3181eb0>" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fob.compute_similarity(dataset, brain_key=\"clip_sim\", model=\"clip-vit-base32-torch\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "💡 Once you have the FiftyOne Core Plugins installed, you can also compute similarity on the FiftyOne App!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now you can search the dataset by images or text, either in the App or programmatically. Here's an example of searching by text in Python:" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "pink_flamingos = dataset.sort_by_similarity(\"pink flamingo\", k = 25)\n", "session.view = pink_flamingos" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also achieve the same effect in the App (after refreshing) by clicking on the magnifying glass icon in the menu bar and typing in a search query!" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To search by image, select an image in the App and press the image icon in the menu bar." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Zero-Shot Classification" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a similar vein, we can run zero-shot classification on the dataset. For this, we'll use a [FiftyOne Plugin](https://voxel51.com/plugins/). Plugins are modular extensions to FiftyOne that provide additional functionality. There are a ton of already [existing plugins](https://github.com/voxel51/fiftyone-plugins) for a variety of use-cases and workflows, and you can also write your own!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use the FiftyOne Command Line Interface syntax to install the [Zero Shot Prediction plugin](https://github.com/jacobmarks/zero-shot-prediction-plugin):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!fiftyone plugins download https://github.com/jacobmarks/zero-shot-prediction-plugin" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While we're at it, we will install two more plugins that we'll use shortly — the [FiftyOne Core Plugins](https://github.com/voxel51/fiftyone-plugins#core-plugins) (which wrap SDK functionality in simple UIs), and a [Text-to-Image plugin](https://github.com/jacobmarks/text-to-image) (which we will use to add synthetic images to our dataset):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!fiftyone plugins download https://github.com/voxel51/fiftyone-plugins" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!fiftyone plugins download https://github.com/jacobmarks/text-to-image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Refresh the app, and hit the \"~\" button on your keyboard, and you'll see a long list of \"operators\" appear. Select the `zero_shot_classify` option, and you'll see a dynamic form appear, giving you tons of options for configuring the zero-shot classification." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Depending on what Python packages you have installed, you'll see different options in the dropdown for `Classification Model`. You can specify class names directly, or via a text file. In our case as there are only 4 classes, we'll paste them as a comma-separated list: `ibis, flamingo, emu, pigeon`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are only a few hundred images in the dataset, so we can run this zero-shot classification in seconds, but if you have a larger dataset, consider [delegating execution](https://docs.voxel51.com/plugins/using_plugins.html#delegated-operations)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluating a model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have ground truth labels and (zero-shot) predictions, we can evaluate the quality of our model predictions. FiftyOne provides an [Evaluation API](https://docs.voxel51.com/user_guide/evaluation.html) that encompasses classification, detection, and segmentation tasks. 
For classification tasks like ours, we would use the `evaluate_classifications()` method (read about it [here](https://docs.voxel51.com/user_guide/evaluation.html)). However, now that we have installed the FiftyOne Evaluation plugin, we can achieve this via the app!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Press the \"~\" button on your keyboard again, and select `evaluate_model` from the list of operators. Then fill out the dynamic form, and hit `Execute`!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now in the sidebar on the left, you'll see a new field with the name that you assigned to the \"Evaluation key\" in the form. Expanding this, you'll see `True` and `False` value counts, and clicking on these will show you the images that were correctly and incorrectly classified, respectively." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The CLIP model did very well on the whole, but the majority of the errors were between the `ibis` and `flamingo` classes, which is understandable given the similarity between the two. If we wanted to improve the model, we could collect more data for these classes... or we could use generative AI to augment our dataset!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Augmenting the dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we know where the model is struggling, we can use generative AI to augment our dataset with synthetic images. If we wanted to generate very high quality images from text prompts, we could use a state-of-the-art model like Stable Diffusion XL. In the spirit of the Advent of Code, however, we'll use a much simpler model that will generate images in a matter of seconds: a Latent Consistency Model called [LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) from Hugging Face's [Diffusers](https://huggingface.co/docs/diffusers/index) library. This model is integrated into the FiftyOne Text-to-Image plugin that we installed earlier, so we can use it without installing anything else." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are a few prompts that you can try out:\n", "\n", "- \"Close-up of a standing flamingo in shallow water at sunset.\"\n", "- \"Group of flying ibises over a wetland against a blue sky.\"\n", "- \"Resting flamingo lying on the ground, head tucked in feathers.\"\n", "- \"Juvenile ibis on a branch, showing brown and white transitioning plumage.\"\n", "- \"Flamingo feeding in a lake, beak underwater with ripple effects.\"\n", "- \"Ibis foraging in an urban park with city buildings in the background.\"\n", "- \"Side view of flying flamingos at sunset, legs and necks extended.\"\n", "- \"Close-up of an ibis's head, focusing on eye and beak details.\"\n", "- \"Panoramic of flamingos in a coastal habitat during low tide.\"\n", "- \"Ibis in a rainy wetland, raindrops visible on feathers.\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Go back to the App, and press the \"~\" button on your keyboard again. This time, select `txt2img` from the list of operators. Fill out the dynamic form, and hit `Execute`! The plugin will call the model, generate an image from the text prompt, and add it to the dataset. You can repeat this as many times as you like, with as many prompts as you like, and varied hyperparameters. 
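" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're curious what the plugin is doing under the hood, here is a rough sketch of a manual version (this is not the plugin's actual code, and the output path is just a placeholder): generate an image with the LCM model via `diffusers`, save it to disk, and append it to the dataset as a new sample:" ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Rough sketch of a manual text-to-image augmentation (the txt2img plugin handles this for you)\n", "import torch\n", "from diffusers import DiffusionPipeline\n", "\n", "pipe = DiffusionPipeline.from_pretrained(\"SimianLuo/LCM_Dreamshaper_v7\")\n", "pipe.to(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "\n", "prompt = \"Close-up of a standing flamingo in shallow water at sunset.\"\n", "image = pipe(prompt, num_inference_steps=4, guidance_scale=8.0).images[0]\n", "\n", "filepath = \"/tmp/lcm_flamingo.png\"  # placeholder output path\n", "image.save(filepath)\n", "dataset.add_sample(fo.Sample(filepath=filepath, prompt=prompt, model=\"LCM_Dreamshaper_v7\"))" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "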
If you have `replicate` or `openai` accounts and API keys, you can also use the models exposed by those APIs as well!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The images will be added directly to the dataset, at the bottom of the sample grid. You may also notice that there are some new fields in the sidebar: `prompt`, `model`, `date_created`, and model configuration fields. These are automatically added by the plugin, and you can use them to filter the dataset to only include synthetic images, or to only include images generated by a particular model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "💡 You can also call the `txt2img` operator [from Python](https://github.com/jacobmarks/text-to-image#python-sdk)!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparing Real and Synthetic Images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last stop on our whirlwind tour of FiftyOne's functionality takes us back to the FiftyOne Brain. We will use the `compute_visualization()` method to generate embeddings for our samples, use UMAP to reduce the dimensionality of the embeddings, and then visualize the embeddings in two dimensions. We can run this method from the App, or [programmatically](https://docs.voxel51.com/api/fiftyone.brain.html#fiftyone.brain.compute_visualization)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we've run the method, we can visualize the embeddings by clicking the \"+\" button in the menu bar and selecting `Embeddings`. We can then select the name of the brain run that we just created, and we'll see a 2D visualization of the embeddings, which we can color by any field in the dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we color by the `model`, we can see how the synthetic images compare to the real ones:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the other hand, if we color by the ground truth `label`, we can see the differences in how the model sees the different classes:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One takeaway from this exercise is that augmenting your dataset with synthetically generated images isn't always a quick fix. The new images may look realistic to the human eye, but there may be subtle differences that models can pick up on. In this case, the new images of ibises and flamingos cluster together, separate from the original ibis and flamingo clusters, which is not what we want. We could try to fix this by generating more images, or by using a more powerful model, but we could also try to fix it by changing the prompts that we use to generate the images. For example, we could try to generate images of ibises and flamingos in different poses, or in different environments." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The meta-takeaway is that FiftyOne streamlines the process of exploring your data and debugging your models, and it enables you to iterate quickly on your ideas. 
The FiftyOne query language makes it easy to filter your dataset, the FiftyOne Brain allows you to apply machine learning methods to your data, the FiftyOne App makes it easy to visualize your data and share your findings with others, and the FiftyOne Plugins make it easy to extend FiftyOne's functionality to suit your needs. We hope you've enjoyed this walkthrough, and we hope you'll give FiftyOne a try on your own datasets!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 🚀 Next Steps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you'd like to learn more about FiftyOne, here are some resources to get you started:\n", "\n", "- [FiftyOne Documentation](https://voxel51.com/docs/fiftyone/)\n", "- [FiftyOne Community Slack](https://slack.voxel51.com/)\n", "\n", "We also have a complete [Getting Started with FiftyOne Workshop](https://voxel51.com/computer-vision-events/fiftyone-workshop-dec-6/) taking place on December 6th at 8AM PT / 11AM ET / 5PM CET. We hope to see you there!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For all things Advent of Code, join the [Advent of Code Discord](https://discord.com/invite/7hwQAHgKMS)!\n", "\n", "- On deck: [Day 3: quivr](https://github.com/StanGirard/quivr)\n", "- In the hole: [Day 4: haystack](https://github.com/deepset-ai/haystack)\n", "\n", "And don't forget to check out [Milvus](https://github.com/milvus-io/milvus), the open-source vector database from Zilliz!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 2 }