{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "title: Model Management Walkthrough\n", "output-file: walkthrough.html\n", "description: Catalog and version models, track metadata and lineage, promote the best models to production, and report on evaluation analytics.\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FhrEhWp8yIK7bNMjvWioX%2FScreen%20Shot%202022-06-21%20at%2010.22.27%20AM.png?alt=media&token=255db898-d67d-4c8c-8fb8-e59e4781b098)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this walkthrough you'll learn how to use Weights & Biases for Model Management. Track, visualize, and report on the complete production model workflow.\n", "\n", "1. **Model Versioning**: Save and restore every version of your model & learned parameters - organize versions by use case and objective. Track training metrics, assign custom metadata, and document rich markdown descriptions of your models. \n", "2. **Model Lineage:** Track the exact code, hyperparameters, & training dataset used to produce the model. Enable model reproducibility.\n", "3. **Model Lifecycle:** Promote promising models to positions like \"staging\" or \"production\" - allowing downstream users to fetch the best model automatically. Communicate progress collaboratively in Reports.\n", "\n", "_We are actively building new Model Management features. Please reach out with questions or suggestions at support@wandb.com._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "::: {.callout-note}\n", "\n", "Please see the [Artifact Tab](https://docs.wandb.ai/ref/app/pages/project-page#artifacts-tab) details for a discussion of all content available in the Model Registry!\n", "\n", ":::" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will walk through a canonical workflow for producing, organizing, and consuming trained models:\n", "\n", "1. [Create a new Registered Model](https://docs.wandb.ai/guides/models/walkthrough#1.-create-a-new-model-portfolio)\n", "2. [Train & log Model Versions](https://docs.wandb.ai/guides/models/walkthrough#2.-train-and-log-model-versions)\n", "3. [Link Model Versions to the Registered Model](https://docs.wandb.ai/guides/models/walkthrough#3.-link-model-versions-to-the-portfolio)\n", "4. [Using a Model Version](https://docs.wandb.ai/guides/models/walkthrough#4.-use-a-model-version)\n", "5. [Evaluate Model Performance](https://docs.wandb.ai/guides/models/walkthrough#5.-evaluate-model-performance)\n", "6. [Promote a Version to Production](https://docs.wandb.ai/guides/models/walkthrough#6.-promote-a-version-to-production)\n", "7. [Use the Production Model for Inference](https://docs.wandb.ai/guides/models/walkthrough#7.-consume-the-production-model)\n", "8. 
[Build a Reporting Dashboard](https://docs.wandb.ai/guides/models/walkthrough#8.-build-a-reporting-dashboard)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**A** [**companion colab notebook**](https://colab.research.google.com/drive/1wjgr9AHICOa3EM1Ikr_Ps_MAm5D7QnCC) **is provided which covers steps 2-3 in the first code block and steps 4-6 in the second code block.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FNkDrBn4Dyl7OGjekeADh%2FScreen%20Shot%202022-06-21%20at%2010.24.10%20AM.png?alt=media&token=1b4342a6-5f85-4e12-a6a8-0bf1d8f9fb9d)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Create a new Registered Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, create a Registered Model to hold all the candidate models for your particular modeling task. In this tutorial, we will use the classic [MNIST Dataset](https://pytorch.org/vision/stable/generated/torchvision.datasets.MNIST.html#torchvision.datasets.MNIST) - 28x28 grayscale input images with output classes from 0-9. The steps below demonstrate how to create a new Registered Model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "::: {.panel-tabset}\n", "\n", "\n", "### Using Model Registry\n", "\n", "1. Visit your Model Registry at [wandb.ai/registry/model](https://wandb.ai/registry/model) (linked from the homepage).\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2Fb2CkO93ZQtcAAB6F4CrJ%2FScreen%20Shot%202022-06-21%20at%2010.14.10%20AM.png?alt=media&token=8c18f6ac-b606-43c5-8bae-d0b632061765)\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2F9AS8BRreKgHeEZ3axDCe%2FScreen%20Shot%202022-06-21%20at%2010.18.28%20AM.png?alt=media&token=04586044-40ac-441c-8ebd-45eea5277cac)\n", "\n", "2. Click the `Create Registered Model` button at the top of the Model Registry.\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FseDobiyiAniuIA4bxUSn%2FScreen%20Shot%202022-06-21%20at%2010.17.24%20AM.png?alt=media&token=1e28c8f6-b389-468d-91e3-4fdb8aaaab46)\n", "\n", "3. Make sure the `Owning Entity` and `Owning Project` are set to the values you want. Enter a unique name for your new Registered Model that describes the modeling task or use case of interest.\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FE8H3r7lvIOAWohdP56mb%2FScreen%20Shot%202022-06-21%20at%2010.20.23%20AM.png?alt=media&token=f4aa72dc-fa7d-4bba-90a6-c8cfbc5e41c5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using Artifact Browser\n", "\n", "1. Visit your Project's Artifact Browser: `wandb.ai/<entity>/<project>/artifacts`.\n", "2. Click the `+` icon at the bottom of the Artifact Browser sidebar.\n", "3. Select `Type: model`, `Style: Collection`, and enter a name. In our case, `MNIST Grayscale 28x28`.
 Remember, a Collection should map to a modeling task - enter a unique name that describes the use case.\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2F1HUyeXPDq1esREYGdN7T%2F2022-05-17%2014.20.36.gif?alt=media&token=689ca931-b834-447b-875f-3e8631e3ff35)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ":::" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Train & log Model Versions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, you will log a model from your training script:\n", "\n", "1. (Optional) Declare your dataset as a dependency so that it is tracked for reproducibility and auditability.\n", "\n", "2. **Serialize** your model to disk periodically (and/or at the end of training) using the serialization process provided by your modeling library (e.g. [PyTorch](https://pytorch.org/tutorials/beginner/saving_loading_models.html) and [Keras](https://www.tensorflow.org/guide/keras/save_and_serialize)).\n", "\n", "3. **Add** your model files to an Artifact of type \"model\".\n", "    - Note: We use the name `f'mnist-nn-{wandb.run.id}'`. While not required, it is advisable to name-space your \"draft\" Artifacts with the Run id in order to stay organized.\n", "\n", "4. (Optional) Log training metrics associated with the performance of your model during training.\n", "    - Note: The data logged immediately before logging your Model Version will automatically be associated with that version.\n", "5. **Log** your model.\n", "    - Note: If you are logging multiple versions, it is advisable to add an alias of \"best\" to your Model Version when it outperforms the prior versions. This will make it easy to find the model with peak performance - especially when the tail end of training may overfit!\n", "\n", "In most cases, you should use the native W&B Artifacts API to log your serialized model. However, since this pattern is so common, we have provided a single method which combines serialization, Artifact creation, and logging. See the \"(Beta) Using `log_model`\" tab for details." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "::: {.panel-tabset}\n", "\n", "### Using Artifacts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```python\n", "import wandb\n", "\n", "# Always initialize a W&B run to start tracking\n", "wandb.init()\n", "\n", "# (Optional) Declare an upstream dataset dependency\n", "# See the `Declare Dataset Dependency` tab for\n", "# alternative examples.\n", "dataset = wandb.use_artifact(\"mnist:latest\")\n", "\n", "# At the end of every epoch (or at the end of your script)...\n", "# ... Serialize your model\n", "model.save(\"path/to/model.pt\")\n", "# ... Create a Model Version\n", "art = wandb.Artifact(f'mnist-nn-{wandb.run.id}', type=\"model\")\n", "# ... Add the serialized files\n", "art.add_file(\"path/to/model.pt\", \"model.pt\")\n", "# (Optional) Log training metrics\n", "wandb.log({\"train_loss\": 0.345, \"val_loss\": 0.456})\n", "# ...
Log the Version\n", "if model_is_best:\n", " # If the model is the best model so far, add \"best\" to the aliases\n", " wandb.log_artifact(art, aliases=[\"latest\", \"best\"])\n", "else:\n", " wandb.log_artifact(art)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (Beta) Using `log_model`\n", "\n", "::: {.callout-warning}\n", "\n", "The following code snippet leverages actively developed `beta` APIs and therefore is subject to change and not guaranteed to be backwards compatible.\n", "\n", ":::\n", "\n", "```python\n", "from wandb.beta.workflows import log_model\n", "\n", "# (Optional) Declare an upstream dataset dependency\n", "# see the `Declare Dataset Dependency` tab for\n", "# alternative examples.\n", "dataset = wandb.use_artifact(\"mnist:latest\")\n", "\n", "# (optional) Log training metrics\n", "wandb.log({\"train_loss\": 0.345, \"val_loss\": 0.456})\n", "\n", "# This one method will serialize the model, start a run, create a version\n", "# add the files to the version, and log the version. You can override\n", "# the default name, project, aliases, metadata, and more!\n", "log_model(model, \"mnist-nn\", aliases=[\"best\"] if model_is_best else [])\n", "```\n", "\n", "Note: you may want to define custom serialization and deserialization strategies. You can do so by subclassing the [`_SavedModel` class](https://github.com/wandb/wandb/blob/9dfa60b14599f2716ab94dd85aa0c1113cb5d073/wandb/sdk/data_types/saved_model.py#L73), similar to the [`_PytorchSavedModel` class](https://github.com/wandb/wandb/blob/9dfa60b14599f2716ab94dd85aa0c1113cb5d073/wandb/sdk/data_types/saved_model.py#L358). All subclasses will automatically be loaded into the serialization registry. _As this is a beta feature, please reach out to support@wandb.com with questions or comments._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Declare Dataset Dependency\n", "\n", "If you would like to track your training data, you can declare a dependency by calling `wandb.use_artifact` on your dataset. Here are 3 examples of how you can declare a dataset dependency:\n", "\n", "**Dataset stored in W&B**\n", "\n", "```python\n", "dataset = wandb.use_artifact(\"[[entity/]project/]name:alias\")\n", "```\n", "\n", "**Dataset stored on Local Filesystem**\n", "\n", "```python\n", "art = wandb.Artifact(\"dataset_name\", \"dataset\")\n", "art.add_dir(\"path/to/data\") # or art.add_file(\"path/to/data.csv\")\n", "dataset = wandb.use_artifact(art)\n", "```\n", "\n", "**Dataset stored on Remote Bucket**\n", "\n", "```python\n", "art = wandb.Artifact(\"dataset_name\", \"dataset\")\n", "art.add_reference(\"s3://path/to/data\")\n", "dataset = wandb.use_artifact(art)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ":::" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After logging 1 or more Model Versions, you will notice that your will have a new Model Artifact in your Artifact Browser. 
 Here, we can see the results of logging 5 versions to an artifact named `mnist_nn-1r9jjogr`.\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FW95CUXm3wFWmShgriUus%2FScreen%20Shot%202022-06-21%20at%2010.25.13%20AM.png?alt=media&token=896e1dad-9a32-46c4-81d4-df61e641b740)\n", "\n", "If you are following along in the example notebook, you should see a Run Workspace with charts similar to the image below.\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FHy8efgajVKuaTgOHy7db%2FScreen%20Shot%202022-05-12%20at%2011.42.12%20AM.png?alt=media&token=4508d5ee-c900-46b3-a332-acb6e0ad7994)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Link Model Versions to the Registered Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's say that we are ready to link one of our Model Versions to the Registered Model. We can accomplish this either manually or via an API." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "::: {.panel-tabset}\n", "\n", "### Manual Linking\n", "\n", "The video below demonstrates how to manually link a Model Version to your newly created Registered Model:\n", "\n", "1. Navigate to the Model Version of interest\n", "2. Click the link icon\n", "3. Select the target Registered Model\n", "4. (Optional) Add additional aliases\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FpKissGM67eXf771EoUQq%2F2022-05-11%2015.13.48.gif?alt=media&token=923c6514-6c22-4c0d-a8d4-2ed7edcf51e7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Programmatic Linking\n", "\n", "While manual linking is useful for one-off Models, it is often helpful to programmatically link Model Versions to a Collection - consider a nightly job or CI pipeline that links the best Model Version from every training job.
 Depending on your context and use case, you may use one of three different linking APIs:\n", "\n", "**Fetch Model Artifact from Public API:**\n", "\n", "```python\n", "import wandb\n", "\n", "# Fetch the Model Version via API\n", "art = wandb.Api().artifact(...)\n", "\n", "# Link the Model Version to the Model Collection\n", "art.link(\"[[entity/]project/]collectionName\")\n", "```\n", "\n", "**Model Artifact is \"used\" by the current Run:**\n", "\n", "```python\n", "import wandb\n", "\n", "# Initialize a W&B run to start tracking\n", "wandb.init()\n", "\n", "# Obtain a reference to a Model Version\n", "art = wandb.use_artifact(...)\n", "\n", "# Link the Model Version to the Model Collection\n", "art.link(\"[[entity/]project/]collectionName\")\n", "```\n", "\n", "**Model Artifact is logged by the current Run:**\n", "\n", "```python\n", "import wandb\n", "\n", "# Initialize a W&B run to start tracking\n", "wandb.init()\n", "\n", "# Create a Model Version\n", "art = wandb.Artifact(...)\n", "\n", "# Log the Model Version\n", "wandb.log_artifact(art)\n", "\n", "# Link the Model Version to the Collection\n", "wandb.run.link_artifact(art, \"[[entity/]project/]collectionName\")\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (Beta) Using `link_model`\n", "\n", "::: {.callout-warning}\n", "\n", "The following code snippet leverages actively developed `beta` APIs and therefore is subject to change and not guaranteed to be backwards compatible.\n", "\n", ":::\n", "\n", "If you logged a model with the beta `log_model` discussed above, you can use its companion method, `link_model`:\n", "\n", "```python\n", "from wandb.beta.workflows import log_model, link_model\n", "\n", "# Obtain a Model Version\n", "model_version = log_model(model, \"mnist-nn\")\n", "\n", "# Link the Model Version\n", "link_model(model_version, \"[[entity/]project/]collectionName\")\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After you link the Model Version, you will see hyperlinks connecting the Version in the Registered Model to the source Artifact and vice versa.\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2Fo19AA9cXsA0ZMslGkb89%2F13_edit.png?alt=media&token=4feb1cd8-5ba2-444f-9a0f-0c93a7228cd8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ":::" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Use a Model Version" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are ready to consume a Model - perhaps to evaluate its performance, make predictions against a dataset, or use it in a live production context. As with logging a Model, you may choose to use the raw Artifact API or the more opinionated beta APIs."
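, "\n", "\n", "If you consume the model through the raw Artifact API, your code must reverse the serialization you chose in step 2. As a rough sketch (assuming a PyTorch module whose `state_dict` was saved with `torch.save` - `MNISTClassifier` is a hypothetical stand-in for your own model class), the custom deserialization helper used in the \"Using Artifacts\" tab below might look like:\n", "\n", "```python\n", "import torch\n", "\n", "from my_project.models import MNISTClassifier  # hypothetical: your own model definition\n", "\n", "def make_model_from_data(path):\n", "    # `path` is the directory returned by `.download()`; step 2 added the file as \"model.pt\"\n", "    model = MNISTClassifier()\n", "    state_dict = torch.load(f\"{path}/model.pt\", map_location=\"cpu\")\n", "    model.load_state_dict(state_dict)\n", "    model.eval()\n", "    return model\n", "```"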
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "::: {.panel-tabset}\n", "\n", "\n", "### Using Artifacts\n", "\n", "You can load a Model Version using the `use_artifact` method.\n", "\n", "```python\n", "import wandb\n", "\n", "# Always initialize a W&B run to start tracking\n", "wandb.init()\n", "\n", "# Download your Model Version files\n", "path = wandb.use_artifact(\"[[entity/]project/]collectionName:latest\").download()\n", "\n", "# Deserialize your model (this will be your custom code to load in\n", "# a model from disk - reversing the serialization process used in step 2)\n", "model = make_model_from_data(path)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (Beta) Using `use_model`\n", "\n", "::: {.callout-warning}\n", "The following code snippet leverages actively developed `beta` APIs and therefore is subject to change and not guaranteed to be backwards compatible.\n", ":::\n", "\n", "Directly manipulating model files and handling deserialization can be tricky - especially if you were not the one who serialized the model. As a companion to `log_model`, `use_model` automatically deserializes and reconstructs your model for use.\n", "\n", "```python\n", "from wandb.beta.workflows import use_model\n", "\n", "model = use_model(\"[[entity/]project/]collectionName\").model_obj()\n", "```\n", "\n", ":::" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Evaluate Model Performance" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After training many Models, you will likely want to evaluate their performance. In most circumstances you will have some held-out data that serves as a test dataset, independent of the data your models had access to during training. To evaluate a Model Version, first complete step 4 above to load the model into memory. Then:\n", "\n", "1. (Optional) Declare a dependency on your evaluation data\n", "2. **Log** metrics, media, tables, and anything else useful for evaluation\n", "\n", "\n", "```python\n", "# ... continuation from step 4\n", "\n", "# (Optional) Declare an upstream evaluation dataset dependency\n", "dataset = wandb.use_artifact(\"mnist-evaluation:latest\")\n", "\n", "# Evaluate your model in whatever way makes sense for your use case\n", "loss, accuracy, predictions = evaluate_model(model, dataset)\n", "\n", "# Log metrics, images, tables, or any data useful for evaluation.\n", "wandb.log({\"loss\": loss, \"accuracy\": accuracy, \"predictions\": predictions})\n", "```\n", "\n", "If you are executing similar code (as demonstrated in the notebook), you should see a workspace similar to the image below - here we even show model predictions against the test data!\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FJd1Ztg57guvPGFD06mbM%2FScreen%20Shot%202022-05-12%20at%2011.45.09%20AM.png?alt=media&token=d10a7066-7d77-4863-aea0-634485f40cef)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Promote a Version to Production" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, you will likely want to denote which Version in the Registered Model is intended for production. Here, we use the concept of aliases. Each Registered Model can have whatever aliases make sense for your use case - `production` is the most common. Each alias can only be assigned to a single Version at a time."
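, "\n", "\n", "If you promote models from a script or CI job, you can also move the alias programmatically. A minimal sketch (the artifact path and version below are placeholders for your own values, and `link` accepts an `aliases` list as noted in the API tab):\n", "\n", "```python\n", "import wandb\n", "\n", "# Fetch the Model Version you want to promote\n", "art = wandb.Api().artifact(\"[[entity/]project/]artifactName:version\")\n", "\n", "# Link it to the Registered Model and assign the `production` alias\n", "art.link(\"[[entity/]project/]registeredModelName\", aliases=[\"production\"])\n", "```"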
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "::: {.panel-tabset}\n", "\n", "### via UI\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2F6g0FZl6bbOt9cYRKAMRX%2FScreen%20Shot%202022-06-06%20at%207.50.27%20AM.png?alt=media&token=e73a4446-e6a6-460e-8e04-99e1f3c5906e)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### via API\n", "\n", "Follow the steps in [Part 3. Link Model Versions to the Registered Model](https://docs.wandb.ai/guides/models/walkthrough#3.-link-model-versions-to-the-portfolio) and add the aliases you want to the `aliases` parameter." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The image below shows the new `production` alias added to v1 of the Registered Model!\n", "\n", "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FCYJr0uUQV71NOvvtvvpJ%2FScreen%20Shot%202022-05-12%20at%2011.46.43%20AM.png?alt=media&token=23767732-6788-4373-b0c6-0d7db2519f82)\n", "\n", ":::" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7. Consume the Production Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, you will likely want to use your production Model for inference. To do so, follow the steps outlined in [Part 4. Use a Model Version](https://docs.wandb.ai/guides/models/walkthrough#4.-use-a-model-version), using the `production` alias. For example:\n", "\n", "```python\n", "wandb.use_artifact(\"[[entity/]project/]registeredModelName:production\")\n", "```\n", "\n", "You can reference a Version within the Registered Model using different alias strategies:\n", "\n", "- `latest` - fetches the most recently linked Version\n", "- `v#` - using `v0`, `v1`, `v2`, ... you can fetch a specific Version in the Registered Model\n", "- `production` - you can use any custom alias that you and your team have assigned" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 8. Build a Reporting Dashboard" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using Weave Panels, you can display any of the Model Registry/Artifact views inside Reports! See a [demo here](https://wandb.ai/timssweeney/model_management_docs_official_v0/reports/MNIST-Grayscale-28x28-Model-Dashboard--VmlldzoyMDI0Mzc1). Below is a full-page screenshot of an example Model Dashboard." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![](https://1039519455-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-Lqya5RvLedGEWPhtkjU-1972196547%2Fuploads%2FagKefedLijcKacK3wUEb%2FScreenshot%202022-06-21%20at%2010-42-44%20Weights%20%26%20Biases.png?alt=media&token=a72df30a-46a2-4ea9-8acd-ab7914a17002)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 4 }