{ "cells": [ { "cell_type": "markdown", "id": "5aa74260", "metadata": {}, "source": [ "# Building an AWS® ML Pipeline with Workbench (Classification)\n", "\n", "
\n", "\"workbench_pipeline\"
\n", "\n", "This notebook uses the Workbench Science Workbench to quickly build an AWS® Machine Learning Pipeline with the AQSolDB public dataset. This dataset aggregates aqueous solubility data for a large set of compounds.\n", "\n", "We're going to set up a full AWS Machine Learning Pipeline from start to finish. Since the Workbench Classes encapsulate, organize, and manage sets of AWS® Services, setting up our ML pipeline will be straight forward.\n", "\n", "Workbench also provides visibility into AWS services for every step of the process so we know exactly what we've got and how to use it.\n", "

\n", "\n", "## Data\n", "Wine Dataset: A classic dataset used in pattern recognition, machine learning, and data mining, the Wine dataset comprises 178 wine samples sourced from three different cultivars in Italy. The dataset features 13 physico-chemical attributes for each wine sample, providing a multi-dimensional feature space ideal for classification tasks. The aim is to correctly classify the wine samples into one of the three cultivars based on these chemical constituents. This dataset is widely employed for testing and benchmarking classification algorithms and is notable for its well-balanced distribution among classes. It serves as a straightforward, real-world example for classification tasks in machine learning.\n", "\n", "**Main Reference:**\n", "Forster, P. (1991). Machine Learning of Natural Language and Ontology (Technical Report DAI-TR-261). Department of Artificial Intelligence, University of Edinburgh.\n", "\n", "**Important Note:** We've made a small change to the wine dataset to have string based target column called 'wine_class' with string labels instead of integer.\n", "\n", "**Download Data** \n", "\n", " Modified wine_dataset.csv\n", "\n", "## Workbench\n", "Workbench is a medium granularity framework that manages and aggregates AWS® Services into classes and concepts. When you use Workbench you think about DataSources, FeatureSets, Models, and Endpoints. Underneath the hood those classes handle all the details around updating and\n", "\n", "## Notebook\n", "This notebook uses the Workbench Science Workbench to quickly build an AWS® Machine Learning Pipeline.\n", "\n", "We're going to set up a full AWS Machine Learning Pipeline from start to finish. Since the Workbench Classes encapsulate, organize, and manage sets of AWS® Services, setting up our ML pipeline will be straight forward.\n", "\n", "Workbench also provides visibility into AWS services for every step of the process so we know exactly what we've got and how to use it.\n", "

\n", "\n", "® Amazon Web Services, AWS, the Powered by AWS logo, are trademarks of Amazon.com, Inc. or its affiliates." ] }, { "cell_type": "code", "execution_count": 1, "id": "a7ae1c21", "metadata": {}, "source": [ "# Workbench has verbose log messages so set to warning\n", "import workbench\n", "import logging\n", "logging.getLogger(\"workbench\").setLevel(logging.WARNING)" ], "outputs": [] }, { "cell_type": "code", "execution_count": 2, "id": "97243583", "metadata": {}, "source": [ "# Note: If you want to use local data just use a file path\n", "from workbench.api.data_source import DataSource\n", "s3_path = \"s3://workbench-public-data/common/wine_dataset.csv\"\n", "data_source = DataSource(s3_path, 'wine_data')" ], "outputs": [] }, { "cell_type": "markdown", "id": "31affdf1", "metadata": {}, "source": [ "
\n", "\n", "# So what just happened?\n", "Okay, so it was just a few lines of code but Workbench did the following for you:\n", " \n", "- Transformed the CSV to a **Parquet** formatted dataset and stored it in AWS S3\n", "- Created an AWS Data Catalog database/table with the columns names/types\n", "- Athena Queries can now be done directly on this data in AWS Athena Console\n", "\n", "The new 'DataSource' will show up in AWS and of course the Workbench AWS Dashboard. Anyone can see the data, get information on it, use AWS® Athena to query it, and of course use it as part of their analysis pipelines." ] }, { "cell_type": "markdown", "id": "2b781d74", "metadata": {}, "source": [ "
\n", "\n", "# Visibility and Easy to Use AWS Athena Queries\n", "Since Workbench manages a broad range of AWS Services it means that you get visibility into exactly what data you have in AWS. It also means nice perks like hitting the 'Query' link in the Dashboard Web Interface and getting a direct Athena console on your dataset. With AWS Athena you can use typical SQL statements to inspect and investigate your data.\n", " \n", "**But that's not all!**\n", " \n", "Workbench also provides API to directly query DataSources and FeatureSets right from the API, so lets do that now." ] }, { "cell_type": "code", "execution_count": 3, "id": "174e06f0", "metadata": {}, "source": [ "# Athena queries are easy\n", "data_source.query('SELECT * from wine_data limit 5')" ], "outputs": [] }, { "cell_type": "markdown", "id": "2b191dd1-1c0b-4aed-aee2-3189c3318bfa", "metadata": {}, "source": [ "# Labels can be strings\n", "We can see in the dataframe above that our target column has **strings** in it. You do not need to convert these to integers, just use the transformation classes and a LabelEncoder will be used internally for training and prediction/inference." ] }, { "cell_type": "markdown", "id": "0fe38834", "metadata": {}, "source": [ "# The AWS ML Pipeline Awaits\n", "Okay, so in a few lines of code we created a 'DataSource' (which is simply a set of orchestrated AWS Services) but now we'll go through the construction of the rest of our Machine Learning pipeline.\n", "\n", "
\n", "\"workbench_pipeline\"
\n", "\n", "## ML Pipeline\n", "- DataSource **(done)**\n", "- FeatureSet\n", "- Model\n", "- Endpoint (serves models)" ] }, { "cell_type": "markdown", "id": "4292590a", "metadata": {}, "source": [ "# Create a FeatureSet\n", "**Note:** Normally this is where you'd do a deep dive on the data/features, look at data quality metrics, redudant features and engineer new features. For the purposes of this notebook we're simply going to take the given 13 physico-chemical attributes for each wine sample." ] }, { "cell_type": "code", "execution_count": 4, "id": "37674152", "metadata": {}, "source": [ "data_source.column_details()" ], "outputs": [] }, { "cell_type": "code", "execution_count": 5, "id": "48cfd9bc-d6bd-454d-8a34-90f4eeaed03d", "metadata": {}, "source": [ "help(data_source.to_features)" ], "outputs": [] }, { "cell_type": "markdown", "id": "9d417430-92d1-43aa-b58b-a4fe48f59154", "metadata": {}, "source": [ "# Creating the FeatureSet (takes at least 15 minutes)" ] }, { "cell_type": "markdown", "id": "7d2e1f0c-04e6-4a42-8f74-f4d677421fc0", "metadata": {}, "source": [ "# Why does creating a FeatureSet take a long time?\n", "Great question, between row 'ingestion' and waiting for the offline store to finish populating itself it does take a **long time**. Workbench is simply invoking the AWS Service APIs and those APIs are taking a while to do their thing.\n", "\n", "The good news is that Workbench can monitor and query the status of the object and let you know when things are ready." ] }, { "cell_type": "code", "execution_count": null, "id": "1c3bf3b7", "metadata": {}, "source": [ "data_source.to_features(\"wine_features\", target_column=\"wine_class\", tags=[\"wine\", \"classification\", \"uci\"])" ], "outputs": [] }, { "cell_type": "markdown", "id": "09b88130", "metadata": {}, "source": [ "# New FeatureSet shows up in Dashboard\n", "Now we see our new feature set automatically pop up in our dashboard. FeatureSet creation involves the most complex set of AWS Services:\n", "- New Entry in AWS Feature Store\n", "- Specific Type and Field Requirements are handled\n", "- Plus all the AWS Services associated with DataSources (see above)\n", "\n", "The new 'FeatureSet' will show up in AWS and of course the Workbench AWS Dashboard. Anyone can see the feature set, get information on it, use AWS® Athena to query it, and of course use it as part of their analysis pipelines.\n", "\n", "
\n", " \n", "**Important:** All inputs are stored to track provenance on your data as it goes through the pipeline. We can see the last field in the FeatureSet shows the input DataSource." ] }, { "cell_type": "markdown", "id": "3943e7c0", "metadata": {}, "source": [ "# Publishing our Model\n", "**Note:** Normally this is where you'd do a deep dive on the feature set. For the purposes of this notebook we're simply going to take the features given to us and make a reference model that can track our baseline model performance for other to improve upon. :)" ] }, { "cell_type": "code", "execution_count": 10, "id": "010006a6", "metadata": {}, "source": [ "from workbench.api.feature_set import FeatureSet\n", "from workbench.api.model import Model, ModelType\n", "\n", "fs = FeatureSet(\"wine_features\")\n", "help(fs.to_model)" ], "outputs": [] }, { "cell_type": "code", "execution_count": 8, "id": "22775585-ff93-4983-bc8b-96875c3731a5", "metadata": {}, "source": [ "fs.column_names()" ], "outputs": [] }, { "cell_type": "code", "execution_count": null, "id": "f35c920d", "metadata": {}, "source": [ "tags = [\"wine\", \"classification\", \"public\"]\n", "fs.to_model(name=\"wine-classification\", model_type=ModelType.CLASSIFIER, target_column=\"wine_class\", \n", " tags=tags, description=\"Wine Classification Model\")" ], "outputs": [] }, { "cell_type": "markdown", "id": "981c9381", "metadata": {}, "source": [ "# Deploying an AWS Endpoint\n", "Okay now that are model has been published we can deploy an AWS Endpoint to serve inference requests for that model. Deploying an Endpoint allows a large set of servies/APIs to use our model in production." ] }, { "cell_type": "code", "execution_count": null, "id": "a362f172", "metadata": {}, "source": [ "model = Model(\"wine-classification\"\n", "model.to_endpoint(\"wine-classification-end\", tags=[\"wine\", \"classification\"])" ], "outputs": [] }, { "cell_type": "markdown", "id": "04024783", "metadata": {}, "source": [ "# Model Inference from the Endpoint\n", "AWS Endpoints will bundle up a model as a service that responds to HTTP requests. The typical way to use an endpoint is to send a POST request with your features in CSV format. Workbench provides a nice DataFrame based interface that takes care of many details for you." ] }, { "cell_type": "code", "execution_count": 11, "id": "289d3380", "metadata": {}, "source": [ "# Get the Endpoint\n", "from workbench.api.endpoint import Endpoint\n", "my_endpoint = Endpoint('wine-classification-end')" ], "outputs": [] }, { "cell_type": "markdown", "id": "1a1cdebe", "metadata": {}, "source": [ "# Model Provenance is locked into Workbench\n", "We can now look at the model, see what FeatureSet was used to train it and even better see exactly which ROWS in that training set where used to create the model. We can make a query that returns the ROWS that were not used for training." 
] }, { "cell_type": "code", "execution_count": 13, "id": "a12b00ea", "metadata": {}, "source": [ "table = fs.view(\"training\").table\n", "test_df = fs.query(f\"select * from {table} where training=0\")\n", "test_df.head()" ], "outputs": [] }, { "cell_type": "code", "execution_count": 14, "id": "ed6c088a", "metadata": {}, "source": [ "# Okay now use the Workbench Endpoint to make prediction on TEST data\n", "prediction_df = my_endpoint.predict(test_df)\n", "metrics = my_endpoint.classification_metrics(\"wine_class\", prediction_df)\n", "metrics" ], "outputs": [] }, { "cell_type": "markdown", "id": "f2a20529", "metadata": {}, "source": [ "# Follow Up on Predictions\n", "Looking at the prediction plot above we can see that many predictions were close to the actual value but about 10 of the predictions were WAY off. So at this point we'd use Workbench to investigate those predictions, map them back to our FeatureSet and DataSource and see if there were irregularities in the training data." ] }, { "cell_type": "markdown", "id": "2358b668", "metadata": {}, "source": [ "# Wrap up: Building an AWS® ML Pipeline with Workbench\n", "\n", "
\n", "\n", "\n", "\n", "This notebook used the Workbench Science Toolkit to quickly build an AWS® Machine Learning Pipeline with the AQSolDB public dataset. We built a full AWS Machine Learning Pipeline from start to finish.\n", "\n", "Workbench made it easy:\n", "- Visibility into AWS services for every step of the process.\n", "- Managed the complexity of organizing the data and populating the AWS services.\n", "- Provided an easy to use API to perform Transformations and inspect Artifacts.\n", "\n", "Using Workbench will minimizize the time and manpower needed to incorporate AWS ML into your organization. If your company would like to be a Workbench Alpha Tester, contact us at [workbench@supercowpowers.com](mailto:workbench@supercowpowers.com)." ] }, { "cell_type": "markdown", "id": "1a5ac2c7", "metadata": {}, "source": [ "



\n", "



\n", "



\n", "



\n", "



\n", "



" ] }, { "cell_type": "markdown", "id": "f31162c1", "metadata": {}, "source": [ "# Helper Methods" ] }, { "cell_type": "code", "execution_count": null, "id": "c09b6c26", "metadata": {}, "source": [ "# Plotting defaults\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('seaborn-deep')\n", "#plt.style.use('seaborn-dark')\n", "plt.rcParams['font.size'] = 12.0\n", "plt.rcParams['figure.figsize'] = 14.0, 7.0" ], "outputs": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.13" } }, "nbformat": 4, "nbformat_minor": 5 }