{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Amazon SageMaker Autopilot Candidate Definition Notebook\n", "\n", "This notebook was automatically generated by the AutoML job **automl-dm-1675608463**.\n", "This notebook allows you to customize the candidate definitions and execute the SageMaker Autopilot workflow.\n", "\n", "The dataset has **2** columns and the column named **sentiment** is used as\n", "the target column. This is being treated as a **MulticlassClassification** problem. The dataset also has **3** classes.\n", "This notebook will build a **[MulticlassClassification](https://en.wikipedia.org/wiki/Multiclass_classification)** model that\n", "**maximizes** the \"**ACCURACY**\" quality metric of the trained models.\n", "The \"**ACCURACY**\" metric provides the percentage of times the model predicted the correct class.\n", "\n", "As part of the AutoML job, the input dataset has been randomly split into two pieces, one for **training** and one for\n", "**validation**. This notebook helps you inspect and modify the data transformation approaches proposed by Amazon SageMaker Autopilot. You can interactively\n", "train the data transformation models and use them to transform the data. Finally, you can execute a multiple algorithm hyperparameter optimization (multi-algo HPO)\n", "job that helps you find the best model for your dataset by jointly optimizing the data transformations and machine learning algorithms.\n", "\n", "
💡 Available Knobs\n", "Look for sections like this for recommended settings that you can change.\n", "
\n", "\n", "\n", "---\n", "\n", "## Contents\n", "\n", "1. [Sagemaker Setup](#Sagemaker-Setup)\n", " 1. [Downloading Generated Candidates](#Downloading-Generated-Modules)\n", " 1. [SageMaker Autopilot Job and Amazon Simple Storage Service (Amazon S3) Configuration](#SageMaker-Autopilot-Job-and-Amazon-Simple-Storage-Service-(Amazon-S3)-Configuration)\n", "1. [Candidate Pipelines](#Candidate-Pipelines)\n", " 1. [Generated Candidates](#Generated-Candidates)\n", " 1. [Selected Candidates](#Selected-Candidates)\n", "1. [Executing the Candidate Pipelines](#Executing-the-Candidate-Pipelines)\n", " 1. [Run Data Transformation Steps](#Run-Data-Transformation-Steps)\n", " 1. [Multi Algorithm Hyperparameter Tuning](#Multi-Algorithm-Hyperparameter-Tuning)\n", "1. [Model Selection and Deployment](#Model-Selection-and-Deployment)\n", " 1. [Tuning Job Result Overview](#Tuning-Job-Result-Overview)\n", " 1. [Model Deployment](#Model-Deployment)\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sagemaker Setup\n", "\n", "Before you launch the SageMaker Autopilot jobs, we'll setup the environment for Amazon SageMaker\n", "- Check environment & dependencies.\n", "- Create a few helper objects/function to organize input/output data and SageMaker sessions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Minimal Environment Requirements**\n", "\n", "- Jupyter: Tested on `JupyterLab 1.0.6`, `jupyter_core 4.5.0` and `IPython 6.4.0`\n", "- Kernel: `conda_python3`\n", "- Dependencies required\n", " - `sagemaker-python-sdk>=2.40.0`\n", " - Use `!pip install sagemaker==2.40.0` to download this dependency.\n", " - Kernel may need to be restarted after download.\n", "- Expected Execution Role/permission\n", " - S3 access to the bucket that stores the notebook." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Downloading Generated Modules\n", "Download the generated data transformation modules and an SageMaker Autopilot helper module used by this notebook.\n", "Those artifacts will be downloaded to **automl-dm-1675608463-artifacts** folder." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!mkdir -p automl-dm-1675608463-artifacts\n", "!aws s3 sync s3://sagemaker-us-east-1-491783890788/autopilot/automl-dm-1675608463/sagemaker-automl-candidates/automl-dm-1675608463-pr-1-210c7900f5854fdc89ce01c59579c034fb883/generated_module automl-dm-1675608463-artifacts/generated_module --only-show-errors\n", "!aws s3 sync s3://sagemaker-us-east-1-491783890788/autopilot/automl-dm-1675608463/sagemaker-automl-candidates/automl-dm-1675608463-pr-1-210c7900f5854fdc89ce01c59579c034fb883/notebooks/sagemaker_automl automl-dm-1675608463-artifacts/sagemaker_automl --only-show-errors\n", "\n", "import sys\n", "sys.path.append(\"automl-dm-1675608463-artifacts\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### SageMaker Autopilot Job and Amazon Simple Storage Service (Amazon S3) Configuration\n", "\n", "The following configuration has been derived from the SageMaker Autopilot job. These items configure where this notebook will\n", "look for generated candidates, and where input and output data is stored on Amazon S3." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker_automl import uid, AutoMLLocalRunConfig\n", "\n", "# Where the preprocessed data from the existing AutoML job is stored\n", "BASE_AUTOML_JOB_NAME = 'automl-dm-1675608463'\n", "BASE_AUTOML_JOB_CONFIG = {\n", " 'automl_job_name': BASE_AUTOML_JOB_NAME,\n", " 'automl_output_s3_base_path': 's3://sagemaker-us-east-1-491783890788/autopilot/automl-dm-1675608463',\n", " 'data_transformer_image_repo_version': '2.5-1-cpu-py3',\n", " 'algo_image_repo_versions': {'xgboost': '1.3-1-cpu-py3'},\n", " 'algo_inference_image_repo_versions': {'xgboost': '1.3-1-cpu-py3'}\n", "}\n", "\n", "# Path conventions of the output data storage path from the local AutoML job run of this notebook\n", "LOCAL_AUTOML_JOB_NAME = 'automl-dm--notebook-run-{}'.format(uid())\n", "LOCAL_AUTOML_JOB_CONFIG = {\n", " 'local_automl_job_name': LOCAL_AUTOML_JOB_NAME,\n", " 'local_automl_job_output_s3_base_path': 's3://sagemaker-us-east-1-491783890788/autopilot/automl-dm-1675608463/{}'.format(LOCAL_AUTOML_JOB_NAME),\n", " 'data_processing_model_dir': 'data-processor-models',\n", " 'data_processing_transformed_output_dir': 'transformed-data',\n", " 'multi_algo_tuning_output_dir': 'multi-algo-tuning'\n", "}\n", "\n", "AUTOML_LOCAL_RUN_CONFIG = AutoMLLocalRunConfig(\n", " role='arn:aws:iam::491783890788:role/sagemaker-studio-vpc-firewall-us-east-1-sagemaker-execution-role',\n", " base_automl_job_config=BASE_AUTOML_JOB_CONFIG,\n", " local_automl_job_config=LOCAL_AUTOML_JOB_CONFIG,\n", " security_config={'EnableInterContainerTrafficEncryption': False, 'VpcConfig': {}})\n", "\n", "AUTOML_LOCAL_RUN_CONFIG.display()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Candidate Pipelines\n", "\n", "The `AutoMLLocalRunner` keeps track of selected candidates and automates many of the steps needed to execute feature engineering and tuning steps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker_automl import AutoMLInteractiveRunner, AutoMLLocalCandidate\n", "\n", "automl_interactive_runner = AutoMLInteractiveRunner(AUTOML_LOCAL_RUN_CONFIG)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Generated Candidates\n", "\n", "The SageMaker Autopilot Job has analyzed the dataset and has generated **3** machine learning\n", "pipeline(s) that use **1** algorithm(s). Each pipeline contains a set of feature transformers and an\n", "algorithm.\n", "\n", "
 💡 Available Knobs\n", "\n", "1. The resource configuration: instance type & count\n", "1. Which candidate pipelines to run: execute only the `select_candidate` cells below for the pipelines you want to keep\n", "1. The linked data transformation script can be reviewed and updated. Please refer to the [README.md](./automl-dm-1675608463-artifacts/generated_module/README.md) for detailed customization instructions.\n", "
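\n", "\n", "For orientation: all three strategies below start from TF-IDF text features before scaling (dpp1 also inserts a PCA step). As a rough standalone analogy in plain scikit-learn (NOT the generated code, which uses the `sagemaker-scikit-learn-extension` equivalents):\n", "\n", "```python\n", "# Rough analogy only: plain scikit-learn stand-ins for the generated steps.\n", "from sklearn.feature_extraction.text import TfidfVectorizer\n", "from sklearn.pipeline import Pipeline\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "analogy = Pipeline([\n", "    ('tfidf', TfidfVectorizer()),                # ~ MultiColumnTfidfVectorizer\n", "    ('scale', StandardScaler(with_mean=False)),  # ~ RobustStandardScaler (sparse-safe)\n", "])\n", "features = analogy.fit_transform(\n", "    ['I love this product', 'Worst purchase ever', 'It is okay'])\n", "print(features.shape)\n", "```\n", "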
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**[dpp0-xgboost](automl-dm-1675608463-artifacts/generated_module/candidate_data_processors/dpp0.py)**: This data transformation strategy first transforms 'text' features using [MultiColumnTfidfVectorizer](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/feature_extraction/text.py). It merges all the generated features and applies [RobustStandardScaler](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/preprocessing/data.py). The\n", "transformed data will be used to tune a *xgboost* model. Here is the definition:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "automl_interactive_runner.select_candidate({\n", " \"data_transformer\": {\n", " \"name\": \"dpp0\",\n", " \"training_resource_config\": {\n", " \"instance_type\": \"ml.m5.12xlarge\",\n", " \"instance_count\": 1,\n", " \"volume_size_in_gb\": 50\n", " },\n", " \"transform_resource_config\": {\n", " \"instance_type\": \"ml.m5.4xlarge\",\n", " \"instance_count\": 1,\n", " },\n", " \"transforms_label\": True,\n", " \"transformed_data_format\": \"application/x-recordio-protobuf\",\n", " \"sparse_encoding\": True\n", " },\n", " \"algorithm\": {\n", " \"name\": \"xgboost\",\n", " \"training_resource_config\": {\n", " \"instance_type\": \"ml.m5.12xlarge\",\n", " \"instance_count\": 1,\n", " },\n", " }\n", "})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**[dpp1-xgboost](automl-dm-1675608463-artifacts/generated_module/candidate_data_processors/dpp1.py)**: This data transformation strategy first transforms 'text' features using [MultiColumnTfidfVectorizer](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/feature_extraction/text.py). It merges all the generated features and applies [RobustPCA](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/decomposition/robust_pca.py) followed by [RobustStandardScaler](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/preprocessing/data.py). The\n", "transformed data will be used to tune a *xgboost* model. Here is the definition:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "automl_interactive_runner.select_candidate({\n", " \"data_transformer\": {\n", " \"name\": \"dpp1\",\n", " \"training_resource_config\": {\n", " \"instance_type\": \"ml.m5.12xlarge\",\n", " \"instance_count\": 1,\n", " \"volume_size_in_gb\": 50\n", " },\n", " \"transform_resource_config\": {\n", " \"instance_type\": \"ml.m5.4xlarge\",\n", " \"instance_count\": 1,\n", " },\n", " \"transforms_label\": True,\n", " \"transformed_data_format\": \"text/csv\",\n", " \"sparse_encoding\": False\n", " },\n", " \"algorithm\": {\n", " \"name\": \"xgboost\",\n", " \"training_resource_config\": {\n", " \"instance_type\": \"ml.m5.12xlarge\",\n", " \"instance_count\": 1,\n", " },\n", " }\n", "})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**[dpp2-xgboost](automl-dm-1675608463-artifacts/generated_module/candidate_data_processors/dpp2.py)**: This data transformation strategy first transforms 'text' features using [MultiColumnTfidfVectorizer](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/feature_extraction/text.py). 
It merges all the generated features and applies [RobustStandardScaler](https://github.com/aws/sagemaker-scikit-learn-extension/blob/master/src/sagemaker_sklearn_extension/preprocessing/data.py). The\n", "transformed data will be used to tune a *xgboost* model. Here is the definition:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "automl_interactive_runner.select_candidate({\n", " \"data_transformer\": {\n", " \"name\": \"dpp2\",\n", " \"training_resource_config\": {\n", " \"instance_type\": \"ml.m5.12xlarge\",\n", " \"instance_count\": 1,\n", " \"volume_size_in_gb\": 50\n", " },\n", " \"transform_resource_config\": {\n", " \"instance_type\": \"ml.m5.4xlarge\",\n", " \"instance_count\": 1,\n", " },\n", " \"transforms_label\": True,\n", " \"transformed_data_format\": \"application/x-recordio-protobuf\",\n", " \"sparse_encoding\": True\n", " },\n", " \"algorithm\": {\n", " \"name\": \"xgboost\",\n", " \"training_resource_config\": {\n", " \"instance_type\": \"ml.m5.12xlarge\",\n", " \"instance_count\": 1,\n", " },\n", " }\n", "})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Selected Candidates\n", "\n", "You have selected the following candidates (please run the cell below and click on the feature transformer links for details):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "automl_interactive_runner.display_candidates()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The feature engineering pipeline is defined by the generated trainable data transformer Python modules like [dpp0.py](automl-dm-1675608463-artifacts/generated_module/candidate_data_processors/dpp0.py), which have been downloaded to the local file system, and consists of two SageMaker jobs:\n", "\n", "1. A **training** job to train the data transformers\n", "2. A **batch transform** job to apply the trained transformation to the dataset and generate the algorithm-compatible data\n", "\n", "The transformers and their training pipeline are built using the open-source **[sagemaker-scikit-learn-container][]** and **[sagemaker-scikit-learn-extension][]**.\n", "\n", "[sagemaker-scikit-learn-container]: https://github.com/aws/sagemaker-scikit-learn-container\n", "[sagemaker-scikit-learn-extension]: https://github.com/aws/sagemaker-scikit-learn-extension" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Executing the Candidate Pipelines\n", "\n", "Each candidate pipeline consists of two steps: feature transformation and algorithm training.\n", "For efficiency, first execute the feature transformation step, which will generate a featurized dataset on S3\n", "for each pipeline.\n", "\n", "After each featurized dataset is prepared, execute a multi-algorithm tuning job that will run tuning jobs\n", "in parallel for each pipeline. This tuning job will execute training jobs to find the best set of\n", "hyperparameters for each pipeline, as well as find the overall best-performing pipeline.\n", "\n", "### Run Data Transformation Steps\n", "\n", "Now you are ready to start executing all data transformation steps. The cell below may take some time to finish;\n", "feel free to go grab a cup of coffee. To expedite the process, you can set `parallel_jobs` to a value of up to 10.\n", "Please check your account limits and request increases if needed before raising the number of jobs that run in parallel."
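, "\n", "\n", "For reference, the transformed datasets produced by the next cell land under the `transformed-data` prefix configured earlier; a quick sketch of the destination (composed from `LOCAL_AUTOML_JOB_CONFIG` above):\n", "\n", "```python\n", "# Where the transformed datasets will land on S3.\n", "transformed_prefix = '{}/{}'.format(\n", "    LOCAL_AUTOML_JOB_CONFIG['local_automl_job_output_s3_base_path'],\n", "    LOCAL_AUTOML_JOB_CONFIG['data_processing_transformed_output_dir'])\n", "print(transformed_prefix)\n", "```"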
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "automl_interactive_runner.fit_data_transformers(parallel_jobs=7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multi Algorithm Hyperparameter Tuning\n", "\n", "Now that the algorithm compatible transformed datasets are ready, you can start the multi-algorithm model tuning job\n", "to find the best predictive model. The following algorithm training job configuration for each\n", "algorithm is auto-generated by the AutoML Job as part of the recommendation.\n", "\n", "
 💡 Available Knobs\n", "\n", "1. Hyperparameter ranges\n", "2. Objective metrics\n", "3. Recommended static algorithm hyperparameters\n", "\n", "Please refer to [XGBoost tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost-tuning.html) and [Linear learner tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner-tuning.html) for detailed explanations of the parameters.\n", "
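\n", "\n", "For example, a hypothetical customization that narrows the `eta` search range; run it only after the ranges cell below has defined `ALGORITHM_TUNABLE_HYPERPARAMETER_RANGES`:\n", "\n", "```python\n", "# Hypothetical: narrow the learning-rate search before creating the tuner.\n", "from sagemaker.parameter import ContinuousParameter\n", "\n", "ALGORITHM_TUNABLE_HYPERPARAMETER_RANGES['xgboost']['eta'] = ContinuousParameter(\n", "    1e-2, 0.5, scaling_type='Logarithmic')\n", "```\n", "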
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The AutoML recommendation job has recommended the following hyperparameters, objectives and accuracy metrics for\n", "the algorithm and problem type:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ALGORITHM_OBJECTIVE_METRICS = {\n", " 'xgboost': 'validation:accuracy',\n", "}\n", "\n", "STATIC_HYPERPARAMETERS = {\n", " 'xgboost': {\n", " 'objective': 'multi:softprob',\n", " 'eval_metric': 'accuracy,f1,balanced_accuracy,precision_macro,recall_macro,mlogloss',\n", " 'num_class': 3,\n", " '_kfold': 5,\n", " },\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following tunable hyperparameters search ranges are recommended for the Multi-Algo tuning job:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.parameter import CategoricalParameter, ContinuousParameter, IntegerParameter\n", "\n", "ALGORITHM_TUNABLE_HYPERPARAMETER_RANGES = {\n", " 'xgboost': {\n", " 'num_round': IntegerParameter(64, 1024, scaling_type='Logarithmic'),\n", " 'max_depth': IntegerParameter(2, 8, scaling_type='Logarithmic'),\n", " 'eta': ContinuousParameter(1e-3, 1.0, scaling_type='Logarithmic'),\n", " 'gamma': ContinuousParameter(1e-6, 64.0, scaling_type='Logarithmic'),\n", " 'min_child_weight': ContinuousParameter(1e-6, 32.0, scaling_type='Logarithmic'),\n", " 'subsample': ContinuousParameter(0.5, 1.0, scaling_type='Linear'),\n", " 'colsample_bytree': ContinuousParameter(0.3, 1.0, scaling_type='Linear'),\n", " 'lambda': ContinuousParameter(1e-6, 2.0, scaling_type='Logarithmic'),\n", " 'alpha': ContinuousParameter(1e-6, 2.0, scaling_type='Logarithmic'),\n", " },\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Prepare Multi-Algorithm Tuner Input\n", "\n", "To use the multi-algorithm HPO tuner, prepare some inputs and parameters. Prepare a dictionary whose key is the name of the trained pipeline candidates and the values are respectively:\n", "\n", "1. Estimators for the recommended algorithm\n", "2. Hyperparameters search ranges\n", "3. Objective metrics" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_algo_tuning_parameters = automl_interactive_runner.prepare_multi_algo_parameters(\n", " objective_metrics=ALGORITHM_OBJECTIVE_METRICS,\n", " static_hyperparameters=STATIC_HYPERPARAMETERS,\n", " hyperparameters_search_ranges=ALGORITHM_TUNABLE_HYPERPARAMETER_RANGES)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below you prepare the inputs data to the multi-algo tuner:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "multi_algo_tuning_inputs = automl_interactive_runner.prepare_multi_algo_inputs()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create Multi-Algorithm Tuner\n", "\n", "With the recommended Hyperparameter ranges and the transformed dataset, create a multi-algorithm model tuning job\n", "that coordinates hyper parameter optimizations across the different possible algorithms and feature processing strategies.\n", "\n", "
 💡 Available Knobs\n", "\n", "1. Tuner strategy: [Bayesian](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Bayesian_optimization), [Random Search](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Random_search)\n", "2. Objective type: `Minimize`, `Maximize`, see [optimization](https://en.wikipedia.org/wiki/Mathematical_optimization)\n", "3. Max job size: the maximum number of training jobs the HPO job launches to run experiments. The default value is **250**,\n", " which matches the default of the managed flow.\n", "4. Parallelism: the number of jobs executed in parallel. A higher value expedites the tuning process.\n", " Please check your account limits and request increases if needed before raising the number of parallel jobs.\n", "5. Please use a different tuning job name if you re-run this cell after applying customizations.\n", "
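\n", "\n", "For example, a minimal sketch (reusing the `uid` helper imported earlier) that gives each re-run its own tuning job name:\n", "\n", "```python\n", "# Hypothetical: make the tuning job name unique across re-runs.\n", "from sagemaker_automl import uid\n", "\n", "base_tuning_job_name = '{}-tuning-{}'.format(\n", "    AUTOML_LOCAL_RUN_CONFIG.local_automl_job_name, uid())\n", "```\n", "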
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tuner import HyperparameterTuner\n", "\n", "base_tuning_job_name = \"{}-tuning\".format(AUTOML_LOCAL_RUN_CONFIG.local_automl_job_name)\n", "\n", "tuner = HyperparameterTuner.create(\n", " base_tuning_job_name=base_tuning_job_name,\n", " strategy='Bayesian',\n", " objective_type='Maximize',\n", " max_parallel_jobs=7,\n", " max_jobs=250,\n", " **multi_algo_tuning_parameters,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Run Multi-Algorithm Tuning\n", "\n", "Now you are ready to start running the **Multi-Algo Tuning** job. After the job is finished, store the tuning job name which you use to select models in the next section.\n", "The tuning process will take some time, please track the progress in the Amazon SageMaker Hyperparameter tuning jobs console." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import display, Markdown\n", "\n", "# Run tuning\n", "tuner.fit(inputs=multi_algo_tuning_inputs, include_cls_metadata=None)\n", "tuning_job_name = tuner.latest_tuning_job.name\n", "\n", "display(\n", " Markdown(f\"Tuning Job {tuning_job_name} started, please track the progress from [here](https://{AUTOML_LOCAL_RUN_CONFIG.region}.console.aws.amazon.com/sagemaker/home?region={AUTOML_LOCAL_RUN_CONFIG.region}#/hyper-tuning-jobs/{tuning_job_name})\"))\n", "\n", "# Wait for tuning job to finish\n", "tuner.wait()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Selection and Deployment\n", "\n", "This section guides you through the model selection process. Afterward, you construct an inference pipeline\n", "on Amazon SageMaker to host the best candidate.\n", "\n", "Because you executed the feature transformation and algorithm training in two separate steps, you now need to manually\n", "link each trained model with the feature transformer that it is associated with. When running a regular Amazon\n", "SageMaker Autopilot job, this will automatically be done for you." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tuning Job Result Overview\n", "\n", "The performance of each candidate pipeline can be viewed as a Pandas dataframe. For more interactive usage please\n", "refers to [model tuning monitor](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-monitor.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pprint import pprint\n", "from sagemaker.analytics import HyperparameterTuningJobAnalytics\n", "\n", "SAGEMAKER_SESSION = AUTOML_LOCAL_RUN_CONFIG.sagemaker_session\n", "SAGEMAKER_ROLE = AUTOML_LOCAL_RUN_CONFIG.role\n", "\n", "tuner_analytics = HyperparameterTuningJobAnalytics(\n", " tuner.latest_tuning_job.name, sagemaker_session=SAGEMAKER_SESSION)\n", "\n", "df_tuning_job_analytics = tuner_analytics.dataframe()\n", "\n", "# Sort the tuning job analytics by the final metrics value\n", "df_tuning_job_analytics.sort_values(\n", " by=['FinalObjectiveValue'],\n", " inplace=True,\n", " ascending=False if tuner.objective_type == \"Maximize\" else True)\n", "\n", "# Show detailed analytics for the top 20 models\n", "df_tuning_job_analytics.head(20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best training job can be selected as below:\n", "\n", "
 💡 Tip:\n", "You can select an alternative job by taking a value from the `TrainingJobName` column above and assigning it to `best_training_job` below.\n", "
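\n", "\n", "For example, after running the selection cell below, a hypothetical override that pins the second-best job from the sorted dataframe instead:\n", "\n", "```python\n", "# Hypothetical: pick a specific row instead of the tuner's best job.\n", "best_training_job = df_tuning_job_analytics.iloc[1]['TrainingJobName']\n", "```\n", "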
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "attached_tuner = HyperparameterTuner.attach(tuner.latest_tuning_job.name, sagemaker_session=SAGEMAKER_SESSION)\n", "best_training_job = attached_tuner.best_training_job()\n", "\n", "print(\"Best Multi Algorithm HPO training job name is {}\".format(best_training_job))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Linking Best Training Job with Feature Pipelines\n", "\n", "Finally, deploy the best training job to Amazon SageMaker along with its companion feature engineering models.\n", "At the end of the section, you get an endpoint that's ready to serve online inference or start batch transform jobs!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deploy a [PipelineModel](https://sagemaker.readthedocs.io/en/stable/pipeline.html) that has multiple containers of the following:\n", "\n", "1. Data Transformation Container: a container built from the model we selected and trained during the data transformer sections\n", "2. Algorithm Container: a container built from the trained model we selected above from the best HPO training job.\n", "3. Inverse Label Transformer Container: a container that converts numerical intermediate prediction value back to non-numerical label value.\n", "\n", "Get both best data transformation model and algorithm model from best training job and create an pipeline model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.estimator import Estimator\n", "from sagemaker import PipelineModel\n", "from sagemaker_automl import select_inference_output\n", "\n", "# Get a data transformation model from chosen candidate\n", "best_candidate = automl_interactive_runner.choose_candidate(df_tuning_job_analytics, best_training_job)\n", "best_data_transformer_model = best_candidate.get_data_transformer_model(role=SAGEMAKER_ROLE, sagemaker_session=SAGEMAKER_SESSION)\n", "\n", "# Our first data transformation container will always return recordio-protobuf format\n", "best_data_transformer_model.env[\"SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT\"] = 'application/x-recordio-protobuf'\n", "# Add environment variable for sparse encoding\n", "if best_candidate.data_transformer_step.sparse_encoding:\n", " best_data_transformer_model.env[\"AUTOML_SPARSE_ENCODE_RECORDIO_PROTOBUF\"] = '1'\n", "\n", "# Get a algo model from chosen training job of the candidate\n", "algo_estimator = Estimator.attach(best_training_job)\n", "best_algo_model = algo_estimator.create_model(**best_candidate.algo_step.get_inference_container_config())\n", "\n", "# Final pipeline model is composed of data transformation models and algo model and an\n", "# inverse label transform model if we need to transform the intermediates back to non-numerical value\n", "model_containers = [best_data_transformer_model, best_algo_model]\n", "if best_candidate.transforms_label:\n", " model_containers.append(best_candidate.get_data_transformer_model(\n", " transform_mode=\"inverse-label-transform\",\n", " role=SAGEMAKER_ROLE,\n", " sagemaker_session=SAGEMAKER_SESSION))\n", "\n", "# This model can emit response ['predicted_label', 'probability', 'labels', 'probabilities']. 
"\n", "model_containers = select_inference_output(\"MulticlassClassification\", model_containers, output_keys=['predicted_label'])\n", "\n", "\n", "pipeline_model = PipelineModel(\n", " name=\"AutoML-{}\".format(AUTOML_LOCAL_RUN_CONFIG.local_automl_job_name),\n", " role=SAGEMAKER_ROLE,\n", " models=model_containers,\n", " vpc_config=AUTOML_LOCAL_RUN_CONFIG.vpc_config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Deploying Best Pipeline\n", "\n", "
 💡 Available Knobs\n", "\n", "1. You can customize the initial instance count and instance type used to deploy this model.\n", "2. The endpoint name can be changed to avoid conflicts with existing endpoints.\n", "\n", "
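\n", "\n", "For example, a hypothetical variant of the deploy call below that uses an explicit, unique endpoint name:\n", "\n", "```python\n", "# Hypothetical: same deployment, but with a custom endpoint name.\n", "pipeline_model.deploy(initial_instance_count=1,\n", "                      instance_type='ml.m5.2xlarge',\n", "                      endpoint_name='automl-dm-sentiment-v2',\n", "                      wait=True)\n", "```\n", "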
\n", "\n", "Finally, deploy the model to SageMaker to make it functional." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline_model.deploy(initial_instance_count=1,\n", " instance_type='ml.m5.2xlarge',\n", " endpoint_name=pipeline_model.name,\n", " wait=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Congratulations! Now you could visit the sagemaker\n", "[endpoint console page](https://us-east-1.console.aws.amazon.com/sagemaker/home?region=us-east-1#/endpoints) to find the deployed endpoint (it'll take a few minutes to be in service).\n", "\n", "
\n", " To rerun this notebook, delete or change the name of your endpoint!
\n", "If you rerun this notebook, you'll run into an error on the last step because the endpoint already exists. You can either delete the endpoint from the endpoint console page or you can change the endpoint_name in the previous code block.\n", "
" ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" } }, "nbformat": 4, "nbformat_minor": 2 }