{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Training DonkeyCar model with SageMaker\n", "\n", "The **SageMaker Python SDK** makes it easy to train and deploy ML models. In this notebook we train a model from data collected from the robocar.\n", "\n", "## Setup the environment\n", "\n", "First we need to define a few variables that will be needed later. Here we specify a bucket to use and the role that will be used for working with SageMaker." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker import get_execution_role\n", "\n", "#Bucket with input data\n", "data_location = 's3://'\n", "\n", "#IAM execution role that gives SageMaker access to resources in your AWS account.\n", "#We can use the SageMaker Python SDK to get the role from our notebook environment. \n", "role = get_execution_role()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create the session \n", "\n", "The session remembers our connection parameters to SageMaker. We'll use it to perform all of our SageMaker operations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sagemaker as sage\n", "from datetime import datetime\n", "\n", "sess = sage.Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create an estimator and fit the model\n", "\n", "In order to use SageMaker to fit our algorithm, we'll create an Estimator that defines how to use the container to train. This includes the configuration we need to invoke SageMaker training:\n", "* The container name. This is constructed as in the shell commands above.\n", "* The role. 
As defined above.\n", "* The instance count: the number of machines to use for training.\n", "* The instance type: the type of machine to use for training.\n", "* The output path: where the model artifact will be written.\n", "* The session: the SageMaker session object that we defined above.\n", "\n", "Then we call fit() on the estimator to train on the data in the input folder of our bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "region = sess.boto_session.region_name\n", "image = '831212071815.dkr.ecr.{}.amazonaws.com/donkey:latest'.format(region)\n", "dt = datetime.now().isoformat()\n", "\n", "estimator = sage.estimator.Estimator(image,\n", " role, 1, 'ml.c5.2xlarge',\n", " output_path=\"{}/output/{}\".format(data_location, dt),\n", " sagemaker_session=sess)\n", "\n", "estimator.fit(\"{}/input/\".format(data_location))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download the model and deploy to the robocar\n", "\n", "Run the code below to print the S3 location of the trained model artifact. Download it onto the robocar and test driving with it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(estimator.model_data)" ] } ], "metadata": { "kernelspec": { "display_name": "conda_tensorflow_p36", "language": "python", "name": "conda_tensorflow_p36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }