{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "Kz-RRWaZwIE_" }, "source": [ "## Environment Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "XOLh456Zj-vi", "outputId": "956dc7dd-a4ad-451c-fd04-3fa1fa22327d" }, "outputs": [], "source": [ "!mkdir -p ~/.aws && cp /content/drive/MyDrive/AWS/684947_admin ~/.aws/credentials\n", "!chmod 600 ~/.aws/credentials\n", "!pip install -qq awscli boto3\n", "!aws sts get-caller-identity" ] }, { "cell_type": "markdown", "metadata": { "id": "Yk0iTNpAnUVd" }, "source": [ "### Create S3 Bucket and Clone Files" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "VxuTpdB0SvZO", "outputId": "f4da58ed-0ff1-41d1-c86f-5bd3abcf5d5d" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "env: BUCKET_NAME=wys-glueworkshop\n" ] } ], "source": [ "BUCKET_NAME = \"wys-glueworkshop\"\n", "%env BUCKET_NAME=$BUCKET_NAME" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7JjrtQoQmHUy" }, "outputs": [], "source": [ "!aws s3 mb s3://$BUCKET_NAME\n", "!aws s3api put-public-access-block --bucket $BUCKET_NAME \\\n", " --public-access-block-configuration \"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true\"" ] }, { "cell_type": "markdown", "metadata": { "id": "e98y68PLqJ81" }, "source": [ "Note: We will use data from a public COVID-19 dataset curated by AWS. If you are interested in learning more about the dataset, read this [blog post](https://aws.amazon.com/blogs/big-data/a-public-data-lake-for-analysis-of-covid-19-data/) for more information." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UKu-c_psoivx" }, "outputs": [], "source": [ "!curl 'https://static.us-east-1.prod.workshops.aws/public/058af3a5-469d-4a2a-9619-1190c6a970ec/static/download/glue-workshop.zip' --output glue-workshop.zip\n", "!unzip glue-workshop.zip\n", "\n", "!mkdir glue-workshop/library\n", "!mkdir glue-workshop/output\n", "\n", "!git clone https://github.com/jefftune/pycountry-convert.git\n", "%cd pycountry-convert\n", "!zip -r pycountry_convert.zip pycountry_convert/\n", "%cd ..\n", "!mv pycountry-convert/pycountry_convert.zip glue-workshop/library/\n", "\n", "!aws s3 sync glue-workshop/code/ s3://$BUCKET_NAME/script/\n", "!aws s3 sync glue-workshop/data/ s3://$BUCKET_NAME/input/\n", "!aws s3 sync glue-workshop/library/ s3://$BUCKET_NAME/library/\n", "!aws s3 sync s3://covid19-lake/rearc-covid-19-testing-data/json/states_daily/ s3://$BUCKET_NAME/input/lab5/json/" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "NM8Jebb5pr4S", "outputId": "fded9db5-0f4b-451a-c861-c5cf74d81c73" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " PRE input/\n", " PRE library/\n", " PRE script/\n" ] } ], "source": [ "!aws s3 ls $BUCKET_NAME/" ] }, { "cell_type": "markdown", "metadata": { "id": "x9wVIAtZq-Sg" }, "source": [ "### Deploy CloudFormation Template" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xwEuP6lTwziw" }, "outputs": [], "source": [ "%%writefile NoVPC.yaml\n", "AWSTemplateFormatVersion: 2010-09-09\n", "Parameters:\n", " UniquePostfix:\n", " Type: String\n", " Default: glueworkshop\n", " Description: 'Enter a unique postfix value, must be all lower cases!'\n", " S3Bucket:\n", " Type: String\n", " Default: s3://\n", " 
Description: 'enter the S3 bucket path for workshop'\n", "Resources:\n", " AWSGlueServiceRole:\n", " Type: 'AWS::IAM::Role'\n", " Properties:\n", " RoleName: !Join \n", " - '' \n", " - - AWSGlueServiceRole- \n", " - !Ref UniquePostfix\n", " AssumeRolePolicyDocument:\n", " Version: 2012-10-17\n", " Statement:\n", " - Effect: Allow\n", " Principal:\n", " Service: glue.amazonaws.com\n", " Action: 'sts:AssumeRole'\n", " ManagedPolicyArns:\n", " - 'arn:aws:iam::aws:policy/AmazonS3FullAccess'\n", " - 'arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole'\n", " - 'arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess'\n", " - 'arn:aws:iam::aws:policy/AmazonKinesisFullAccess'\n", " Policies:\n", " - PolicyName: \"iam-passrole\"\n", " PolicyDocument:\n", " Version: 2012-10-17\n", " Statement:\n", " - Effect: Allow\n", " Action: 'iam:PassRole'\n", " Resource: !Sub 'arn:aws:iam::${AWS::AccountId}:role/AWSGlueServiceRole-${UniquePostfix}'\n", " AWSGlueServiceSageMakerNotebookRole:\n", " Type: 'AWS::IAM::Role'\n", " Properties:\n", " RoleName: !Join \n", " - ''\n", " - - AWSGlueServiceSageMakerNotebookRole-\n", " - !Ref UniquePostfix\n", " AssumeRolePolicyDocument:\n", " Version: 2012-10-17\n", " Statement:\n", " - Effect: Allow\n", " Principal:\n", " Service: sagemaker.amazonaws.com\n", " Action: 'sts:AssumeRole'\n", " ManagedPolicyArns:\n", " - 'arn:aws:iam::aws:policy/AmazonS3FullAccess'\n", " - 'arn:aws:iam::aws:policy/service-role/AWSGlueServiceNotebookRole'\n", " - 'arn:aws:iam::aws:policy/AmazonSageMakerFullAccess'\n", " - 'arn:aws:iam::aws:policy/CloudWatchLogsFullAccess' \n", " AWSGlueDataBrewServiceRole:\n", " Type: 'AWS::IAM::Role'\n", " Properties:\n", " RoleName: !Join \n", " - ''\n", " - - AWSGlueDataBrewServiceRole-\n", " - !Ref UniquePostfix\n", " AssumeRolePolicyDocument:\n", " Version: 2012-10-17\n", " Statement:\n", " - Effect: Allow\n", " Principal:\n", " Service: databrew.amazonaws.com\n", " Action: 'sts:AssumeRole'\n", " ManagedPolicyArns:\n", " - 'arn:aws:iam::aws:policy/AmazonS3FullAccess'\n", " - 'arn:aws:iam::aws:policy/service-role/AWSGlueDataBrewServiceRole'\n", " KinesisStream: \n", " Type: AWS::Kinesis::Stream \n", " Properties: \n", " Name: !Ref UniquePostfix \n", " RetentionPeriodHours: 24 \n", " ShardCount: 2\n", " GlueCatalogDatabase:\n", " Type: AWS::Glue::Database\n", " Properties:\n", " CatalogId: !Ref AWS::AccountId\n", " DatabaseInput:\n", " Name: !Join \n", " - '' \n", " - - !Ref UniquePostfix \n", " - -cloudformation \n", " Description: Database to tables for workshop\n", " JsonStreamingTable:\n", " DependsOn: GlueCatalogDatabase\n", " Type: AWS::Glue::Table\n", " Properties:\n", " CatalogId: !Ref AWS::AccountId\n", " DatabaseName: !Ref GlueCatalogDatabase\n", " TableInput:\n", " Name: json-streaming-table\n", " Description: Define schema for streaming json\n", " TableType: EXTERNAL_TABLE\n", " Parameters: { \"classification\": \"json\" }\n", " StorageDescriptor:\n", " OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\n", " InputFormat: org.apache.hadoop.mapred.TextInputFormat\n", " Columns:\n", " - Name: \"uuid\"\n", " Type: bigint\n", " - Name: \"country\"\n", " Type: string\n", " - Name: \"item type\"\n", " Type: string\n", " - Name: \"sales channel\"\n", " Type: string \n", " - Name: \"order priority\"\n", " Type: string\n", " - Name: \"order date\"\n", " Type: string\n", " - Name: \"region\"\n", " Type: string\n", " - Name: \"ship date\"\n", " Type: string\n", " - Name: \"units sold\"\n", " Type: int\n", " - Name: \"unit price\"\n", " 
Type: decimal\n", " - Name: \"unit cost\"\n", " Type: decimal\n", " - Name: \"total revenue\"\n", " Type: decimal\n", " - Name: \"total cost\"\n", " Type: decimal\n", " - Name: \"total profit\"\n", " Type: decimal \n", " Parameters: {\"endpointUrl\": \"https://kinesis.us-east-2.amazonaws.com\", \"streamName\": !Ref UniquePostfix,\"typeOfData\": \"kinesis\"}\n", " SerdeInfo:\n", " Parameters: {\"paths\": \"Country,Item Type,Order Date,Order Priority,Region,Sales Channel,Ship Date,Total Cost,Total Profit,Total Revenue,Unit Cost,Unit Price,Units Sold,uuid\"}\n", " SerializationLibrary: org.openx.data.jsonserde.JsonSerDe\n", " JsonStaticTable:\n", " DependsOn: GlueCatalogDatabase\n", " Type: AWS::Glue::Table\n", " Properties:\n", " CatalogId: !Ref AWS::AccountId\n", " DatabaseName: !Ref GlueCatalogDatabase\n", " TableInput:\n", " Name: json-static-table\n", " Description: Define schema for static json\n", " TableType: EXTERNAL_TABLE\n", " Parameters: { \"classification\": \"json\" }\n", " StorageDescriptor:\n", " OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\n", " InputFormat: org.apache.hadoop.mapred.TextInputFormat\n", " Columns:\n", " - Name: \"uuid\"\n", " Type: bigint\n", " - Name: \"country\"\n", " Type: string\n", " - Name: \"item type\"\n", " Type: string\n", " - Name: \"sales channel\"\n", " Type: string \n", " - Name: \"order priority\"\n", " Type: string\n", " - Name: \"order date\"\n", " Type: string\n", " - Name: \"region\"\n", " Type: string\n", " - Name: \"ship date\"\n", " Type: string\n", " - Name: \"units sold\"\n", " Type: int\n", " - Name: \"unit price\"\n", " Type: decimal\n", " - Name: \"unit cost\"\n", " Type: decimal\n", " - Name: \"total revenue\"\n", " Type: decimal\n", " - Name: \"total cost\"\n", " Type: decimal\n", " - Name: \"total profit\"\n", " Type: decimal \n", " Location: !Join \n", " - '' \n", " - - !Ref S3Bucket \n", " - input/lab4/json/\n", " SerdeInfo:\n", " Parameters: {\"paths\": \"Country,Item Type,Order Date,Order Priority,Region,Sales Channel,Ship Date,Total Cost,Total Profit,Total Revenue,Unit Cost,Unit Price,Units Sold,uuid\"}\n", " SerializationLibrary: org.openx.data.jsonserde.JsonSerDe\n", " GlueDevEndpoint:\n", " Type: 'AWS::Glue::DevEndpoint'\n", " Properties: \n", " EndpointName: !Join \n", " - ''\n", " - - GlueSageMakerNotebook-\n", " - !Ref UniquePostfix\n", " Arguments: \n", " GLUE_PYTHON_VERSION: 3\n", " GlueVersion: 1.0\n", " NumberOfWorkers: 4\n", " WorkerType: Standard\n", " RoleArn: !GetAtt AWSGlueServiceRole.Arn\n", " ExtraPythonLibsS3Path: !Join \n", " - '' \n", " - - !Ref S3Bucket \n", " - 'library/pycountry_convert.zip'\n", " DependsOn: AWSGlueServiceRole\n", "Outputs:\n", " EndpointName:\n", " Value: !Ref GlueDevEndpoint\n", " Description: Endpoint created for Glue Workshop Lab." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1oMZqG6nqZBq" }, "outputs": [], "source": [ "!aws cloudformation create-stack --stack-name glueworkshop \\\n", " --template-body file://NoVPC.yaml \\\n", " --capabilities CAPABILITY_NAMED_IAM \\\n", " --parameters \\\n", " ParameterKey=UniquePostfix,ParameterValue=glueworkshop \\\n", " ParameterKey=S3Bucket,ParameterValue=s3://$BUCKET_NAME/" ] }, { "cell_type": "markdown", "metadata": { "id": "Xr8cS742wGDB" }, "source": [ "## Glue Data Catalog" ] }, { "cell_type": "markdown", "metadata": { "id": "LxnvTTqswjLD" }, "source": [ "We will configure an AWS Glue crawler to scan and create metadata definitions in the Glue Data Catalog." 
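, "\n", "\n", "The console walkthrough in the next sections is the workshop's intended path. If you prefer to script the same setup, a rough boto3 sketch of the catalog database and the CSV crawler would look like the following (the names are the workshop defaults; replace the bucket with your own):\n", "\n", "```python\n", "# Sketch: create the Data Catalog database and the lab1 CSV crawler with boto3.\n", "import boto3\n", "\n", "glue = boto3.client(\"glue\")\n", "glue.create_database(DatabaseInput={\"Name\": \"glueworkshop\"})\n", "glue.create_crawler(\n", "    Name=\"lab1-csv\",\n", "    Role=\"AWSGlueServiceRole-glueworkshop\",\n", "    DatabaseName=\"glueworkshop\",\n", "    Targets={\"S3Targets\": [{\"Path\": \"s3://wys-glueworkshop/input/lab1/csv/\"}]},  # your bucket name here\n", ")\n", "glue.start_crawler(Name=\"lab1-csv\")\n", "```"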
] }, { "cell_type": "markdown", "metadata": { "id": "PAFmcvmwxnUw" }, "source": [ "### Create Data Catalog Database" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "QhvPxW4jwHAk", "outputId": "9e6d9aa3-026d-4ee3-f803-1727a9f14fec" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "uuid,Country,Item Type,Sales Channel,Order Priority,Order Date,Region,Ship Date,Units Sold,Unit Price,Unit Cost,Total Revenue,Total Cost,Total Profit\r\n", "535113847,Azerbaijan,Snacks,Online,C,10/8/14,Middle East and North Africa,10/23/14,934,152.58,97.44,142509.72,91008.96,51500.76\r\n", "874708545,Panama,Cosmetics,Offline,L,2/22/15,Central America and the Caribbean,2/27/15,4551,437.2,263.33,1989697.2,1198414.83,791282.37\r\n", "854349935,Sao Tome and Principe,Fruits,Offline,M,12/9/15,Sub-Saharan Africa,1/18/16,9986,9.33,6.92,93169.38,69103.12,24066.26\r\n", "892836844,Sao Tome and Principe,Personal Care,Online,M,9/17/14,Sub-Saharan Africa,10/12/14,9118,81.73,56.67,745214.14,516717.06,228497.08\r\n", "129280602,Belize,Household,Offline,H,2/4/10,Central America and the Caribbean,3/5/10,5858,668.27,502.54,3914725.66,2943879.32,970846.34\r\n", "473105037,Denmark,Clothes,Online,C,2/20/13,Europe,2/28/13,1149,109.28,35.84,125562.72,41180.16,84382.56\r\n", "754046475,Germany,Cosmetics,Offline,M,3/31/13,Europe,5/3/13,7964,437.2,263.33,3481860.8,2097160.12,1384700.68\r\n", "772153747,Turkey,Fruits,Online,C,3/26/12,Middle East and North Africa,4/7/12,6307,9.33,6.92,58844.31,43644.44,15199.87\r\n", "847788178,United Kingdom,Snacks,Online,H,12/29/12,Europe,1/15/13,8217,152.58,97.44,1253749.86,800664.48,453085.38\r\n" ] } ], "source": [ "!head glue-workshop/data/lab1/csv/sample.csv" ] }, { "cell_type": "markdown", "metadata": { "id": "ju4dwcrTxiHM" }, "source": [ "Go to the AWS Glue console , click Databases on the left. You should see a database with name glueworkshop-cloudformation. This was created by the CloudFormation template we launched during workshop setup and contains two pre-defined tables that we will use later in Glue streaming lab.\n", "\n", "Create another database with name glueworkshop by clicking Add Database and then clicking Create." ] }, { "cell_type": "markdown", "metadata": { "id": "V7byJbG9x8kE" }, "source": [ "### Create Data Crawler" ] }, { "cell_type": "markdown", "metadata": { "id": "DWWC7PFfx_29" }, "source": [ "We will create 2 crawlers to crawl CSV and JSON folders." ] }, { "cell_type": "markdown", "metadata": { "id": "GdQENrifyVXB" }, "source": [ "**Add Crawler for CSV folder**\n", "\n", "1. Click Crawlers on the left.\n", "1. Click Add Crawler.\n", "1. Provide a name for the new Crawler such as lab1-csv, click Next.\n", "1. On the Crawler source type page keep default values and click Next.\n", "1. On the Data Store page, under Include path, pick s3://${BUCKET_NAME}/input/lab1/csv/. Make sure you pick the csv folder rather than the file inside the folder, and then click Next.\n", "1. Click Next on the Add another data store page.\n", "1. Click Choose an existing IAM role and pick the role AWSGlueServiceRole-glueworkshop then click Next.\n", "1. Click Next on the \"Create a schedule for this crawler\" page.\n", "1. On the output page choose glueworkshop from the Database dropdown list then click Next.\n", "1. Click Finish." ] }, { "cell_type": "markdown", "metadata": { "id": "YWnLSdf5x6j8" }, "source": [ "**Add Crawler for JSON folder**\n", "\n", "1. Click Crawlers on the left.\n", "2. 
Click Add Crawler.\n", "3. Provide a name for the new Crawler such as covid-testing, click Next.\n", "4. On the Crawler source type page keep default values and click Next.\n", "5. On the Data Store page, under Include path, pick s3://${BUCKET_NAME}/input/lab5/json/. Make sure you pick the json folder rather than the file inside the folder, and then click Next.\n", "6. Click Next on the Add another data store page.\n", "7. Click Choose an existing IAM role and pick the role AWSGlueServiceRole-glueworkshop then click Next.\n", "8. Click Next on the \"Create a schedule for this crawler\" page.\n", "9. On the output page choose glueworkshop from the Database dropdown list then click Next.\n", "10. Click Finish." ] }, { "cell_type": "markdown", "metadata": { "id": "pCcOQu1czjoH" }, "source": [ "### Crawl Data in Catalog Table" ] }, { "cell_type": "markdown", "metadata": { "id": "RqyotLe1zvWH" }, "source": [ "- Once we have created both crawlers, click the check box next to each and choose to run the crawler by clicking the Run crawler button at the top of the page. It will take a minute or two for each crawler to run.\n", "- In our use case, CSV and JSON classifiers will be used to scan the files.\n", "- Once the crawlers finish running, you can see the results by clicking Tables on the left of the page. You should see 2 new tables that were created by the crawlers - csv and json.\n", "- Click on table csv and you will see the table schema automatically generated by the crawler based on the csv file.\n", "- Click on table json and you will see the table schema automatically generated by the crawler based on the json file.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "Qyv8pcu71dUC" }, "source": [ "## Setup Glue development environment\n", "\n", "AWS Glue provides multiple options to develop and test Spark code. Data engineers and data scientists can use tools of their choice to author Glue ETL scripts before deploying them to production. Data scientists can continue to work with Sagemaker notebooks connected to Glue Dev Endpoint, others can use Glue Job Notebooks to quickly launch and use jupyter-based fully-managed notebooks directly in browser. If you prefer to work locally, you can use Glue interactive sessions." ] }, { "cell_type": "markdown", "metadata": { "id": "1loHOfbB1nld" }, "source": [ "### Use Glue Studio Notebook" ] }, { "cell_type": "markdown", "metadata": { "id": "TvKraOEl2BtF" }, "source": [ "AWS Glue Studio Job Notebooks allows you to interactively author extract-transform-and-load (ETL) jobs in a notebook interface based on Jupyter Notebooks. AWS Glue Studio Job Notebooks requires minimal setup so developers can get started quickly, and feature one-click conversion of notebooks into AWS Glue data integration jobs. Notebooks also support live data integration, fast startup times, and built-in cost management. In this module you will learn how to create, configure and use AWS Glue Studio Job Notebooks to develop code that will be used as an independent Glue Job." ] }, { "cell_type": "markdown", "metadata": { "id": "RtCEDH2q2KUt" }, "source": [ "**Create Glue Job Notebook**\n", "\n", "1. Go to Glue Studio Jobs and Choose “Jupyter Notebook” among the other options to Create Job.\n", "2. Choose “Create a new notebook from scratch”. Then click Create button.\n", "3. Give your notebook a name and choose Glue Service IAM role and click Start notebook job and wait for notebook to load.\n", "4. 
Before we start coding, let’s change some of the default configurations that the Glue notebook was initialized with. Insert three new cells after the Available Magics markdown cell:\n", "    1. Run `%idle_timeout 30` to ensure your session will automatically stop after 30 minutes of inactivity so you don't need to pay for idle resources.\n", "    2. Run `%number_of_workers 2` to reduce the number of active workers.\n", "    3. Run `%extra_py_files \"s3://${BUCKET_NAME}/library/pycountry_convert.zip\"` - this will load the third-party Python library that we will use in later modules.\n", "5. Now it is time to start a Glue session. Click on the cell with the Glue initialization code and press the Run the selected cells control.\n", "6. Wait until you see the message: Session has been created. Now insert a new cell and run the %status magic to review the Glue notebook configuration and ensure that your changes were applied.\n", "7. Now you are ready to start developing some Spark code."
] }, { "cell_type": "markdown", "metadata": { "id": "0VJJ3tXAOxvM" }, "source": [ "## Glue ETL Job" ] }, { "cell_type": "markdown", "metadata": { "id": "vm_9LHUoOzPs" }, "source": [ "We will see how you can use the development environment we just set up to author and test ETL code. We will package and deploy the code we created to AWS Glue and execute it as a Glue job. Then we will use Glue Triggers and a Workflow to manage job executions." ] }, { "cell_type": "markdown", "metadata": { "id": "eZ2iwDZDPBWR" }, "source": [ "### Develop Code in Notebook" ] }, { "cell_type": "markdown", "metadata": { "id": "iRIE-SVoPF30" }, "source": [ "We will now write some PySpark code. For each step you will need to copy the code block into a notebook cell and then click Run.\n", "\n", "The first code we will write is some boilerplate imports that will generally be included at the start of every Spark/Glue job, followed by an import statement for the third-party library." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8-83oBDy-O8B" }, "outputs": [], "source": [ "from pyspark.sql.functions import udf, col\n", "from pyspark.sql.types import IntegerType, StringType\n", "from pyspark import SparkContext\n", "from pyspark.sql import SQLContext\n", "\n", "from datetime import datetime\n", "from pycountry_convert import (\n", "    convert_country_alpha2_to_country_name,\n", "    convert_country_alpha2_to_continent,\n", "    convert_country_name_to_country_alpha2,\n", "    convert_country_alpha3_to_country_alpha2,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "K_uk_wLYPnhr" }, "source": [ "We will define a UDF (user-defined function) to use for processing a Spark dataframe. UDFs allow a developer to extend the standard Spark functionality using Python code. To do that, your code needs to be in the form of a UDF lambda. The code below creates a Spark UDF udf_get_country_code2 to convert a country name into a two-letter code." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hBSYpwEUPnt0" }, "outputs": [], "source": [ "def get_country_code2(country_name):\n", "    country_code2 = 'US'\n", "    try:\n", "        country_code2 = convert_country_name_to_country_alpha2(country_name)\n", "    except KeyError:\n", "        country_code2 = ''\n", "    return country_code2\n", "\n", "\n", "udf_get_country_code2 = udf(lambda z: get_country_code2(z), StringType())" ] }, { "cell_type": "markdown", "metadata": { "id": "DxMQ87BVPtLe" }, "source": [ "Load the data from S3 into a Spark dataframe (named sparkDF), then check the schema of the Spark DataFrame."
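, "\n", "\n", "The next cell uses inferSchema, which makes Spark take an extra pass over the data to guess the column types. As an optional alternative (a sketch, not part of the workshop script), you could supply an explicit schema built from the header of sample.csv that we looked at earlier:\n", "\n", "```python\n", "# Optional sketch: an explicit schema avoids the extra pass that inferSchema triggers.\n", "# Column names follow the header of sample.csv shown earlier.\n", "from pyspark.sql.types import (StructType, StructField, LongType,\n", "                               IntegerType, DoubleType, StringType)\n", "\n", "sample_schema = StructType(\n", "    [StructField(\"uuid\", LongType(), True)]\n", "    + [StructField(name, StringType(), True) for name in\n", "       [\"Country\", \"Item Type\", \"Sales Channel\", \"Order Priority\",\n", "        \"Order Date\", \"Region\", \"Ship Date\"]]\n", "    + [StructField(\"Units Sold\", IntegerType(), True)]\n", "    + [StructField(name, DoubleType(), True) for name in\n", "       [\"Unit Price\", \"Unit Cost\", \"Total Revenue\", \"Total Cost\", \"Total Profit\"]]\n", ")\n", "\n", "# Alternative to the next cell:\n", "# sparkDF = spark.read.load(\"s3://${BUCKET_NAME}/input/lab2/sample.csv\",\n", "#                           format=\"csv\", sep=\",\", header=\"true\", schema=sample_schema)\n", "```"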
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GFBcz1ztPtXE" }, "outputs": [], "source": [ "sparkDF = spark.read.load(\"s3://${BUCKET_NAME}/input/lab2/sample.csv\", \n", " format=\"csv\", \n", " sep=\",\", \n", " inferSchema=\"true\",\n", " header=\"true\")" ] }, { "cell_type": "markdown", "metadata": { "id": "RndqwzI1Py-7" }, "source": [ "Now let's look at the data loaded by Spark. Compare this with the data you examined earlier and see if the schema inferred by Spark is the same as you what you saw earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "g_SHtM36P2WD" }, "outputs": [], "source": [ "sparkDF.printSchema()" ] }, { "cell_type": "markdown", "metadata": { "id": "e62tZjChQJ-5" }, "source": [ "Next we will create a new dataframe that includes a column created using the UDF we created previously." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mbaTqLfTQKRz" }, "outputs": [], "source": [ "new_df = sparkDF.withColumn('country_code_2', udf_get_country_code2(col(\"Country\")))\n", "new_df.printSchema()" ] }, { "cell_type": "markdown", "metadata": { "id": "wHSMksdaQSzZ" }, "source": [ "Let's take a look at the data in this new dataframe - notice the new column country_code_2. This contains two-letter country codes that were determined based on the Country column." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "s_q3aSpUQTQR" }, "outputs": [], "source": [ "new_df.show(10)" ] }, { "cell_type": "markdown", "metadata": { "id": "jn9I81wrQm2s" }, "source": [ "So far, we have been running standard Spark code. Now, we will try some Glue-flavored code. Remember the Glue Data Catalog tables we created earlier? We will now load them into a Glue dynamic frame. After the data is loaded into a Glue dynamic frame, compare the schema it presented with the schema stored in the Glue Data Catalog table.\n", "\n", "Glue dynamic frames differ from Spark dataframes because they have a flexible schema definition - hence the name dynamic. This enables each record to have a different schema which is computed on-the-fly. This is especially useful in handling messy data.\n", "\n", "Notice here we don't need to specify an S3 location - this is because the Glue Data Catalog knows where the data lives thanks to the crawler we configured and ran earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "q502rBDgQywf" }, "outputs": [], "source": [ "import sys\n", "from awsglue.transforms import *\n", "from awsglue.utils import getResolvedOptions\n", "from pyspark.context import SparkContext\n", "from awsglue.context import GlueContext\n", "from awsglue.job import Job\n", "from awsglue.dynamicframe import DynamicFrame\n", "\n", "glueContext = GlueContext(SparkContext.getOrCreate())\n", "\n", "dynaFrame = glueContext.create_dynamic_frame.from_catalog(database=\"glueworkshop\", table_name=\"csv\")\n", "dynaFrame.printSchema()" ] }, { "cell_type": "markdown", "metadata": { "id": "7dtIFwa9Qv26" }, "source": [ "Just as with the Spark dataframe, we can view the data in the Glue dynamic frame by calling the toDF function on it and then using the standard Spark show function." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1Cpf-Dp4Qphx" }, "outputs": [], "source": [ "dynaFrame.toDF().show(10)" ] }, { "cell_type": "markdown", "metadata": { "id": "MCBnPX0ARCES" }, "source": [ "### Deploy Glue ETL Job" ] }, { "cell_type": "markdown", "metadata": { "id": "BsXqykgMRDxu" }, "source": [ "Now we will package together the code snippets we have been testing and exploring on their own and create a Spark script for Glue. The code below is the combined and cleaned-up code from previous sections that we had run in the notebook. It is standard Spark code - not using anything Glue-specific except for the imports at the beginning." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ENwOCL1pRDco" }, "outputs": [], "source": [ "import sys\n", "from awsglue.transforms import *\n", "from awsglue.utils import getResolvedOptions\n", "from awsglue.context import GlueContext\n", "from awsglue.job import Job\n", "\n", "from pyspark.sql.functions import udf, col\n", "from pyspark.sql.types import IntegerType, StringType\n", "from pyspark.sql import SQLContext\n", "from pyspark.context import SparkContext\n", "\n", "from datetime import datetime\n", "from pycountry_convert import (\n", " convert_country_alpha2_to_country_name,\n", " convert_country_alpha2_to_continent,\n", " convert_country_name_to_country_alpha2,\n", " convert_country_alpha3_to_country_alpha2,\n", ")\n", "\n", "\n", "def get_country_code2(country_name):\n", " country_code2 = 'US'\n", " try:\n", " country_code2 = convert_country_name_to_country_alpha2(country_name)\n", " except KeyError:\n", " country_code2 = ''\n", " return country_code2\n", "\n", "\n", "udf_get_country_code2 = udf(lambda z: get_country_code2(z), StringType())\n", "\n", "## @params: [JOB_NAME]\n", "args = getResolvedOptions(sys.argv, ['JOB_NAME', 's3_bucket'])\n", "\n", "s3_bucket = args['s3_bucket']\n", "job_time_string = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n", "\n", "sc = SparkContext()\n", "glueContext = GlueContext(sc)\n", "spark = glueContext.spark_session\n", "job = Job(glueContext)\n", "job.init(args['JOB_NAME'], args)\n", "\n", "df = spark.read.load(s3_bucket + \"input/lab2/sample.csv\", \n", " format=\"csv\", \n", " sep=\",\", \n", " inferSchema=\"true\", \n", " header=\"true\")\n", "new_df = df.withColumn('country_code_2', udf_get_country_code2(col(\"Country\")))\n", "new_df.write.csv(s3_bucket + \"/output/lab3/notebook/\" + job_time_string + \"/\")\n", "\n", "job.commit()" ] }, { "cell_type": "markdown", "metadata": { "id": "Qf1l3Le5RKZh" }, "source": [ "Run the following command to create a Glue ETL job glueworkshop-lab3-etl-job with the same Spark code we created earlier which is stored in s3://${BUCKET_NAME}/script/lab3/spark.py." 
] }, { "cell_type": "code", "execution_count": 54, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "gGp0NpN7RP6H", "outputId": "563df190-a253-4df6-bff2-1760ebf821bb" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " \"Name\": \"glueworkshop-lab3-etl-job\"\n", "}\n" ] } ], "source": [ "!aws glue create-job \\\n", " --name glueworkshop-lab3-etl-job \\\n", " --role AWSGlueServiceRole-glueworkshop \\\n", " --command \"Name=glueetl,ScriptLocation=s3://${BUCKET_NAME}/script/lab3/spark.py,PythonVersion=3\" \\\n", " --glue-version '2.0' \\\n", " --default-arguments \"{\\\"--extra-py-files\\\": \\\"s3://${BUCKET_NAME}/library/pycountry_convert.zip\\\", \\\n", " \\\"--s3_bucket\\\": \\\"s3://${BUCKET_NAME}/\\\" }\"" ] }, { "cell_type": "markdown", "metadata": { "id": "Rxa7pohvTGAM" }, "source": [ "### Run Glue ETL Job" ] }, { "cell_type": "markdown", "metadata": { "id": "jt4CquXmTvVN" }, "source": [ "1. Go to the Glue Job console and explore the Job you created. Explore the Script and Job details section.\n", "2. Click on the Run button to run the job.\n", "3. Go to the Runs section to track the Job. You can check the job execution status and log by clicking on the highlighted Log hyperlink. It will bring you to the CloudWatch console and show a detailed log. Wait for the Job to finish.\n", "4. Once the job finishes, you can go to the S3 console and to your s3://${BUCKET_NAME}/output/lab3/ folder. You should see a new folder created with recent timestamp. You can download these files to your local environment using following command: `aws s3 cp s3://${BUCKET_NAME}/output/ glue-workshop/output --recursive`" ] }, { "cell_type": "code", "execution_count": 55, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "LVAFLqKxTFkD", "outputId": "74553a4c-536b-4a29-d189-89f5497cbfd3" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "download: s3://wys-glueworkshop/output/lab3/20221124062626/part-00000-60212af7-cbbc-4187-9b7c-d034d2241dd2-c000.csv to glue-workshop/output/lab3/20221124062626/part-00000-60212af7-cbbc-4187-9b7c-d034d2241dd2-c000.csv\n", "download: s3://wys-glueworkshop/output/lab3/20221124062626/part-00002-60212af7-cbbc-4187-9b7c-d034d2241dd2-c000.csv to glue-workshop/output/lab3/20221124062626/part-00002-60212af7-cbbc-4187-9b7c-d034d2241dd2-c000.csv\n", "download: s3://wys-glueworkshop/output/lab3/20221124062626/part-00001-60212af7-cbbc-4187-9b7c-d034d2241dd2-c000.csv to glue-workshop/output/lab3/20221124062626/part-00001-60212af7-cbbc-4187-9b7c-d034d2241dd2-c000.csv\n" ] } ], "source": [ "!aws s3 sync s3://wys-glueworkshop/output/ glue-workshop/output" ] }, { "cell_type": "markdown", "metadata": { "id": "QJQjUKPMVaPs" }, "source": [ "### Create Glue Trigger" ] }, { "cell_type": "markdown", "metadata": { "id": "iPKLd1BZVqw5" }, "source": [ "Follow the steps below to create a scheduled trigger to run the ETL job every hour.\n", "\n", "1. Click Triggers on the left.\n", "2. Click Add Trigger.\n", "3. In Set up your trigger's properties, set the trigger name as glueworkshop-lab3-etl-job-trigger, set Trigger type as Schedule, set Frequency as Hourly and Start Minute as 00, then click Next.\n", "4. In Choose jobs to trigger, click the highlighted Add next to glueworkshop-lab3-etl-job and it will be added under Jobs to start list, click Next.\n", "5. Click Finish.\n", "6. Now you have a Glue trigger which, when activated, will kick off the attached jobs on a regular schedule. 
The trigger we just created is not activated on creation. If you want to use it to trigger execution every hour, check the checkbox next to the trigger name and choose Activate trigger in the Action ▼ dropdown. Now the jobs associated with the trigger will run every hour."
] }, { "cell_type": "markdown", "metadata": { "id": "IH8H25BBWtZ5" }, "source": [ "## Glue Streaming Job" ] }, { "cell_type": "markdown", "metadata": { "id": "ZZ7JD6HKXBAK" }, "source": [ "### Develop Glue Streaming Job in Notebook" ] }, { "cell_type": "markdown", "metadata": { "id": "fEbDhXQPW_P4" }, "source": [ "Before creating a streaming ETL job, you must manually create a Data Catalog table that specifies the source data stream properties, including the data schema. This table is used as the data source for the streaming ETL job. We will use the Data Catalog table json-streaming-table created earlier by CloudFormation. This table's data source is AWS Kinesis and it has the schema definition of the JSON data we will send through the stream.\n", "\n", "Go to the AWS Kinesis console and click Data streams on the left to open the UI for Kinesis Data Streams. You should see a data stream with the name glueworkshop which was created by CloudFormation." ] }, { "cell_type": "markdown", "metadata": { "id": "n3Y3SEJLXcE6" }, "source": [ "During stream processing, we will use a lookup table to convert a country name from the full name to a 2-letter country code. The lookup table data is stored in S3 at s3://${BUCKET_NAME}/input/lab4/country_lookup/." ] }, { "cell_type": "markdown", "metadata": { "id": "uti6JRP6YHaf" }, "source": [ "Next, we will develop the code for the streaming job. Glue streaming is micro-batch based and the streaming job processes incoming data using a tumbling-window method. All data inside a given window is processed by a batch function. Inside the Glue streaming job, the invocation of the tumbling window's function is shown below. The function passed as batch_function (ours will be named processBatch) takes in the micro-batch dataframe and processes it at the window interval (60 seconds in this case)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "81iWV-InSbsJ" }, "outputs": [], "source": [ "glueContext.forEachBatch(frame=sourceData,\n", "                         batch_function=processBatch,\n", "                         options={\"windowSize\": \"60 seconds\", \"checkpointLocation\": checkpoint_location})" ] }, { "cell_type": "markdown", "metadata": { "id": "ca1KIL57Ygl7" }, "source": [ "Our goal is to use this Jupyter Notebook to develop and test a batch_function named processBatch. This function will process a given dataframe with the same schema as the streaming data inside our development environment.\n", "\n", "Copy the following code to notebook cells."
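, "\n", "\n", "(Optional) Before starting, you can push a single hand-crafted record onto the glueworkshop Kinesis stream directly from Python to confirm the stream is reachable. The sketch below is illustrative and not part of the workshop scripts: the key names follow the SerDe paths of json-streaming-table, the values are taken from the sample row we saw earlier, and the region is assumed to match the endpoint in the CloudFormation template (us-east-2). The workshop's code/lab4/PutRecord_Kinesis.py script, used later in this lab, is the intended way to generate a continuous stream of test messages.\n", "\n", "```python\n", "# Optional sketch: publish one test record to the Kinesis stream.\n", "import json\n", "import boto3\n", "\n", "kinesis = boto3.client(\"kinesis\", region_name=\"us-east-2\")  # region assumed from the template\n", "record = {\n", "    \"uuid\": 473105037, \"Country\": \"Denmark\", \"Item Type\": \"Clothes\",\n", "    \"Sales Channel\": \"Online\", \"Order Priority\": \"C\", \"Order Date\": \"2/20/13\",\n", "    \"Region\": \"Europe\", \"Ship Date\": \"2/28/13\", \"Units Sold\": 1149,\n", "    \"Unit Price\": 109.28, \"Unit Cost\": 35.84, \"Total Revenue\": 125562.72,\n", "    \"Total Cost\": 41180.16, \"Total Profit\": 84382.56,\n", "}\n", "kinesis.put_record(StreamName=\"glueworkshop\",\n", "                   Data=json.dumps(record).encode(\"utf-8\"),\n", "                   PartitionKey=str(record[\"uuid\"]))\n", "```\n", "\n", "First, set up the environment and variables for the test."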
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "S8hTV_UIYgWx" }, "outputs": [], "source": [ "import sys\n", "from awsglue.transforms import *\n", "from awsglue.utils import getResolvedOptions\n", "from pyspark.context import SparkContext\n", "from awsglue.context import GlueContext\n", "from awsglue.job import Job\n", "from awsglue.dynamicframe import DynamicFrame\n", "from pyspark.sql.functions import udf, col\n", "from pyspark.sql.types import IntegerType, StringType\n", "from pyspark import SparkContext\n", "from pyspark.sql import SQLContext\n", "from datetime import datetime\n", "\n", "glueContext = GlueContext(SparkContext.getOrCreate())\n", "s3_bucket = \"s3://${BUCKET_NAME}\"\n", "output_path = s3_bucket + \"/output/lab4/notebook/\"\n", "job_time_string = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n", "s3_target = output_path + job_time_string" ] }, { "cell_type": "markdown", "metadata": { "id": "FtXKP8unYoVS" }, "source": [ "Load the lookup dataframe from the S3 folder." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vSrhHnSHYooT" }, "outputs": [], "source": [ "country_lookup_frame = glueContext.create_dynamic_frame.from_options(\n", " format_options = {\"withHeader\":True, \"separator\":',', \"quoteChar\":\"\\\"\"},\n", " connection_type = \"s3\",\n", " format = \"csv\",\n", " connection_options = {\"paths\": [s3_bucket + \"/input/lab4/country_lookup/\"], \"recurse\":True}, \n", " transformation_ctx = \"country_lookup_frame\")" ] }, { "cell_type": "markdown", "metadata": { "id": "fgn-rh-wYsnS" }, "source": [ "Here is the batch function body where we do type conversion and a look-up transformation on the incoming data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wU2vUeICYtEh" }, "outputs": [], "source": [ "def processBatch(data_frame, batchId):\n", " if (data_frame.count() > 0):\n", " dynamic_frame = DynamicFrame.fromDF(data_frame, glueContext, \"from_data_frame\")\n", " apply_mapping = ApplyMapping.apply(frame=dynamic_frame, mappings=[\n", " (\"uuid\", \"string\", \"uuid\", \"bigint\"),\n", " (\"country\", \"string\", \"country\", \"string\"),\n", " (\"item type\", \"string\", \"item type\", \"string\"),\n", " (\"sales channel\", \"string\", \"sales channel\", \"string\"),\n", " (\"order priority\", \"string\", \"order priority\", \"string\"),\n", " (\"order date\", \"string\", \"order date\", \"string\"),\n", " (\"region\", \"string\", \"region\", \"string\"),\n", " (\"ship date\", \"string\", \"ship date\", \"string\"),\n", " (\"units sold\", \"int\", \"units sold\", \"int\"),\n", " (\"unit price\", \"string\", \"unit price\", \"decimal\"),\n", " (\"unit cost\", \"string\", \"unit cost\", \"decimal\"),\n", " (\"total revenue\", \"string\", \"total revenue\", \"decimal\"),\n", " (\"total cost\", \"string\", \"total cost\", \"decimal\"),\n", " (\"total profit\", \"string\", \"total profit\", \"decimal\")],\n", " transformation_ctx=\"apply_mapping\")\n", "\n", " final_frame = Join.apply(apply_mapping, country_lookup_frame, 'country', 'CountryName').drop_fields(\n", " ['CountryName', 'country', 'unit price', 'unit cost', 'total revenue', 'total cost', 'total profit'])\n", "\n", " s3sink = glueContext.write_dynamic_frame.from_options(frame=final_frame,\n", " connection_type=\"s3\",\n", " connection_options={\"path\": s3_target},\n", " format=\"csv\",\n", " transformation_ctx=\"s3sink\")" ] }, { "cell_type": "markdown", "metadata": { "id": "9GyoF3DNYv0y" }, "source": [ "Now we will load some test data 
to test the batch function." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "PwC8Xk4AYwgH" }, "outputs": [], "source": [ "dynaFrame = glueContext.create_dynamic_frame.from_catalog(database=\"glueworkshop-cloudformation\", \n", "                                                          table_name=\"json-static-table\")\n", "processBatch(dynaFrame.toDF(), \"12\")" ] }, { "cell_type": "markdown", "metadata": { "id": "bfcBrmZLY2lN" }, "source": [ "Check the output path s3://${BUCKET_NAME}/output/lab4/notebook/ and you should see a new folder with a recent timestamp. It contains files created by the test script, which you can copy locally with the following command in your terminal." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2QPNlaH4Y3i2" }, "outputs": [], "source": [ "aws s3 cp s3://${BUCKET_NAME}/output/ glue-workshop/output --recursive" ] }, { "cell_type": "markdown", "metadata": { "id": "kkG6l-OCZPKv" }, "source": [ "### Deploy Glue Streaming Job" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "520yqkScZPkZ" }, "outputs": [], "source": [ "import sys\n", "from datetime import datetime\n", "import boto3\n", "import base64\n", "from pyspark.sql import DataFrame, Row\n", "from pyspark.context import SparkContext\n", "from pyspark.sql.types import *\n", "from pyspark.sql.functions import *\n", "from awsglue.transforms import *\n", "from awsglue.utils import getResolvedOptions\n", "from awsglue.context import GlueContext\n", "from awsglue.job import Job\n", "from awsglue import DynamicFrame\n", "\n", "args = getResolvedOptions(sys.argv, ['JOB_NAME', 's3_bucket'])\n", "\n", "sc = SparkContext()\n", "glueContext = GlueContext(sc)\n", "spark = glueContext.spark_session\n", "job = Job(glueContext)\n", "job.init(args['JOB_NAME'], args)\n", "\n", "# S3 sink locations\n", "output_path = args['s3_bucket'] + \"/output/lab4/\"\n", "job_time_string = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n", "s3_target = output_path + job_time_string\n", "checkpoint_location = output_path + \"checkpoint/\"\n", "temp_path = output_path + \"temp/\"\n", "\n", "country_lookup_path = args['s3_bucket'] + \"/input/lab4/country_lookup/\"\n", "country_lookup_frame = glueContext.create_dynamic_frame.from_options( \\\n", "    format_options = {\"withHeader\": True, \"separator\": \",\", \"quoteChar\": \"\\\"\"}, \\\n", "    connection_type = \"s3\", \\\n", "    format = \"csv\", \\\n", "    connection_options = {\"paths\": [country_lookup_path], \"recurse\": True}, \\\n", "    transformation_ctx = \"country_lookup_frame\")\n", "\n", "def processBatch(data_frame, batchId):\n", "    if (data_frame.count() > 0):\n", "        dynamic_frame = DynamicFrame.fromDF(data_frame, glueContext, \"from_data_frame\")\n", "        apply_mapping = ApplyMapping.apply(frame = dynamic_frame, mappings = [ \\\n", "            (\"uuid\", \"string\", \"uuid\", \"bigint\"), \\\n", "            (\"country\", \"string\", \"country\", \"string\"), \\\n", "            (\"item type\", \"string\", \"item type\", \"string\"), \\\n", "            (\"sales channel\", \"string\", \"sales channel\", \"string\"), \\\n", "            (\"order priority\", \"string\", \"order priority\", \"string\"), \\\n", "            (\"order date\", \"string\", \"order date\", \"string\"), \\\n", "            (\"region\", \"string\", \"region\", \"string\"), \\\n", "            (\"ship date\", \"string\", \"ship date\", \"string\"), \\\n", "            (\"units sold\", \"int\", \"units sold\", \"int\"), \\\n", "            (\"unit price\", \"string\", \"unit price\", \"decimal\"), \\\n", "            (\"unit cost\", 
\"string\", \"unit cost\", \"decimal\"), \\\n", " (\"total revenue\", \"string\", \"total revenue\", \"decimal\"), \\\n", " (\"total cost\", \"string\", \"total cost\", \"decimal\"), \\\n", " (\"total profit\", \"string\", \"total profit\", \"decimal\")],\\\n", " transformation_ctx = \"apply_mapping\")\n", "\n", " final_frame = Join.apply(apply_mapping, country_lookup_frame, 'country', 'CountryName').drop_fields( \\\n", " ['CountryName', 'country', 'unit price', 'unit cost', 'total revenue', 'total cost', 'total profit'])\n", "\n", " s3sink = glueContext.write_dynamic_frame.from_options( frame = final_frame, \\\n", " connection_type = \"s3\", \\\n", " connection_options = {\"path\": s3_target}, \\\n", " format = \"csv\", \\\n", " transformation_ctx = \"s3sink\")\n", "\n", "# Read from Kinesis Data Stream from catalog table\n", "sourceData = glueContext.create_data_frame.from_catalog( \\\n", " database = \"glueworkshop-cloudformation\", \\\n", " table_name = \"json-streaming-table\", \\\n", " transformation_ctx = \"datasource0\", \\\n", " additional_options = {\"startingPosition\": \"TRIM_HORIZON\", \"inferSchema\": \"true\"})\n", "\n", "glueContext.forEachBatch(frame = sourceData, \\\n", " batch_function = processBatch, \\\n", " options = {\"windowSize\": \"60 seconds\", \"checkpointLocation\": checkpoint_location})\n", "job.commit()" ] }, { "cell_type": "markdown", "metadata": { "id": "yxYej28eZr3X" }, "source": [ "Run the following command in the Cloud9 terminal to create a Glue Streaming job with the name glueworkshop_lab4_glue_streaming and the Spark code we created earlier. This code is stored in s3://${BUCKET_NAME}/script/lab4/streaming.py." ] }, { "cell_type": "code", "execution_count": 56, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "V1_W5Y-4ZsTf", "outputId": "a83e0573-c257-48a1-f086-942934a0716a" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " \"Name\": \"glueworkshop_lab4_glue_streaming\"\n", "}\n" ] } ], "source": [ "!aws glue create-job \\\n", " --name glueworkshop_lab4_glue_streaming \\\n", " --role AWSGlueServiceRole-glueworkshop \\\n", " --command \"Name=gluestreaming,ScriptLocation=s3://${BUCKET_NAME}/script/lab4/streaming.py,PythonVersion=3\" \\\n", " --glue-version \"2.0\" \\\n", " --default-arguments \"{\\\"--s3_bucket\\\": \\\"s3://${BUCKET_NAME}/\\\" }\"" ] }, { "cell_type": "markdown", "metadata": { "id": "K8zbucseaGFt" }, "source": [ "### Run Glue Streaming Job" ] }, { "cell_type": "markdown", "metadata": { "id": "NYsuNixYaKXm" }, "source": [ "1. Go to the AWS Glue Console and click on Jobs on the left. You should see the newly created Glue streaming job with name glueworkshop_lab4_glue_streaming.\n", "2. Click the checkbox next to the job name and select Edit job from the Action ▼ dropdown menu to examine the details of the steaming job configuration. Select Edit script from the Action ▼ dropdown menu to examine the Python script. It is the same code as shown earlier.\n", "3. When you are done exploring the job details, click Run job from the Action ▼ dropdown menu. Click the checkbox next to the job. This will display the job detail pane with 4 tabs. You can check the job execution log by clicking on the highlighted Log hyperlink. It will bring you to the CloudWatch console and display a detailed log.\n", "4. Once the streaming job is started, we will use a script to publish messages into Kinesis Stream. Use the following script to add data to the Kinesis data stream. 
You will see the script add a number of JSON messages into the Kinesis stream. You can use ^C (Ctrl-C) to stop the script.\n", " ```sh\n", " cd glue-workshop\n", " python code/lab4/PutRecord_Kinesis.py\n", " ```\n", "5. Once the job start running, wait a few minutes for the streaming data to be processed by the window function. Now go to the S3 console and to your s3://${BUCKET_NAME}/output/lab4/ folder. You should see a new folder created with a recent timestamp. That is the output from the Glue ETL streaming job.\n", "6. You can download the files to your terminal. As long as there is new data coming from the Kinesis stream, there will be new output files being added to the S3 folder every 60 seconds. This is because the window size of the streaming job is 60 seconds, indicating it will deliver on that schedule.\n", " ```sh\n", " aws s3 sync s3://${BUCKET_NAME}/output/ output\n", " ```\n", "7. Once you are done exploring the streaming job details and you have checked the result files, you can stop the streaming job by clicking Stop job run from the Action ▼ dropdown menu, selecting the running job from the list and clicking Stop job run. And go to the terminal that is running the script to add data to Kinesis, stop the script that is publishing messages to the Kinesis Data Stream by pressing ^C (Ctrl-C). The Glue streaming job will keep running if you don't stop it!" ] }, { "cell_type": "markdown", "metadata": { "id": "tBB5J_30e5A8" }, "source": [ "## Glue Databrew" ] }, { "cell_type": "markdown", "metadata": { "id": "jsD_qsVie7Yl" }, "source": [ "We will use a dataset from a public data lake for COVID-19 research and development hosted by AWS." ] }, { "cell_type": "markdown", "metadata": { "id": "5yMWo9Q4fFzF" }, "source": [ "### Glue Databrew Dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "rH2bT1BafEvT" }, "source": [ "We will first create a DataBrew Dataset using a Glue crawler to explore the COVID-19 data stored in a Data Catalog table. In DataBrew, Dataset simply means a set of data—rows or records that are divided into columns or fields.\n", "\n", "When you create a DataBrew project, you connect to or upload data that you want to transform or prepare. DataBrew can work with data from any source, imported from formatted files, and it connects directly to a growing list of data stores. In DataBrew, a Dataset is a read-only connection to your data.\n", "\n", "DataBrew collects a set of descriptive metadata to refer to the data. No actual data can be altered or stored by DataBrew. For simplicity, we use the word dataset to refer to both the actual dataset and the metadata that DataBrew uses.\n", "\n", "Follow the steps below to create a new DataBrew dataset\n", "\n", "1. Click the DATASETS icon on the left.\n", "2. Click the Connect new dataset button in the middle.\n", "3. Under New dataset details set Dataset name to covid-testing.\n", "4. Under Connect to new dataset click Amazon S3 tables. Then, under AWS Glue Data Catalog on the left, click glueworkshop, and click the radio button next to json.\n", "5. Click Create dataset.\n", "\n", "Once the new dataset is created, we will run the profiling job on the new dataset. When you profile your data, DataBrew creates a report called a data profile. This summary tells you about the existing shape of your data, including the context of the content, the structure of the data, and its relationships. You can make a data profile for any dataset by running a data profile job.\n", "\n", "6. 
Click the checkbox next to the dataset.\n", "8. Click ▶ Run data profile on the top.\n", "9. Click Create profile job to open Create job page.\n", "10. Under Job details, set Job name to covid-testing-profile-job.\n", "11. Under Job run sample select Full dataset.\n", "12. Under Job output settings set the S3 location to s3://${BUCKET_NAME}/output/lab5/profile/\n", "13. Under Permission, select AWSGlueDataBrewServiceRole-glueworkshop for Role name.\n", "14. Click Create and run job.\n", "15. The profile job takes about 5 minutes. Once the job finishes, go to the covid-testing dataset and click the Data profile overview tab. Here you can explore the data profile information created by the profiling job.\n", "16. From the correlation coefficient, you can see positive has high correlation to death and hospitalization.\n", "17. At the bottom of the tab, you can see detailed information about each column, including data type, cardinality, data quality, distribution, min/max value and others. You can also click on each column to get more detailed statistics about an individual column." ] }, { "cell_type": "markdown", "metadata": { "id": "oEv7RP9oie1p" }, "source": [ "### Glue DataBrew Project" ] }, { "cell_type": "markdown", "metadata": { "id": "WDFljvUuii9F" }, "source": [ "We will create a DataBrew project and a recipe to transform the dataset we created earlier.\n", "\n", "1. Click Project on the left.\n", "2. Click Create project.\n", "3. Under Project details set Project name to covid-testing.\n", "4. Under Select a dataset choose My datasets then select the covid-testing dataset.\n", "5. Under Permission select AWSGlueDataBrewServiceRole-glueworkshop for Role name from the dropdown list.\n", "6. Click Create project." ] }, { "cell_type": "markdown", "metadata": { "id": "HMPFL3Toj02A" }, "source": [ "Once the project is created, you will have a work area with the sample data displayed in the data grid. You can also check the schema and profile of the data by clicking SCHEMA and PROFILE in the upper right-corner of the grid. On the right of the screen is the recipe work area.\n", "\n", "Next, we are going to create a simple recipe using built-in transformations.\n", "\n", "1. You should see a Recipe work area to the right of the data grid. If not, click the RECIPE icon in the upper-right corner.\n", "1. Click the Add step button in the recipe work area. You will see a long list of transformations provided by DataBrew that you can use to build your recipe. You can take a look to see what transformations are provided.\n", "1. Click Delete in the COLUMN ACTIONS group. In the Source columns list select:\n", " ```\n", " dateChecked\n", " death\n", " deathIncrease\n", " fips\n", " hash\n", " hospitalized\n", " hospitalizedIncrease\n", " negative\n", " negativeIncrease\n", " pending\n", " positive\n", " total\n", " totalTestResults\n", " ```\n", "1. Click Apply. You'll see the selected columns are gone in the sample data grid as a result.\n", "1. Click the Add step icon on the upper-right corner of the recipe area, then the Add step configuration will show up again on the right.\n", "1. Select DATEFORMAT in DATE FUNCTION group, select Source column in Values using, select date in Source column, select yyyy-mm-dd in Data format, name destination column date_formated, click Apply. A new string column with name date_formated is added in the grid.\n", "1. 
Click the Add step icon, select Change type in the COLUMN ACTIONS group, select date_formated in Source column, select date in Change type to, then click Apply. Notice the datatype of column date_formated changed from string to date.\n", "1. Click the Add step icon, select By Condition in the FILTER group, select state in Source column, select Is exactly in Filter condition, de-select all state values, then choose NY and CA in the list and click Apply. Notice the sample data in the grid now only contains data from those 2 states.\n", "1. Click the Reorder steps icon on the upper-right corner of the recipe area next to the Add step icon. A new pop-up box will show all steps of the current recipe. Move the Filter values by state step from the 4th step to the 2nd step, then click Next. The validation will check the change. Once validation is finished, click Done. Notice in the recipe work area that the order of the steps has changed.\n", "1. Click the Add step icon, select Delete in the COLUMN ACTIONS group, select date in Source columns, click Apply.\n", "1. Click the Add step icon, select DIVIDE in the MATH FUNCTION group, select Source columns in Values using. First check positiveIncrease and then totalTestResultsIncrease in Source columns. Set positiveRate as the name of the Destination column then click Apply. A new double column with the name positiveRate is added in the grid. Note that the sequence in which you check the source columns matters for the divide function! The new positiveRate column should have values between 0 and 1. If you see values greater than 1 for the new column, please click Edit next to the step and redo it.\n", "1. Click the Add step icon, select MULTIPLY in the MATH FUNCTION group, select Source column and value in the Values using list, select positiveRate in Source column, set Custom value to 100, set the Destination column name to positivePercentage then click Apply. A new double column with the name positivePercentage is added in the grid.\n", "1. Click the Add step icon, select Delete in the COLUMN ACTIONS group then in the Source columns list select:\n", "    ```\n", "    positiveIncrease\n", "    totalTestResultsIncrease\n", "    positiveRate\n", "    ```\n", "1. Click Apply.\n", "1. Click the Add step icon, select Pivot - rows to columns in the PIVOT group. Select state in the Pivot column list, select mean and positivePercentage in Pivot values, click Update preview to see what the pivot table will look like, then click Finish. A new table structure will be created based on the pivot operation, with 2 new columns named state_CA_positivePercentage_mean and state_NY_positivePercentage_mean.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "-0k3WoBboedq" }, "source": [ "### Manage Glue DataBrew Recipe" ] }, { "cell_type": "markdown", "metadata": { "id": "_isVnXajodcn" }, "source": [ "As you proceed with developing your recipe, you can save your work by publishing the recipe. DataBrew maintains a list of published versions for your recipe. You can use any published version in a recipe job to transform your dataset. You can also download a copy of the recipe steps so you can reuse the recipe in other projects or other transformations.\n", "\n", "1. Click the Publish icon at the upper right corner of your recipe work area to publish your recipe.\n", "2. Under Version description enter the note `Convert COVID testing data to time series of positive percentage in NY and CA`.\n", "3. Click the Publish button.\n", "4. Click the RECIPES icon on the left. You will see the recipe you just published.\n", "5. Click on the recipe name, covid-testing-recipe. 
You will see the details of the recipe.\n", "6. While still in the recipe details, click the Action ▼ dropdown list and select Download as YAML to store the recipe locally as a YAML file. This allows you to export and import recipes for reuse in other projects.\n", "    ```yaml\n", "    - Action:\n", "        Operation: DELETE\n", "        Parameters:\n", "          sourceColumns: >-\n", "            [\"dateChecked\",\"death\",\"deathIncrease\",\"fips\",\"hash\",\"hospitalized\",\"negative\",\"negativeIncrease\",\"pending\",\"positive\",\"total\",\"totalTestResults\"]\n", "    - Action:\n", "        Operation: REMOVE_VALUES\n", "        Parameters:\n", "          sourceColumn: state\n", "      ConditionExpressions:\n", "        - Condition: IS_NOT\n", "          Value: '[\"NY\",\"CA\"]'\n", "          TargetColumn: state\n", "    - Action:\n", "        Operation: DATE_FORMAT\n", "        Parameters:\n", "          dateTimeFormat: yyyy-mm-dd\n", "          functionStepType: DATE_FORMAT\n", "          sourceColumn: date\n", "          targetColumn: date_formated\n", "    - Action:\n", "        Operation: CHANGE_DATA_TYPE\n", "        Parameters:\n", "          columnDataType: date\n", "          replaceType: REPLACE_WITH_NULL\n", "          sourceColumn: date_formated\n", "    - Action:\n", "        Operation: DELETE\n", "        Parameters:\n", "          sourceColumns: '[\"date\"]'\n", "    - Action:\n", "        Operation: DIVIDE\n", "        Parameters:\n", "          functionStepType: DIVIDE\n", "          sourceColumn1: positiveIncrease\n", "          sourceColumn2: totalTestResultsIncrease\n", "          targetColumn: positiveRate\n", "    - Action:\n", "        Operation: MULTIPLY\n", "        Parameters:\n", "          functionStepType: MULTIPLY\n", "          sourceColumn1: positiveRate\n", "          targetColumn: positivePercentage\n", "          value2: '100'\n", "    - Action:\n", "        Operation: DELETE\n", "        Parameters:\n", "          sourceColumns: '[\"totalTestResultsIncrease\",\"positiveRate\",\"positiveIncrease\"]'\n", "    - Action:\n", "        Operation: PIVOT\n", "        Parameters:\n", "          aggregateFunction: MEAN\n", "          sourceColumn: state\n", "          valueColumn: positivePercentage\n", "    ```\n", "7. Click the RECIPES icon on the left and click the Published ▼ dropdown menu. Select All recipes. You should see both the published and working versions of your recipe.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "odHL5Y_QpnYn" }, "source": [ "### Run Glue DataBrew Job" ] },
{ "cell_type": "markdown", "metadata": { "id": "Wblpu92HpqKu" }, "source": [ "**Create a new DataBrew job**\n", "\n", "1. Click JOBS on the left.\n", "1. Click the Create job button.\n", "1. Under Job details set the Job name to covid-testing-recipe-job.\n", "1. Under Job type select Create a recipe job.\n", "1. Under Job input select Run on Project. In Select a project select covid-testing from the list.\n", "1. Under Job output settings set S3 location to s3://${BUCKET_NAME}/output/lab5/csv/.\n", "1. Under Permission select AWSGlueDataBrewServiceRole-glueworkshop in Role name.\n", "1. Click Create and run job.\n", "1. Once the job has been created, click the JOBS icon on the left. Under the Recipe jobs tab you will see a new job with the name covid-testing-recipe-job in the Running state. Wait for the job to finish.\n", "1. Once the job finishes, click on the job name. You should see one succeeded job run under the Job run history tab. Click the Data lineage tab to see the data lineage graph for the job.\n", "1. You should see a new folder in S3 under s3://\\${BUCKET_NAME}/output/lab5/csv/ which contains the output of the job.
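, "\n", "\n", "As with the Glue jobs earlier, the same recipe job could also be created and started programmatically. The boto3 sketch below is illustrative only - the Outputs structure shown here is a minimal form, and the role ARN needs your own account ID and bucket name; the console steps above are the workshop's intended path.\n", "\n", "```python\n", "# Sketch: create and start the DataBrew recipe job with boto3 instead of the console.\n", "import boto3\n", "\n", "databrew = boto3.client(\"databrew\")\n", "databrew.create_recipe_job(\n", "    Name=\"covid-testing-recipe-job\",\n", "    ProjectName=\"covid-testing\",\n", "    RoleArn=\"arn:aws:iam::<account-id>:role/AWSGlueDataBrewServiceRole-glueworkshop\",  # fill in your account ID\n", "    Outputs=[{\"Location\": {\"Bucket\": \"wys-glueworkshop\", \"Key\": \"output/lab5/csv/\"}}],  # your bucket name\n", ")\n", "databrew.start_job_run(Name=\"covid-testing-recipe-job\")\n", "```\n", "\n", "Use the command below to copy the output files locally and explore the output."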
] }, { "cell_type": "code", "execution_count": 62, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "90UgFDMiaGhB", "outputId": "42c32fb3-a914-43fb-f4e1-e4f8ced8f2a5" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Completed 199.1 KiB/199.1 KiB (239.2 KiB/s) with 1 file(s) remaining\rdownload: s3://wys-glueworkshop/output/lab5/profile/covid-testing_9c3921a56e8b9e67b5dd03c847f957d1551d3fa280d9969dd2d05d476f3b6e19.json to glue-workshop/output/lab5/profile/covid-testing_9c3921a56e8b9e67b5dd03c847f957d1551d3fa280d9969dd2d05d476f3b6e19.json\n" ] } ], "source": [ "!aws s3 sync s3://$BUCKET_NAME/output/ glue-workshop/output" ] }, { "cell_type": "markdown", "metadata": { "id": "DAvAfhpWr5Ay" }, "source": [ "## Glue Studio" ] }, { "cell_type": "markdown", "metadata": { "id": "yOKfm8onsL3D" }, "source": [ "### Create Glue Studio Job" ] }, { "cell_type": "markdown", "metadata": { "id": "QPXiknOBsOSt" }, "source": [ "We will be creating a new Glue Studio job. Start by opening the menu on the left and clicking Jobs.\n", "\n", "1. Under Create job, select Blank graph.\n", "1. Click Create.\n", "1. Rename the job to glueworkshop-lab6-etl-job. Now you have a blank Glue Studio visual job editor. On the top of the editor are the tabs for different configurations.\n", "1. Click Script tab, you should see an empty shell of Glue ETL script. As we add new steps in the Visual editor the script will be updated automatically. You will not ba able to edit the script inside Glue Studio.\n", "1. Click Job Details tab you can see all job configurations.\n", "1. Under IAM role select AWSGlueServiceRole-glueworkshop.\n", "1. Under Job bookmark select Disable. Note: In a production environment, you probably want to enable the bookmark. We will disable it for this lab so we can reuse the test dataset.\n", "1. You don't need to change any other settings here, but you should take some time to explore what settings are available in this tab. When you are done exploring, click Save on the upper right to save the changed settings.\n", "1. Click the Visual tab again to go back to visual editor. You should see 3 dropdown buttons: Source, Transform, and Target. Click Transform. You will notice there are a limited number of transformations compared to what Glue DataBrew offers as we have seen in a previous lab. This is because Glue Studio is designed to be used by developers who write Apache Spark code but want to leverage Glue Studio for job orchestration and monitoring. In a later section of this lab, we will demonstrate how to develop custom code in Glue Studio.\n", "1. Click Source dropdown icon and select S3,\n", " - On the right side, under Data source properties - S3 tab\n", " - Under S3 source type, select Data Catalog table\n", " - Under Database select glueworkshop\n", " - Under Table select json\n", " - This is the same COVID-19 dataset we used earlier - you should be familiar with the schema.\n", "1. Click Transform dropdown icon and select Custom transform. 
Copy the code below into the Code block on the right.\n", "    ```py\n", "    def DeleteFields (glueContext, dfc) -> DynamicFrameCollection:\n", "        sparkDF = dfc.select(list(dfc.keys())[0]).toDF()\n", "        sparkDF.createOrReplaceTempView(\"covidtesting\")\n", "\n", "        df = spark.sql(\"select date, \\\n", "                        state, \\\n", "                        positiveIncrease, \\\n", "                        totalTestResultsIncrease \\\n", "                        from covidtesting\")\n", "\n", "        dyf = DynamicFrame.fromDF(df, glueContext, \"results\")\n", "        return DynamicFrameCollection({\"CustomTransform0\": dyf}, glueContext)\n", "    ```\n", "1. Click the Output Schema tab on the right, then click Edit. Leave only the following columns in the output schema, then click Apply.\n", "    ```\n", "    date\n", "    state\n", "    positiveincrease\n", "    totaltestresultsincrease\n", "    ```\n", "1. Click the Transform dropdown and select SelectFromCollection. Make sure that under the Transform tab the Frame index value is 0.\n", "1. Click the Target dropdown and select S3. Under the Data target properties - S3 tab, set the S3 Target Location value to s3://${BUCKET_NAME}/output/lab6/json/temp1/, then replace \${BUCKET_NAME} with your own S3 bucket name.\n", "1. Click the Save button in the upper right corner, then click the Run button next to it.\n", "1. You can use the command below to copy the output files to your environment and explore the content. You will see that the output JSON file only contains 4 fields: date, state, positiveIncrease, and totalTestResultsIncrease." ] }, { "cell_type": "code", "execution_count": 65, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "8fkVrIuKrDQk", "outputId": "a9a5ce01-925f-4f40-81ef-2bd6c8aaa728" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Completed 256.0 KiB/1.8 MiB (339.0 KiB/s) with 1 file(s) remaining\rCompleted 512.0 KiB/1.8 MiB (607.6 KiB/s) with 1 file(s) remaining\rCompleted 768.0 KiB/1.8 MiB (829.3 KiB/s) with 1 file(s) remaining\rCompleted 1.0 MiB/1.8 MiB (1.1 MiB/s) with 1 file(s) remaining \rCompleted 1.2 MiB/1.8 MiB (1.2 MiB/s) with 1 file(s) remaining \rCompleted 1.5 MiB/1.8 MiB (1.5 MiB/s) with 1 file(s) remaining \rCompleted 1.8 MiB/1.8 MiB (1.7 MiB/s) with 1 file(s) remaining \rCompleted 1.8 MiB/1.8 MiB (1.8 MiB/s) with 1 file(s) remaining \rdownload: s3://wys-glueworkshop/output/lab6/json/temp1/run-1669278265659-part-r-00000 to glue-workshop/output/lab6/json/temp1/run-1669278265659-part-r-00000\n" ] } ], "source": [ "!aws s3 sync s3://$BUCKET_NAME/output/ glue-workshop/output" ] }, { "cell_type": "markdown", "metadata": { "id": "g0Oko2sQx3hk" }, "source": [ "### Glue Studio Custom Transformation" ] }, { "cell_type": "markdown", "metadata": { "id": "CfFIK3fVx2tZ" }, "source": [ "We used a Custom Transformation Node in the last section of the lab. Here we will take a deeper dive into the Custom Transformation Node and provide guidelines on how to develop scripts for it.\n", "\n", "A Custom Transformation Node can have any number of parent nodes, each providing a DynamicFrame as an input. A Custom Transformation Node returns a collection of DynamicFrames. Each DynamicFrame that is used as input has an associated schema. You must add a schema that describes each DynamicFrame returned by the custom code node.\n", "\n", "A Custom Transformation Node's script looks like the following. The input will be a GlueContext and a DynamicFrameCollection. The DynamicFrameCollection contains 1 to n DynamicFrames, and at the start of the script we get an Apache Spark DataFrame from each DynamicFrame. 
Then the transformations will be performed, and the resulting Spark DataFrames will be converted back to DynamicFrames and returned in a DynamicFrameCollection." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nX-a84FxvJoN" }, "outputs": [], "source": [ "def CustomTransform (glueContext, dfc) -> DynamicFrameCollection:\n", "    # Get a Spark DataFrame from each input DynamicFrame in the collection\n", "    df0 = dfc.select(list(dfc.keys())[0]).toDF()\n", "    df1 = dfc.select(list(dfc.keys())[1]).toDF()\n", "    ...\n", "\n", "    # Do transformations on the Spark DataFrames df0, df1, ...\n", "    ...\n", "\n", "    # Assume the resulting DataFrames are named resultDF0, resultDF1, ...\n", "    # Convert them back to DynamicFrames and return them in a DynamicFrameCollection\n", "    dyf0 = DynamicFrame.fromDF(resultDF0, glueContext, \"result0\")\n", "    dyf1 = DynamicFrame.fromDF(resultDF1, glueContext, \"result1\")\n", "    ...\n", "    return DynamicFrameCollection({\n", "        \"CustomTransform0\": dyf0,\n", "        \"CustomTransform1\": dyf1,\n", "        ...\n", "    },\n", "    glueContext)" ] }, { "cell_type": "markdown", "metadata": { "id": "nLniw4VEySwt" }, "source": [ "Click on the data target node, then click Remove at the top of the visual editor to remove it from the graph." ] }, { "cell_type": "markdown", "metadata": { "id": "tOWAsiUnyaFj" }, "source": [ "Click the Transform dropdown icon and select Custom transform. If the new node is not connected to the existing SelectFromCollection node, click Node properties and select it in the Node parents dropdown. Then copy the code below to the Code block field under the Transform tab." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5RrAWJM2yTy1" }, "outputs": [], "source": [ "def ConvertDateStringToDate (glueContext, dfc) -> DynamicFrameCollection:\n", "    sparkDF = dfc.select(list(dfc.keys())[0]).toDF()\n", "    sparkDF.createOrReplaceTempView(\"inputTable\")\n", "\n", "    df = spark.sql(\"select TO_DATE(CAST(UNIX_TIMESTAMP(date, 'yyyyMMdd') AS TIMESTAMP)) as date, \\\n", "                    state, \\\n", "                    positiveIncrease, \\\n", "                    totalTestResultsIncrease \\\n", "                    from inputTable\")\n", "\n", "    dyf = DynamicFrame.fromDF(df, glueContext, \"results\")\n", "    return DynamicFrameCollection({\"CustomTransform0\": dyf}, glueContext)" ] }, { "cell_type": "markdown", "metadata": { "id": "x_vQME9jywHi" }, "source": [ "Click the Output schema tab and click Edit. Change the data type of date from string to date, then click Apply.\n", "\n", "Click the Transform dropdown icon and select SelectFromCollection. Make sure that under the Transform tab the Frame index value is 0.\n", "\n", "Note: At this point we have shown the full process of developing scripts for a Custom Transformation Node. You can repeat this process as you develop more custom scripts using Spark SQL. In the remainder of this section, we will add more Custom Transformation Nodes and finish the ETL job.\n", "\n", "Click the Transform dropdown icon and select Custom transform. Copy the code in the following code cell to the Code block field under the Transform tab."
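, "\n", "Before pasting code into a Custom transform node, it can help to sanity-check the Spark SQL locally. The snippet below is an optional sketch and not part of the Glue Studio job: it assumes a local pyspark installation and uses a couple of made-up rows to exercise the date-conversion query from the ConvertDateStringToDate node above.\n", "\n", "```py\n", "from pyspark.sql import SparkSession\n", "\n", "# Local Spark session used only to test the SQL from the custom transform.\n", "spark = SparkSession.builder.appName(\"custom-transform-check\").getOrCreate()\n", "\n", "# Two stub rows shaped like the COVID testing data (date stored as a yyyyMMdd string).\n", "sample = spark.createDataFrame(\n", "    [(\"20201101\", \"NY\", 1000, 10000), (\"20201101\", \"CA\", 500, 8000)],\n", "    [\"date\", \"state\", \"positiveIncrease\", \"totalTestResultsIncrease\"],\n", ")\n", "sample.createOrReplaceTempView(\"inputTable\")\n", "\n", "# Same query as the ConvertDateStringToDate node; date should come back typed as date.\n", "spark.sql(\"\"\"\n", "    select TO_DATE(CAST(UNIX_TIMESTAMP(date, 'yyyyMMdd') AS TIMESTAMP)) as date,\n", "           state, positiveIncrease, totalTestResultsIncrease\n", "    from inputTable\n", "\"\"\").show()\n", "```"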
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "uOBJ6GlRywaO" }, "outputs": [], "source": [ "def FilterAndCalculatePercentage (glueContext, dfc) -> DynamicFrameCollection:\n", " sparkDF = dfc.select(list(dfc.keys())[0]).toDF()\n", " sparkDF.createOrReplaceTempView(\"inputTable\")\n", "\n", " df = spark.sql(\"select date , \\\n", " state , \\\n", " (positiveIncrease * 100 / totalTestResultsIncrease) as positivePercentage \\\n", " from inputTable \\\n", " where state in ('NY', 'CA')\")\n", "\n", " dyf = DynamicFrame.fromDF(df, glueContext, \"results\")\n", " return DynamicFrameCollection({\"CustomTransform0\": dyf}, glueContext)" ] }, { "cell_type": "markdown", "metadata": { "id": "26U1maVbzJrk" }, "source": [ "Click the Output schema tab then click Edit. Remove columns positiveIncrease and totalTestResultsIncrease. Add a new column named positivePercentage with type double by clicking ... and then Add root key. Finally, click Apply.\n", "\n", "Click the Transform dropdown icon and choose SelectFromCollection. Make sure under the Transform tab the Frame index value is 0.\n", "\n", "Click the Transform dropdown icon and select Custom transform. Copy the code below to the Code block field under Transform tab." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "V1xBLtwNzYRU" }, "outputs": [], "source": [ "def PivotValue (glueContext, dfc) -> DynamicFrameCollection:\n", " sparkDF = dfc.select(list(dfc.keys())[0]).toDF()\n", " sparkDF.createOrReplaceTempView(\"inputTable\")\n", "\n", " df = spark.sql(\"select * from inputTable \\\n", " pivot (avg(positivePercentage) as positivePercentage \\\n", " for state in ('NY' as positivePercentageNY, 'CA' as positivePercentageCA))\")\n", "\n", " dyf = DynamicFrame.fromDF(df, glueContext, \"results\")\n", " return DynamicFrameCollection({\"CustomTransform0\": dyf}, glueContext)" ] }, { "cell_type": "markdown", "metadata": { "id": "e7WHsQ6vzgWW" }, "source": [ "- Click the Output schema tab then click Edit. Remove columns state and positivePercentage and add 2 new columns positivePercentageNY and positivePercentageCA with type double, then click Apply.\n", "\n", "- Click the Transform dropdown icon and select SelectFromCollection. Make sure under Transform tab the Frame index value is 0.\n", "\n", "- Click the Target dropdown icon and select S3. Under the Data target properties - S3 tab, set the S3 Target Location value to s3://${BUCKET_NAME}/output/lab6/json/finalResult/.\n", "\n", "- Click the Save button on the upper right corner and then click Run.\n", "\n", "- You can use the command below to copy the output files to your environment and explore the content. You will see that the output JSON file only contains 3 fields date, positivePercentageNY and positivePercentageCA." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2L4zZ-lhz6lH" }, "outputs": [], "source": [ "!aws s3 sync s3://$BUCKET_NAME/output/ glue-workshop/output" ] }, { "cell_type": "markdown", "metadata": { "id": "knw_rEQU1Ptl" }, "source": [ "### Glue Studio Streaming Job" ] }, { "cell_type": "markdown", "metadata": { "id": "q2oMqN6T1S3V" }, "source": [ "We will see how to create a Glue streaming ETL job with Glue Studio.\n", "\n", "1. Go to AWS Glue Studio Console , click Jobs in the menu on the left, select Blank graph and click Create.\n", "1. Rename the job to glueworkshop-lab6-streaming-job, click Job details tab and select AWSGlueServiceRole-glueworkshop for the IAM Role. 
Set Number of workers to 2, then under Job bookmark select Disable and click Save.\n", "1. Click the Visual tab, click the Source dropdown icon, and select Kinesis. On the right side, under the Data source properties - Kinesis Stream tab, select glueworkshop-cloudformation for the Database, then select json-streaming-table under Table.\n", "1. Click the Source dropdown icon and select S3. On the right side under the Node properties tab, set the node Name to country_lookup. Under the Data source properties - S3 tab, under S3 source type select S3 location, then set the S3 URL to s3://${BUCKET_NAME}/input/lab6/country_lookup/. Set the Data format to CSV, then click the Infer schema button.\n", "1. Click the Transform dropdown icon and select Join. On the right side under the Node properties tab, find Node parents and select both nodes. Under the Transform tab, set Join type to Left join. Under Join conditions, select country for Kinesis and CountryName for country_lookup. Note: when setting Join conditions in the UI, Kinesis should be on the left. If you see country_lookup on the left of the UI, go back to the Node properties tab and de-select both nodes, then select Kinesis first, followed by country_lookup.\n", "1. Click the Transform dropdown icon and select DropFields. On the right side, under the Transform tab, select the following columns:\n", "    ```\n", "    order priority\n", "    order date\n", "    region\n", "    ship date\n", "    units sold\n", "    unit price\n", "    unit cost\n", "    CountryName\n", "    ```\n", "1. Click the Target dropdown icon and select S3. On the right side under the Data target properties - S3 tab, select CSV for Format, None for Compression Type, and set S3 Target Location to s3://\${BUCKET_NAME}/output/lab6/streamingoutput/.\n", "1. Click the Save button in the upper right corner and then click Run.\n", "1. Go to the terminal window where you ran the script to add data into the Kinesis data stream earlier. If the script is still running, leave it. If not, copy the commands below to publish JSON messages into the Kinesis data stream.\n", "    ```sh\n", "    cd glue-workshop\n", "    python code/lab4/PutRecord_Kinesis.py\n", "    ```\n", "1. In the terminal that is not running the script to add data to the Kinesis data stream, use the command below to copy the output files and explore the content. You will see a directory structure like /output/lab6/streamingoutput/ingest_year=20xx/ingest_month=xx/ingest_day=xx/ingest_hour=xx/ and output files inside the directories." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SX52ftcE1QQ3" }, "outputs": [], "source": [ "!aws s3 sync s3://$BUCKET_NAME/output/ glue-workshop/output" ] }, { "cell_type": "markdown", "metadata": { "id": "rTC3xdj84GPE" }, "source": [ "### Monitoring with Glue Studio" ] }, { "cell_type": "markdown", "metadata": { "id": "7kMktxC44I1m" }, "source": [ "1. Click Monitor on the left - you should see the Glue Studio Monitor dashboard.\n", "1. At the top of the dashboard are high-level KPIs for the Glue jobs.\n", "1. In the middle of the dashboard you will find Job type breakdown and Worker type breakdown, which contain information about the jobs. You should see one running streaming job that you started in the previous section.\n", "1. Below them is the Job runs timeline, which contains the timeline of job executions.\n", "1. Scroll down to the bottom of the dashboard to see the Jobs history. This section includes all the jobs we have run for this workshop. You can use the DPU hours in the dashboard to estimate the cost for each job. 
You should see the streaming job from Lab 6 in the Running state in the job list." ] }, { "cell_type": "markdown", "metadata": { "id": "QCPiXaRL4dKr" }, "source": [ "## Clean Up" ] }, { "cell_type": "markdown", "metadata": { "id": "1ya1BJxR4gmV" }, "source": [ "Execute the commands below to delete the resources. If you chose names that are different from the default names provided in the labs, please modify the names in the commands below to match your own resources.\n", "\n", "Delete the DataBrew project, jobs, and dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4qnGKjZY31cY" }, "outputs": [], "source": [ "!aws databrew delete-job --name covid-testing-recipe-job\n", "!aws databrew delete-job --name covid-testing-profile-job\n", "!aws databrew delete-project --name covid-testing\n", "!aws databrew delete-dataset --name covid-testing" ] }, { "cell_type": "markdown", "metadata": { "id": "MUY61eRr4qDK" }, "source": [ "Find the recipe versions in the recipe store by listing them with the CLI command below; you will delete the listed versions in the next step." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-hPIBfy44nTC" }, "outputs": [], "source": [ "!aws databrew list-recipe-versions --name covid-testing-recipe" ] }, { "cell_type": "markdown", "metadata": { "id": "ErzxRZUB40pb" }, "source": [ "Replace the recipe version number with the version number returned by the command above." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "aRLl9SXv4tea" }, "outputs": [], "source": [ "!aws databrew batch-delete-recipe-version --name covid-testing-recipe --recipe-version \"1.0\"" ] }, { "cell_type": "markdown", "metadata": { "id": "Qto4qYLo5DwM" }, "source": [ "Delete the Glue Data Catalog database, crawlers, jobs, and triggers created manually." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EE_ytHOl4_Fn" }, "outputs": [], "source": [ "!aws glue delete-job --job-name glueworkshop-lab3-etl-job\n", "!aws glue delete-job --job-name glueworkshop_lab4_glue_streaming\n", "!aws glue delete-job --job-name glueworkshop-lab6-streaming-job\n", "!aws glue delete-job --job-name glueworkshop-lab6-etl-job\n", "!aws glue delete-trigger --name glueworkshop-lab3-etl-job-trigger\n", "!aws glue delete-crawler --name lab1-csv\n", "!aws glue delete-crawler --name covid-testing\n", "!aws glue delete-database --name glueworkshop" ] }, { "cell_type": "markdown", "metadata": { "id": "FV0j1fp65JJt" }, "source": [ "Delete the CloudFormation stack." ] }, { "cell_type": "code", "execution_count": 74, "metadata": { "id": "rxEfADM95HQa" }, "outputs": [], "source": [ "!aws cloudformation delete-stack --stack-name glueworkshop" ] }, { "cell_type": "markdown", "metadata": { "id": "H4-OkulU5OcJ" }, "source": [ "Delete the S3 bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "RmCilK9F5K-N" }, "outputs": [], "source": [ "!aws s3 rb s3://$BUCKET_NAME --force" ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3.9.10 64-bit", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.9.10 (v3.9.10:f2f3f53782, Jan 13 2022, 17:02:14) \n[Clang 6.0 (clang-600.0.57)]" }, "vscode": { "interpreter": { "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49" } } }, "nbformat": 4, "nbformat_minor": 0 }