{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# \"Data Engineering - Week 3\"\n", "> \"Week 3 - Data Engineering Zoomcamp course: Data Warehouse\"\n", "\n", "- toc: True\n", "- branch: master\n", "- badges: true\n", "- comments: true\n", "- categories: [data engineering, mlops]\n", "- image: images/some_folder/your_image.png\n", "- hide: false\n", "- search_exclude: true" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note**: The content of this post is from the course videos, my understandings and searches, and reference documentations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Warehouse\n", "\n", "## OLTP vs OLAP\n", "\n", "The two terms look similar but refer to different kinds of systems. Online transaction processing (OLTP) captures, stores, and processes data from transactions in real time. Online analytical processing (OLAP) uses complex queries to analyze aggregated historical data from OLTP systems.\n", "\n", "An OLTP system is a database that captures and retains transaction data. Individual database entries made up of numerous fields or columns are involved in each transaction. Banking and credit card transactions, as well as retail checkout scanning, are examples.\n", "Because OLTP databases are read, written, and updated frequently, the emphasis in OLTP is on fast processing. Built-in system logic protects data integrity if a transaction fails.\n", "\n", "For data mining, analytics, and business intelligence initiatives, OLAP applies complicated queries to massive amounts of historical data aggregated from OLTP databases and other sources. The emphasis in OLAP is on query response speed for these complicated queries. Each query has one or more columns of data derived from a large number of rows. Financial performance year over year or marketing lead generation trends are two examples. Analysts and decision-makers can utilize custom reporting tools to turn data into information using OLAP databases and data warehouses. OLAP query failure does not affect or delay client transaction processing, but it can affect or delay the accuracy of business intelligence insights. [[ref](https://www.stitchdata.com/resources/oltp-vs-olap/#:~:text=OLTP%20and%20OLAP%3A%20The%20two,historical%20data%20from%20OLTP%20systems.)]\n", "\n", "![](images/data-engineering-w3/1.png)\n", "\n", "![](images/data-engineering-w3/2.png)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Warehouse\n", "\n", "> youtube: https://youtu.be/jrHljAoD6nM" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A data warehouse (DW or DWH), often known as an enterprise data warehouse (EDW), is a reporting and data analysis system that is considered a key component of business intelligence. DWs are central data repositories that combine data from a variety of sources. They keep current and historical data in one place and utilize it to generate analytical reports for employees across the company.\n", "\n", "The data in the warehouse comes from the operating systems and is uploaded there (such as marketing or sales). Before being used in the DW for reporting, the data may transit via an operational data store and require data cleansing for extra procedures to ensure data quality.\n", "\n", "The two major methodologies used to design a data warehouse system are extract, transform, load (ETL) and extract, load, transform (ELT). 
"Google BigQuery, Amazon Redshift, and Microsoft Azure Synapse Analytics are three data warehouse services. Here we review BigQuery and Redshift." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## BigQuery\n", "\n", "BigQuery is a fully managed enterprise data warehouse that helps you manage and analyze your data with built-in features like machine learning, geospatial analysis, and business intelligence. BigQuery's serverless architecture lets you use SQL queries to answer your organization's biggest questions with zero infrastructure management. BigQuery's scalable, distributed analysis engine lets you query terabytes in seconds and petabytes in minutes.\n", "\n", "BigQuery maximizes flexibility by separating the compute engine that analyzes your data from your storage choices. You can store and analyze your data within BigQuery or use BigQuery to assess your data where it lives. Federated queries let you read data from external sources, while streaming supports continuous data updates. Powerful tools like BigQuery ML and BI Engine let you analyze and understand that data.\n", "\n", "BigQuery interfaces include the Google Cloud Console interface and the BigQuery command-line tool. Developers and data scientists can use client libraries with familiar programming languages including Python, Java, JavaScript, and Go, as well as BigQuery's REST API and RPC API, to transform and manage data. ODBC and JDBC drivers provide interaction with existing applications, including third-party tools and utilities.\n", "\n", "As a data analyst, data engineer, data warehouse administrator, or data scientist, the BigQuery ML documentation helps you discover, implement, and manage data tools to inform critical business decisions. [[BigQuery docs](https://cloud.google.com/bigquery/docs/introduction)]\n", "\n", "> youtube: https://youtu.be/CFw4peH2UwU" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Partitioning\n", "\n", "A partitioned table is a special table that is divided into segments, called partitions, that make it easier to manage and query your data. You can typically split large tables into many smaller partitions by data ingestion time, by a `TIMESTAMP`/`DATE` column, or by an `INTEGER` column. BigQuery’s decoupled storage and compute architecture leverages column-based partitioning simply to minimize the amount of data that slot workers read from disk. Once slot workers read their data from disk, BigQuery can automatically determine more optimal data sharding and quickly repartition data using BigQuery’s in-memory shuffle service. [[source](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-storage-overview)]\n",
"\n", "![](images/data-engineering-w3/4.png)\n", "*[source](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-storage-overview)*\n", "\n", "Check the previous video and also [here](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-storage-overview) to see an example of the performance gain.\n", "\n", "An example SQL query looks like this:" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "```sql\n", "CREATE OR REPLACE TABLE `stackoverflow.questions_2018_partitioned`\n", "PARTITION BY\n", "  DATE(creation_date) AS\n", "SELECT\n", "  *\n", "FROM\n", "  `bigquery-public-data.stackoverflow.posts_questions`\n", "WHERE\n", "  creation_date BETWEEN '2018-01-01' AND '2018-07-01';\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Clustering\n", "\n", "When a table is clustered in BigQuery, the table data is automatically organized based on the contents of one or more columns in the table’s schema. The columns you specify are used to collocate related data. Usually, high-cardinality and non-temporal columns are preferred for clustering.\n", "\n", "When data is written to a clustered table, BigQuery sorts the data using the values in the clustering columns. These values are used to organize the data into multiple blocks in BigQuery storage. The order of clustered columns determines the sort order of the data. When new data is added to a table or a specific partition, BigQuery performs automatic re-clustering in the background to restore the sort property of the table or partition. Auto re-clustering is completely free and autonomous for the users. [[source](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-storage-overview)]\n", "\n", "![](images/data-engineering-w3/5.png)\n", "*[source](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-storage-overview)*\n", "\n", "Clustering can improve the performance of certain types of queries, such as those using filter clauses and queries aggregating data.\n", "When a query containing a filter clause filters data based on the clustering columns, BigQuery uses the sorted blocks to eliminate scans of unnecessary data.\n", "When a query aggregates data based on the values in the clustering columns, performance is improved because the sorted blocks collocate rows with similar values.\n", "BigQuery supports clustering over both partitioned and non-partitioned tables. When you use clustering and partitioning together, your data can be partitioned by a `DATE` or `TIMESTAMP` column and then clustered on a different set of columns (up to four columns). [[source](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-storage-overview)]\n", "\n", "Clustered tables in BigQuery are subject to the following limitations:\n", "\n", "- Only standard SQL is supported for querying clustered tables and for writing query results to clustered tables.\n", "- Clustering columns must be top-level, non-repeated columns of one of the following types:\n", "  - `DATE`\n", "  - `BOOL`\n", "  - `GEOGRAPHY`\n", "  - `INT64`\n", "  - `NUMERIC`\n", "  - `BIGNUMERIC`\n", "  - `STRING`\n", "  - `TIMESTAMP`\n", "  - `DATETIME`\n", "- You can specify up to four clustering columns.\n", "- When using `STRING` type columns for clustering, BigQuery uses only the first 1,024 characters to cluster the data. The values in the columns can themselves be longer than 1,024 characters.\n",
"\n", "Check the previous video and also [here](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-storage-overview) to see an example of the performance gain.\n", "\n", "An example SQL query looks like this:\n", "\n", "```sql\n", "CREATE OR REPLACE TABLE `stackoverflow.questions_2018_clustered`\n", "PARTITION BY\n", "  DATE(creation_date)\n", "CLUSTER BY\n", "  tags AS\n", "SELECT\n", "  *\n", "FROM\n", "  `bigquery-public-data.stackoverflow.posts_questions`\n", "WHERE\n", "  creation_date BETWEEN '2018-01-01' AND '2018-07-01';\n", "```\n", "\n", "Check the following video to learn more:\n", "\n", "> youtube: https://youtu.be/-CqXf7vhhDs" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "You can also check this [codelab](https://codelabs.developers.google.com/codelabs/gcp-bq-partitioning-and-clustering#0) to learn more about partitioning and clustering.\n", "\n", "Let's now compare clustering and partitioning:\n", "\n", "![](images/data-engineering-w3/10.png)\n", "*[source](https://youtu.be/-CqXf7vhhDs)*\n", "\n", "\n", "Now, let's see when to choose clustering over partitioning [[ref](https://youtu.be/-CqXf7vhhDs)]:\n", "\n", "- Partitioning results in a small amount of data per partition (approximately less than 1 GB)\n", "- Partitioning results in a large number of partitions, beyond the limits on partitioned tables\n", "- Partitioning results in your mutation operations modifying the majority of partitions in the table frequently (for example, every few minutes)\n", "\n", "To get better performance at lower cost, the following best practices in BQ are useful (see the sketch after this list):\n", "\n", "- Cost reduction\n", "  - Avoid `SELECT *`\n", "  - Price your queries before running them\n", "  - Use clustered or partitioned tables\n", "  - Use streaming inserts with caution\n", "  - Materialize query results in stages\n", "\n", "- Query performance\n", "  - Filter on partitioned columns\n", "  - Denormalize data\n", "  - Use nested or repeated columns\n", "  - Use external data sources appropriately\n", "    - Don't use them if you want high query performance\n", "  - Reduce data before using a JOIN\n", "  - Do not treat WITH clauses as prepared statements\n", "  - Avoid oversharding tables\n", "  - Avoid JavaScript user-defined functions\n", "  - Use approximate aggregation functions (HyperLogLog++)\n", "  - Order last, to maximize the performance of query operations\n", "  - Optimize your join patterns\n", "    - As a best practice, place the table with the largest number of rows first, followed by the table with the fewest rows, and then place the remaining tables by decreasing size.\n" ] },
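{ "cell_type": "markdown", "metadata": {}, "source": [ "As a rough illustration of the first two cost-reduction points, here is a hedged sketch against the partitioned table created above; the column names come from the public Stack Overflow dataset:\n", "\n", "```sql\n", "-- Instead of SELECT *, project only the columns you need,\n", "-- and filter on the partitioning column so BigQuery can\n", "-- prune partitions and scan less data.\n", "SELECT\n", "  id,\n", "  title,\n", "  creation_date\n", "FROM\n", "  `stackoverflow.questions_2018_partitioned`\n", "WHERE\n", "  DATE(creation_date) = '2018-03-01';\n", "```\n", "\n", "To price a query before running it, the BigQuery console shows an estimate of the bytes to be read before you execute the query, and the `bq` CLI accepts a `--dry_run` flag.\n" ] },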
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now let's see how to load or ingest data into BigQuery and analyze it.\n", "\n", "### Direct Import (Managed Tables)\n", "\n", "BigQuery can ingest datasets from a variety of different formats directly into its native storage. BigQuery native storage is fully managed by Google—this includes replication, backups, scaling out size, and much more.\n", "\n", "There are multiple ways to load data into BigQuery depending on data sources, data formats, load methods, and use cases, such as batch, streaming, or data transfer. At a high level, the following are the ways you can ingest data into BigQuery:\n", "\n", "- Batch Ingestion: Batch ingestion involves loading large, bounded data sets that don’t have to be processed in real time.\n", "- Streaming Ingestion: Streaming ingestion supports use cases that require analyzing high volumes of continuously arriving data with near-real-time dashboards and queries.\n", "- Data Transfer Service (DTS): The BigQuery Data Transfer Service (DTS) is a fully managed service to ingest data from Google SaaS apps such as Google Ads, from external cloud storage providers such as Amazon S3, and from data warehouse technologies such as Teradata and Amazon Redshift.\n", "- Query Materialization: When you run queries in BigQuery, their result sets can be materialized to create new tables.\n", "\n", "\n", "![](images/data-engineering-w3/6.png)\n", "*[source](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-data-ingestion)*\n", "\n", "### Query without Loading (External Tables)\n", "\n", "Using a federated query is one of the options to query external data sources directly without loading the data into BigQuery storage. You can query across Google services such as Google Sheets, Google Drive, Google Cloud Storage, Cloud SQL, or Cloud Bigtable without having to import the data into BigQuery.\n", "\n", "You don’t need to load data into BigQuery before running queries in the following situations:\n", "\n", "- Public Datasets: Public datasets are datasets stored in BigQuery and shared with the public.\n", "- Shared Datasets: You can share datasets stored in BigQuery. If someone has shared a dataset with you, you can run queries on that dataset without loading the data.\n", "- External data sources (Federated): You can skip the data loading process by creating a table based on an external data source.\n", "\n", "Apart from the solutions available natively in BigQuery, you can also check data integration options from Google Cloud partners who have integrated their industry-leading tools with BigQuery.\n", "\n", "\n", "To read more, please check [here](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-data-ingestion)." ] },
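{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, a federated query against a Cloud SQL database can look like the hedged sketch below; the connection ID and the external table name are hypothetical and must be set up in advance:\n", "\n", "```sql\n", "-- EXTERNAL_QUERY sends the inner statement to the external\n", "-- Cloud SQL database and returns the result to BigQuery,\n", "-- without loading the data into BigQuery storage first.\n", "-- 'my-project.us.my_cloudsql_connection' is a hypothetical connection ID.\n", "SELECT\n", "  *\n", "FROM\n", "  EXTERNAL_QUERY(\n", "    'my-project.us.my_cloudsql_connection',\n", "    'SELECT customer_id, created_at FROM orders;'\n", "  );\n", "```\n" ] },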
{ "cell_type": "markdown", "metadata": {}, "source": [ "Check the following video for BQ best practices:\n", "\n", "> youtube: https://youtu.be/k81mLJVX08w" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here are some example queries in BQ which show how to do partitioning and clustering on a publicly available dataset in BQ [[ref](https://github.com/DataTalksClub/data-engineering-zoomcamp/blob/main/week_3_data_warehouse/big_query.sql)]:\n", "\n", "```sql\n", "-- Query a publicly available table\n", "SELECT station_id, name FROM\n", "  bigquery-public-data.new_york_citibike.citibike_stations\n", "LIMIT 100;\n", "\n", "\n", "-- Create an external table referring to a GCS path\n", "CREATE OR REPLACE EXTERNAL TABLE `taxi-rides-ny.nytaxi.external_yellow_tripdata`\n", "OPTIONS (\n", "  format = 'CSV',\n", "  uris = ['gs://nyc-tl-data/trip data/yellow_tripdata_2019-*.csv', 'gs://nyc-tl-data/trip data/yellow_tripdata_2020-*.csv']\n", ");\n", "\n", "-- Check the yellow trip data\n", "SELECT * FROM taxi-rides-ny.nytaxi.external_yellow_tripdata LIMIT 10;\n", "\n", "-- Create a non-partitioned table from the external table\n", "CREATE OR REPLACE TABLE taxi-rides-ny.nytaxi.yellow_tripdata_non_partitoned AS\n", "SELECT * FROM taxi-rides-ny.nytaxi.external_yellow_tripdata;\n", "\n", "\n", "-- Create a partitioned table from the external table\n", "CREATE OR REPLACE TABLE taxi-rides-ny.nytaxi.yellow_tripdata_partitoned\n", "PARTITION BY\n", "  DATE(tpep_pickup_datetime) AS\n", "SELECT * FROM taxi-rides-ny.nytaxi.external_yellow_tripdata;\n", "\n", "-- Impact of partitioning\n", "-- Scanning 1.6 GB of data\n", "SELECT DISTINCT(VendorID)\n", "FROM taxi-rides-ny.nytaxi.yellow_tripdata_non_partitoned\n", "WHERE DATE(tpep_pickup_datetime) BETWEEN '2019-06-01' AND '2019-06-30';\n", "\n", "-- Scanning ~106 MB of data\n", "SELECT DISTINCT(VendorID)\n", "FROM taxi-rides-ny.nytaxi.yellow_tripdata_partitoned\n", "WHERE DATE(tpep_pickup_datetime) BETWEEN '2019-06-01' AND '2019-06-30';\n", "\n", "-- Let's look into the partitions\n", "SELECT table_name, partition_id, total_rows\n", "FROM `nytaxi.INFORMATION_SCHEMA.PARTITIONS`\n", "WHERE table_name = 'yellow_tripdata_partitoned'\n", "ORDER BY total_rows DESC;\n", "\n", "-- Create a partitioned and clustered table\n", "CREATE OR REPLACE TABLE taxi-rides-ny.nytaxi.yellow_tripdata_partitoned_clustered\n", "PARTITION BY DATE(tpep_pickup_datetime)\n", "CLUSTER BY VendorID AS\n", "SELECT * FROM taxi-rides-ny.nytaxi.external_yellow_tripdata;\n", "\n", "-- Query scans 1.1 GB\n", "SELECT count(*) as trips\n", "FROM taxi-rides-ny.nytaxi.yellow_tripdata_partitoned\n", "WHERE DATE(tpep_pickup_datetime) BETWEEN '2019-06-01' AND '2020-12-31'\n", "  AND VendorID=1;\n", "\n", "-- Query scans 864.5 MB\n", "SELECT count(*) as trips\n", "FROM taxi-rides-ny.nytaxi.yellow_tripdata_partitoned_clustered\n", "WHERE DATE(tpep_pickup_datetime) BETWEEN '2019-06-01' AND '2020-12-31'\n", "  AND VendorID=1;\n", "\n", "```" ] },
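{ "cell_type": "markdown", "metadata": {}, "source": [ "To verify numbers like the ones in the comments above on your own project, you can inspect how many bytes each recent query processed. A hedged sketch, assuming your jobs run in the `US` region:\n", "\n", "```sql\n", "-- List recent queries with the amount of data they scanned,\n", "-- most expensive first.\n", "SELECT\n", "  query,\n", "  total_bytes_processed\n", "FROM\n", "  `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT\n", "WHERE\n", "  job_type = 'QUERY'\n", "  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)\n", "ORDER BY\n", "  total_bytes_processed DESC;\n", "```\n" ] },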
{ "cell_type": "markdown", "metadata": {}, "source": [ "Furthermore, if you are curious about the internals of BQ, you can check [here](https://cloud.google.com/blog/products/data-analytics/new-blog-series-bigquery-explained-overview), [here](https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-storage-overview), and also the following video.\n", "\n", "> youtube: https://youtu.be/k81mLJVX08w" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Machine Learning in BigQuery\n", "\n", "It is also possible to do machine learning inside BigQuery instead of doing it outside of it. BigQuery ML increases development speed by eliminating the need to move data.\n", "\n", "BigQuery ML supports the following types of models [[ref](https://cloud.google.com/bigquery-ml/docs/introduction)]:\n", "\n", "- Linear regression for forecasting; for example, the sales of an item on a given day. Labels are real-valued (they cannot be +/- infinity or NaN).\n", "- Binary logistic regression for classification; for example, determining whether a customer will make a purchase. Labels must only have two possible values.\n", "- Multiclass logistic regression for classification. These models can be used to predict multiple possible values such as whether an input is \"low-value,\" \"medium-value,\" or \"high-value.\" Labels can have up to 50 unique values. In BigQuery ML, multiclass logistic regression training uses a multinomial classifier with a cross-entropy loss function.\n", "- K-means clustering for data segmentation; for example, identifying customer segments. K-means is an unsupervised learning technique, so model training requires neither labels nor a data split for training and evaluation.\n", "- Matrix factorization for creating product recommendation systems. You can create product recommendations using historical customer behavior, transactions, and product ratings and then use those recommendations for personalized customer experiences.\n", "- Time series for performing time-series forecasts. You can use this feature to create millions of time series models and use them for forecasting. The model automatically handles anomalies, seasonality, and holidays.\n", "- Boosted Tree for creating XGBoost-based classification and regression models.\n", "- Deep Neural Network (DNN) for creating TensorFlow-based deep neural networks for classification and regression models.\n", "- AutoML Tables to create best-in-class models without feature engineering or model selection. AutoML Tables searches through a variety of model architectures to decide the best model.\n", "- TensorFlow model importing. This feature lets you create BigQuery ML models from previously trained TensorFlow models, then perform prediction in BigQuery ML.\n", "- Autoencoder for creating TensorFlow-based BigQuery ML models with support for sparse data representations. The models can be used in BigQuery ML for tasks such as unsupervised anomaly detection and non-linear dimensionality reduction.\n", "\n",
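"As a quick illustration of one of these model types, a k-means segmentation model can be created with a few lines of SQL. This is a hedged sketch; the dataset, table, and column names are hypothetical:\n", "\n", "```sql\n", "-- Hypothetical names: cluster customers into four segments based on\n", "-- their behavior; k-means needs no labels and no train/eval split.\n", "CREATE OR REPLACE MODEL `mydataset.customer_segments`\n", "OPTIONS (model_type = 'kmeans', num_clusters = 4) AS\n", "SELECT\n", "  order_count,\n", "  total_spend,\n", "  days_since_last_order\n", "FROM\n", "  `mydataset.customer_stats`;\n", "```\n", "\n",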
"![](images/data-engineering-w3/8.png)\n", "*[source](https://cloud.google.com/blog/products/data-analytics/automl-tables-now-generally-available-bigquery-ml)*\n", "\n", "Check [here](https://cloud.google.com/bigquery-ml/docs/introduction) and the following video to learn more about how to train a linear regression model in BQ:\n", "\n", "> youtube: https://youtu.be/B-WtpB0PuG4" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here is an example of training a linear regression model in BQ on the dataset we uploaded to GCS and BQ in the previous week using Airflow, and also how to do hyperparameter tuning [[ref1](https://github.com/DataTalksClub/data-engineering-zoomcamp/blob/main/week_3_data_warehouse/extract_model.md)]:\n", "\n", "```sql\n", "-- SELECT THE COLUMNS YOU ARE INTERESTED IN\n", "SELECT passenger_count, trip_distance, PULocationID, DOLocationID, payment_type, fare_amount, tolls_amount, tip_amount\n", "FROM `taxi-rides-ny.nytaxi.yellow_tripdata_partitoned` WHERE fare_amount != 0;\n", "\n", "-- CREATE AN ML TABLE WITH APPROPRIATE TYPES\n", "CREATE OR REPLACE TABLE `taxi-rides-ny.nytaxi.yellow_tripdata_ml` (\n", "  `passenger_count` INTEGER,\n", "  `trip_distance` FLOAT64,\n", "  `PULocationID` STRING,\n", "  `DOLocationID` STRING,\n", "  `payment_type` STRING,\n", "  `fare_amount` FLOAT64,\n", "  `tolls_amount` FLOAT64,\n", "  `tip_amount` FLOAT64\n", ") AS (\n", "  SELECT passenger_count, trip_distance, CAST(PULocationID AS STRING), CAST(DOLocationID AS STRING),\n", "    CAST(payment_type AS STRING), fare_amount, tolls_amount, tip_amount\n", "  FROM `taxi-rides-ny.nytaxi.yellow_tripdata_partitoned` WHERE fare_amount != 0\n", ");\n", "\n", "-- CREATE A MODEL WITH DEFAULT SETTINGS\n", "CREATE OR REPLACE MODEL `taxi-rides-ny.nytaxi.tip_model`\n", "OPTIONS (\n", "  model_type='linear_reg',\n", "  input_label_cols=['tip_amount'],\n", "  DATA_SPLIT_METHOD='AUTO_SPLIT'\n", ") AS\n", "SELECT *\n", "FROM `taxi-rides-ny.nytaxi.yellow_tripdata_ml`\n", "WHERE tip_amount IS NOT NULL;\n", "\n", "-- CHECK FEATURES\n", "SELECT * FROM ML.FEATURE_INFO(MODEL `taxi-rides-ny.nytaxi.tip_model`);\n", "\n", "-- EVALUATE THE MODEL\n", "SELECT *\n", "FROM ML.EVALUATE(MODEL `taxi-rides-ny.nytaxi.tip_model`, (\n", "  SELECT *\n", "  FROM `taxi-rides-ny.nytaxi.yellow_tripdata_ml`\n", "  WHERE tip_amount IS NOT NULL\n", "));\n", "\n", "-- PREDICT WITH THE MODEL\n", "SELECT *\n", "FROM ML.PREDICT(MODEL `taxi-rides-ny.nytaxi.tip_model`, (\n", "  SELECT *\n", "  FROM `taxi-rides-ny.nytaxi.yellow_tripdata_ml`\n", "  WHERE tip_amount IS NOT NULL\n", "));\n", "\n", "-- PREDICT AND EXPLAIN\n", "SELECT *\n", "FROM ML.EXPLAIN_PREDICT(MODEL `taxi-rides-ny.nytaxi.tip_model`, (\n", "  SELECT *\n", "  FROM `taxi-rides-ny.nytaxi.yellow_tripdata_ml`\n", "  WHERE tip_amount IS NOT NULL\n", "), STRUCT(3 AS top_k_features));\n", "\n", "-- HYPERPARAMETER TUNING\n", "CREATE OR REPLACE MODEL `taxi-rides-ny.nytaxi.tip_hyperparam_model`\n", "OPTIONS (\n", "  model_type='linear_reg',\n", "  input_label_cols=['tip_amount'],\n", "  DATA_SPLIT_METHOD='AUTO_SPLIT',\n", "  num_trials=5,\n", "  max_parallel_trials=2,\n", "  l1_reg=hparam_range(0, 20),\n", "  l2_reg=hparam_candidates([0, 0.1, 1, 10])\n", ") AS\n", "SELECT *\n", "FROM `taxi-rides-ny.nytaxi.yellow_tripdata_ml`\n", "WHERE tip_amount IS NOT NULL;\n", "```" ] },
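{ "cell_type": "markdown", "metadata": {}, "source": [ "Since the hyperparameter-tuned model above runs several trials, you may want to compare them before picking one. A hedged sketch using BigQuery ML's trial inspection function:\n", "\n", "```sql\n", "-- List the hyperparameters and evaluation metrics of each tuning trial,\n", "-- so you can see which l1_reg/l2_reg combination worked best.\n", "SELECT *\n", "FROM ML.TRIAL_INFO(MODEL `taxi-rides-ny.nytaxi.tip_hyperparam_model`);\n", "```\n" ] },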
{}, "source": [ "After training the model, we need to deploy it. The following video explains how to that:\n", "\n", "> youtube: https://youtu.be/BjARzEWaznU" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here is the deployment steps: [[ref1](https://github.com/DataTalksClub/data-engineering-zoomcamp/blob/main/week_3_data_warehouse/extract_model.md), [ref2](https://cloud.google.com/bigquery-ml/docs/export-model-tutorial)]\n", "```bash\n", "- gcloud auth login\n", "- bq --project_id taxi-rides-ny extract -m nytaxi.tip_model gs://taxi_ml_model/tip_model\n", "- mkdir /tmp/model\n", "- gsutil cp -r gs://taxi_ml_model/tip_model /tmp/model\n", "- mkdir -p serving_dir/tip_model/1\n", "- cp -r /tmp/model/tip_model/* serving_dir/tip_model/1\n", "- docker pull tensorflow/serving\n", "- docker run -p 8501:8501 --mount type=bind,source=`pwd`/serving_dir/tip_model,target=\n", " /models/tip_model -e MODEL_NAME=tip_model -t tensorflow/serving &\n", "- curl -d '{\"instances\": [{\"passenger_count\":1, \"trip_distance\":12.2, \"PULocationID\":\"193\", \"DOLocationID\":\"264\", \"payment_type\":\"2\",\"fare_amount\":20.4,\"tolls_amount\":0.0}]}' -X POST http://localhost:8501/v1/models/tip_model:predict\n", "- http://localhost:8501/v1/models/tip_model\n", "```\n", "\n", "[Here](https://www.visual-design.net/post/how-to-build-ml-model-using-bigquery) is another nice blog post on using ML in BQ.\n", "\n", "You can also check the following video which is a workshop on BQ+Airflow:\n", "\n", "> youtube: https://youtu.be/lAxAhHNeGww" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Other useful resources:\n", "\n", "> youtube: https://youtu.be/ZVgt1-LfWW4" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Amazon Redshift\n", "\n", "Amazon Redshift is an alternate AWS data warehouse service that is not covered in the course but is quite comparable to BigQuery. It is an AWS Cloud-based petabyte-scale data warehouse solution that is completely managed. An Amazon Redshift data warehouse is a group of computer resources known as nodes that are arranged into a group known as a cluster. Each cluster contains one or more databases and is powered by an Amazon Redshift engine." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following diagram illustrates a typical data processing flow in Amazon Redshift. [[ref](https://docs.aws.amazon.com/redshift/latest/gsg/concepts-diagrams.html)]\n", "\n", "\n", "![](images/data-engineering-w3/9.png)\n", "*[source](https://docs.aws.amazon.com/redshift/latest/gsg/concepts-diagrams.html)*\n", "\n", "Different types of data sources constantly upload structured, semistructured, or unstructured data to the data storage layer at the data ingestion layer. This data storage area functions as a staging place for data in various states of consumption readiness. An Amazon Simple Storage Service (Amazon S3) bucket is an example of storage.\n", "\n", "The source data is preprocessed, validated, and transformed utilizing extract, transform, load (ETL) or extract, load, transform (ELT) pipelines at the optional data processing layer. ETL techniques are subsequently used to refine these raw datasets. 
"\n", "At the data consumption layer, data is loaded into your Amazon Redshift cluster, where you can run analytical workloads.\n", "\n", "Data can also be consumed for analytical workloads as follows:\n", "\n", "- Use datashares to securely and easily share live data across Amazon Redshift clusters for reading purposes. Data can be shared at different levels, such as databases, schemas, tables, views, and SQL user-defined functions (UDFs). A sketch follows this list.\n", "\n", "- Amazon Redshift Spectrum can be used to query data in Amazon S3 files without having to load the data into Amazon Redshift tables. Amazon Redshift offers SQL capabilities designed for fast, online analytical processing (OLAP) of very large datasets stored in both Amazon Redshift clusters and Amazon S3 data lakes.\n", "\n", "- Using a federated query, you can join data from relational databases such as Amazon Relational Database Service (Amazon RDS) and Amazon Aurora, or from Amazon S3, with data in your Amazon Redshift database. Amazon Redshift can be used to query operational data directly (without moving it), apply transformations, and insert data into Amazon Redshift tables.\n", "\n", "- Amazon Redshift machine learning (ML) generates models based on the data you provide and the metadata associated with the data inputs. These models capture patterns in the input data and can then be used to make predictions for new input data. Amazon Redshift works with Amazon SageMaker Autopilot to choose the best model and make the prediction function available in Amazon Redshift." ] },
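{ "cell_type": "markdown", "metadata": {}, "source": [ "For the datashare option, the flow in SQL looks roughly like the hedged sketch below; the datashare, schema, and namespace values are hypothetical:\n", "\n", "```sql\n", "-- On the producer cluster: create a datashare and add objects to it.\n", "-- (sales_share, the sales schema, and the namespace GUID are placeholders.)\n", "CREATE DATASHARE sales_share;\n", "ALTER DATASHARE sales_share ADD SCHEMA sales;\n", "ALTER DATASHARE sales_share ADD ALL TABLES IN SCHEMA sales;\n", "\n", "-- Grant a consumer cluster (identified by its namespace) read access.\n", "GRANT USAGE ON DATASHARE sales_share TO NAMESPACE 'abc123-namespace-guid';\n", "\n", "-- On the consumer cluster: create a database from the datashare and query it.\n", "CREATE DATABASE sales_db FROM DATASHARE sales_share OF NAMESPACE 'abc123-namespace-guid';\n", "SELECT COUNT(*) FROM sales_db.sales.orders;\n", "```\n" ] },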
{ "cell_type": "markdown", "metadata": {}, "source": [ "To learn more about Amazon Redshift and how to create databases and tables and query data from Redshift or external sources, check [this](https://docs.aws.amazon.com/redshift/latest/gsg/new-user.html) and [this](https://docs.aws.amazon.com/redshift/latest/gsg/data-querying.html) tutorial. It is quite similar to BigQuery.\n", "\n", "Let's check the more interesting part: machine learning in Redshift." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Machine Learning in Redshift\n", "\n", "Amazon Redshift ML makes it simple for data analysts and database engineers to construct, train, and deploy machine learning models in Amazon Redshift data warehouses using standard SQL commands. You can use Redshift ML to access Amazon SageMaker, a fully managed machine learning service, without learning new tools or languages. You simply use SQL commands to develop and train Amazon SageMaker machine learning models on your Redshift data, and then use these models to make predictions.\n", "\n", "![](images/data-engineering-w3/7.png)\n", "*[source](https://aws.amazon.com/redshift/features/redshift-ml/)*" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Check the following demo to see how to do machine learning in Amazon Redshift:\n", "\n", "> youtube: https://youtu.be/bpiKwSj0X7g" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how to create a model and run some inference queries for different scenarios using the SQL function that the `CREATE MODEL` command generates. [[ref](https://docs.aws.amazon.com/redshift/latest/dg/tutorial_customer_churn.html)]\n", "\n", "First, create a table from a dataset in S3:\n", "\n", "```sql\n", "DROP TABLE IF EXISTS customer_activity;\n", "\n", "CREATE TABLE customer_activity (\n", "  state varchar(2),\n", "  account_length int,\n", "  area_code int,\n", "  phone varchar(8),\n", "  intl_plan varchar(3),\n", "  vMail_plan varchar(3),\n", "  vMail_message int,\n", "  day_mins float,\n", "  day_calls int,\n", "  day_charge float,\n", "  total_charge float,\n", "  eve_mins float,\n", "  eve_calls int,\n", "  eve_charge float,\n", "  night_mins float,\n", "  night_calls int,\n", "  night_charge float,\n", "  intl_mins float,\n", "  intl_calls int,\n", "  intl_charge float,\n", "  cust_serv_calls int,\n", "  churn varchar(6),\n", "  record_date date);\n", "\n", "COPY customer_activity\n", "FROM 's3://redshift-downloads/redshift-ml/customer_activity/'\n", "REGION 'us-east-1' IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/Redshift-ML'\n", "DELIMITER ',' IGNOREHEADER 1;\n", "```\n", "\n", "Then create the model:\n", "\n", "```sql\n", "CREATE MODEL customer_churn_auto_model FROM (\n", "    SELECT\n", "      state,\n", "      account_length,\n", "      area_code,\n", "      total_charge/account_length AS average_daily_spend,\n", "      cust_serv_calls/account_length AS average_daily_cases,\n", "      churn\n", "    FROM\n", "      customer_activity\n", "    WHERE\n", "      record_date < '2020-01-01'\n", ")\n", "TARGET churn FUNCTION ml_fn_customer_churn_auto\n", "IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/Redshift-ML'\n", "SETTINGS (\n", "  S3_BUCKET 'your-bucket'\n", ");\n", "```\n", "\n", "The `SELECT` query creates the training data. The `TARGET` clause specifies which column is the machine learning `label` that `CREATE MODEL` uses to learn how to predict. The remaining columns are the features (inputs) used for the prediction.\n", "\n", "Finally, the prediction can be done as follows:\n", "\n", "```sql\n", "SELECT phone,\n", "    ml_fn_customer_churn_auto(\n", "      state,\n", "      account_length,\n", "      area_code,\n", "      total_charge/account_length,\n", "      cust_serv_calls/account_length)\n", "    AS active FROM customer_activity WHERE record_date > '2020-01-01';\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Conclusion\n", "\n", "This week, we focused on data warehouses and reviewed the Google BigQuery and Amazon Redshift services. We learned how to run normal SQL queries and also how to do machine learning in these warehouses using SQL." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 4 }