{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "By the end of the lab, you will have learned how to configure Databricks, work with storage accounts, process data using Scala, store processed data in Delta Lake, and visualize the data in Power BI.\n", "\n", "In this lab, we'll cover the following recipes:\n", "\n", "- Configuring the Azure Databricks environment\n", "- Integrate Databricks with Azure Key Vault\n", "- Mounting an Azure Data Lake container in Databricks\n", "- Processing data using notebooks\n", "- Scheduling notebooks using job clusters\n", "- Working with Delta Lake tables\n", "- Connecting a Databricks Delta Lake to Power BI" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Recipe 1 - Configuring the Azure Databricks environment" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "In this recipe, we’ll learn how to configure the Azure Databricks environment by creating an Azure Databricks workspace, cluster, and cluster pools.\n", "\n", "An Azure Databricks workspace is the starting point for writing solutions in Azure Databricks. A workspace is where you create clusters, write notebooks, schedule jobs, and manage the Azure Databricks environment.\n", "\n", "An Azure Databricks workspace can be created in an Azure-managed virtual network or customer-managed virtual network. In this recipe, we will create a Databricks cluster in an Azure-managed network. Let’s get started:\n", "\n", "1. Go to portal.azure.com and click Create a resource. Search for Azure Databricks. Click Create.\n", "1. Provide the resource group name and workspace name, and click Review + Create.\n", "1. Once the resource is created, go to the Databricks workspace that we created (go to portal.azure.com, click on All resources and search for your workspace)." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "NOTE\n", "\n", "Cluster types: There are two types of cluster - Interactive and Automated. Interactive clusters are created manually by users so that they can interactively analyze the data while working on, or even developing, a data engineering solution. Automated clusters are created automatically when a job starts and are terminated as and when the job completes.\n", "\n", "Cluster mode: There are two major cluster modes - Standard and High Concurrency. Standard cluster mode uses single-user clusters, optimized to run tasks one at a time. The High Concurrency cluster mode is optimized to run multiple tasks in parallel; however, it only supports R, Python, and SQL workloads, and doesn’t support Scala.\n", "\n", "Cluster notes: There are two types of cluster nodes - Worker type and Driver type. The Driver type node is responsible for maintaining a notebook’s state information, interpreting the commands being run from a notebook or a library, and co-ordinates with Spark executors. The Worker type nodes are the Spark executor nodes, which are responsible for distributed data processing." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Creating Azure Databricks clusters" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Perform the following steps to create a Databricks cluster:\n", "\n", "1. Click Launch Workspace.\n", "1. To create a cluster, select Compute from the left-hand menu of the Databricks workspace.\n", "1. Click Create Cluster to create a new cluster. 
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Creating Azure Databricks pools" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Azure Databricks pools reduce cluster startup and autoscaling time by maintaining a set of idle, ready-to-use instances, so that new instances don’t have to be provisioned each time a cluster needs them. To create Azure Databricks pools, execute the following steps:\n", "\n", "1. In your Azure Databricks workspace, on the Compute page, select the Pools option, and then select Create Pool to create a new pool. Provide the pool’s name (we’ll use dbclusterpool), then set Min Idle to 2, Max Capacity to 4, and Idle Instance Auto Termination to 10. Leave Instance Type as its default of Standard_DS3_v2 and set Preloaded Databricks Runtime Version to Runtime: 9.1 LTS (Scala 2.12, Spark 3.1.2).\n", "1. Click Create to create the pool. We can attach a new or existing cluster to a pool by specifying the pool name under the Worker type option and Driver type option. In the workspace, navigate to the Clusters page and select dbcluster01, which we created in the previous section. On the dbcluster01 page, click Edit, and select dbclusterpool from the Worker type drop-down list and Driver type drop-down list.\n", "1. Click Confirm to apply these changes. The cluster will now show up in the Attached Clusters list on the pool’s page.\n", "\n", "We can add multiple clusters to a pool. Whenever a cluster, such as dbcluster01, requires an instance, it’ll attempt to allocate one of the pool’s idle instances. If an idle instance isn’t available, the pool expands to get new instances, as long as the number of instances is under the maximum capacity." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Recipe 3 - Integrating Databricks with Azure Key Vault" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Azure Key Vault is a useful service for storing keys and secrets that are used by various other services and applications. It is important to integrate Azure Key Vault with Databricks, as you can store the credentials for resources such as a SQL database or a data lake account inside the key vault. Once integrated, Databricks can reference the key vault, obtain the credentials, and access the database or data lake account. In this recipe, we will cover how you can integrate Databricks with Azure Key Vault.\n", "\n", "1. Log in to portal.azure.com, click Create a resource, search for Key Vault, and click Create. Provide the key vault details, and click Review + create.\n", "1. Open the Databricks workspace on the Azure portal and click Launch Workspace.\n", "1. This will open up the Databricks portal. Copy the URL, which will be of the form https://adb-xxxxxxxxxxxxxxxx.xx.azuredatabricks.net/?o=xxxxxxxxxxxxxxxx#.\n", "1. Add secrets/createScope to the end of the URL and go to the URL https://adb-xxxxxxxxxxxxxxxx.xx.azuredatabricks.net/?o=xxxxxxxxxxxxxxxx#secrets/createScope. It should open a page to create a secret scope.\n", "1. Provide the following details:\n", " - Scope name: Any name that relates to the key or password that you will access through the key vault. In our case, let’s use datalakekey.\n", " - DNS Name: The DNS name of the key vault, which has the format `<key-vault-name>.vault.azure.net`. In our case, it is sparshkv.vault.azure.net.\n", " - Set Manage Principal to All Users.\n", " - Resource ID: The resource ID of the key vault. To obtain it, go to the key vault in the Azure portal (under All resources, search for sparshkv). Go to Properties and copy the Resource ID.\n", "\n", "You will receive confirmation that a secret scope called datalakekey has been successfully added. This completes the integration between Databricks and Azure Key Vault.\n", "\n", "How it works…\n", "\n", "Upon completion of the preceding steps, all users with access to the Databricks workspace will be able to extract secrets and keys from the key vault and use them in a Databricks notebook to perform the desired operations.\n", "\n", "Behind the scenes, Databricks uses a service principal to access the key vault. As we create the scope in the Databricks portal, Azure will grant the relevant permissions required for the Databricks service principal on the key vault. You can verify this using the following steps:\n", "\n", "1. Go to the sparshkv key vault on the Azure portal.\n", "1. Click on Access policies. You will notice the Azure Databricks service principal being granted Get and List permissions on secrets. This will allow the Databricks workspace to read secrets from the key vault." ] },
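{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "With the scope in place, a notebook can discover and read secrets through the dbutils.secrets utility. The following is a minimal Scala sketch; the secret name shown is a placeholder, since we only add actual secrets to the key vault in the next recipe, and note that Databricks redacts secret values if you try to print them:\n", "\n", "```scala\n", "// List the secret scopes visible to this workspace; datalakekey should appear.\n", "dbutils.secrets.listScopes().foreach(println)\n", "\n", "// List the secrets available in the scope (empty until secrets are added to the key vault).\n", "dbutils.secrets.list(\"datalakekey\").foreach(println)\n", "\n", "// Retrieve a secret value for use in code (shown here with a placeholder secret name).\n", "// val secretValue = dbutils.secrets.get(scope = \"datalakekey\", key = \"<secret-name>\")\n", "```" ] },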
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Recipe 4 - Mounting an Azure Data Lake container in Databricks" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Accessing data from Azure Data Lake is one of the fundamental steps of performing data processing in Databricks. In this recipe, we will learn how to mount an Azure Data Lake container in Databricks using a service principal. We will use Azure Key Vault to store the service principal ID and the service principal secret that will be used to mount the data lake container in Databricks.\n", "\n", "1. Go to the Azure Data Lake Storage account created in the Azure portal. Click Containers. Click + Container.\n", "1. Provide a container name of databricks and click Create.\n", "\n", "![](https://user-images.githubusercontent.com/62965911/218251815-91d04d7c-005e-4df8-bfdd-ad208b725e3c.png)\n", "\n", "Mounting the container in Databricks will involve the following steps:\n", "\n", "1. Go to portal.azure.com and click Azure Active Directory. Click App registrations and then hit + New registration.\n", "1. Provide a name of `SparshDatabricks` and click on the Register button.\n", "1. After the registration is done, copy the Application (client) ID and the Directory (tenant) ID.\n", "1. Click Certificates & secrets. Click + New client secret. For Description, provide any relevant description and click Add.\n", "1. Copy the generated secret value. Ensure you copy it before closing the screen, as you only get to see it once.\n", "1. Go to the sparshkv Azure Key Vault. Click Secrets. Click + Generate/Import.\n", "1. Set Name as appsecret and Value as the secret value copied in step 5. Hit the Create button.\n",
"1. Repeat step 6 and step 7, and add the application ID and directory ID values saved in step 3 as secrets inside the key vault. Provide the names for the secrets as ApplicationID and DirectoryID. Once done, the key vault should have three secrets.\n", " ![](https://user-images.githubusercontent.com/62965911/218252078-30cc3316-ee86-48ee-bbe0-7ec302f6ad23.png)\n", "1. Go to the Data Lake account. Click Containers and open the databricks container. Click Access Control (IAM) and then click Add role assignment.\n", "1. Search for the `Storage Blob Data Contributor` role, select it, and click Next.\n", "1. Click + Select members and, in the Select members box, type the app name registered earlier, which is `SparshDatabricks`.\n", "1. Select SparshDatabricks, hit the Select button, and then click Review + assign to assign the permission.\n", "1. Launch the Databricks workspace if you need to and go back to the Databricks portal. From the Create menu, select Notebook.\n", "1. Provide any notebook name. Set Default Language to Scala.\n", "1. Use the following Scala code to mount the data lake container in Databricks." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The code extracts the application ID, application secret, and directory ID from the key vault using the dbutils.secrets.get function available in Scala. The dbutils.secrets.get function takes the scope name (provided in the Integrating Databricks with Azure Key Vault recipe) and the secret names provided in step 7 and step 8. The dbutils.fs.mount command has a parameter called source, which takes the URL of the data lake container to be mounted. The data lake container URL format is `abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/` and, in our case, the URL would be `abfss://databricks@sparshstorage1.dfs.core.windows.net/`:\n", "\n", "```scala\n", "val appsecret = dbutils.secrets.get(scope=\"datalakekey\",key=\"appsecret\")\n", "val ApplicationID = dbutils.secrets.get(scope=\"datalakekey\",key=\"ApplicationID\")\n", "val DirectoryID = dbutils.secrets.get(scope=\"datalakekey\",key=\"DirectoryID\")\n", "val endpoint = \"https://login.microsoftonline.com/\" + DirectoryID + \"/oauth2/token\"\n", "val configs = Map(\n", " \"fs.azure.account.auth.type\" -> \"OAuth\",\n", " \"fs.azure.account.oauth.provider.type\" -> \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\",\n", " \"fs.azure.account.oauth2.client.id\" -> ApplicationID,\n", " \"fs.azure.account.oauth2.client.secret\" -> appsecret,\n", " \"fs.azure.account.oauth2.client.endpoint\" -> endpoint)\n", "// Optionally, you can append a directory path to the source URI of your mount point.\n", "dbutils.fs.mount(\n", " source = \"abfss://databricks@sparshstorage1.dfs.core.windows.net/\",\n", " mountPoint = \"/mnt/datalakestorage\",\n", " extraConfigs = configs)\n", "```\n", "\n", "Upon running the script, the data lake container will be successfully mounted:\n", "\n", "![](https://user-images.githubusercontent.com/62965911/218252771-c0dfe803-6dbd-4a84-a9e6-04c994de982b.png)\n", "\n", "How it works...\n", "\n", "On Azure AD, we registered an application, which created a service principal. Service principals are entities that applications can use to authenticate themselves to Azure services. We granted permissions to the application on the container to be accessed, which grants those permissions to the service principal that was created. We stored the credentials of the service principal (the application ID and secret) in Azure Key Vault to ensure secure access to the credentials. While mounting the data lake container, Databricks retrieves the application ID, directory ID, and secret from the key vault and uses the security context of the service principal to access the Azure Data Lake account. This process makes for a very secure method of accessing a data lake account for the following reasons:\n", "\n", "- Sensitive information such as an application secret or directory ID is accessed programmatically and not used in plain text inside the notebook. This ensures sensitive information is not exposed to anyone who accesses the notebook.\n", "- Developers who access the notebook needn't know the password or key of the data lake account. Using a key vault ensures that they can continue their development work, even without having direct access to the data lake account or database.\n", "- Data lake accounts can be accessed through account keys too. However, we used service principals for authentication, as using service principals restricts access to the data lake account via applications, while accessing them using an account key or **Shared Access Signature** (**SAS**) token would provide direct login access to the data lake account via tools such as Azure Storage Explorer." ] },
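{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "After the mount succeeds, the container can be used like any other DBFS path. The following is a minimal Scala sketch for verifying the mount; the sample.csv file name is a hypothetical example of reading from the mounted container:\n", "\n", "```scala\n", "// List all mount points defined in the workspace; /mnt/datalakestorage should appear.\n", "dbutils.fs.mounts().foreach(m => println(s\"${m.mountPoint} -> ${m.source}\"))\n", "\n", "// List the contents of the mounted container.\n", "dbutils.fs.ls(\"/mnt/datalakestorage\").foreach(f => println(f.path))\n", "\n", "// Read a file from the mount point just like any other DBFS path (hypothetical file name).\n", "// val df = spark.read.option(\"header\", \"true\").csv(\"/mnt/datalakestorage/sample.csv\")\n", "\n", "// Unmount when the mount point is no longer needed.\n", "// dbutils.fs.unmount(\"/mnt/datalakestorage\")\n", "```" ] },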
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Recipe 5 - Processing data using notebooks" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Databricks notebooks are the fundamental component in Databricks for performing data processing tasks. In this recipe, we will perform operations such as reading, filtering, and cleaning a Comma-Separated Values (CSV) file, and gaining insights from it using a Databricks notebook written in Scala code.\n", "\n", "1. From the Create menu, select Notebook.\n", "1. Set the notebook name as processdata and Default Language to Scala.\n", "1. In the File menu, click Upload Data to DBFS.\n", "1. Click Drop files to upload, or click to browse. Make sure to also note down the default path in your environment. Usually, it will follow the format `/FileStore/shared_uploads/<your-username>/`.\n", "1. Upload the covid-data.csv file.\n", "1. Execute the following command to read the data into a DataFrame. Be sure to replace the file path with the one you noted earlier. Notice from the output message that the DataFrame contains over 60 fields.\n", " ```scala\n", " val covid_raw_data = spark.read.format(\"csv\")\n", " .option(\"header\", \"true\")\n", " .option(\"inferSchema\", \"true\")\n", " .load(\"/FileStore/shared_uploads/sprsag@gmail.com/covid_data.csv\")\n", " ```\n", "1. At the right-hand corner of the cell, click on the drop-down button, and click Add Cell Below. Provide the following command to display the DataFrame:\n", " ```scala\n", " display(covid_raw_data)\n", " ```\n", "1. Execute the following command to get the row count of the CSV file or DataFrame. There are over 94,000 rows.\n", " ```scala\n", " covid_raw_data.count()\n", " ```\n", "1. Let’s remove any duplicates. The dropDuplicates() function removes duplicates and stores the result in a new DataFrame:\n", " ```scala\n", " val covid_remove_duplicates = covid_raw_data.dropDuplicates()\n", " ```\n",
"1. Let’s look at the fields and their data types in the DataFrame. The printSchema() function provides the DataFrame structure:\n", " ```scala\n", " covid_remove_duplicates.printSchema()\n", " ```\n", "1. The data is about the impact of COVID across the globe. Let’s focus on a handful of columns that are required, instead of all the columns provided. The select function can help us to specify the columns that we need out of a DataFrame. Execute the following command to load the selected columns to another DataFrame:\n", " ```scala\n", " val covid_selected_columns = covid_remove_duplicates.select(\"iso_code\",\"location\",\"continent\",\"date\",\"new_deaths_per_million\",\"people_fully_vaccinated\",\"population\")\n", " ```\n", "1. For our analysis, let’s remove any rows that contain NULL values in any of the columns. The na.drop() function can help to achieve this. Execute the following command. The covid_clean_data DataFrame will only contain rows without any NULL value in them once the command has been executed. The covid_clean_data.count() command shows that only 32,000+ of the rows are without any NULL values:\n", " ```scala\n", " val covid_clean_data = covid_selected_columns.na.drop()\n", " covid_clean_data.count()\n", " ```\n", "1. Data analysis is easier to perform using Spark SQL commands. To use SQL commands to analyze a DataFrame, we need to create a temporary view. The following command will create a temporary view called covid_view:\n", " ```scala\n", " covid_clean_data.createOrReplaceTempView(\"covid_view\")\n", " ```\n", "1. Execute the following command to get some insights out of the data. %sql, on the first line, lets us switch from Scala to SQL. The SQL query provides the total deaths per million and the percentage of people vaccinated for each country with a population of over 1 million, ordered by the countries with the highest deaths per million people:\n", " ```sql\n", " %sql\n", " SELECT iso_code, location, continent,\n", " SUM(new_deaths_per_million) as death_sum,\n", " MAX(people_fully_vaccinated * 100 / population) as percentage_vaccinated FROM covid_view\n", " WHERE population > 1000000\n", " GROUP BY iso_code,location,continent\n", " ORDER BY death_sum desc\n", " ```\n", "1. To visualize the output from the previous SQL query, click on the + icon and select Create visualization.\n", "1. Set location in the Keys section, continent in the Series groupings section, and death_sum in the Values section. This will provide a bar chart with a bar for each country, colored by the continent that the country belongs to.\n", "\n", "NOTE: The notebooks for this recipe are in the `./assets` folder.\n", "\n", "How it works…\n", "\n", "DataFrames are the fundamental objects used to store runtime data during data processing in Databricks. DataFrames are in-memory objects and extremely well-optimized for performing advanced analytics operations.\n", "\n", "A CSV file was loaded to the Databricks File System (DBFS) storage, which is the default local storage available when a Databricks workspace is created. We can perform the same activities in a data lake account too, by uploading the CSV file to the data lake container and mounting the data lake container, as explained in the Mounting an Azure Data Lake container in Databricks recipe.\n", "\n", "After loading the data to a DataFrame, we were able to cleanse the data by performing operations such as removing unwanted columns, dropping duplicates, and deleting rows with NULL values easily using Spark functions. Finally, by creating a temporary view out of the DataFrame, we were able to analyze the DataFrame’s data using SQL queries and get visual insights using Databricks' visualization capabilities." ] },
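{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The same aggregation can also be expressed directly with the DataFrame API instead of a temporary view and SQL. The following is a minimal Scala sketch, assuming the covid_clean_data DataFrame built in the steps above:\n", "\n", "```scala\n", "import org.apache.spark.sql.functions.{col, max, sum}\n", "\n", "// DataFrame API equivalent of the SQL query above.\n", "val covid_insights = covid_clean_data\n", "  .filter(col(\"population\") > 1000000)\n", "  .groupBy(\"iso_code\", \"location\", \"continent\")\n", "  .agg(\n", "    sum(\"new_deaths_per_million\").alias(\"death_sum\"),\n", "    max(col(\"people_fully_vaccinated\") * 100 / col(\"population\")).alias(\"percentage_vaccinated\"))\n", "  .orderBy(col(\"death_sum\").desc)\n", "\n", "display(covid_insights)\n", "```" ] },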
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Recipe 6 - Scheduling notebooks using job clusters" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Data processing can be performed using notebooks, but to operationalize it, we need to execute the notebook on a schedule that suits the use case or problem statement. After a notebook has been created, you can schedule it to be executed at a preferred frequency using job clusters. This recipe will demonstrate how you can schedule a notebook using job clusters.\n", "\n", "1. Create a folder called Job.\n", "1. Click Workspace, click on the Job folder, and pick the Import option. Import the SampleJob notebook from the assets folder.\n", "1. In the Job folder, you will now have a notebook called SampleJob. The SampleJob notebook will read a sample CSV file and provide insights from it.\n", "1. From the menu on the left, click Workflows.\n", "1. Click Create Job.\n", "1. Provide a job name of `SampleJob`. Select the imported SampleJob notebook. Under the New Cluster configuration, click on the edit icon to set the configuration options. The job will create a cluster each time it runs based on this configuration and delete the cluster once the job is completed.\n", "1. Reduce the total number of cores to 2 and hit the Confirm button to save the cluster configuration. Hit the Create button on the job creation screen.\n", "1. Hit the Edit schedule button to edit the schedule.\n", "1. Set Schedule Type to Scheduled and set a frequency as per your needs. Hit the Save button.\n", "1. Hit Run now to trigger the job.\n", "1. Click on Runs on the left tab to view the job run result.\n", "1. The Active runs section will show the currently active jobs. After a few minutes, the job will move from the initial Pending state to the Running state.\n", "1. Upon completion, the job result will be listed as completed in the Completed runs (past 60 days) section. Click Start time to view the result of the completed notebook.\n", "1. The job execution result is shown in the following screenshot:\n", " ![](https://user-images.githubusercontent.com/62965911/218254495-68a0a1c6-1f23-4344-b5f4-32785752020b.png)\n", "1. If we need to execute another notebook on the same schedule, we could create an additional task within the same job. Go back to the job menu and click on SampleJob. Switch to the Tasks section. Click on the + button to add a new task.\n", "1. Provide a task name. You may add any notebook to the task. Notice that the cluster name is SampleJob_cluster, which implies that it will use the cluster created for the previous task. The Depends on option controls whether the task runs only after the previous SampleJob task has completed.\n", "1. Upon adding it, both tasks will appear in the job.\n", "\n", "How it works…\n", "\n", "The imported notebook was set to run at a specific schedule using the Databricks job scheduling capabilities. While scheduling the jobs, New Cluster was selected, instead of picking any cluster available in the workspace. Picking New Cluster implies a cluster will be created each time the job runs and will be destroyed once the job completes. This also means that each run has to wait a few additional minutes for the cluster to be created.\n", "\n", "Adding multiple notebooks to the same job via additional tasks allows us to reuse the job cluster created for the first notebook execution, and the second task needn’t wait for another cluster to be created. Usage of multiple tasks and the dependency option allows us to orchestrate complex data processing flows using Databricks notebooks." ] },
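{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Notebooks can also be chained programmatically from within another notebook using dbutils.notebook.run, which is sometimes a convenient alternative to defining separate tasks. The following is a minimal Scala sketch; the notebook path and the run_date parameter are hypothetical examples:\n", "\n", "```scala\n", "// Run another notebook and wait up to 600 seconds for it to finish.\n", "// The path and the run_date parameter are hypothetical examples.\n", "val result = dbutils.notebook.run(\n", "  \"/Job/SampleJob\",\n", "  600,\n", "  Map(\"run_date\" -> \"2023-02-11\"))\n", "\n", "// dbutils.notebook.run returns whatever the called notebook passes to dbutils.notebook.exit.\n", "println(s\"Child notebook returned: $result\")\n", "```" ] },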
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Recipe 7 - Working with Delta Lake tables" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Delta Lake databases are Atomicity, Consistency, Isolation, and Durability (ACID)-compliant databases available in Databricks. Delta Lake tables are tables in Delta Lake databases that use Parquet files to store data and are highly optimized for performing analytic operations. Delta Lake tables can be used in a data processing notebook for storing preprocessed or processed data. The data stored in Delta Lake tables can be easily consumed in visualization tools such as Power BI.\n", "\n", "In this recipe, we will create a Delta Lake database and Delta Lake table, load data from a CSV file, and perform additional operations such as UPDATE, DELETE, and MERGE on the table.\n", "\n", "1. From the Databricks menu, click Create and create a new notebook.\n", "1. Create a notebook called Covid-DeltaTables with Default Language set to SQL.\n", "1. Add a cell to the notebook and execute the following command to create a Delta database called covid:\n", " ```sql\n", " CREATE DATABASE covid\n", " ```\n", "1. Execute the following command to read the covid-data.csv file into a temporary view. Please note that the path depends on the location where covid-data.csv was uploaded, so ensure you provide the correct path for your environment:\n", " ```sql\n", " CREATE TEMPORARY VIEW covid_data\n", " USING CSV\n", " OPTIONS (path \"/FileStore/shared_uploads/sprsag@gmail.com/covid_data.csv\", header \"true\", mode \"FAILFAST\")\n", " ```\n", "1. Execute the following command to create a Delta table using the temporary view. USING DELTA in the CREATE TABLE command indicates that it’s a Delta table being created. The location specifies where the table is stored. If you have mounted a data lake container (as we did in the Mounting an Azure Data Lake container in Databricks recipe), you can use the data lake mount point to store the Delta Lake table in your Azure Data Lake account. In this example, we use the default storage provided by Databricks, which comes with each Databricks workspace. The table name is provided in `<database-name>.<table-name>` format to ensure it belongs to the Delta Lake database we created. To insert the data from the view into the new table, the table is created using the CREATE TABLE command, followed by the AS keyword and a SELECT statement against the view:\n", " ```sql\n", " CREATE OR REPLACE TABLE covid.covid_data_delta\n", " USING DELTA\n", " LOCATION '/FileStore/shared_uploads/sprsag@gmail.com/covid_data_delta'\n", " AS\n", " SELECT iso_code,location,continent,date,new_deaths_per_million,people_fully_vaccinated,population FROM covid_data\n", " ```\n", "1. Go to the Databricks menu, click Data, and then click on the covid database. You will notice that a covid_data_delta table has been created.\n",
"1. Go back to the notebook we were working on. Add a new cell and delete some rows using the following command. The DELETE command will delete around 57,000 rows. We can delete, select, and insert data just as we would in any other relational database:\n", " ```sql\n", " DELETE FROM covid.covid_data_delta where population is null or people_fully_vaccinated is null or new_deaths_per_million is null or location is null\n", " ```\n", "1. Add a new cell and execute the following command to delete all the rows from the table. Let’s run a select count(*) query against the table, which will return 0 if all the rows have been deleted:\n", " ```sql\n", " delete from covid.covid_data_delta;\n", " Select count(*) from covid.covid_data_delta;\n", " ```\n", "1. Delta tables have the ability to time travel, which allows us to read older versions of the table. Using the VERSION AS OF clause, we can read an older version of the table. Version 0 is the initial version of the table, as it was when we created it from the CSV data. Add a new cell and execute the following command to read the data before deletion:\n", " ```sql\n", " select * from covid.covid_data_delta version as of 0;\n", " ```\n", "1. RESTORE TABLE can restore the table to an older version. Add a cell and execute the following command to recover all the rows deleted in the previous steps:\n", " ```sql\n", " RESTORE TABLE covid.covid_data_delta TO VERSION AS OF 0;\n", " ```\n", "1. Let’s perform an UPDATE statement, followed by a DELETE statement. Add two cells. Paste the following commands into these cells, as shown. Execute them in sequence:\n", " ```sql\n", " UPDATE covid.covid_data_delta SET population = population * 1.2 WHERE continent = 'Asia';\n", "\n", " DELETE FROM covid.covid_data_delta WHERE continent = 'Europe';\n", " ```\n", "1. If we want to revert these two operations (UPDATE and DELETE), we can use the MERGE statement and the Delta table’s row versioning capabilities to achieve this. Using a single MERGE statement, we can perform the following:\n", " - Compare the older version of the table with the current version of the table\n", " - If the rows match, update all the columns with values from the older version\n", " - If the row doesn’t match, insert the row from the older version into the current table\n", " This can be achieved using the following code:\n", " ```sql\n", " MERGE INTO covid.covid_data_delta target\n", " USING covid.covid_data_delta TIMESTAMP AS OF \"2023-02-11 11:12:00\" source\n", " ON target.location = source.location and target.date = source.date\n", " WHEN MATCHED THEN UPDATE SET *\n", " WHEN NOT MATCHED\n", " THEN INSERT *\n", " ```\n", "1. The MERGE command takes the current covid_data_delta table as the target to be updated or inserted into, and an older snapshot of the same table as the source. Instead of specifying version numbers, we can also use timestamps to obtain older versions of the table. covid_data_delta TIMESTAMP AS OF \"2023-02-11 11:12:00\" takes the version of the table as of February 11, 2023, 11:12:00 – UTC time. WHEN MATCHED THEN UPDATE SET * updates all the columns of the target table from the older version when the condition specified in the ON clause matches. When the condition doesn’t match, the rows from the older version are inserted into the current version of the table. The output, as expected, shows that the rows that were deleted in the earlier step were successfully reinserted.\n", "\n", "How it works…\n", "\n", "Delta tables offer advanced capabilities for processing data, such as support for UPDATE, DELETE, and MERGE statements. MERGE statements and the versioning capabilities of Delta tables are very powerful in ETL scenarios, where we need to perform UPSERT (update if it matches, insert if it doesn’t) operations against various tables.\n", "\n", "These capabilities for supporting data modifications and row versioning are made possible because Delta tables maintain the changes to the table via a transaction log stored in JSON format. The transaction log files are located in the same location where the table was created, in a subfolder called _delta_log. By default, the log files are retained for 30 days; this retention period can be changed using the delta.logRetentionDuration table property, which therefore also limits how far back older versions of the table can be read." ] },
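{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "To see which versions of a Delta table are available for time travel, you can inspect its commit history. The following is a minimal sketch using a Scala cell; the exact versions listed will depend on the operations you have run so far:\n", "\n", "```scala\n", "// Show the commit history of the Delta table: one row per version, including the\n", "// operation (WRITE, DELETE, UPDATE, MERGE, RESTORE) and the commit timestamp.\n", "display(spark.sql(\"DESCRIBE HISTORY covid.covid_data_delta\"))\n", "\n", "// Read an older version of the table into a DataFrame using time travel.\n", "val covid_v0 = spark.sql(\"SELECT * FROM covid.covid_data_delta VERSION AS OF 0\")\n", "println(s\"Rows in version 0: ${covid_v0.count()}\")\n", "```" ] },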
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.9.10" }, "orig_nbformat": 4, "vscode": { "interpreter": { "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49" } } }, "nbformat": 4, "nbformat_minor": 2 }