{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "title: \"Kafka PySpark Producer Example\"\n", "date: 2021-02-24\n", "type: technical_note\n", "draft: false\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Consuming Messages from Kafka Tour Producer Using PySpark\n", "\n", "To run this notebook you should have taken the Kafka tour and created the Producer and Consumer jobs. I.e your Job UI should look like this: \n", "\n", "![kafka11.png](./images/kafka11.png)\n", "\n", "In this notebook we will consume messages from Kafka that were produced by the producer-job created in the Demo. Go to the Jobs-UI in hopsworks and start the Kafka producer job:\n", "\n", "![kafka12.png](./images/kafka12.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports\n", "\n", "We use utility functions from the hops library to make Kafka configuration simple\n", "\n", "Dependencies: \n", "\n", "- hops-py-util" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from hops import kafka\n", "from hops import tls\n", "from hops import hdfs\n", "from confluent_kafka import Producer, Consumer\n", "import numpy as np\n", "from pyspark.sql.types import StructType, StructField, FloatType, TimestampType" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Constants\n", "\n", "Update the `TOPIC_NAME` field to reflect the name of your Kafka topic that was created in your Kafka tour (e.g \"DemoKafkaTopic_3\")\n", "\n", "Update the `OUTPUT_PATH` field to where the output data should be written" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "TOPIC_NAME = \"test2\"\n", "OUTPUT_PATH = \"/Projects/\" + hdfs.project_name() + \"/Resources/data-txt\"\n", "CHECKPOINT_PATH = \"/Projects/\" + hdfs.project_name() + \"/Resources/checkpoint-txt\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Consume the Kafka Topic using Spark and Write to a Sink" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The below snippet creates a streaming DataFrame with Kafka as a data source. Spark is lazy so it will not start streaming the data from Kafka into the dataframe until we specify an output sink (which we do later on in this notebook)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "#lazy\n", "df = spark \\\n", " .readStream \\\n", " .format(\"kafka\") \\\n", " .option(\"kafka.bootstrap.servers\", kafka.get_broker_endpoints()) \\\n", " .option(\"kafka.security.protocol\",kafka.get_security_protocol()) \\\n", " .option(\"kafka.ssl.truststore.location\", tls.get_trust_store()) \\\n", " .option(\"kafka.ssl.truststore.password\", tls.get_key_store_pwd()) \\\n", " .option(\"kafka.ssl.keystore.location\", tls.get_key_store()) \\\n", " .option(\"kafka.ssl.keystore.password\", tls.get_key_store_pwd()) \\\n", " .option(\"kafka.ssl.key.password\", tls.get_trust_store_pwd()) \\\n", " .option(\"kafka.ssl.endpoint.identification.algorithm\", \"\") \\\n", " .option(\"subscribe\", TOPIC_NAME) \\\n", " .load()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When using Kafka as a data source, Spark gives us a default kafka schema as printed below" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "root\n", " |-- key: binary (nullable = true)\n", " |-- value: binary (nullable = true)\n", " |-- topic: string (nullable = true)\n", " |-- partition: integer (nullable = true)\n", " |-- offset: long (nullable = true)\n", " |-- timestamp: timestamp (nullable = true)\n", " |-- timestampType: integer (nullable = true)" ] } ], "source": [ "df.printSchema()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are using the Spark structured streaming engine, which means that we can express stream queries just as we would do in batch jobs. \n", "\n", "Below we filter the input stream to select only the message values" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "messages = df.selectExpr(\"CAST(value AS STRING)\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Specify the output query and the sink of the stream job to be a CSV file in HopsFS. \n", "\n", "By using checkpointing and a WAL, spark gives us end-to-end exactly-once fault-tolerance" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "query = messages \\\n", " .writeStream \\\n", " .format(\"text\") \\\n", " .option(\"path\", OUTPUT_PATH) \\\n", " .option(\"checkpointLocation\", CHECKPOINT_PATH) \\\n", " .start()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the streaming job, in theory streaming jobs should run forever. \n", "\n", "The call below will be blocking and not terminate. To kill this job you have to restart the pyspark kernel." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "query.awaitTermination()\n", "query.stop()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the job is running you can go to the HDFS file browser in the Hopsworks UI to preview the files:\n", "\n", "![kafka14.png](./images/kafka14.png)\n", "![kafka13.png](./images/kafka13.png)\n", "![kafka15.png](./images/kafka15.png)\n", "![kafka16.png](./images/kafka16.png)" ] } ], "metadata": { "kernelspec": { "display_name": "PySpark", "language": "python", "name": "pysparkkernel" }, "language_info": { "codemirror_mode": { "name": "python", "version": 3 }, "mimetype": "text/x-python", "name": "pyspark", "pygments_lexer": "python3" } }, "nbformat": 4, "nbformat_minor": 4 }