{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Parsl: Advanced Features\n", "\n",
"In this tutorial we present advanced features of Parsl, including its ability to distribute work over multiple sites, to scale elastically across those sites, and to provide fault tolerance.\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 1) Multiple Sites\n", "\n",
"In the \"parsl-introduction\" notebook we showed how a configuration object controls the executor and execution provider used to run a Parsl script. While we showed only a single site, Parsl can distribute a workload over several sites simultaneously. Below we show an example configuration that combines local thread execution and local pilot-job execution. By default, Apps may execute on any configured site. However, you can restrict an App to a specific site, or sites, by passing a list of executor labels to the App decorator. In the following cells, we show a three-stage workflow in which the first App uses local threads, the second uses local pilot jobs, and the third (with no executors specified) may use either." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "First, we define two \"sites\", which in this example are both local. The first uses threads, and the second uses pilot-job execution via the HighThroughputExecutor. We then combine both executors in a single Config object, which is later loaded with parsl.load to create the DataFlowKernel." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"from parsl.config import Config\n", "from parsl.executors.threads import ThreadPoolExecutor\n", "from parsl.executors import HighThroughputExecutor\n", "from parsl.providers import LocalProvider\n", "from parsl.channels import LocalChannel\n", "\n",
"# Define a configuration that provides both local threads and local pilot jobs\n",
"multi_site_config = Config(\n", "    executors=[\n", "        ThreadPoolExecutor(\n", "            max_threads=8,\n", "            label='local_threads'\n", "        ),\n",
"        HighThroughputExecutor(\n", "            label=\"local_htex\",\n", "            worker_debug=True,\n", "            max_workers=1,\n",
"            provider=LocalProvider(\n", "                channel=LocalChannel(),\n", "                init_blocks=1,\n", "                max_blocks=1,\n", "            ),\n", "        )\n", "    ]\n", ")\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Next, we define three Apps with the same functionality as in the previous tutorial. The first is specified to run only on the first site, the second only on the second site, and the third has no executor specification, so it may run on any available site."
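, "\n", "\n",
"As a brief illustrative sketch (not part of the original three-stage workflow), the `executors` argument of an App decorator accepts a list of labels, so an App can also be limited to an explicit subset of sites, for example both executors defined above:\n", "\n",
"```\n",
"# Hypothetical App, shown only to illustrate passing several executor labels\n",
"@python_app(executors=['local_threads', 'local_htex'])\n",
"def flexible_app():\n",
"    return 'may run on either configured executor'\n",
"```"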
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"import parsl\n", "from parsl.app.app import python_app, bash_app\n", "from parsl.data_provider.files import File\n", "\n",
"parsl.load(multi_site_config)\n", "\n",
"# The generate app runs on the \"local_threads\" executor\n",
"@bash_app(executors=[\"local_threads\"])\n", "def generate(outputs=[]):\n", "    return \"echo $(( RANDOM )) &> {}\".format(outputs[0].filepath)\n", "\n",
"# The concat app runs on the \"local_htex\" executor\n",
"@bash_app(executors=[\"local_htex\"])\n", "def concat(inputs=[], outputs=[]):\n", "    return \"cat {0} > {1}\".format(\" \".join(i.filepath for i in inputs), outputs[0].filepath)\n", "\n",
"# The total app may run on either executor\n",
"@python_app\n", "def total(inputs=[]):\n", "    total = 0\n", "    with open(inputs[0], 'r') as f:\n", "        for line in f:\n", "            total += int(line)\n", "    return total" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we run the Apps and clean up." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"# Create 5 files with random numbers\n", "output_files = []\n", "for i in range(5):\n", "    output_files.append(generate(outputs=[File('random-%s.txt' % i)]))\n", "\n",
"# Concatenate the files into a single file\n", "cc = concat(inputs=[i.outputs[0] for i in output_files], outputs=[File(\"all.txt\")])\n", "\n",
"# Calculate the sum of the random numbers\n", "result = total(inputs=[cc.outputs[0]])\n", "\n",
"print(result.result())\n", "\n", "parsl.clear()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 2) Elasticity\n", "\n",
"As a Parsl script is evaluated, it creates a collection of tasks for asynchronous execution. In most cases this stream of tasks is variable as different stages of the workflow are evaluated. To address this variability, Parsl monitors the flow of tasks and elastically provisions resources, within user-specified bounds, in response. \n", "\n",
"In the following example, we start with one block (`init_blocks=1`) and allow Parsl to scale up to two blocks (`max_blocks=2`). We then set `parallelism` to 0.1, which means that Parsl favors reusing existing resources over provisioning new ones: roughly speaking, `parallelism` is the ratio of provisioned task capacity to outstanding tasks, so with ten tasks a parallelism of 0.1 justifies only a single worker. You should therefore see that all invocations are executed by the same worker (a single process ID appears in the output). Note: we restrict Parsl to one worker per block (`max_workers=1`)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"import parsl\n", "from parsl.app.app import python_app, bash_app\n", "from parsl.providers import LocalProvider\n", "from parsl.channels import LocalChannel\n", "from parsl.config import Config\n", "from parsl.executors import HighThroughputExecutor\n", "\n",
"local_htex = Config(\n", "    executors=[\n", "        HighThroughputExecutor(\n", "            label=\"local_htex\",\n", "            max_workers=1,\n",
"            provider=LocalProvider(\n", "                channel=LocalChannel(),\n", "                init_blocks=1,\n", "                max_blocks=2,\n", "                parallelism=0.1,\n", "            )\n", "        )\n", "    ]\n", ")\n", "\n",
"parsl.load(local_htex)\n", "\n",
"@python_app\n", "def py_hello():\n", "    import time\n", "    import os\n", "    time.sleep(5)\n", "    return \"(%s) Hello World!\" % os.getpid()\n", "\n",
"results = {}\n", "for i in range(0, 10):\n", "    results[i] = py_hello()\n", "\n",
"print(\"Waiting for results ....\")\n", "for i in range(0, 10):\n", "    print(results[i].result())\n", "\n", "parsl.clear()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We now set the `parallelism` option to 1. 
This configuration means that Parsl will favor elastic growth, executing as many tasks simultaneously as possible, up to the user-defined limits on workers and blocks. You can vary `max_blocks` and `parallelism` (between 0 and 1) to experiment with different scaling policies." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"import parsl\n", "from parsl.app.app import python_app, bash_app\n", "from parsl.providers import LocalProvider\n", "from parsl.channels import LocalChannel\n", "from parsl.config import Config\n", "from parsl.executors import HighThroughputExecutor\n", "\n",
"local_htex = Config(\n", "    executors=[\n", "        HighThroughputExecutor(\n", "            label=\"local_htex\",\n", "            max_workers=1,\n",
"            provider=LocalProvider(\n", "                channel=LocalChannel(),\n", "                init_blocks=1,\n", "                max_blocks=2,\n", "                parallelism=1,\n", "            )\n", "        )\n", "    ]\n", ")\n", "\n",
"parsl.load(local_htex)\n", "\n",
"@python_app\n", "def py_hello():\n", "    import time\n", "    import os\n", "    time.sleep(5)\n", "    return \"(%s) Hello World!\" % os.getpid()\n", "\n",
"results = {}\n", "for i in range(0, 10):\n", "    results[i] = py_hello()\n", "\n",
"print(\"Waiting for results ....\")\n", "for i in range(0, 10):\n", "    print(results[i].result())\n", "\n", "parsl.clear()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 3) Fault tolerance and caching\n", "\n",
"Workflows are often re-executed for various reasons, including workflow or node failure, code errors, or extension of the workflow. It is inefficient to re-execute apps that have successfully completed, so Parsl provides two mechanisms to avoid this waste: app caching and workflow-level checkpointing. \n", "\n",
"### App Caching\n", "\n",
"When developing a workflow, developers often re-execute the same workflow with incremental changes. Large fragments of the workflow are frequently re-executed even though they have not been modified, which wastes both time and computational resources. App caching solves this problem by storing the results of completed apps so that they can be reused. Caching is enabled by setting the `cache` argument of the App decorator to `True`. Note: a cached result is returned only when the same function is called with the same name, input arguments, and function body. If any of these change, a new result is computed and returned.\n", "\n",
"The following example makes two calls to the `slow_message` app with the same message. You will see that the first call is slow (the app sleeps for 5 seconds), but the second call returns immediately (the app is not actually executed, so there is no sleep delay). \n", "\n",
"Note: when run in a Jupyter notebook, this example keeps its cached results across subsequent executions of the cell." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"import parsl\n", "from parsl.app.app import python_app, bash_app\n", "from parsl.providers import LocalProvider\n", "from parsl.channels import LocalChannel\n", "from parsl.config import Config\n", "from parsl.executors import HighThroughputExecutor\n", "\n",
"local_htex = Config(\n", "    executors=[\n", "        HighThroughputExecutor(\n", "            label=\"local_htex\",\n", "            worker_debug=True,\n", "            max_workers=1,\n",
"            provider=LocalProvider(\n", "                channel=LocalChannel(),\n", "                init_blocks=1,\n", "                max_blocks=1,\n", "            )\n", "        )\n", "    ]\n", ")\n", "\n",
"parsl.load(local_htex)\n", "\n",
"@python_app(cache=True)\n", "def slow_message(message):\n", "    import time\n", "    time.sleep(5)\n", "    return message\n", "\n",
"# The first call to slow_message will compute the value\n", "first = slow_message(\"Hello World\")\n", "print(\"First: %s\" % first.result())\n", "\n",
"# A second call to slow_message with the same args\n", "# returns the cached result immediately\n", "second = slow_message(\"Hello World\")\n", "print(\"Second: %s\" % second.result())\n", "\n",
"# A third call to slow_message with different arguments\n", "# takes time to compute a new value\n", "third = slow_message(\"Hello World!\")\n", "print(\"Third: %s\" % third.result())\n", "\n", "parsl.clear()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Checkpointing\n", "\n",
"Parsl's checkpointing model enables workflow state to be saved and then used at a later time to resume execution from that point. Checkpointing provides workflow-level fault tolerance, guarding against failure of the Parsl control process. \n", "\n",
"Parsl implements an incremental checkpointing model: each checkpoint saves only the state changes since the previous checkpoint. Thus, the full history of a workflow may be distributed across multiple checkpoint files.\n", "\n",
"Checkpointing builds on app caching to store results, so the same caveats about non-deterministic functions apply: a checkpointed result is reused only for an App instance with the same name, arguments, and function body. \n", "\n",
"In this example we demonstrate how to checkpoint a workflow automatically as tasks successfully complete. This is enabled in the config by setting `checkpoint_mode` to `'task_exit'`. 
Other checkpointing models are described in the [checkpointing documentation](https://parsl.readthedocs.io/en/latest/userguide/checkpoints.html).\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"import parsl\n", "from parsl.app.app import python_app, bash_app\n", "from parsl.providers import LocalProvider\n", "from parsl.channels import LocalChannel\n", "from parsl.config import Config\n", "from parsl.executors import HighThroughputExecutor\n", "\n",
"local_htex = Config(\n", "    executors=[\n", "        HighThroughputExecutor(\n", "            label=\"local_htex\",\n", "            max_workers=1,\n",
"            provider=LocalProvider(\n", "                channel=LocalChannel(),\n", "                init_blocks=1,\n", "                max_blocks=1,\n", "            )\n", "        )\n", "    ],\n", "    checkpoint_mode='task_exit',\n", ")\n", "\n",
"dfk = parsl.load(local_htex)\n", "\n",
"@python_app(cache=True)\n", "def slow_double(x):\n", "    import time\n", "    time.sleep(2)\n", "    return x * 2\n", "\n",
"d = []\n", "for i in range(5):\n", "    d.append(slow_double(i))\n", "\n",
"# Wait for the results\n", "print([d[i].result() for i in range(5)])\n", "\n", "parsl.clear()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "To restart from a previous checkpoint, the DataFlowKernel must be configured with the appropriate checkpoint file. In most cases this is likely to be the most recent checkpoint file created. The following approach works with any checkpoint file, irrespective of which checkpointing method was used to create it. \n", "\n",
"In this example we reload the most recent checkpoint and attempt to run the same workflow. The results return immediately, as there is no need to re-execute each app. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"from parsl.utils import get_all_checkpoints\n", "from parsl import set_stream_logger, NullHandler\n", "\n",
"local_htex = Config(\n", "    executors=[\n", "        HighThroughputExecutor(\n", "            label=\"local_htex\",\n", "            max_workers=1,\n",
"            provider=LocalProvider(\n", "                channel=LocalChannel(),\n", "                init_blocks=1,\n", "                max_blocks=1,\n", "            )\n", "        )\n", "    ],\n", "    checkpoint_files=get_all_checkpoints(),\n", ")\n", "\n",
"parsl.load(local_htex)\n", "\n",
"# Rerun the same workflow\n", "d = []\n", "for i in range(5):\n", "    d.append(slow_double(i))\n", "\n",
"# Wait for the results\n", "print([d[i].result() for i in range(5)])\n", "\n", "parsl.clear()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 4) Globus data management\n", "\n",
"Parsl uses Globus for wide-area data movement. The following example shows how you can execute an app that takes a Globus-accessible file as input and transfers its output file to a Globus endpoint. \n", "\n",
"Note: to run this example you will need to work from a location with a Globus endpoint and make that endpoint known to the configuration. For example, on Blue Waters you would include the following configuration: \n", "\n",
"```\n", "local_endpoint = 'd59900ef-6d04-11e5-ba46-22000b92c6ec'\n", "\n",
"storage_access=[GlobusStaging(\n", "    endpoint_uuid=local_endpoint,\n", "    endpoint_path=\"/\",\n", "    local_path=\"/\"\n", ")]\n", "```\n", "\n",
"Make sure to activate the destination endpoint before running this example. You can activate the endpoint on the Globus website or via the Globus Python SDK. \n", "\n",
"The example is set to upload the sorted file to the Globus tutorial endpoint. 
You can send it to another location by updating the following code: \n", "\n",
"```\n", "dest_endpoint = 'ddb59aef-6d04-11e5-ba46-22000b92c6ec'\n", "```\n", "\n",
"You can view the destination endpoint in [Globus](https://app.globus.org/file-manager?origin_id=ddb59aef-6d04-11e5-ba46-22000b92c6ec&origin_path=%2F~%2F)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"import parsl\n", "from parsl.app.app import python_app, bash_app\n", "from parsl.data_provider.files import File\n", "from parsl.config import Config\n", "from parsl.data_provider.globus import GlobusStaging\n", "from parsl.data_provider.data_manager import default_staging\n", "from parsl.executors.threads import ThreadPoolExecutor\n", "import os\n", "\n",
"# Local endpoint used to transfer the Globus file in for processing\n", "local_endpoint = ''  # Insert your endpoint ID here\n", "\n",
"dest_endpoint = 'ddb59aef-6d04-11e5-ba46-22000b92c6ec'  # Destination for the sorted file\n", "\n",
"if not local_endpoint:\n", "    print(\"You must specify a local endpoint\")\n", "\n",
"else:\n", "    config = Config(\n", "        executors=[\n", "            ThreadPoolExecutor(\n", "                label='local_threads_globus',\n", "                working_dir=os.getcwd(),  # Update to your working directory\n",
"                storage_access=default_staging + [GlobusStaging(\n", "                    endpoint_uuid=local_endpoint,\n", "                    endpoint_path='/',\n", "                    local_path='/'\n", "                )],\n", "            )\n", "        ],\n", "    )\n", "\n",
"    parsl.load(config)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"@python_app\n", "def sort_strings(inputs=[], outputs=[]):\n", "    with open(inputs[0], 'r') as u:\n", "        strs = u.readlines()\n", "        strs.sort()\n", "        with open(outputs[0].filepath, 'w') as s:\n", "            for e in strs:\n", "                s.write(e)\n", "\n",
"# Remote unsorted file stored on the Globus-accessible Petrel data service\n", "unsorted_globus_file = File('globus://03d7d06a-cb6b-11e8-8c6a-0a1d4c5c824a/unsorted.txt')\n", "\n",
"# Location to send the sorted file after processing\n", "sorted_globus_file = File('globus://%s/~/sorted.txt' % dest_endpoint)\n", "\n",
"f = sort_strings(inputs=[unsorted_globus_file], outputs=[sorted_globus_file])\n", "print(f.result())\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 5) Monitoring\n", "\n",
"Parsl can be configured to capture fine-grained monitoring information about workflows and resource usage. To enable monitoring, add a MonitoringHub to the configuration.\n", "\n",
"Note: in this example we set the resource monitoring interval to 1 second so that we can capture resource information from short-running tasks. In practice you will likely use a longer interval."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [
"import parsl\n", "from parsl import python_app\n", "from parsl.monitoring.monitoring import MonitoringHub\n", "from parsl.config import Config\n", "from parsl.executors import HighThroughputExecutor\n", "from parsl.providers import LocalProvider\n", "from parsl.channels import LocalChannel\n", "from parsl.addresses import address_by_hostname\n", "\n",
"import logging\n", "\n",
"config = Config(\n", "    executors=[\n", "        HighThroughputExecutor(\n", "            label=\"local_htex\",\n", "            address=address_by_hostname(),\n", "            max_workers=1,\n",
"            provider=LocalProvider(\n", "                channel=LocalChannel(),\n", "                init_blocks=1,\n", "                max_blocks=1,\n", "            ),\n", "        )\n", "    ],\n",
"    monitoring=MonitoringHub(\n", "        hub_address=address_by_hostname(),\n", "        hub_port=6553,\n", "        resource_monitoring_interval=1,\n", "    )\n", ")\n", "\n",
"parsl.load(config)\n", "\n",
"# Run a simple workflow\n", "@python_app(cache=True)\n", "def slow_double(x):\n", "    import time\n", "    import random\n", "    time.sleep(random.randint(1, 15))\n", "    return x * 2\n", "\n",
"d = []\n", "for i in range(5):\n", "    d.append(slow_double(i))\n", "\n",
"# Wait for the results\n", "print([d[i].result() for i in range(5)])\n", "\n", "parsl.clear()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Accessing monitoring data\n", "\n",
"Parsl includes a workflow monitoring system and a web-based visualization interface that can be used to monitor workflows during or after execution. The monitoring system stores information in a database, and the web interface provides workflow-, task-, and resource-level views. To view the web interface, run the `parsl-visualize` command. \n", "\n",
"Because all monitoring information is stored in a local SQLite database, you can also connect to this database directly from your notebook to view and visualize workflow status. In the examples below we show how to load the information into a Pandas DataFrame.\n", "\n",
"First, connect to the database. By default the database is named 'monitoring.db'; however, this may be changed in the monitoring configuration. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sqlite3\n", "import pandas as pd\n", "\n", "conn = sqlite3.connect('monitoring.db')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The workflow table contains high-level workflow information, such as the name of the workflow, when it was run, and by whom, as well as statistics about the state of tasks (e.g., completed or failed)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_workflow = pd.read_sql_query('SELECT * from workflow', conn)\n", "df_workflow" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The task table contains information about the individual tasks in a workflow, such as the input and output files, start and completion times, and where each task was executed." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run_id = df_workflow['run_id'].iloc[-1]\n", "df_task = pd.read_sql_query('SELECT * from task where run_id=\"%s\"' % run_id, conn)\n", "df_task" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The resource table contains information about the resources used by tasks on each node. Resource information is sampled at the `resource_monitoring_interval` defined in the configuration. 
All resource information is captured using [psutil](https://psutil.readthedocs.io/en/latest/)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_resource = pd.read_sql_query('SELECT * from resource where run_id=\"%s\"' % run_id, conn)\n", "df_resource" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing monitoring data\n", "\n",
"The monitoring data can be visualized directly in the Jupyter notebook using any of the common Python graphing libraries. The following examples use Plotly.\n", "\n",
"First we show a Gantt chart of task execution times. It shows when each task was submitted, when it started running, and when it completed." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"import datetime\n", "\n", "import plotly.figure_factory as ff\n", "\n",
"# Build a pair of Pending/Running bars for each task\n",
"parsl_tasks = []\n",
"for i, task in df_task.iterrows():\n",
"    time_running, time_returned = task['task_time_running'], task['task_time_returned']\n",
"    if time_returned is None:\n",
"        # Task has not yet returned; draw its bar up to the current time\n",
"        time_returned = datetime.datetime.now()\n",
"    if time_running is None:\n",
"        # Task never started running; treat submission time as its start\n",
"        time_running = task['task_time_submitted']\n",
"    description = \"Task ID: {}, app: {}\".format(task['task_id'], task['task_func_name'])\n",
"    dic1 = dict(Task=description, Start=task['task_time_submitted'],\n",
"                Finish=time_running, Resource=\"Pending\")\n",
"    dic2 = dict(Task=description, Start=time_running,\n",
"                Finish=time_returned, Resource=\"Running\")\n",
"    parsl_tasks.extend([dic1, dic2])\n",
"\n",
"colors = {'Pending': 'rgb(168, 168, 168)', 'Running': 'rgb(0, 0, 255)'}\n",
"fig = ff.create_gantt(parsl_tasks, title=\"\", colors=colors, group_tasks=True, show_colorbar=True, index_col='Resource')\n",
"fig['layout']['yaxis']['title'] = 'Task'\n",
"fig['layout']['yaxis']['showticklabels'] = False\n",
"fig['layout']['xaxis']['title'] = 'Time'\n",
"\n",
"fig.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Here we show side-by-side plots of CPU and memory utilization over time. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
"from plotly.subplots import make_subplots\n", "import plotly.graph_objects as go\n", "\n",
"fig = make_subplots(rows=1, cols=2, subplot_titles=(\"CPU\", \"Memory\"))\n", "\n",
"fig.add_trace(\n", "    go.Scatter(\n", "        x=df_resource['timestamp'],\n", "        y=df_resource['psutil_process_cpu_percent']),\n", "    row=1, col=1\n", ")\n", "\n",
"fig.add_trace(\n", "    go.Scatter(\n", "        x=df_resource['timestamp'],\n", "        y=df_resource['psutil_process_memory_percent']),\n", "    row=1, col=2\n", ")\n", "\n",
"xaxis = dict(tickformat='%m-%d\\n%H:%M:%S', autorange=True, title='Time')\n",
"yaxis = dict(title='Utilization (%)')\n", "\n",
"fig.update_layout(height=600, width=800, xaxis=xaxis, yaxis=yaxis, showlegend=False)\n",
"fig.update_yaxes(range=[0, 100])\n", "\n",
"fig.show()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "conn.close()" ] } ],
"metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 4 }