{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Parsl Tutorial\n", "\n", "Parsl is a native Python library that allows you to write functions that execute in parallel and tie them together with dependencies to create workflows. Parsl wraps Python functions as \"Apps\" using the **@App** decorator. Decorated functions can run in parallel when all their inputs are ready.\n", "\n", "For more comprehensive documentation and examples, please refer our [documentation](http://parsl.readthedocs.io/en/latest/)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import parsl\n", "from parsl import *\n", "# parsl.set_stream_logger() # <-- log everything to stdout\n", "print(parsl.__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### DataFlowKernal\n", "\n", "Parsl's `DataFlowKernel` acts as an abstraction layer over any pool of execution resources (e.g., clusters, clouds, threads). \n", "\n", "\n", "We'll come back to the DataFlowKernel later in this tutorial. For now, we configure this example to use a pool of [threads](https://en.wikipedia.org/wiki/Thread_(computing) to facilitate local parallel execution." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "local_config = {\n", " \"sites\" : [\n", " { \"site\" : \"Threads\",\n", " \"auth\" : { \"channel\" : None },\n", " \"execution\" : {\n", " \"executor\" : \"threads\",\n", " \"provider\" : None,\n", " \"maxThreads\" : 4\n", " }\n", " }],\n", " \"globals\" : {\"lazyErrors\" : True}\n", "}\n", "\n", "dfk = DataFlowKernel(config=local_config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. Apps\n", "\n", "\n", "In Parsl an `app` is a piece of code that can be asynchronously executed on an execution resource (e.g., cloud, cluster, or local PC). Parsl provides support for pure Python apps and also command-line apps executed via Bash.\n", "\n", "## Python Apps\n", "\n", "As a first example let's define a simple Python function that returns the string 'Hello World!'. This function is made into a Parsl App using the **@App** decorator. The decorator specifies the type of App ('python'|'bash') and the `DataFlowKernel` object as arguments." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@App('python', dfk)\n", "def hello ():\n", " return 'Hello World!'\n", "\n", "print(hello().result())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As can be seen above, Apps wrap standard Python function calls. As such, they can be passed arbitrary arguments and return standard Python objects. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@App('python', dfk)\n", "def multiply (a, b):\n", " return a * b\n", "\n", "print(multiply(5,9).result())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bash Apps\n", "\n", "Parsl’s Bash app allows you to wrap execution of external applications from the command-line as you would in a Bash shell. It can also be used to execute Bash scripts directly. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Bash Apps\n", "\n", "Parsl’s Bash app allows you to wrap execution of external applications from the command line as you would in a Bash shell. It can also be used to execute Bash scripts directly. To define a Bash app the wrapped Python function must return the command-line string to be executed.\n", "\n", "Parsl is able to capture stdout/stderr for debugging or as a first-class data object in a workflow.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@App('bash', dfk)\n", "def echo_hello(stdout='echo-hello.stdout', stderr='echo-hello.stderr'):\n", "    return 'echo \"Hello World!\"'\n", "\n", "echo_hello().result()\n", "\n", "with open('echo-hello.stdout', 'r') as f:\n", "    print(f.read())" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Often, Parsl Apps exchange data in the form of files. In order to orchestrate a dataflow it is important that Parsl is able to track the data that is passed into and out of an App. For this purpose Parsl Apps can define input and output files as follows.\n", "\n", "We first create three test files named hello1.txt, hello2.txt, and hello3.txt containing the text \"hello 1\", \"hello 2\", and \"hello 3\"." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "!echo \"hello 1\" > /tmp/hello1.txt\n", "!echo \"hello 2\" > /tmp/hello2.txt\n", "!echo \"hello 3\" > /tmp/hello3.txt" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We then write an App that will concatenate these files using `cat`. We pass in the hello files (via a glob pattern) and concatenate their text into an output file named all_hellos.txt." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@App('bash', dfk)\n", "def cat(inputs=[], outputs=[]):\n", "    return 'cat %s > %s' % (inputs[0], outputs[0])\n", "\n", "concat = cat(inputs=['/tmp/hello*.txt'], outputs=['all_hellos.txt'])\n", "\n", "# Get the filepath for the output file and open it\n", "with open(concat.outputs[0].result(), 'r') as f:\n", "    print(f.read())" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Futures\n", "When a Python function is invoked, the Python interpreter waits for the function to complete execution and returns the results. In case of long-running functions it may not be desirable to wait for completion; instead it is often preferable that functions are asynchronous. Parsl provides such asynchronous behavior by returning a future in lieu of results. A future is essentially an object that allows us to track the status of an asynchronous task so that it may, in the future, be interrogated to find the status, results, exceptions, etc.\n", "\n", "Parsl provides two types of futures: AppFutures and DataFutures. While related, these two types of futures enable subtly different workflow patterns, as we will see.\n", "\n", "## AppFutures\n", "AppFutures are the basic building block upon which Parsl scripts are built. Every invocation of a Parsl app returns an AppFuture which may be used to manage execution and control the workflow.\n", "\n", "Here we show how AppFutures are used to wait for the result of a Python App.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# App that sleeps and then returns hello world\n", "@App('python', dfk)\n", "def hello():\n", "    import time\n", "    time.sleep(5)\n", "    return 'Hello World!'\n", "\n", "app_future = hello()\n", "\n", "# Check if the app_future is resolved\n", "print('Done: %s' % app_future.done())\n", "\n", "# Print the result of the app_future. Note: this\n", "# call will block and wait for the future to resolve\n", "print('Result: %s' % app_future.result())\n", "print('Done: %s' % app_future.done())" ] },
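{ "cell_type": "markdown", "metadata": {}, "source": [ "An AppFuture also captures any exception raised by the app. The sketch below (assuming the thread-based configuration above) shows that calling `result()` re-raises the app's exception, so it can be handled with an ordinary `try`/`except` block. The `divide` app is a hypothetical example:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# App that deliberately raises an exception\n", "@App('python', dfk)\n", "def divide(a, b):\n", "    return a / b\n", "\n", "bad = divide(1, 0)\n", "\n", "# result() re-raises the exception raised inside the app\n", "try:\n", "    print(bad.result())\n", "except Exception as e:\n", "    print('Caught: %s' % e)" ] },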
{ "cell_type": "markdown", "metadata": {}, "source": [ "## DataFutures\n", "\n", "While AppFutures represent the execution of an asynchronous app, DataFutures represent the files it produces. Parsl’s dataflow model, in which data flows from one app to another via files, requires such a construct to enable apps to validate creation of required files and to subsequently resolve dependencies when input files are created. When invoking an app, Parsl requires that a list of output files be specified (using the outputs keyword argument). A DataFuture for each file is returned by the app when it is executed. Throughout execution of the app Parsl will monitor these files to 1) ensure they are created, and 2) pass them to any dependent apps." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# App that echoes the input message to the first file specified in the\n", "# outputs list\n", "@App('bash', dfk)\n", "def slowecho(message, outputs=[]):\n", "    return 'sleep 5; echo %s &> {outputs[0]}' % (message)\n", "\n", "# Call slowecho specifying the output file\n", "hello = slowecho('Hello World!', outputs=['hello1.txt'])\n", "\n", "# The AppFuture's outputs attribute is a list of DataFutures\n", "print(hello.outputs)\n", "\n", "# Also check the AppFuture\n", "print('Done: %s' % hello.done())\n", "\n", "# Print the contents of the output DataFuture when complete\n", "with open(hello.outputs[0].result(), 'r') as f:\n", "    print(f.read())\n", "\n", "# Now that this is complete, check the DataFutures again, and the AppFuture\n", "print(hello.outputs)\n", "print('Done: %s' % hello.done())" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 3. Data Management\n", "\n", "Parsl is designed to enable the implementation of dataflow patterns. These patterns enable workflows to be defined in which the data passed between apps manages the flow of execution. Dataflow programming models are popular as they can cleanly express, via implicit parallelism, the concurrency needed by many applications in a simple and intuitive way.\n", "\n", "## Files\n", "\n", "Parsl’s file abstraction provides location-independent access to a file; it requires only the file path to be defined. Irrespective of where the script or its apps are executed, Parsl uses this abstraction to access that file. When referencing a Parsl file in an app, Parsl maps the object to the appropriate access path.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from parsl.data_provider.files import File\n", "\n", "# App that copies the contents of one or more files to another file\n", "@App('bash', dfk)\n", "def copy(inputs=[], outputs=[]):\n", "    return 'cat %s &> %s' % (inputs[0], outputs[0])\n", "\n", "# Create a test file\n", "with open('cat-in.txt', 'w') as f:\n", "    f.write('Hello World!\\n')\n", "\n", "# Create Parsl file objects\n", "parsl_infile = File(\"cat-in.txt\")\n", "parsl_outfile = File(\"cat-out.txt\")\n", "\n", "# Call the copy app with the Parsl files\n", "copy_future = copy(inputs=[parsl_infile], outputs=[parsl_outfile])\n", "\n", "# Read what was redirected to the output file\n", "with open(copy_future.outputs[0].result(), 'r') as f:\n", "    print(f.read())" ] },
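{ "cell_type": "markdown", "metadata": {}, "source": [ "A Parsl `File` simply records the path used to access the underlying file. As a quick check, assuming the `File` objects above expose a `filepath` attribute (as the DataFutures in the final example of this tutorial do), we can print the recorded paths:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inspect the Parsl File objects created above.\n", "# filepath gives the local access path recorded by each File.\n", "print(parsl_infile.filepath)\n", "print(parsl_outfile.filepath)" ] },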
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Composing a workflow\n", "\n", "Now that we understand all the building blocks, we can create workflows with Parsl. Unlike other workflow systems, Parsl creates implicit workflows based on the passing of control or data between Apps. The flexibility of this model allows for the creation of a wide range of workflows, from sequential through to complex nested, parallel workflows. As we will see below, a range of workflows can be created by passing AppFutures and DataFutures between Apps.\n", "\n", "## Sequential workflow\n", "\n", "Simple sequential or procedural workflows can be created by passing an AppFuture from one task to another. The following example shows one such workflow, which first generates a random number and then writes it to a file.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# App that generates a random number\n", "@App('python', dfk)\n", "def generate(limit):\n", "    from random import randint\n", "    return randint(1, limit)\n", "\n", "# App that writes a message to a file\n", "@App('bash', dfk)\n", "def save(message, outputs=[]):\n", "    return 'echo %s &> {outputs[0]}' % (message)\n", "\n", "# Generate the random number\n", "message = generate(10)\n", "print('Random number: %s' % message.result())\n", "\n", "# Save the random number to a file\n", "saved = save(message, outputs=['output.txt'])\n", "\n", "# Print the output file\n", "with open(saved.outputs[0].result(), 'r') as f:\n", "    print('File contents: %s' % f.read())" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Parallel workflow\n", "\n", "The most common way that Parsl Apps are executed in parallel is via looping. The following example shows how a simple loop can be used to create many random numbers in parallel.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# App that generates a random number\n", "@App('python', dfk)\n", "def generate(limit):\n", "    from random import randint\n", "    return randint(1, limit)\n", "\n", "# Generate 5 random numbers\n", "rand_nums = []\n", "for i in range(5):\n", "    rand_nums.append(generate(10))\n", "\n", "# Wait for all apps to finish and collect the results\n", "outputs = [i.result() for i in rand_nums]\n", "\n", "# Print results\n", "print(outputs)" ] },
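{ "cell_type": "markdown", "metadata": {}, "source": [ "Because each invocation returns a future immediately, the loop above finishes almost instantly and the apps run concurrently. The sketch below illustrates this, assuming the four-thread configuration defined at the top of this tutorial (`slow_double` is a hypothetical app):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "\n", "# App that sleeps for a second before returning\n", "@App('python', dfk)\n", "def slow_double(x):\n", "    import time\n", "    time.sleep(1)\n", "    return x * 2\n", "\n", "start = time.time()\n", "\n", "# All four apps are launched immediately...\n", "futures = [slow_double(i) for i in range(4)]\n", "\n", "# ...and run concurrently, so with four threads waiting for all of\n", "# them takes roughly one second rather than four\n", "print([f.result() for f in futures])\n", "print('Elapsed: %.1f seconds' % (time.time() - start))" ] },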
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Parallel dataflow\n", "\n", "Parallel dataflows can be developed by passing data between Apps. In this example we create a set of files, each containing a random number; we then concatenate these files into a single file and compute the sum of all the numbers in that file. The first two Apps exchange files; the final App returns the sum as a Python integer.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# App that generates a random number\n", "@App('bash', dfk)\n", "def generate(outputs=[]):\n", "    return \"echo $(( RANDOM )) &> {outputs[0]}\"\n", "\n", "# App that concatenates input files into a single output file\n", "@App('bash', dfk)\n", "def concat(inputs=[], outputs=[], stdout=\"stdout.txt\", stderr='stderr.txt'):\n", "    return \"cat {0} > {1}\".format(\" \".join(inputs), outputs[0])\n", "\n", "# App that calculates the sum of values in a list of input files\n", "@App('python', dfk)\n", "def total(inputs=[]):\n", "    total = 0\n", "    with open(inputs[0], 'r') as f:\n", "        for l in f:\n", "            total += int(l)\n", "    return total\n", "\n", "# Create 5 files with random numbers\n", "output_files = []\n", "for i in range(5):\n", "    output_files.append(generate(outputs=['random-%s.txt' % i]))\n", "\n", "# Concatenate the files into a single file\n", "cc = concat(inputs=[i.outputs[0] for i in output_files], outputs=[\"all.txt\"])\n", "\n", "# Calculate the sum of the random numbers\n", "total_future = total(inputs=[cc.outputs[0]])\n", "print(total_future.result())" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Monte Carlo workflow\n", "\n", "Many scientific applications use the [Monte Carlo method](https://en.wikipedia.org/wiki/Monte_Carlo_method#History) to compute results.\n", "\n", "If a circle with radius $r$ is inscribed inside a square with side length $2r$ then the area of the circle is $\\\\pi r^2$ and the area of the square is $(2r)^2$. Thus, if $N$ uniformly distributed random points are dropped within the square then approximately $N\\\\pi/4$ will be inside the circle.\n", "\n", "Each call to the function `pi()` is executed independently and in parallel. The `avg_points()` app is used to compute the average of the values returned from the `pi()` calls.\n", "\n", "The dependency chain looks like this:\n", "\n", "```\n", "App Calls    pi()   pi()   pi()\n", "               \\\\     |     /\n", "Futures        a      b    c\n", "                \\\\     |   /\n", "App Call       avg_points()\n", "                      |\n", "Future             avg_pi\n", "```" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# App that estimates pi by placing points in a box\n", "@App('python', dfk)\n", "def pi(total):\n", "    import random\n", "\n", "    # Set the size of the box (edge length) in which we drop random points\n", "    edge_length = 10000\n", "    center = edge_length / 2\n", "    c2 = center ** 2\n", "    count = 0\n", "\n", "    for i in range(total):\n", "        # Drop a random point in the box.\n", "        x, y = random.randint(1, edge_length), random.randint(1, edge_length)\n", "        # Count points within the circle\n", "        if (x - center)**2 + (y - center)**2 < c2:\n", "            count += 1\n", "\n", "    return count * 4 / total\n", "\n", "# App that computes the average of three values\n", "@App('python', dfk)\n", "def avg_points(a, b, c):\n", "    return (a + b + c) / 3\n", "\n", "# Estimate three values for pi\n", "a, b, c = pi(10**6), pi(10**6), pi(10**6)\n", "\n", "# Compute the average of the three estimates\n", "avg_pi = avg_points(a, b, c)\n", "\n", "# Print the results\n", "print(\"A: {0:.5f} B: {1:.5f} C: {2:.5f}\".format(a.result(), b.result(), c.result()))\n", "print(\"Average: {0:.5f}\".format(avg_pi.result()))" ] },
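{ "cell_type": "markdown", "metadata": {}, "source": [ "Because app arguments may themselves be futures, the same dependency chain can also be written as a single nested composition; Parsl waits for the three inner `pi()` futures to resolve before running `avg_points()`. A minimal sketch using the apps defined above:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The same workflow expressed as one nested call. Parsl resolves\n", "# the three inner futures before avg_points executes.\n", "print(\"Average: {0:.5f}\".format(avg_points(pi(10**6), pi(10**6), pi(10**6)).result()))" ] },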
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Execution and configuration\n", "\n", "Parsl is designed to support arbitrary execution providers (e.g., PCs, clusters, supercomputers) and execution models (e.g., threads, pilot jobs, etc.). That is, Parsl scripts are independent of execution provider or executor. Instead, the configuration used to run the script tells Parsl how to execute apps on the desired environment. Parsl provides a high-level abstraction, called a Block, for describing the resource configuration for a particular app or script.\n", "\n", "Information about the different execution providers and executors supported is included in the [Parsl documentation](https://parsl.readthedocs.io/en/latest/userguide/execution.html).\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Local execution with threads\n", "\n", "As we saw above, we can configure Parsl to execute apps on a local thread pool. This is a good way to parallelize execution on a local PC. The configuration object defines the sites that will be used for execution, optionally the authentication method to be used (e.g., if using SSH), and the execution model to use. In the case of threads we define the maximum number of threads to be used. A number of global configuration options may also be specified." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "threads_config = {\n", "    \"sites\" : [\n", "        { \"site\" : \"Local_Threads\",\n", "          \"auth\" : { \"channel\" : None },\n", "          \"execution\" : {\n", "              \"executor\" : \"threads\",\n", "              \"provider\" : None,\n", "              \"maxThreads\" : 4\n", "          }\n", "        }],\n", "    \"globals\" : {\"lazyErrors\" : True}\n", "}" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Local execution with pilot jobs\n", "\n", "We can also define a configuration that uses IPythonParallel as the executor. In this mode, pilot jobs are used to manage the submission. Parsl creates an IPythonParallel controller to manage execution and deploys one or more IPythonParallel engines (workers) to execute the workload. The following config will instantiate this infrastructure locally; it can be trivially extended to include a remote provider (e.g., Cori, Theta, etc.) for execution." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "ipp_config = {\n", "    \"sites\" : [{\n", "        \"site\" : \"Local_IPP\",\n", "        \"auth\" : {\n", "            \"channel\" : \"local\"\n", "        },\n", "        \"execution\" : {\n", "            \"executor\" : \"ipp\",\n", "            \"provider\" : \"local\",\n", "            \"script_dir\" : \".scripts\",\n", "            \"scriptDir\" : \".scripts\",\n", "            \"block\" : {\n", "                \"nodes\" : 1,\n", "                \"taskBlocks\" : 1,\n", "                \"walltime\" : \"00:05:00\",\n", "                \"initBlocks\" : 1,\n", "                \"minBlocks\" : 1,\n", "                \"maxBlocks\" : 1,\n", "                \"scriptDir\" : \".\",\n", "                \"options\" : {\n", "                    \"partition\" : \"debug\"\n", "                }\n", "            }\n", "        }\n", "    }],\n", "    \"globals\" : { \"lazyErrors\" : True }\n", "}" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Running a workflow using a configuration\n", "\n", "We can now run the same workflow using either of the two configurations defined above. Change which config is used to instantiate the DFK to see the same workflow executed with different models." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import parsl\n", "from parsl import *\n", "# parsl.set_stream_logger() # <-- log everything to stdout\n", "print(parsl.__version__)\n", "\n", "#dfk = DataFlowKernel(config=threads_config)\n", "dfk = DataFlowKernel(config=ipp_config)\n", "\n", "@App('bash', dfk)\n", "def generate(outputs=[]):\n", "    return \"echo $(( RANDOM )) &> {outputs[0]}\"\n", "\n", "@App('bash', dfk)\n", "def concat(inputs=[], outputs=[], stdout=\"stdout.txt\", stderr='stderr.txt'):\n", "    return \"cat {0} > {1}\".format(\" \".join(inputs), outputs[0])\n", "\n", "@App('python', dfk)\n", "def total(inputs=[]):\n", "    total = 0\n", "    with open(inputs[0], 'r') as f:\n", "        for l in f:\n", "            total += int(l)\n", "    return total\n", "\n", "# Create 5 files with random numbers\n", "output_files = []\n", "for i in range(5):\n", "    output_files.append(generate(outputs=['random-%s.txt' % i]))\n", "\n", "# Concatenate the files into a single file\n", "cc = concat(inputs=[i.outputs[0].filepath for i in output_files],\n", "            outputs=[\"combined.txt\"])\n", "\n", "# Calculate the sum of the random numbers\n", "total_future = total(inputs=[cc.outputs[0]])\n", "\n", "print(total_future.result())\n", "dfk.cleanup()" ] },
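{ "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can optionally remove the files created while running this tutorial. The list below covers the output files named in the examples above:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "# Files created by the examples in this tutorial\n", "tutorial_files = (['random-%s.txt' % i for i in range(5)] +\n", "                  ['/tmp/hello%s.txt' % i for i in range(1, 4)] +\n", "                  ['hello1.txt', 'all_hellos.txt', 'output.txt',\n", "                   'cat-in.txt', 'cat-out.txt', 'all.txt', 'combined.txt',\n", "                   'echo-hello.stdout', 'echo-hello.stderr',\n", "                   'stdout.txt', 'stderr.txt'])\n", "\n", "for path in tutorial_files:\n", "    if os.path.exists(path):\n", "        os.remove(path)" ] }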
], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 2 }