{ "metadata": { "name": "", "signature": "sha256:5c6a42495904dcc9148e382e7fb67a3073d76d5886367b67aaa862cfa8e6dcc6" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "## Generator Pipelines using Workbench:\n", "If you don't know what Generator Pipelines here are some resources: \n", "\n", "- Intro material: [Generator Pipelines in Python](http://www.blangdon.com/writing/about/generator-pipelines-in-python/)\n", " \n", "- More advanced, LOTS of exercises: [David M. Beazley](http://www.dabeaz.com/generators/)\n", " \n", "- Video presentation: [V James Powell @ PyData NYC 2012](http://vimeo.com/53039281)\n", " \n", "**tl;dr** - Good for streaming data and building pipelines that iterate over streaming data.\n", "\n", "** How easy is it to get workbench to build streaming data pipeline thingys? **\n", "- Pretty darn easy...\n", "\n", "## Lets start up the workbench server...\n", "Run the workbench server (from somewhere, for the demo we're just going to start a local one)\n", "
\n",
      "$ workbench_server\n",
      "
\n", "\n", "#### Okay so when the server starts up, it autoloads any worker plugins in the server/worker directory and dynamically monitors the directory, if a new python file shows up, it's validated as a properly formed plugin and if it passes is added to the list of workers." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Lets start to interact with workbench, please note there is NO specific client to workbench,\n", "# Just use the ZeroRPC Python, Node.js, or CLI interfaces.\n", "import zerorpc\n", "c = zerorpc.Client()\n", "c.connect(\"tcp://127.0.0.1:4242\")" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 76, "text": [ "[None]" ] } ], "prompt_number": 76 }, { "cell_type": "code", "collapsed": false, "input": [ "# Load in 100 PE Files\n", "import os\n", "file_list = [os.path.join('../data/pe/bad', child) for child in os.listdir('../data/pe/bad')]\n", "file_list += [os.path.join('../data/pe/good', child) for child in os.listdir('../data/pe/good')]\n", "md5_list = []\n", "for filename in file_list:\n", " with open(filename,'rb') as f:\n", " md5_list.append(c.store_sample(f.read(), filename, 'exe'))\n", "print 'Files loaded: %d' % len(md5_list)\n", "md5_list[:5]" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Files loaded: 100\n" ] }, { "metadata": {}, "output_type": "pyout", "prompt_number": 77, "text": [ "['033d91aae8ad29ed9fbb858179271232',\n", " '0cb9aa6fb9c4aa3afad7a303e21ac0f3',\n", " '0e882ec9b485979ea84c7843d41ba36f',\n", " '0e8b030fb6ae48ffd29e520fc16b5641',\n", " '0eb9e990c521b30428a379700ec5ab3e']" ] } ], "prompt_number": 77 }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "### Notice the level on control on our batch operations\n", "#### Your database may have tons of files of different types. We literally control execution on the per sample level with md5 lists. Alternatively we can specify specific types or simply make a query to the database get exactly what we want and build our own md5 list.\n", "\n", "#### Also notice that we can specify ^exactly^ what data we want down to arbitrary depth.. here we want just the imported_symbols from the sparse features from the pe_features worker." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Compute pe_features on all files of type pe, just pull back the sparse features\n", "imports = c.batch_work_request('pe_features', {'md5_list': md5_list, 'subkeys':['md5','sparse_features.imported_symbols']})\n", "imports" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 78, "text": [ "" ] } ], "prompt_number": 78 }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "## Holy s#@&! The server batch request returned a generator? \n", "#### Yes generators are awesome but getting one from a server request! Are u serious?! Yes, thanks to ZeroRPC... dead serious.. like chopping off your head and kicking your body into a shallow grave and putting your head on a stick... serious.\n", "\n", "#### Now that we have the a server generator from workbench we setup our pipeline and client generators to precisely control server execution on the streaming samples." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Client generators\n", "\n", "# First we're going going to filter PE Files only getting ones with communication related imports\n", "def comm_imports(import_info_list):\n", " comm_imports = ['accept', 'bind', 'connect', 'connectnamedpipe', 'gethostbyname', 'gethostname', 'inet_addr', 'recv', 'send']\n", " for import_info in import_info_list:\n", " md5 = import_info['md5']\n", " import_symbols = import_info['imported_symbols']\n", " if any(comm in sym for comm in comm_imports for sym in import_symbols ):\n", " yield md5\n", "\n", "def peid_sigs(md5_list):\n", " for md5 in md5_list:\n", " yield c.work_request('pe_peid', md5)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 79 }, { "cell_type": "code", "collapsed": false, "input": [ "# Now and only now will our generator pipeline unwind. The work will get pulled from the workbench\n", "# server and ONLY what needs to get processed based on our generator filters will get processed.\n", "# Note: Out of 100 PE Files, only 19 matched our filter, so only 19 will get pulled through the peid \n", "# worker. Imagine a generator pipeline that ended in a dynamic sandbox.. super handy to downselect first.\n", "peid_results = peid_sigs(comm_imports(imports))\n", "for peid_info in peid_results:\n", " print peid_info" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "{'pe_peid': {'match_list': ['Microsoft Visual C++ v6.0'], 'md5': '0cb9aa6fb9c4aa3afad7a303e21ac0f3'}}\n", "{'pe_peid': {'match_list': [], 'md5': '0e8b030fb6ae48ffd29e520fc16b5641'}}\n", "{'pe_peid': {'match_list': [], 'md5': '0eb9e990c521b30428a379700ec5ab3e'}}\n", "{'pe_peid': {'match_list': ['Microsoft Visual C++ v6.0'], 'md5': '13dcc5b4570180118eb65529b77f6d89'}}\n", "{'pe_peid': {'match_list': ['Armadillo v4.x'], 'md5': '1cac80a2147cd8f3860547e43edcaa00'}}\n", "{'pe_peid': {'match_list': [], 'md5': '2058c50de5976c67a09dfa5e0e1c7eb5'}}\n", "{'pe_peid': {'match_list': ['UPX -> www.upx.sourceforge.net'], 'md5': '2d09e4aff42aebac87ae2fd737aba94f'}}\n", "{'pe_peid': {'match_list': [], 'md5': '9ceccd9f32cb2ad0b140b6d15d8993b6'}}" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", "{'pe_peid': {'match_list': [], 'md5': 'b681485cb9e0cad73ee85b9274c0d3c2'}}\n", "{'pe_peid': {'match_list': [], 'md5': '093dee8d97fd9d35884ed52179b3d142'}}\n", "{'pe_peid': {'match_list': [], 'md5': '2352ab5f9f8f097bf9d41d5a4718a041'}}\n", "{'pe_peid': {'match_list': ['Installer VISE Custom'], 'md5': '2459a629ace148286360b860442221a2'}}\n", "{'pe_peid': {'match_list': ['Microsoft Visual C++ 8'], 'md5': '2cbfc3993bd5134d58d12668f21d91da'}}" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", "{'pe_peid': {'match_list': ['Microsoft Visual C++ 8.0 (DLL)'], 'md5': '4d71227301dd8d09097b9e4cc6527e5a'}}\n", "{'pe_peid': {'match_list': ['Microsoft Visual C++ 8'], 'md5': '550388c7aecde4bf6fae9f77755d54de'}}\n", "{'pe_peid': {'match_list': ['Microsoft Visual C++ 
, { "cell_type": "markdown", "metadata": {}, "source": [
"## Recap\n",
"#### This simple set of Python commands loaded a bunch of files into workbench (100 in this case, but real applications will involve a LOT more). We then set up a **pipeline** on the client side that controlled which workers were invoked, and in what order, and we placed a filter in that pipeline that explicitly controlled which samples went further down it.\n",
"\n",
"### Super important: The amount of data pulled back to the client was **minuscule**. \n",
" - Just the import symbols for 100 samples\n",
" - Just the PEID signature matches for 19 samples\n",
"\n",
"### We got exactly the data we needed from the samples, and we controlled the whole server pipeline with ease and elegance." ] } ], "metadata": {} } ] }