{ "cells": [ { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# **Masquerading Process Name Anomaly Algorithm**\n", "\n", "**Notebook Version:** 1.0\n", "\n", "**Python Version:** Python 3.8\n", "\n", "**Required Packages:** azure_sentinel_utilities, damerauLevenshtein, azureml-synapse\n", "\n", "**Platforms Supported:** Azure Synapse Workspace, Azure Sentinel, Azure Log Analytics Workspace, Storage Account, Azure Machine Learning Notebooks connected to Azure Synapse Workspace\n", "\n", "**Data Source Required:** Yes\n", "\n", "**Data Source:** SecurityEvents\n", "\n", "**Spark Version:** 3.1 or above\n", "\n", "## **Description**\n", "\n", "This notebook demonstrates how to apply custom machine learning algorithms to data in Azure Sentinel. \n", "It showcases a Masquerading Process Name anomaly algorithm, which looks for Windows process creation events for processes whose names are similar to known normal processes.\n", "It is a very common attack vector for malicious processes to masquerade as known normal processes by having names similar to known normal ones but different by a single character. Since these are easy to miss when simply looked at, they can succeed at running malicious code on your machine. Examples of such malicious processes are scvhost.exe, svch0st.exe, etc. -> Known normal process here was svchost.exe.\n", "\n", "The data used here is from the SecurityEvents table with EventID = 4688. These correspond to process creation events from Windows machines. \n", "\n", "You will have to export this data from your Log Analytics workspace into a storage account. Instructions for this LA export mechanism can be found here: [LA export mechanism](https://docs.microsoft.com/azure/azure-monitor/logs/logs-data-export?tabs=portal). \n", "\n", "Here is a [Blog explaining data export](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/export-historical-log-data-from-microsoft-sentinel/ba-p/3413418)\n", "\n", "Data is then loaded from this storage account container and the results are published to your Log Analytics resource." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**This notebook can be run either from the AML platform or directly off of Synapse. Based on what you choose, the setup will differ. Please follow either section A or B, that suits you, for setup before running the main pyspark code.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## **A. Running on AML**\n", "\n", "You will need to configure your environment to use a Synapse cluster with your AML workspace. For this, you require to setup the Synapse compute and attach the necessary packages/wheel files. Then, for the rest of the code, you need to convert to using Synapse language by marking each cell with a %%synapse header.\n", "\n", "Steps:\n", "1. Install AzureML Synapse package on the AML compute to use spark magics\n", "2. Configure AzureML and Azure Synapse Analytics\n", "3. Attach the required packages and wheel files to the compute.\n", "4. Start Spark session" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.core import Workspace, LinkedService" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **1. Install AzureML Synapse package on the AML compute to use spark magics**\n", "\n", "You will have to setup the AML compute that is attached to your notebook with some packages so that the rest of this code can run properly." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run the following line and confirm that 'azureml-synapse' is not installed.\n", "%pip list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now run the following line and then restart the kernel/compute so that the package is installed.\n", "%pip install azureml-synapse" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Rerun the following line and confirm that 'azureml-synapse' is installed.\n", "%pip list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **2. Configure AzureML and Azure Synapse Analytics**\n", "\n", "Please use notebook [Configurate Azure ML and Azure Synapse Analytics](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Configurate%20Azure%20ML%20and%20Azure%20Synapse%20Analytics.ipynb) to configure environment.\n", "\n", "The notebook will configure existing Azure synapse workspace to create and connect to Spark pool. You can then create linked service and connect AML workspace to Azure Synapse workspaces. You can skip point 6 which exports data from Log Analytics to Datalake Storage Gen2 because you have already set up the data export to the storage account above.\n", "\n", "**Note:** Specify the input parameters in below step in order to connect AML workspace to synapse workspace using linked service." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "amlworkspace = \"\" # fill in your AML workspace name\n", "subscription_id = \"\" # fill in your subscription id\n", "resource_group = '' # fill in your resource groups for AML workspace\n", "linkedservice = '' # fill in your linked service created to connect to synapse workspace" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Authentication to Azure Resources:** \n", "\n", "In this step we will connect aml workspace to linked service connected to Azure Synapse workspace" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get the aml workspace\n", "aml_workspace = Workspace.get(name=amlworkspace, subscription_id=subscription_id, resource_group=resource_group)\n", "\n", "# Retrieve a known linked service\n", "linked_service = LinkedService.get(aml_workspace, linkedservice)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **3. Attach the required packages and wheel files to the compute.**\n", "\n", "You will have to setup the spark pool that is attached to your notebook with some packages so that the rest of this code can run properly.\n", "\n", "Please follow these steps:\n", "1. On the AML Studio left menu, navigate to **Linked Services**\n", "2. Click on the name of the Link Service you want to use\n", "3. Select **Spark pools** tab\n", "4. Click the Spark pool you want to use.\n", "5. In Synapse Properties, click the Synapse workspace. It will open the workspace in a new tab.\n", "6. Click on 'Manage' in the left window.\n", "7. Click on 'Apache Spark pools' in the left window.\n", "8. Select the '...' in the pool you want to use and click on 'Packages'.\n", "9. Now upload the following two files in this blade.\n", "\n", "a. Create a requirements.txt with the following line in it and upload it to the Requirements section\n", "\n", " fastDamerauLevenshtein\n", "b. 
Download the azure_sentinel_utilities whl package from the [Repo](https://github.com/Azure/Azure-Sentinel/tree/master/BYOML/Libraries).\n", "\n", "    First upload this package under **Workspace packages** in the left tab of the original blade.\n", "\n", "c. Then select this package from there in this tab.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **4. Start Spark session**\n", "\n", "Enter your Synapse Spark compute below. To find the Spark compute, please follow these steps:\n", "1. On the AML Studio left menu, navigate to **Linked Services**\n", "2. Click on the name of the Linked Service you want to use\n", "3. Select the **Spark pools** tab\n", "4. Get the name of the Spark pool you want to use." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "synapse_spark_compute = input('Synapse Spark compute:')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Start the Spark session\n", "%synapse start -s $subscription_id -w $amlworkspace -r $resource_group -c $synapse_spark_compute" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## **B. Running directly on Synapse**\n", "\n", "You will need to attach the required packages and wheel files to the cluster you intend to use with this notebook. Follow step 3 above to complete this." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## **Common Code**\n", "\n", "From here on, all the steps are the same for both the AML and Synapse platforms. The one difference: if you set up through AML, prepend each PySpark cell with the %%synapse magic header; for runs directly on Synapse, do not add that header.\n", "\n", "1. One-time: set credentials in Key Vault so the notebook can access them\n", "   - [Create KeyVault](https://docs.microsoft.com/azure/key-vault/general/quick-create-portal)\n", "   - Store the following secrets in the KeyVault\n", "     - Storage Account connection string: the keyName should be 'saConnectionString'\n", "     - Log Analytics workspaceSharedKey: the keyName should be 'wsSharedKey'\n", "     - Log Analytics workspaceId: the keyName should be 'wsId'\n", "     - Log Analytics workspaceResourceId: the keyName should be 'wsResourceId'\n", "   - Add the KeyVault as a [linked service](https://docs.microsoft.com/azure/data-factory/store-credentials-in-key-vault) to your Azure Synapse workspace\n", "\n", "2. Ensure the settings in the cell below are filled in." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%synapse\n", "\n", "# Helper libraries for reading the exported blobs and posting results to Log Analytics\n", "from azure_sentinel_utilities.azure_storage import storage_blob_manager\n", "from azure_sentinel_utilities.log_analytics import log_analytics_client\n", "import re\n", "import datetime as dt\n", "import time\n", "from pyspark.sql import functions as F, types as T\n", "from pyspark.sql.window import Window\n", "from fastDamerauLevenshtein import damerauLevenshtein\n", "import random\n", "import string" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "These are customizable variables that are used further down in the code."
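] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Before tuning the levenDistThreshold variable defined in the next cell, it may help to see how the score behaves. The following cell is illustrative only; it assumes fastDamerauLevenshtein's default behavior of returning a normalized similarity in [0, 1], where 1.0 means identical strings." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%synapse\n", "\n", "# Illustrative only: behavior of the normalized Damerau-Levenshtein similarity.\n", "# By default the function returns a similarity score in [0, 1], not a raw edit distance.\n", "from fastDamerauLevenshtein import damerauLevenshtein\n", "\n", "print(damerauLevenshtein('svchost', 'svchost'))  # 1.0 -- identical strings\n", "print(damerauLevenshtein('svchost', 'scvhost'))  # high -- a single transposition\n", "print(damerauLevenshtein('svchost', 'svch0st'))  # high -- a single substitution\n", "print(damerauLevenshtein('svchost', 'notepad'))  # low -- unrelated names"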
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "frequentThreshold = 0.8 # If the percentile of a process's creation count is above this threshold, it is considered a frequent and normal process.\n", "infrequentThreshold = 0.2 # If the percentile of a process's creation count is lower than this threshold, it is considered an infrequent and possibly malicious process.\n", "levenDistThreshold = 0.85 # Higher the levenshtein distance, more similar are the two sequences. This threshold will help you select only the very similar processes and remove the noise." ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Making Connections to the Storage Account and KeyVaults for user credentials" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "#Log Analytics WorkSpace (Sentinel) to write the results\n", "workspaceId = mssparkutils.credentials.getSecret(keyVault = 'YOUR_KEYVAULT_HERE', keyName = 'wsId', linkedServiceName = 'YOUR_LINKED_SERVICE_HERE') # wks_guid\n", "workspaceSharedKey = mssparkutils.credentials.getSecret(keyVault = 'YOUR_KEYVAULT_HERE', keyName = 'wsSharedKey', linkedServiceName = 'YOUR_LINKED_SERVICE_HERE')\n", "workspaceResourceId = mssparkutils.credentials.getSecret(keyVault = 'YOUR_KEYVAULT_HERE', keyName = 'wsResourceId', linkedServiceName = 'YOUR_LINKED_SERVICE_HERE') # eg: /subscriptions//resourcegroups//providers/microsoft.operationalinsights/work\n", "\n", "#extract storage account and key from connection string\n", "connectionString = mssparkutils.credentials.getSecret(keyVault = 'YOUR_KEYVAULT_HERE', keyName = 'saConnectionString', linkedServiceName = 'YOUR_LINKED_SERVICE_HERE')\n", "print(\"Connection String to your storage account is : \", connectionString)\n", "\n", "keyPattern = 'DefaultEndpointsProtocol=(\\w+);AccountName=(\\w+);AccountKey=([^;]+);'\n", "match = re.match(keyPattern, connectionString)\n", "storageAccount = match.group(2)\n", "storageKey = match.group(3)\n", "print(\"Storage Account is : \", storageAccount)\n", "print(\"Storage Key is : \", storageKey)\n", "\n", "containerName = \"am-securityevent\" # This name is fixed for security events\n", "basePath = \"WorkspaceResourceId={workspaceResourceId}\".format(workspaceResourceId=workspaceResourceId)\n", "print(\"BasePath is : \", basePath)\n", "\n", "startTime = dt.datetime.now() - dt.timedelta(days=1)\n", "endTime = dt.datetime.now() - dt.timedelta(days=0)\n", "startTimeStr = startTime.strftime(\"%m/%d/%Y, %I:%M:%S.%f %p\")\n", "print(\"Start Time of Algo run is : \", startTime)\n", "endTimeStr = endTime.strftime(\"%m/%d/%Y, %I:%M:%S.%f %p\")\n", "print(\"End Time of Algo run is : \", endTime)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "This cell defines the helper functions.\n", "1. calcDist() -> calculates the Levenshtein distance. This is a measure of the difference between two sequences by calculating the edit distance. It takes into account the number of different characters in the sequence as well as the length of the sequences. 
If the extensions of the two names are the same, the extension is excluded when computing the score.\n", "2. getRandomTimeStamp() -> generates a random timestamp between the start and end times. This is added to the synthetically created process events.\n", "3. getListGoodProcs() -> returns a hardcoded list of known normal process names that malicious processes may masquerade as.\n", "4. getListKnownProcs() -> wraps the known normal process names (adding an .exe extension where needed) in a Spark DataFrame.\n", "5. getSyntheticMaliciousProcs() -> creates a list of potentially malicious process names by substituting a single random letter in each of the normal names.\n", "6. getSyntheticEvents() -> synthesizes a list of 4688 events. It takes the known normal and synthetic malicious process names from the previous functions and builds complete events with a timestamp and a process path." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "def calcDist(one, two):\n", "    # Normalized Damerau-Levenshtein similarity (0 to 1); a shared extension is stripped before scoring.\n", "    oneDot = -1\n", "    twoDot = -1\n", "    if(one is not None and two is not None):\n", "        oneDot = one.rfind('.')\n", "        twoDot = two.rfind('.')\n", "\n", "        if((oneDot != -1) and (twoDot != -1)):\n", "            oneEnd = one[oneDot+1:]\n", "            twoEnd = two[twoDot+1:]\n", "            if(oneEnd == twoEnd):\n", "                one = one[:oneDot]\n", "                two = two[:twoDot]\n", "\n", "        return damerauLevenshtein(one, two)\n", "    return 0.0\n", "\n", "calcDistUdf = F.udf(calcDist, T.FloatType())\n", "\n", "def getRandomTimeStamp():\n", "    # Pick a uniformly random timestamp between startTimeStr and endTimeStr.\n", "    input_time_format = '%m/%d/%Y, %I:%M:%S.%f %p'\n", "    output_time_format = '%m/%d/%Y, %I:%M:%S.000 %p'\n", "    randAdd = random.random()\n", "\n", "    stime = time.mktime(time.strptime(startTimeStr, input_time_format))\n", "    etime = time.mktime(time.strptime(endTimeStr, input_time_format))\n", "\n", "    ptime = stime + randAdd * (etime - stime)\n", "\n", "    return time.strftime(output_time_format, time.localtime(ptime))\n", "\n", "getRandomTimeStampUdf = F.udf(getRandomTimeStamp, T.StringType())\n", "\n", "def getListGoodProcs():\n", "    # Hardcoded list of known normal process names (Windows and Linux).\n", "    return [\n", "        \"svchost\", \"csrss\", \"smss\", \"services\",\n", "        \"winlogon\", \"wininit\", \"lsass\",\n", "        \"spoolsv\", \"conhost\", \"powershell\",\n", "        \"init\", \"bash\", \"avahi-daemon\", \"gnome-session\", \"getty\",\n", "        \"acpid\", \"dbus-daemon\", \"dbus-launch\", \"networkmanager\",\n", "        \"explorer\", \"more.com\", \"mode.com\",\n", "        \"nbtstat\", \"netstat\", \"cscript\", \"cnscript\",\n", "        \"tracert\", \"tracerpt\", \"systeminfo\", \"system_info\",\n", "        \"aitagent\", \"adtagent\", \"wininit\", \"wininst\",\n", "        \"userinit\", \"userinst\", \"dnscmd\", \"dfscmd\",\n", "        \"nslookup\", \"nblookup\"\n", "    ]\n", "\n", "def getListKnownProcs():\n", "\n", "    goodProcs = getListGoodProcs()\n", "\n", "    goodProcs = [s + ('.exe' if '.' 
not in s else '') for s in goodProcs]\n", "    df = spark.createDataFrame(goodProcs, T.StringType())\n", "    df = df.withColumnRenamed(\"value\", \"Process\")\n", "    return df\n", "\n", "def getSyntheticMaliciousProcs():\n", "    # Build masquerading names by substituting one random interior letter in each normal name.\n", "    badProcs = []\n", "    goodProcs = getListGoodProcs()\n", "    for process in goodProcs:\n", "        length = len(process)\n", "        randNum = random.randint(1, length-2)  # not changing the first or last letter because changes there are easier to spot\n", "\n", "        # pick a random lowercase letter\n", "        randLetter = random.choice(string.ascii_lowercase)\n", "\n", "        # substitute the original letter with the random letter\n", "        temp = list(process)\n", "        temp[randNum] = randLetter\n", "        badProcs = badProcs + [\"\".join(temp)]\n", "\n", "    badProcs = [s + ('.exe' if '.' not in s else '') for s in badProcs]\n", "    badProcs = badProcs + [\"scvhost.exe\", \"svch0st.exe\"]  # adding known masquerading processes\n", "    df = spark.createDataFrame(badProcs, T.StringType())\n", "    df = df.withColumnRenamed(\"value\", \"Process\")\n", "    return df\n", "\n", "def getSyntheticEvents(typeOfEvent):\n", "\n", "    processPath = \"\"\n", "    numExplode = 1\n", "    if(typeOfEvent == \"normal\"):\n", "        processPath = \"C:\\\\Windows\\\\System32\\\\\"\n", "        df = getListKnownProcs()\n", "        numExplode = 20\n", "    elif(typeOfEvent == \"malicious\"):\n", "        processPath = \"C:\\\\Windows\\\\Temp\\\\\"\n", "        df = getSyntheticMaliciousProcs()\n", "\n", "    df = df.withColumn(\"NewProcessName\", F.concat(F.lit(processPath), F.col(\"Process\")))\n", "    df = df.withColumn(\"NumExplode\", F.lit(numExplode))\n", "    df = df.withColumn(\"EventID\", F.lit(\"4688\"))\n", "\n", "    # explode(array_repeat(...)) duplicates each row NumExplode times\n", "    new_df = df.withColumn('NumExplode', F.expr('explode(array_repeat(NumExplode,int(NumExplode)))')).drop(\"NumExplode\")\n", "    new_df = new_df.withColumn(\"TimeGenerated\", getRandomTimeStampUdf())\n", "    new_df = new_df.withColumn(\"From\", F.lit(\"Hardcoded\"))\n", "    new_df = new_df.select(\"NewProcessName\", \"Process\", \"TimeGenerated\", \"From\")\n", "\n", "    return new_df" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Next, we define the schema of the input and read the raw customer 4688 events. We use the following fields: EventID, NewProcessName, Process, TimeGenerated."
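] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "As an aside, the explode(array_repeat(...)) expression used in getSyntheticEvents above is what duplicates each synthetic row. The next cell is a minimal, self-contained illustration with a hypothetical one-row DataFrame; it is not part of the algorithm." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%synapse\n", "\n", "# Illustrative only: explode(array_repeat(col, n)) repeats each row n times.\n", "demo = spark.createDataFrame([('svchost.exe', 3)], ['Process', 'NumExplode'])\n", "demo.withColumn('NumExplode', F.expr('explode(array_repeat(NumExplode, int(NumExplode)))')).show()\n", "# The single input row appears three times in the output."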
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "def security_event_schema():\n", "\n", " return T.StructType([\n", " T.StructField(name = \"EventID\", dataType = T.StringType(), nullable = True),\n", " T.StructField(name = \"NewProcessName\", dataType = T.StringType(), nullable = True),\n", " T.StructField(name = \"Process\", dataType = T.StringType(), nullable = True),\n", " T.StructField(name = \"TimeGenerated\", dataType = T.StringType(), nullable = True),\n", " ])\n", "\n", "blobManager = storage_blob_manager(connectionString)\n", "raw_df = blobManager.get_raw_df(startTime, endTime, containerName, basePath, security_event_schema(), blobManager.get_blob_service_client(connectionString))\n", "\n", "final = raw_df.where(F.col(\"EventID\") == \"4688\")\n", "final = final.withColumn(\"Process\", F.lower(\"Process\"))\n", "final = final.drop(\"EventID\")\n", "final = final.withColumn(\"From\", F.lit(\"User\"))\n", "\n", "final = final.cache()\n", "print(\"There are \", final.count(), \" events of type 4688 to process.\")\n", "final.show()\n" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Here we append synthetically created normal and malicious process creation events. This is being done to show performance of this algorithm by ensuring some masquerading process names are caught." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "normalEvents = getSyntheticEvents(\"normal\")\n", "potentiallyMaliciousEvents = getSyntheticEvents(\"malicious\")\n", "final = final.union(normalEvents).union(potentiallyMaliciousEvents)\n", "print(\"Count of SecurityEvents + Synthethically created 4688 events are : \", final.count())" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "We are comparing frequent to infrequent processes to decide maliciousness of a process.\n", "\n", "The approach here is that we consider processes occuring more than 'frequentThreshold' percentile of the time as normal and those occuring less than 'infrequentThreshold' percentile of the time as potentially malicious. Those in the middle range are excluded from analysis because they fall in the grey area of being of relatively high popularity but falling below the first threshold.\n", "\n", "The values of these thresholds can be customized by you based on your needs." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "groupProcess = final.groupBy(\"Process\").count().sort(F.desc(\"count\"))\n", "groupProcess = groupProcess.select(\"Process\",\"count\", F.percent_rank().over(Window.partitionBy().orderBy(groupProcess['count'])).alias(\"percent_rank\"))\n", "groupProcess = groupProcess.sort(F.desc(\"percent_rank\"))\n", "\n", "frequentProcess = groupProcess.where(F.col(\"percent_rank\") >= frequentThreshold).select(\"Process\")\n", "frequentProcess = frequentProcess.withColumnRenamed(\"Process\", \"frequentProcess\")\n", "infrequentProcess = groupProcess.where(F.col(\"percent_rank\") < infrequentThreshold).select(\"Process\")\n", "infrequentProcess = infrequentProcess.withColumnRenamed(\"Process\", \"infrequentProcess\")\n", "\n", "print(\"There are \", frequentProcess.count(), \" normal processes in your data\")\n", "print(\"There are \", infrequentProcess.count(), \" potentially malicious processes in your data\")\n", "print(\"Examples of some potentially malicious processes: \")\n", "infrequentProcess.show()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Next we find the Levenshtein distance between the normal and potentially malicious processes to check whether we have any masquerading processes." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "compare = frequentProcess.crossJoin(infrequentProcess)\n", "compare = compare.withColumn(\"Dist\", calcDistUdf(F.col(\"frequentProcess\"), F.col(\"infrequentProcess\")))\n", "print(\"Showing the Levenshtein distances between various processes\")\n", "compare.show()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "It is always useful to have the corresponding process paths from where the processes spawned to understand maliciousness of the process. This cell finds the paths of all the processes, for context. We also filter based on a threshold values which you can alter to better fit your criteria." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "frequentProcessWhole = compare.join(final, (final.Process == compare.frequentProcess), how = \"left\").drop(\"Process\")\n", "frequentProcessWhole = frequentProcessWhole.withColumnRenamed(\"NewProcessName\", \"frequentProcessPath\")\n", "frequentProcessWhole = frequentProcessWhole.withColumnRenamed(\"TimeGenerated\", \"frequentTimeGenerated\")\n", "frequentProcessWhole = frequentProcessWhole.withColumnRenamed(\"From\", \"frequentFrom\")\n", "\n", "infrequentProcessWhole = frequentProcessWhole.join(final, (final.Process == frequentProcessWhole.infrequentProcess), how = \"left\").drop(\"Process\")\n", "infrequentProcessWhole = infrequentProcessWhole.withColumnRenamed(\"NewProcessName\", \"infrequentProcessPath\")\n", "infrequentProcessWhole = infrequentProcessWhole.withColumnRenamed(\"TimeGenerated\", \"infrequentTimeGenerated\")\n", "infrequentProcessWhole = infrequentProcessWhole.withColumnRenamed(\"From\", \"infrequentFrom\")\n", "\n", "infrequentProcessWholeFiltered = infrequentProcessWhole.where(F.col(\"Dist\") > levenDistThreshold)\n", "print(\"Your anomalies: \")\n", "(infrequentProcessWholeFiltered.where((F.col(\"frequentFrom\") == \"User\") & (F.col(\"infrequentFrom\") == \"User\"))).show()\n", "\n", "print(\"Hardcoded anomalies examples\")\n", "(infrequentProcessWholeFiltered.where((F.col(\"frequentFrom\") == \"Hardcoded\") | (F.col(\"infrequentFrom\") == \"Hardcoded\"))).show()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "To remove noise, we are extracting only the process names and path information of the potentially malicious process names." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "print(\"Showing potential anomalies after removing noise\")\n", "(infrequentProcessWholeFiltered.drop(\"frequentTimeGenerated\", \"infrequentTimeGenerated\").distinct()).show()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Sending results to Log Analytics" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "def escape_str(str):\n", " return str.replace('\\\\','\\\\\\\\')\n", "\n", "escape_strUdf = F.udf(escape_str, T.StringType())\n", "\n", "def send_results_to_log_analytics(df_to_la):\n", " # The log type is the name of the event that is being submitted. 
This will show up under \"Custom Logs\" as log_type + '_CL'\n", " log_type = 'MasqueradingProcessNameResult'\n", " df_to_la = df_to_la.withColumn('timestamp', F.current_timestamp())\n", "\n", " # concatenate columns to form one json record\n", " json_records = df_to_la.withColumn('json_field', F.concat(F.lit('{'), \n", " F.lit(' \\\"TimeStamp\\\": \\\"'), F.from_unixtime(F.unix_timestamp(F.col(\"timestamp\")), \"y-MM-dd'T'hh:mm:ss.SSS'Z'\"), F.lit('\\\",'),\n", " F.lit(' \\\"NormalProcess\\\": \\\"'), escape_strUdf(F.col('frequentProcess')), F.lit('\\\",'),\n", " F.lit(' \\\"PotentiallyMaliciousProcess\\\": \\\"'), escape_strUdf(F.col('infrequentProcess')), F.lit('\\\",'),\n", " F.lit(' \\\"AnomalyScore\\\":'), F.col('Dist'),\n", " F.lit('}')\n", " ) \n", " )\n", " # combine json record column to create the array\n", " json_body = json_records.agg(F.concat_ws(\", \", F.collect_list('json_field')).alias('body'))\n", "\n", " if len(json_body.first()) > 0:\n", " json_payload = json_body.first()['body']\n", " json_payload = '[' + json_payload + ']'\n", "\n", " payload = json_payload.encode('utf-8')\n", " return log_analytics_client(workspaceId, workspaceSharedKey).post_data(payload, log_type)\n", " else:\n", " return \"No json data to send to LA\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "print(\"Sending results to LogAnalytics\")\n", "print(\"Sending \", infrequentProcessWholeFiltered.count(), \" results to Log Analytics\")\n", "send_results_to_log_analytics(infrequentProcessWholeFiltered)\n", "print(\"Done\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run the following line if you have been running through AML\n", "%synapse stop" ] } ], "metadata": { "kernel_info": { "name": "python38-azureml" }, "kernelspec": { "display_name": "Python 3.8 - AzureML", "language": "python", "name": "python38-azureml" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.1" }, "microsoft": { "host": { "AzureML": { "notebookHasBeenCompleted": true } } }, "nteract": { "version": "nteract-front-end@1.0.0" } }, "nbformat": 4, "nbformat_minor": 2 }