{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "Timesketch And Colab.ipynb", "private_outputs": true, "provenance": [], "toc_visible": true, "include_colab_link": true }, "kernelspec": { "name": "python3", "display_name": "Python 3" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "view-in-github", "colab_type": "text" }, "source": [ "\"Open" ] }, { "cell_type": "markdown", "metadata": { "id": "mZWOhSLNRInc" }, "source": [ "# Timesketch and Colab\n", "\n", "This is a small colab built to demonstrate how to interact with Timesketch from colab to do some additional exploration of the data.\n", "\n", "Colab can greatly complement investigations by giving the analyst the power of python to manipulate the data stored in Timesketch. Additionally it provides developers with the ability to do research on the data in order to speed up development of analyzers, aggregators and graphing. The purpose of this colab is simply to briefly introduce these capabilities to analysts and developers, with the hope of inspiring more people to take advantage of this powerful platform. Using a Jupyter notebook instead of colab is also an option; both are equally valid.\n", "\n", "Each code cell (denoted by the [] and grey color) can be run simply by hitting \"shift + enter\" inside it. The first code cell that you execute will automatically connect you to a public runtime for colab and connect to the publicly open demo timesketch. You can easily add new code cells, or modify the code that is already there to experiment.\n", "\n", "## README\n", "\n", "If you simply click the `connect` button in the upper right corner you will\n", "connect to a kernel runtime running in the cloud. 
It is a great way to explore\n", "what colab has to offer and provides a quick way to play with the demo data.\n", "\n", "However, if you want to connect to your own Timesketch instance, load data\n", "from a local drive, or don't want the data to be read into a cloud machine, then\n", "it is better to run from a local runtime environment. Install Jupyter on\n", "your machine and follow the [guideline posted here](https://research.google.com/colaboratory/local-runtimes.html).\n", "These instructions are also available from the pop-up that appears when\n", "you select a local runtime.\n", "\n", "Once you have your local runtime set up you should be able to reach your local Timesketch instance.\n", "\n", "You cannot save changes to this colab document. If you want your own copy of the colab to make changes or do some other experimentation, simply select \"File / Save a Copy in Drive\" to make your own copy of this colab and start making changes.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "4_TQvEgTRRVK" }, "source": [ "## Installation\n", "\n", "Let's start by installing the TS API client... all commands that start with ! are executed in the shell, so if you are missing Python packages you can install them with pip.\n", "\n", "This is not needed if you are running a local kernel that already has the library installed." 
] }, { "cell_type": "code", "metadata": { "id": "Qmu0lOJYRFYD" }, "source": [ "!pip install --upgrade timesketch-api-client" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "mFQTCGbNeRoR" }, "source": [ "### Remember to execute the cell below\n", "\n", "Just a gentle reminder that the cell below is a code cell, so it needs to be executed (you can see the \"play\" button next to it)" ] }, { "cell_type": "code", "metadata": { "id": "nh4pwmG-RI2w", "cellView": "form" }, "source": [ "# @title Import Libraries\n", "# @markdown We first need to import libraries that we will use throughout the colab.\n", "import altair as alt # For graphing.\n", "import numpy as np # Never know when this will come in handy.\n", "import pandas as pd # We will be using pandas quite heavily.\n", "\n", "from timesketch_api_client import config\n", "from timesketch_api_client import search" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "ojceENS4hEP5" }, "source": [ "(Notice that the cell above is an actual code cell; it is just using formatting to *look nice*. If you want to see the code behind it, select the cell, click the three dots and select \"Form | Show Code\".)" ] }, { "cell_type": "markdown", "metadata": { "id": "uMYsDkCgYK5y" }, "source": [ "## Connect to TS" ] }, { "cell_type": "markdown", "metadata": { "id": "9IUzNc8yRanl" }, "source": [ "And now we can start creating a timesketch client. The client is the object used to connect to the TS server and provides the API to interact with it.\n", "\n", "The TS API consists of several objects, each with its own purpose. Some of them are:\n", "+ **client**: A TS client object is the main `gateway` to TS. 
That includes\n", "authenticating to TS, keeping a session to interact with the REST API, and providing functions that allow you to create new sketches, get a sketch, and work with search indices.\n", "+ **sketch**: A sketch object is what you will most likely interact with the most. It allows you to operate on a sketch: see the sketch ACL, attributes, labels and other metadata, as well as run analyzers, search queries, aggregations and stories, or tag and label events, etc.\n", "+ **timeline**: A timeline object allows you to view properties of a timeline, as well as add/remove labels from it.\n", "+ **story**: A story object allows you to interact with a story: add/delete/edit blocks, move them around, export the story, etc.\n", "+ **view**: A view object holds information about saved searches, or views, in a sketch. This can be passed to the sketch to query data, or to view the content of the view.\n", "+ **aggregation**: An aggregation object is used to run aggregation queries on the dataset. It can provide you with a data frame, a chart, or the option to save/delete aggregations.\n", "\n", "Let's start by getting a TS client object. There are multiple ways of getting one, yet the easiest is to use the configuration object. That automates most of the actions that are needed, and prompts the user with questions if data is missing (reading information from a configuration file to fill in the blanks).\n", "\n", "The first time you request the client it will ask you questions (since for the first time there will be no configuration file). 
For this demonstration we are going to be using the demo server, so we will use the following configs:\n", "\n", "+ host_uri: **https://demo.timesketch.org**\n", "+ auth_mode: **timesketch** (this is simple user/pass)\n", "+ Username: **demo**\n", "+ Password: **demo**\n", "\n", "*Keep in mind that after answering these questions for the first time, the configuration files ~/.timesketchrc and ~/.timesketch.token will be saved so you don't need to answer these questions again.*\n" ] }, { "cell_type": "code", "metadata": { "id": "1QQFoUFWRP4N", "cellView": "both" }, "source": [ "ts_client = config.get_client(confirm_choices=True)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "pX1B1gccRv9L" }, "source": [ "### Let's Explore\n", "And now we can start to explore. The first thing is to get all the sketches that are available. Most of the operations you want to do with TS are available in the sketch API." ] }, { "cell_type": "code", "metadata": { "id": "QN2r9x3uRvRG" }, "source": [ "sketches = ts_client.list_sketches()" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "r6Qmi4v_SGX1" }, "source": [ "Now that we've got a list of all available sketches, let's print out the names of the sketches as well as the index into the list, so that we can more easily choose a sketch that interests us." ] }, { "cell_type": "code", "metadata": { "id": "Wn0zDL6SRuYY" }, "source": [ "for i, sketch in enumerate(sketches):\n", " print(f'[{i}] {sketch.name}')" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the list above, pick the sketch you want to work with. The rest of this colab uses the Greendale demo sketch; adjust the index below so that it matches the list you just printed.\n" ] }, { "cell_type": "code", "metadata": {}, "source": [ "gd_sketch = sketches[0]  # Adjust the index to point at the Greendale sketch." ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In colab you can use TAB completion to get a list of all attributes of the object you are working with. See a function you may want to call? Try calling it with `gd_sketch.function_name?` and hit enter... let's look at an example:\n", "\n" ] }, { "cell_type": "code", "metadata": { "id": "e7oEZ80sYzc7" }, "source": [ "gd_sketch.list_saved_searches?" 
], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "8h2S3dXBY6c0" }, "source": [ "This way you'll get a list of all the parameters you may want or need to use. You can also use tab completion as you type: `gd_sketch.e` will give you all options that start with an `e`, etc.\n", "\n", "You can also type `gd_sketch.list_saved_searches()` and get a pop-up with a list of what parameters this function provides.\n", "\n", "Now let's look at some things we can do with the sketch object and the TS client. For example, if we want to get all starred events in the sketch we can do that by querying the sketch for available labels. You can look at a label as a \"sketch specific tag\": unlike a tag, which is stored in the Elastic document and therefore shared among all sketches that have that same timeline attached, a label is bound to the actual sketch and therefore not available outside of it. This is used in various places, most notably to indicate which events have comments, are hidden from views and are starred. These pre-defined labels are:\n", "\n", "+ __ts_star: Starred event\n", "+ __ts_comment: Event with a comment\n", "+ __ts_hidden: A hidden event\n", "\n", "Let's for instance look at all starred events in the Greendale index. We will use the parameter `as_pandas=True`, which means that the events will be returned as a [pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html). This is a very flexible object that we will use throughout this colab. We will try to introduce some basic operations on a pandas object, yet for more details there are plenty of guides that can be found online. One way to think about pandas is to think about spreadsheets, or databases, where the data is stored in a table (data frame), which consists of columns and rows. 
And then there are operations that work on either the column or the row.\n", "\n", "But let's start by looking at one such data frame, by looking for all starred events.\n", "\n", "There are two ways of doing that, either by using the search object (preferred) or via the sketch object (soon to be deprecated)\n", "\n", "Once we get the data frame back, we call `data_frame.shape`, which returns a tuple with two items, number of rows and number of columns. That way we can assess the size of the dataframe." ] }, { "cell_type": "code", "metadata": { "id": "HejGxei3hfnM" }, "source": [ "starred_events = gd_sketch.search_by_label('__ts_star', as_pandas=True)\n", "starred_events.shape" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "NVtbz3xaFfvK" }, "source": [ "Let's look at how to achieve the same using the search object." ] }, { "cell_type": "code", "metadata": { "id": "ats7kiVYFixE" }, "source": [ "search_obj = search.Search(gd_sketch)\n", "label_chip = search.LabelChip()\n", "label_chip.use_star_label()\n", "search_obj.add_chip(label_chip)\n", "search_obj.query_string = '*'\n", "\n", "starred_events = search_obj.table\n", "starred_events.shape" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "VFmX8tY-bRnC" }, "source": [ "As you noticed there are quite a few starred events.. to limit this, let's look at just the first 10" ] }, { "cell_type": "code", "metadata": { "id": "Logz1UvNbV87" }, "source": [ "starred_events.head(10)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "6RZS868gbakg" }, "source": [ "Or a single one..." 
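] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we do, a quick pandas aside (a minimal sketch on a tiny made-up frame, not the Timesketch data): `head()` selects the first rows, `iloc[]` selects a single row by position, and `['column']` selects a column." ] }, { "cell_type": "code", "metadata": {}, "source": [ "toy = pd.DataFrame({'domain': ['a.com', 'b.org', 'c.net'], 'count': [3, 1, 2]})\n", "\n", "print(toy.head(2))   # first two rows\n", "print(toy.iloc[1])   # a single row, by position\n", "print(toy['domain']) # a single column" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The same calls work on any data frame the API returns. Back to the starred events, a single one: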
] }, { "cell_type": "code", "metadata": { "id": "168Ny0fObbx1" }, "source": [ "pd.set_option('display.max_colwidth', 100) # this is just meant to make the output wider\n", "starred_events.iloc[9]" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "hMzUvQt2aU37" }, "source": [ "To continue let's look at what searches have been stored in the sketch:" ] }, { "cell_type": "code", "metadata": { "id": "AYgCmg_yZOO7" }, "source": [ "saved_searches = gd_sketch.list_saved_searches()\n", "\n", "for index, saved_search in enumerate(saved_searches):\n", " print('[{0:d}] {1:s}'.format(index, saved_search.name))" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "Tt5SWKzgZZe5" }, "source": [ "You can then start to query the API to get back results from these saved searches. Let's try one of them...\n", "\n", "Word of caution, try to limit your search so that you don't get too many results back. The API will happily let you get all the results back as you choose, but the more records you get back the longer the API call will take (10k events per API call). 
" ] }, { "cell_type": "code", "metadata": { "id": "yY8jk_UzSpCE" }, "source": [ "# You can change this number if you would like to test out another saved search.\n", "# The code first checks whether \"saved_search_text\" is set and uses that to pick a saved search; otherwise \"saved_search_id\" is used.\n", "saved_search_id = 1\n", "saved_search_text = 'Phishy Domains'\n", "\n", "if saved_search_text:\n", "  for index, saved_search in enumerate(saved_searches):\n", "    if saved_search.name == saved_search_text:\n", "      saved_search_id = index\n", "      break\n", "\n", "print('Fetching data from : {0:s}'.format(saved_searches[saved_search_id].name))\n", "print(' Query used : {0:s}'.format(\n", " saved_searches[saved_search_id].query_string if saved_searches[saved_search_id].query_string else saved_searches[saved_search_id].query_dsl))\n" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "zLgLBXXMlDKa" }, "source": [ "If you want to issue this query, then you can run the cell below; otherwise you can change `saved_search_id` above to try another one." ] }, { "cell_type": "code", "metadata": { "id": "MmlF6oYcj8wh" }, "source": [ "greendale_frame = saved_searches[saved_search_id].table" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "jEki5_BmZpKu" }, "source": [ "One thing you may notice is that throughout this colab we will use the \"`.table`\" property of the search object. That means that the data we'll get back is a pandas DataFrame that we can now start exploring. \n", "\n", "Let's start by seeing how many entries we got back." ] }, { "cell_type": "code", "metadata": { "id": "1_fjRL4XZ-XW" }, "source": [ "greendale_frame.shape" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "Kuoh1BXcwndC" }, "source": [ "This tells us that we got back only 40 records... 
and that's because we are using a saved search that limited the number of records returned. Let's confirm that:" ] }, { "cell_type": "code", "metadata": { "id": "Kvz85LIrwwv9" }, "source": [ "saved_search = saved_searches[saved_search_id]\n", "\n", "saved_search.query_filter" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "7MuRsRsMw2eE" }, "source": [ "You can see that the size set in the query filter is only 40 entries... but the log entry before told us that there were `2240` entries to be gathered, so let's increase that.\n", "\n", "*Warning: since this is a saved search the API will attempt to update the actual saved search on the backend, but the demo user you are using is not allowed to change the saved search, so a RuntimeError will be raised. Don't worry though, you can still change the value locally.*" ] }, { "cell_type": "code", "metadata": { "id": "GmGTa4Pyw0GK" }, "source": [ "saved_search.max_entries = 4000" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "okanF4TGxFfh" }, "source": [ "greendale_frame = saved_search.table\n", "greendale_frame.shape" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "XWjrJnivaSo9" }, "source": [ "This tells us that the view returned 2,284 events with 12 columns. Let's explore the first few entries, just so that we can wrap our heads around what we got back.\n", "\n", "This is a great way to get a feeling for what the returned data looks like. To see the first five entries we can use the `.head(5)` function, and likewise for the last entries we can use `.tail(5)`." ] }, { "cell_type": "code", "metadata": { "id": "ymR_NtseaRrO" }, "source": [ "greendale_frame.head(5)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "bdPUgvNtl82r" }, "source": [ "Let's look at what columns we got back... 
and maybe create a slice that contains fewer columns, or at least one that puts the columns we want to see front and center." ] }, { "cell_type": "code", "metadata": { "id": "zuu7VFCAmB9e" }, "source": [ "greendale_frame.columns" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "D-5xReYEHxKc" }, "source": [ "Since this is a result from the analyzers we have a few extra fields we can pull in. \n", "\n", "Looking at the results you see the same column names as in the UI, but when you click an event you'll notice that it has a lot more fields in it than the default view shows. This can also be changed in the API client. For that we use the variable `return_fields`. Let's set that one.\n", "\n", "```\n", " return_fields: List of fields that should be included in the\n", " response.\n", " ```\n", " \n", "We can use that to specify what fields we would like to get back. Let's add a few more fields (you can see what fields are available in the UI).\n", "\n" ] }, { "cell_type": "code", "metadata": { "id": "Dqv6gXlKmEUW" }, "source": [ "search_obj = saved_searches[saved_search_id]\n", "try:\n", " search_obj.return_fields = 'datetime,timestamp_desc,tag,message,label,url,domain,human_readable,access_count,title,domain_count,search_string'\n", "except RuntimeError:\n", " pass\n", "\n", "try:\n", " search_obj.max_entries = 10000\n", "except RuntimeError:\n", " pass\n", "greendale_frame = search_obj.table\n", "greendale_frame.head(4)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "_mvDnsFfIful" }, "source": [ "Let's briefly look at these events." ] }, { "cell_type": "markdown", "metadata": { "id": "PKioTKqoI85E" }, "source": [ "OK... since this is a phishy domain analyzer, and all the results we got back are essentially from that analyzer, let's look at a few things. 
First of all, let's look at the tags that are available.\n", "\n", "Let's start with a simple method: convert the tags, which are stored as lists, into strings and then find the unique strings." ] }, { "cell_type": "code", "metadata": { "id": "MFMRBxtRJDcK" }, "source": [ "greendale_frame.tag.str.join('|').unique()" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "5Ii81u63t_NJ" }, "source": [ "Then we can do this slightly differently; this time we want to get a list of all the different tags." ] }, { "cell_type": "code", "metadata": { "id": "JuTczR1_uKzO" }, "source": [ "tags = set()\n", "def add_tag(tag_list):\n", " list(map(tags.add, tag_list))\n", "\n", "greendale_frame.tag.apply(add_tag)\n", "\n", "print(tags)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "WYhye26yuepx" }, "source": [ "Let's go over the code above to understand what just happened.\n", "\n", "First a set called `tags` is created; since this is a set it cannot contain duplicates (duplicates are ignored).\n", "\n", "Then we define a function that accepts a list and applies the function `tags.add` to every item in the list (the map function). This means that for each entry in the supplied `tag_list` the function `tags.add` is called.\n", "\n", "Finally we take the dataframe `greendale_frame` and call the `apply` function on the series `tag`. That takes the column, or series, `tag`, which contains the lists of applied tags, and for each row in the data frame applies the function `add_tag` that we created.\n", "\n", "This code effectively does the following:\n", "+ For each row of `greendale_frame`, extract the `tag` list and apply the `add_tag` function\n", "+ The `add_tag` function then takes each entry in the tag list and adds it to the set `tags`\n", "\n", "This gives us a final set that contains exactly one copy of each tag that was applied to the records." 
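] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The same result can also be reached with a pandas one-liner: `Series.explode()` turns a column of lists into one row per list element, and wrapping that in `set()` removes the duplicates. The cell below is a minimal sketch on a small made-up frame (the real data would be `greendale_frame`):" ] }, { "cell_type": "code", "metadata": {}, "source": [ "toy_frame = pd.DataFrame({'tag': [['phishy-domain'], ['phishy-domain', 'outside-active-hours'], []]})\n", "\n", "# explode() emits one row per list element (empty lists become NaN),\n", "# so dropping NaN and wrapping in set() yields the unique tags.\n", "set(toy_frame.tag.explode().dropna())" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the real data, `set(greendale_frame.tag.explode().dropna())` should match the set produced by the `add_tag` approach above.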
] }, { "cell_type": "markdown", "metadata": { "id": "KP_joLR7JVJ8" }, "source": [ "Looking at the results from the tags, we do see some `outside-active-hours` tags. Let's look at those specifically. What does that mean? It means that the timeframe analyzer determined that the browsing activity occurred outside the regular hours of the timeline it analyzed." ] }, { "cell_type": "code", "metadata": { "id": "h9l165w_JdtT" }, "source": [ "greendale_frame[greendale_frame.tag.str.join(',').str.contains('outside-active-hours')].domain.value_counts()" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "VADR_gpAJzMz" }, "source": [ "OK... now we get to see all the domains that the domain analyzer considered to be potentially \"phishy\"... is there a domain that stands out? What about that grendale one?" ] }, { "cell_type": "code", "metadata": { "id": "-1PBtCtjJ5Ag" }, "source": [ "greendale_frame[greendale_frame.domain == 'grendale.xyz'][['datetime', 'url']]" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "4PbDcNMzJ-8S" }, "source": [ "OK... this seems odd... let's look at a few things: the `human_readable` string as well as the URL..." 
] }, { "cell_type": "code", "metadata": { "id": "vxnMcThfKEOs" }, "source": [ "grendale = greendale_frame[greendale_frame.domain == 'grendale.xyz']\n", "\n", "string_set = set()\n", "for string_list in grendale.human_readable:\n", " new_list = [x for x in string_list if 'phishy_domain' in x]\n", " _ = list(map(string_set.add, new_list))\n", "\n", "for entry in string_set:\n", " print('Human readable string is: {0:s}'.format(entry))\n", " \n", "\n", "print('')\n", "print('Counts for URL connections to the grendale domain:')\n", "grendale_count = grendale.url.value_counts()\n", "for index in grendale_count.index:\n", " print('[{0:d}] {1:s}'.format(grendale_count[index], index))" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "1QH7LFksLuLd" }, "source": [ "We can start doing a lot more now if we want to... let's look at when these things occurred..." ] }, { "cell_type": "code", "metadata": { "id": "-EShYajvL1RE" }, "source": [ "grendale_array = grendale.url.unique()\n", "\n", "greendale_frame[greendale_frame.url.isin(grendale_array)]" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "DrvPXI8zMETu" }, "source": [ "OK... we can then start to look at surrounding events.... let's look at one date in particular... \"2015-08-29 12:21:06\"" ] }, { "cell_type": "code", "metadata": { "id": "kwJrIsErMNGv" }, "source": [ "search_obj = search.Search(gd_sketch)\n", "date_chip = search.DateIntervalChip()\n", "\n", "# Let's set the date\n", "date_chip.date = '2015-08-29T12:21:06'\n", "\n", "# And now how much time we want before and after.\n", "date_chip.before = 1\n", "date_chip.after = 1\n", "\n", "# and the unit, we want minutes.. 
so that is m\n", "date_chip.unit = 'm'\n", "\n", "search_obj.query_string = '*'\n", "search_obj.add_chip(date_chip)\n", "\n", "search_obj.return_fields = 'message,human_readable,datetime,timestamp_desc,source_short,data_type,tags,url,domain'\n", "\n", "data = search_obj.table" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "NvIl0ucuzBvL" }, "source": [ "And now we can start to look at the results:" ] }, { "cell_type": "code", "metadata": { "id": "6NSD4izoNExQ" }, "source": [ "data[['datetime', 'message', 'human_readable', 'url']].head(4)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "gEQ9yZbTNzjJ" }, "source": [ "Let's find the grendale and just look at events two seconds before/after" ] }, { "cell_type": "code", "metadata": { "id": "wA9lQ1JANdGg" }, "source": [ "data[(data.datetime > '2015-08-29 12:21:04') & (data.datetime < '2015-08-29 12:21:08')][['datetime', 'message', 'timestamp_desc']]" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "9F2gAu6ScnEh" }, "source": [ "## Let's look at aggregation\n", "\n", "Timesketch also has aggregation capabilities that we can call from the client. Let's take a quick look.\n", "\n", "Start by checking out whether there are any stored aggregations that we can just take a look at.\n", "\n", "You can also store your own aggregations using the `gd_sketch.store_aggregation` function. However we are not going to do that in this colab." ] }, { "cell_type": "code", "metadata": { "id": "Hwe_qoUR14wL" }, "source": [ "[(x.id, x.name, x.title, x.description) for x in gd_sketch.list_aggregations()]" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "G3w-Wgbc8Am_" }, "source": [ "OK, so there are some aggregations stored. Let's just pick one of those to take a closer look at." 
] }, { "cell_type": "code", "metadata": { "id": "0-sNmn4a1-K-" }, "source": [ "aggregation = gd_sketch.get_aggregation(24)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "JFeCKrPA8IlZ" }, "source": [ "Now we've got an aggregation object that we can take a closer look at." ] }, { "cell_type": "code", "metadata": { "id": "Xj46DN9W8PdD" }, "source": [ "aggregation.description" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "1_H6Xtdy8RoX" }, "source": [ "OK, so from the name, we can guess what it contains. We can also look at all of the stored aggregations" ] }, { "cell_type": "code", "metadata": { "id": "Z-ihf1uP8Xwd" }, "source": [ "pd.DataFrame([{'name': x.name, 'description': x.description} for x in gd_sketch.list_aggregations()])" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "Q7okh3ik8lsL" }, "source": [ "Let's look at the aggregation visually, both as a table and a chart." ] }, { "cell_type": "code", "metadata": { "id": "XGMT_d0K8ooP" }, "source": [ "aggregation.table" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "MMtWdtTI8qkR" }, "source": [ "aggregation.chart" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "DrtWSb5Ix60q" }, "source": [ "The chart there is empty, since the aggregation didn't contain a chart.\n", "\n", "\n", "We can also take a look at what aggregators can be used, if we want to run our own custom aggregator." ] }, { "cell_type": "code", "metadata": { "id": "DZ7pbVCHyC-s" }, "source": [ "gd_sketch.list_available_aggregators()" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "bJdAaL-Jc172" }, "source": [ "Now we can see that there are at least the \"field_bucket\" and \"query_bucket\" aggregators that we can look at. 
The `field_bucket` one is a terms bucket aggregation, which means we can take any field in the dataset and aggregate on that.\n", "\n", "So if we want to for instance see the top 20 domains that were visited we can just ask for an aggregation of the field `domain` and limit it to 20 records (which will be the top 20). Let's do that:" ] }, { "cell_type": "code", "metadata": { "id": "DtzPsBQmc3Du" }, "source": [ "aggregator = gd_sketch.run_aggregator(\n", " aggregator_name='field_bucket',\n", " aggregator_parameters={'field': 'domain', 'limit': 20, 'supported_charts': 'barchart'})" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "VHNfI5g-yj-e" }, "source": [ "Now we've got an aggregation object that we can take a closer look at... let's look at the data it stored. What we were trying to get out was the top 20 domains that were visited." ] }, { "cell_type": "code", "metadata": { "id": "EbwgvEczvJw_" }, "source": [ "aggregator.table" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "70zZ8p53yssc" }, "source": [ "Or we can look at this visually... as a chart" ] }, { "cell_type": "code", "metadata": { "id": "3RhPxNrAyuu4" }, "source": [ "aggregator.chart" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "DRGKWSnfzg2D" }, "source": [ "We can also do something a bit more complex. The other aggregator, the `query_bucket` works in a similar way, except you can filter the results first. We want to aggregate all the domains that have been tagged with the phishy domain tag." 
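] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Conceptually this is a filter-then-count: select the events matching a query, then count the values of one field. As a mental model only (not the server-side implementation), the same idea on a local data frame looks like the sketch below, here on a small made-up frame:" ] }, { "cell_type": "code", "metadata": {}, "source": [ "toy_frame = pd.DataFrame({\n", " 'domain': ['grendale.xyz', 'grendale.xyz', 'example.com', 'other.net'],\n", " 'tag': [['phishy-domain'], ['phishy-domain'], ['phishy-domain'], []]})\n", "\n", "# Keep only rows whose tag list contains the tag, then count per domain.\n", "mask = toy_frame.tag.apply(lambda tag_list: 'phishy-domain' in tag_list)\n", "toy_frame[mask].domain.value_counts()" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's run the actual `query_bucket` aggregator against the sketch: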
] }, { "cell_type": "code", "metadata": { "id": "ON6j4snf9vtC" }, "source": [ "tag_aggregator = gd_sketch.run_aggregator(\n", " aggregator_name='query_bucket',\n", " aggregator_parameters={\n", " 'field': 'domain',\n", " 'query_string': 'tag:\"phishy-domain\"',\n", " 'supported_charts': 'barchart',\n", " }\n", ")" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "kImHNTCT_gbQ" }, "source": [ "Let's look at the results." ] }, { "cell_type": "code", "metadata": { "id": "EqRjvsxY_hre" }, "source": [ "tag_aggregator.table" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "aB9VkZlV_4RT" }, "source": [ "We can also look at all the tags in the timeline: what tags have been applied and how frequent they are." ] }, { "cell_type": "code", "metadata": { "id": "H-YUCbxv-W1Y" }, "source": [ "gd_sketch.run_aggregator(\n", " aggregator_name='field_bucket',\n", " aggregator_parameters={\n", " 'field': 'tag',\n", " 'limit': 10,\n", " }\n", ").table" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "bBEORm6E__PG" }, "source": [ "And then let's see which applications were most frequently executed on the machine.\n", "\n", "Since not all of the execution events have the same fields in them we'll have to create a few tables here... let's start by looking at what data types there are." ] }, { "cell_type": "code", "metadata": { "id": "i7R2uGp2AE4D" }, "source": [ "gd_sketch.run_aggregator(\n", " aggregator_name='query_bucket',\n", " aggregator_parameters={\n", " 'field': 'data_type',\n", " 'query_string': 'tag:\"browser-search\"',\n", " 'supported_charts': 'barchart',\n", " }\n", ").table" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "NQRD_9mZBHH_" }, "source": [ "And then we can do a summary for each one." 
] }, { "cell_type": "code", "metadata": { "id": "GM_a666aBIzI" }, "source": [ "gd_sketch.run_aggregator(\n", " aggregator_name='query_bucket',\n", " aggregator_parameters={\n", " 'field': 'domain',\n", " 'query_string': 'tag:\"browser-search\"',\n", " 'supported_charts': 'barchart',\n", " }\n", ").table" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "EtRw5wEABLIt" }, "source": [ "agg = gd_sketch.run_aggregator(\n", " aggregator_name='query_bucket',\n", " aggregator_parameters={\n", " 'field': 'search_string',\n", " 'query_string': 'tag:\"browser-search\"',\n", " 'supported_charts': 'hbarchart',\n", " }\n", ")\n", "\n", "agg.table" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "NAV-KN6zIg82" }, "source": [ "Or as a chart" ] }, { "cell_type": "code", "metadata": { "id": "LM_CT7eWIfRS" }, "source": [ "agg.chart" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "0tA4pHRLOj5g" }, "source": [ "## Let's look at logins...\n", "\n", "Let's do a search to look at login entries..." ] }, { "cell_type": "code", "metadata": { "id": "vbsb58imPVCQ" }, "source": [ "search_obj = search.Search(gd_sketch)\n", "search_obj.query_string = 'tag:\"logon-event\"'\n", "search_obj.max_entries = 500000\n", "search_obj.return_fields = (\n", " 'datetime,timestamp_desc,human_readable,message,tag,event_identifier,hostname,record_number,'\n", " 'recovered,strings,username,strings_parsed,logon_type,logon_process,windows_domain,'\n", " 'source_username,user_id,computer_name')\n", "\n", "login_data = search_obj.table" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "YydMkgThab09" }, "source": [ "This will produce quite a bit of events... let's look at how many." 
] }, { "cell_type": "code", "metadata": { "id": "-PxPYIA5Pvks" }, "source": [ "login_data.shape" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "ZsM5me3KQwp4" }, "source": [ "Let's look at usernames...." ] }, { "cell_type": "code", "metadata": { "id": "kfuzWPfJQynH" }, "source": [ "login_data.username.value_counts()" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "5ApoSLqLfXbX" }, "source": [ "Let's also look at what windows domains where used:" ] }, { "cell_type": "code", "metadata": { "id": "tKF9UR3mbSC-" }, "source": [ "login_data.windows_domain.value_counts()" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "9fy6EAURSpvK" }, "source": [ "And the logon types:" ] }, { "cell_type": "code", "metadata": { "id": "DFU56oKaSUMC" }, "source": [ "login_data.logon_type.value_counts()" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "vG1SCs_wLTwe" }, "source": [ "login_data.computer_name.value_counts()" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "8IApcGWwfaMS" }, "source": [ "Let's graph.... and you can then interact with the graph... try zomming in, etc.\n", "\n", "First we'll define a graph function that we can then call with parameters..." 
] }, { "cell_type": "code", "metadata": { "id": "HwU4K4MnaYdt" }, "source": [ "def GraphLogins(data_frame, machine_name=None):\n", " \n", " if machine_name:\n", " data_slice = data_frame[data_frame.computer_name == machine_name]\n", " title = 'Accounts Logged In - {0:s}'.format(machine_name)\n", " else:\n", " data_slice = data_frame\n", " title = 'Accounts Logged In'\n", " \n", " data_grouped = data_slice[['username', 'datetime']].groupby('username', as_index=False).count()\n", " data_grouped.rename(columns={'datetime': 'count'}, inplace=True)\n", "\n", " return alt.Chart(data_grouped, width=400).mark_bar().encode(\n", " x='username', y='count',\n", " tooltip=['username', 'count']\n", " ).properties(\n", " title=title\n", " ).interactive()\n" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "9stCil8TgXhq" }, "source": [ "Start by graphing all machines" ] }, { "cell_type": "code", "metadata": { "id": "T-oUET5AgYyW" }, "source": [ "GraphLogins(login_data)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "drEG2TTSncoS" }, "source": [ "Or we can look at this for a particular machine:" ] }, { "cell_type": "code", "metadata": { "id": "SP1vf_xBUr2a" }, "source": [ "GraphLogins(login_data, 'Student-PC1.internal.greendale.edu')" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "2axKKhp7Unfe" }, "source": [ "Or we can look at this as a scatter plot...\n", "\n", "First we'll define a function that munches the data for us. This function will essentially graph all logins in a day with a scatter plot, using colors to denote the count value.\n", "\n", "**This graph will be very interactive... 
try selecting a time period by clicking with the mouse on the upper graph and drawing a selection.**" ] }, { "cell_type": "code", "metadata": { "id": "D9DiG03hazwY" }, "source": [ "login_data['day'] = login_data['datetime'].dt.strftime('%Y-%m-%d')\n", "\n", "def GraphScatterLogin(data_frame, machine_name=''):\n", " if machine_name:\n", " data_slice = data_frame[data_frame.computer_name == machine_name]\n", " title = 'Accounts Logged In - {0:s}'.format(machine_name)\n", " else:\n", " data_slice = data_frame\n", " title = 'Accounts Logged In'\n", " \n", " login_grouped = data_slice[['day', 'computer_name', 'username', 'message']].groupby(['day', 'computer_name', 'username'], as_index=False).count()\n", " login_grouped.rename(columns={'message': 'count'}, inplace=True)\n", " \n", " brush = alt.selection_interval(encodings=['x'])\n", " click = alt.selection_multi(encodings=['color'])\n", " color = alt.Color('count:Q')\n", "\n", " chart1 = alt.Chart(login_grouped).mark_point().encode(\n", " x='day', \n", " y='username',\n", " color=alt.condition(brush, color, alt.value('lightgray')),\n", " ).properties(\n", " title=title,\n", " width=600\n", " ).add_selection(\n", " brush\n", " ).transform_filter(\n", " click\n", " )\n", " \n", " chart2 = alt.Chart(login_grouped).mark_bar().encode(\n", " x='count',\n", " y='username',\n", " color=alt.condition(brush, color, alt.value('lightgray')),\n", " tooltip=['count'],\n", " ).transform_filter(\n", " brush\n", " ).properties(\n", " width=600\n", " ).add_selection(\n", " click\n", " )\n", " \n", " return chart1 & chart2" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "Z4s-lEHxhQXH" }, "source": [ "OK, let's start by graphing for all logins..." 
] }, { "cell_type": "code", "metadata": { "id": "PuaJmcJMhShS" }, "source": [ "GraphScatterLogin(login_data)" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "dpLDsdSGhT1r" }, "source": [ "And now just for the Student-PC1" ] }, { "cell_type": "code", "metadata": { "id": "2XaBqZqRVIoL" }, "source": [ "GraphScatterLogin(login_data, 'Student-PC1.internal.greendale.edu')" ], "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "z0f-qAxyhYa4" }, "source": [ "And now it is your time to shine, experiment with python pandas, the graphing library and other data science techniques." ] }, { "cell_type": "code", "metadata": { "id": "vfCXHHx8YNPv" }, "source": [ "" ], "execution_count": null, "outputs": [] } ] }