{ "Network": { "Inbound": { "HTTP": { "methods": { "strangeSequences": { "details": { "query": "service=80 %26%26 direction='Inbound' %26%26 analysis.service='http post no get'", "description": "Before HTTP/1.1 web servers had to accommodate each new method (GET, PUT, POST, HEAD, OPTIONS) in a new TCP session. HTTP/1.1 introduced the idea of pipelining of requests where a single TCP session could consist of multiple methods. So, we look for sequences that seem strange, like a POST with no GET. This means that a client is sending data to the server without ever requesting a web page.", "threshold": 250, "refineby": "ip.src,ip.dst" } } }, "uri": { "shortFilenameScripts": { "details": { "description": "The script name a.aspx gives us no information about the purpose of that script. So, instead of naming their webshells 'webshell.php' attackers will use short or innocuous looking script names like 'b.aspx' or 'email.php'. This can also be legitimate because web developers are taught to use obscure filenames so that their sites cannot be reverse engineered.", "query": "service=80 %26%26 direction='Inbound' %26%26 filename exists %26%26 extension='php','js','asp','aspx'" } }, "encodedQueryStrings": { "details": { "description": "Query strings are a mechanism to submit data to a web page in a GET request. Typically, a GET request is just for pulling data off of servers, but if we include a query string now we can send data in a GET request", "query": "service=80 %26%26 direction='Inbound' %26%26 analysis.service = 'http with base64','possible base64 http form data'" } } }, "headers": { "numberOfHeaders": { "details": { "description": "Anything less than 6 headers is probably not generated by a browser. So, we assume that it isn't user generated and is probably machine behavior. We then need to verify if the behaviour is nefarious.", "query": "service=80 %26%26 direction='Inbound' %26%26 analysis.service='http six or less headers' ", "threshold": 500, "refineby": "ip.src,ip.dst" } }, "hostHeader": { "directToIp": { "details": { "description": "People don't typically enter IP addresses when browsing to websites. Anything other than domain formated host headers should be looked at to determine whether or not it is expected.", "query": "service=80 %26%26 analysis.service='http direct to ip request' %26%26 direction='Inbound'" } }, "containsPort": { "details": { "description": "When the host field doesn't contain a port number the default port is assumed. For example, 80 would be the default for an HTTP URL. A nefarious actor may specify a particular port to avoid being blocked by a firewall policy.", "query": "service=80 %26%26 analysis.service='host header contains port' %26%26 direction='inbound'" } } }, "userAgent": { "old": { "details": { "description": "The user-agent header is used to specify the environment of the browser to the web server so that it can appropriately format the response for a pleasant browsing experience. If an old version of malware is simply recompiled, this value may not be changed, and old versions like IE4 can be used as an indicator.", "query": "service=80 %26%26 direction='Inbound' %26%26 client contains 'msie 4.0','msie 6.0','msie 7.0','msie 8.0'" } }, "knownBad": { "details": { "description": "The webshell or malware may use any user-agent they want, and some of them are known to be bad. For example, M0zilla may look almost right, but it is know bad. 
If a blacklist is not in place, try stacking and look for backwards slashes and other obvious misspellings.", "query": "service=80 %26%26 direction='inbound' %26%26 client exists" } }, "short": { "details": { "description": "Typical user-agents used by web browsers contain operating system, application, plugin, browser version information, etc., which results in a relatively long user-agent string being passed to a web site. Extremely short user-agent strings, while technically valid, should be looked at.", "query": "service=80 %26%26 direction='inbound' %26%26 analysis.session='http short user-agent'" } }, "rare": { "details": { "description": "Stack rank user agents, investigate sessions belonging to outliers.", "query": "service=80 %26%26 direction='inbound' %26%26 client exists" } }, "missing": { "details": { "description": "Standard browser-initiated requests and most other legitimate application-initiated requests will contain a user agent. For sessions without a user agent, try stacking results by destination host, filename, or destination IP and look for other indicators or suspicious data transfers.", "query": "service=80 %26%26 direction='inbound' %26%26 client !exists" } } }, "referrer": { "missing": { "details": { "description": "The referrer header (misspelled in the actual header as Referer) specifies the URI from which the browser obtained the URI of the request. This is typically populated by search engines when they direct users to certain sites. Since most users get to web sites through search engines, we look for sessions where this header is missing.", "query": "service=80 %26%26 direction='inbound' %26%26 referer !exists" } }, "rare": { "details": { "description": "For Inbound traffic, we want to know from where people are visiting our site. We look for the outliers. Keep in mind though that organizations can have a number of visitors from all over the world and those referrers may be completely valid.", "query": "service=80 %26%26 direction='inbound' %26%26 referer exists", "threshold": 500, "refineby": "ip.src,client" } } }, "acceptLanguage": { "details": { "description": "The accept-language header specifies the languages that the user speaks to the web server so that the web server can provide the response in a language the user understands. In some environments, we have been able to identify attacks if we know the attackers are Iranian and we see the Persian language code. This is highly dependent on your environment, and may not provide any useful value.", "query": "service=80 %26%26 direction='inbound'" } }, "contentLength": { "postMissing": { "details": { "description": "Whenever the HTTP method is sending data (PUT or POST), this is a required value. The web server can (though it shouldn't) use this value to determine how much space in memory to reserve for the content being sent. However, attackers may write web shells that don't use this header and therefore omit it.", "query": "service=80 %26%26 direction='inbound' %26%26 action = 'post' %26%26 analysis.service='http post missing content-length'" } }, "wrongValue": { "details": { "description": "An attacker may decide to include this header just to avoid getting caught, but hard-code its value. So, we look for values that don't match the length of the content actually sent. 
It's suggested to stack against other forms of rare or suspicious sessions first, as there is currently no way in NetWitness to detect this automatically.", "query": "service=80 %26%26 direction='inbound' %26%26 action = 'post'" } } } }, "body": { "specificIndicators": { "details": { "description": "Looking for higher fidelity detections in appropriate keys (ioc in NetWitness) or generally deeper session inspection can lead to discoveries. This is generally a secondary hunt after refining the dataset with previous hunt items.", "query": "service=80 %26%26 direction='inbound' %26%26 error !exists %26%26 ioc exists" } } } }, "SSL/TLS": { "insecureCipher": { "details": { "description": "The SSL/TLS protocols have a handshake in plain text that we can use to extract meta before the session gets encrypted. One piece of meta that we extract is the version. Anything prior to TLS 1.2 is now considered vulnerable. So, we look for any Inbound sessions that support old versions. Also look for weak cipher suites. I use the NIST SP 800-52 document to identify weak cipher suites. I also recommend the Cisco paper attached here. That paper shows the cipher suites and commands typically associated with malware using encrypted communications.", "query": "service=443 %26%26 direction='Inbound' %26%26 crypto = 'tls 1.0'" } }, "rareCipher": { "details": { "description": "Stack rank for rarity and confirm business case or look at other traffic to/from the specific IP addresses.", "query": "service=443 %26%26 direction='Inbound' %26%26 crypto exists" } }, "blacklistedSubjectCa": { "details": { "description": "When we are looking at the unencrypted certificate exchange, we can see the signing authorities. Keep in mind that there may be (and probably should be) a chain of certificates. For example, Verisign approves Symantec to authorize certificates, Symantec authorizes Comodo, and Comodo authorizes the site in question. These certificate chains are pretty common. We also look for self-signed certificates, where the authorizing authority is the same as the entity holding the certificate. This happens for sites like Google though. Even in the chains, we want to look for SSL certificates that have been stolen. We would typically use a certificate blacklist for this. I tend to use the one listed here (https://sslbl.abuse.ch/). Though, our content team just issued their own feed, so customers can now subscribe to that content. In addition, check the reputability of the signing authority. For example, Let's Encrypt has made certificates accessible to nearly everyone. This is great for encrypting your personal website, or your very own C2 communications.", "query": "service=443 %26%26 direction='Inbound' %26%26 ioc='bad ssl'" } }, "rareSubjectCa": { "details": { "description": "Stack rank for rarity on ssl.ca and/or ssl.subject and confirm business case or look at other traffic to/from the specific IP addresses.", "query": "service=443 %26%26 direction='Inbound' %26%26 ssl.ca exists" } } }, "DNS": { "recursive": { "details": { "description": "DNS servers configured to support recursive queries provide a mechanism for attackers to leverage in DDoS attacks. Recursive DNS queries are when your DNS server responds to queries for which it is not authoritative.", "query": "service=53 %26%26 direction='Inbound'" } } }, "SSH/RDP": { "approvedExistence": { "details": { "description": "Typically this is blocked at the firewall inbound. 
If this is in your policy, anything here should be scrutinized.", "query": "direction='Inbound' %26%26 service=22,3389" } }, "expectedClientServer": { "details": { "description": "Protocols like Secure File Transfer Protocol (SFTP) set up an SSH tunnel and then transfer the file. Sometimes the client/server meta will reveal the SFTP usage. Use this to drill into the client and server keys to map against expected business processes (do you run an SFTP server, should anyone be logging in via PuTTY externally, etc.).", "query": "direction='Inbound' %26%26 service=22,3389 %26%26 client exists %26%26 server exists" } }, "rareServer": { "details": { "description": "Stack rank for rarity and confirm business case.", "query": "direction='Inbound' %26%26 service=22,3389 %26%26 server exists" } }, "rareClient": { "details": { "description": "Stack rank for rarity and confirm business case.", "query": "direction='Inbound' %26%26 service=22,3389 %26%26 client exists" } }, "lifetimeAnalysis": { "details": { "description": "When we are looking at these protocols, we want to know if there is an interactive session. Long-lived sessions that aren't too large and have a medium ratio of transmitted to received data tend to be indicative of an interactive session. Recommend viewing this (in NetWitness) in events view with lifetime and payload columns visible.", "query": "direction='Inbound' %26%26 service=22,3389" } } } }, "Outbound": { "HTTP": { "methods": { "strangeSequences": { "details": { "description": "As with inbound traffic, we look for strange method sequences like a POST with no GET. Outbound, this means an internal client is sending data to an external server without ever requesting a web page. Stack by source and destination and look for periodicity in the counts.", "query": "service=80 %26%26 direction='outbound' %26%26 analysis.service='http post no get'", "threshold": 250, "refineby": "ip.dst,ip.src" } } }, "uri": { "shortFilenameScripts": { "details": { "description": "The script name a.aspx gives us no information about the purpose of that script. So, instead of naming their webshells 'webshell.php', attackers will use short or innocuous-looking script names like 'b.aspx' or 'email.php'. This can also be legitimate because web developers are taught to use obscure filenames so that their sites cannot be reverse engineered. Usage of this will involve visual analysis of filenames once drilled into the data.", "query": "service=80 %26%26 direction='outbound' %26%26 filename exists %26%26 extension='php','js','asp','aspx'" } }, "encodedQueryStrings": { "details": { "description": "Query strings are a mechanism to submit data to a web page in a GET request. Typically, a GET request is just for pulling data off of servers, but if we include a query string, we can now send data in a GET request.", "query": "service=80 %26%26 direction='outbound' %26%26 analysis.service = 'http with base64','possible base64 http form data'" } } }, "headers": { "numberOfHeaders": { "details": { "description": "A request with six or fewer headers is probably not generated by a browser. So, we assume that it isn't user generated and is probably machine behavior. We then need to verify whether the behavior is nefarious.", "query": "service=80 %26%26 direction='outbound' %26%26 analysis.service='http six or less headers'", "threshold": 1000, "refineby": "ip.src,ip.dst" } }, "hostHeader": { "directToIp": { "details": { "description": "People don't typically enter IP addresses when browsing to websites. 
Anything other than domain-formatted host headers should be looked at to determine whether or not it is expected.", "query": "service=80 %26%26 analysis.service='http direct to ip request' %26%26 direction='outbound'" } }, "containsPort": { "details": { "description": "When the host field doesn't contain a port number, the default port is assumed. For example, 80 would be the default for an HTTP URL. A nefarious actor may specify a particular port to avoid being blocked by a firewall policy.", "query": "service=80 %26%26 analysis.service='host header contains port' %26%26 direction='outbound'" } }, "dynamicDns": { "details": { "description": "While there are plenty of legitimate domains leveraging dynamic DNS, it's also a favorite technique used by attackers for hosting their delivery and control infrastructure. This drill will get you to the dataset, within which you will want to stack the ioc and analysis.session keys as well as destination organization.", "query": "service=80 %26%26 direction='outbound' %26%26 analysis.service contains 'dynamic dns'" } }, "Dga": { "details": { "description": "Dynamically generated domains are frequently used by malware authors to subvert domain-intel-based detections. While not constrained to do so, many DGA implementations use algorithms to generate these domains predictably over time, often resulting in strange-looking domains with nonsensical words or random-looking sequences. Text-analysis-based queries are not completely ideal for DGA detection, and many DGA algorithms use legitimate words, so add other types of stacking analysis to this, including destination organization and any external domain registration information (e.g., domain age).", "query": "service=80 %26%26 direction='outbound' %26%26 analysis.service contains 'hostname consecutive consonants'" } } }, "userAgent": { "old": { "details": { "description": "The user-agent header is used to specify the environment of the browser to the web server so that it can appropriately format the response for a pleasant browsing experience. If an old version of malware is simply recompiled, this value may not be changed, and old versions like IE4 can be used as an indicator.", "query": "service=80 %26%26 direction='outbound' %26%26 client contains 'msie 4.0','msie 6.0','msie 7.0','msie 8.0'" } }, "knownBad": { "details": { "description": "The webshell or malware may use any user-agent it wants, and some of them are known to be bad. For example, M0zilla may look almost right, but it is known bad. If a blacklist is not in place, try stacking and look for backwards slashes and other obvious misspellings.", "query": "service=80 %26%26 direction='outbound' %26%26 client exists" } }, "short": { "details": { "description": "Typical user-agents used by web browsers contain operating system, application, plugin, browser version information, etc., which results in a relatively long user-agent string being passed to a web site. Extremely short user-agent strings, while technically valid, should be looked at.", "query": "service=80 %26%26 direction='outbound' %26%26 analysis.session='http short user-agent'" } }, "rare": { "details": { "description": "Stack rank user agents, investigate sessions belonging to outliers.", "query": "service=80 %26%26 direction='outbound' %26%26 client exists" } }, "missing": { "details": { "description": "Standard browser-initiated requests and most other legitimate application-initiated requests will contain a user agent. 
For sessions without a user agent, try stacking results by destination host, filename, or destination IP and look for other indicators or suspicious data transfers. Consider adding whitelist logic once you discover benign internal systems behaving this way.", "query": "service=80 %26%26 direction='outbound' %26%26 client !exists" } } }, "referrer": { "missing": { "details": { "description": "The referrer header (misspelled in the actual header as Referer) specifies the URI from which the browser obtained the URI of the request. This is typically populated by search engines when they direct users to certain sites. Since most users get to web sites through search engines, we look for sessions where this header is missing.", "query": "service=80 %26%26 direction='outbound' %26%26 referer !exists" } } }, "acceptLanguage": { "details": { "description": "The accept-language header specifies the languages that the user speaks to the web server so that the web server can provide the response in a language the user understands. In some environments, we have been able to identify attacks if we know the attackers are Iranian and we see the Persian language code. This is highly dependent on your environment, and may not provide any useful value.", "query": "service=80 %26%26 direction='outbound'" } }, "contentLength": { "postMissing": { "details": { "description": "Whenever the HTTP method is sending data (PUT or POST), this is a required value. The web server can (though it shouldn't) use this value to determine how much space in memory to reserve for the content being sent. However, attackers may write web shells that don't use this header and therefore omit it.", "query": "service=80 %26%26 direction='outbound' %26%26 action = 'post' %26%26 analysis.service='http post missing content-length'" } }, "wrongValue": { "details": { "description": "An attacker may decide to include this header just to avoid getting caught, but hard-code its value. So, we look for values that don't match the length of the content actually sent. It's suggested to stack against other forms of rare or suspicious sessions first, as there is currently no way in NetWitness to detect this automatically.", "query": "service=80 %26%26 direction='outbound' %26%26 action = 'post'" } } } }, "body": { "specificIndicators": { "details": { "description": "Looking for higher fidelity detections in appropriate keys (ioc in NetWitness) or generally deeper session inspection can lead to discoveries. This is generally a secondary hunt after refining the dataset with previous hunt items.", "query": "service=80 %26%26 direction='outbound' %26%26 error !exists %26%26 ioc exists" } } } }, "SSL/TLS": { "insecureCipher": { "details": { "description": "The SSL/TLS protocols have a handshake in plain text that we can use to extract meta before the session gets encrypted. One piece of meta that we extract is the version. Anything prior to TLS 1.2 is now considered vulnerable. So, we look for any outbound sessions that support old versions. Also look for weak cipher suites. I use the NIST SP 800-52 document to identify weak cipher suites. I also recommend the Cisco paper attached here. 
That paper shows the cipher suites and commands typically associated with malware using encrypted communications.", "query": "service=443 %26%26 direction='outbound' %26%26 crypto = 'tls 1.0'" } }, "rareCipher": { "details": { "description": "Stack rank for rarity and confirm business case or look at other traffic to/from the specific IP addresses.", "query": "service=443 %26%26 direction='outbound' %26%26 crypto exists" } }, "blacklistedSubjectCa": { "details": { "description": "When we are looking at the unencrypted certificate exchange, we can see the signing authorities. Keep in mind that there may be (and probably should be) a chain of certificates. For example, Verisign approves Symantec to authorize certificates, Symantec authorizes Comodo, and Comodo authorizes the site in question. These certificate chains are pretty common. We also look for self-signed certificates, where the authorizing authority is the same as the entity holding the certificate. This happens for sites like Google though. Even in the chains, we want to look for SSL certificates that have been stolen. We would typically use a certificate blacklist for this. I tend to use the one listed here (https://sslbl.abuse.ch/). Though, our content team just issued their own feed, so customers can now subscribe to that content. In addition, check the reputability of the signing authority. For example, Let's Encrypt has made certificates accessible to nearly everyone. This is great for encrypting your personal website, or your very own C2 communications.", "query": "service=443 %26%26 direction='outbound' %26%26 ioc='bad ssl'" } }, "rareSubjectCa": { "details": { "description": "Stack rank for rarity on ssl.ca and/or ssl.subject and confirm business case or look at other traffic to/from the specific IP addresses.", "query": "service=443 %26%26 direction='outbound' %26%26 ssl.ca exists" } }, "selfSignedCertificate": { "details": { "description": "Oftentimes attackers will set up delivery or control infrastructure on SSL-protected websites, but without installing a valid certificate. Looking for self-signed certificates and then proceeding to stack other session features like destination organization, domain, and SSL subject may lead to findings.", "query": "service=443 %26%26 direction='outbound' %26%26 analysis.service='ssl certificate self-signed'" } }, "beaconingInterval": { "details": { "description": "Look for common beaconing intervals or periodicity between two endpoints. E.g., there are 1440 minutes in a day, so counts between hosts of around 1440 may signify programmatic communications. A crude way to do this visually would be to look at all destination IPs for these approximate counts. Also look for the Possible Beacon influencer that attempts to find these behaviors automatically.", "query": "service=443 %26%26 direction='outbound'" } } }, "DNS": { "dynamicDns": { "details": { "description": "While there are plenty of legitimate domains leveraging dynamic DNS, it's also a favorite technique used by attackers for hosting their delivery and control infrastructure. 
This drill will get you to the dataset, within which you will want to stack the ioc and analysis.session keys (NetWitness) as well as destination organization.", "query": "service=53 %26%26 direction='outbound' %26%26 analysis.service contains 'dynamic dns'" } }, "tunneling": { "largeNumSubdomains": { "details": { "query": "", "description": "DNS tunneling encodes data in hostname labels, so a single domain receiving queries for an unusually large number of unique subdomains should be analyzed for tunneling." } }, "manyNullRecords": { "details": { "description": "NULL record types are rare in normal resolution but can carry arbitrary payloads, so a large number of NULL records between two endpoints should be analyzed for tunneling.", "query": "" } }, "fileTransfer": { "details": { "description": "File transfer via DNS, when not a zone transfer, should be analyzed for tunneling.", "query": "service=53 %26%26 direction='outbound' %26%26 filetype exists" } } } }, "SSH/RDP": { "approvedExistence": { "details": { "description": "Typically this is blocked at the firewall outbound. If this is in your policy, anything here should be scrutinized.", "query": "direction='outbound' %26%26 service=22,3389" } }, "expectedClientServer": { "details": { "description": "Protocols like Secure File Transfer Protocol (SFTP) set up an SSH tunnel and then transfer the file. Sometimes the client/server meta will reveal the SFTP usage. Use this to drill into the client and server keys to map against expected business processes (do you run an SFTP server, should anyone be logging in via PuTTY externally, etc.). Often the name of the advertised client and server will imply its function.", "query": "direction='outbound' %26%26 service=22,3389 %26%26 client exists %26%26 server exists" } }, "rareServer": { "details": { "description": "Stack rank the server key for rarity and confirm business case; further drill into IP and org if the result set is too large.", "query": "direction='outbound' %26%26 service=22,3389 %26%26 server exists" } }, "rareClient": { "details": { "description": "Stack rank the client key for rarity and confirm business case; further drill into IP and org if the result set is too large.", "query": "direction='outbound' %26%26 service=22,3389 %26%26 client exists" } }, "lifetimeAnalysis": { "details": { "description": "When we are looking at these protocols, we want to know if there is an interactive session. Long-lived sessions that aren't too large and have a medium ratio of transmitted to received data tend to be indicative of an interactive session. Recommend viewing this (in NetWitness) in events view with lifetime and payload columns visible.", "query": "direction='outbound' %26%26 service=22,3389" } } }, "FTP": { "cleartextCredentials": { "details": { "description": "If you are not dealing with an advanced attacker, they may aggregate all of their data into a single location and exfil it all at the end of their excursion into your environment. They may use FTP since it is easy to set up and use. Since FTP is plaintext, we can see their usernames and passwords. They may be bold enough to use profane language or just lazy enough to use simple passwords like asdf1234.", "query": "service=21 %26%26 direction='outbound' %26%26 password exists" } }, "C2": { "details": { "description": "In addition to the obvious use of FTP as an exfiltration method, FTP can be used as a C2 method. In the simplest methods, the 'files' uploaded or downloaded are the command and response data. 
So, we expect to see a large number of these uploads and downloads when the C2 is active.", "query": "direction='outbound' %26%26 service=21" } } }, "ICMP": { "tunneling": { "fileTransfer": { "details": { "description": "We will look for file transfers over ICMP.", "query": "direction='outbound' %26%26 ip.proto=1 %26%26 filetype exists" } }, "layer7Protocols": { "details": { "description": "We also look for other protocols being tunneled over ICMP. We can do this since ICMP is a network protocol and not an OSI layer 7 application. Since ICMP is an error reporting protocol, it needed a mechanism for identifying the packets that were sent in error. So, ICMP error messages contain at least the first 64 bytes of the originating packet's header. If that data contains another protocol or a file, these can be misleading. So, we need to eliminate the error messages when looking for ICMP tunneling.", "query": "ip.proto = 1 %26%26 direction='outbound' %26%26 error !exists %26%26 service exists" } }, "largeEchoRequestReply": { "details": { "description": "Large ICMP requests can be used for ICMP tunneling. This is typically to increase the bandwidth used in the communication. Since ICMP echo request and echo reply (ping) messages are so small, they need to be padded to fit on the wire, and we can set our 'large' limit to anything larger than 70 bytes. There are traceroute applications that will exceed this limit. We want to identify ICMP traffic where the echo request and echo reply padding are different because this is the mechanism for getting data back into the private IP address space.", "query": "ip.proto=1 %26%26 direction='outbound'" } } } }, "OTHER": { "miscfeatures": { "entropyRatio": { "details": { "description": "If we scale entropy from 0 to 1, where 1 is completely random, the things in the .97 to .99 range tend to be protocols like Google QUIC or Facebook Messenger. These are protocols where a lot of thought and effort went into strong encryption and security. In contrast, plaintext HTTP seems to have an entropy closer to .65 to .75 in my lab. We are looking for encoded or obfuscated content that isn't fully encrypted. So, we are looking for things in the range between the two values.", "query": "direction='outbound' %26%26 service = 0" } }, "bytesSentRatio": { "details": { "description": "We want to look at unknown protocols as well. When we look at these protocols, we are looking for custom TCP shells. The key is to look for things with medium bytes-sent-to-received ratios and an entropy that shows the traffic is not fully encrypted but is obscured in some way.", "query": "direction='outbound' %26%26 service=0 %26%26 analysis.session='ratio high transmitted'", "threshold": 400, "refineby": "ip.dst,org.dst" } } } } } } }