Parsing atop files with python dissect.cstruct

As you have probably read, Fox-IT released their incident response framework called dissect, but before that they released the cstruct part of that framework. Ever since it was made public I have been wanting to find an excuse to play with it on public projects. I witnessed the birth of cstruct back when I was still working at Fox-IT and am very happy to see it has all finally been released; it sure has evolved since I had a look at the very first version! Special thanks to Erik Schamper (@Schamperr) for answering late-night questions about some of the inner workings of dissect.cstruct.

Atop log files are one of those things you can encounter during an incident response assignment, and life is a bit easier if you can just parse the binary file format with python. Since with incident response you never know exactly in which format you want the data for analysis, or what you are looking for, it really helps to work with tools that can be rapidly adjusted; python is an ideal environment for that. An added benefit of parsing the structures ourselves with python is that we can avoid string parsing, and with it the confusion and mistakes that string parsing brings.

The atop tool is a performance monitor that can write its output to a binary file format. The creator explains it far better than I can:

Atop is an ASCII full-screen performance monitor for Linux that is capable of reporting the activity of all processes (even if processes have finished during the interval), daily logging of system and process activity for long-term analysis, highlighting overloaded system resources by using colors, etc. At regular intervals, it shows system-level activity related to the CPU, memory, swap, disks (including LVM) and network layers, and for every process (and thread) it shows e.g. the CPU utilization, memory growth, disk utilization, priority, username, state, and exit code.
In combination with the optional kernel module netatop, it even shows network activity per process/thread.

The atop tool website

As you can imagine, the above information is a nice treasure trove to find during an incident response, even if it is collected at a pre-set interval. At the most basic level, you can at least extract process executions with their respective command lines and the corresponding timestamps.

Since this is an open-source tool we can just look at the structure definitions in C and lift them right into cstruct to start parsing. The atop tool itself offers the ability to parse previously written binary files as well, for example using this command:

atop -PPRG -r <file>
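
To give an idea of what that looks like, here is a minimal dissect.cstruct sketch. The struct definition below is a trimmed stand-in rather than atop's real header layout (the actual definitions would be lifted from the atop source), so treat the field names as illustrative assumptions:

from dissect.cstruct import cstruct

# Trimmed, illustrative definition; the real structs come from the atop source.
RAW_DEF = """
struct rawheader {
    uint32  magic;
    uint16  aversion;
    uint16  rawheadlen;
    uint16  rawreclen;
    uint16  hertz;
};
"""

cs = cstruct()
cs.load(RAW_DEF)

with open("atop_logfile", "rb") as fh:
    header = cs.rawheader(fh)
    print(hex(header.magic), header.rawheadlen, header.rawreclen)

The nice part is that cstruct consumes the C definitions almost verbatim, so extending the parser is mostly a matter of copy-pasting more structs from the atop source.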

For the rest of this blog entry we will look at parsing atop binary log files with python and dissect.cstruct, mostly intended as a walkthrough of the thought process.

You can also skip reading the rest of this blog entry and jump to the code if you are impatient or familiar with similar thought processes.

Continue reading “Parsing atop files with python dissect.cstruct”

Creating a ram disk through meterpreter

The magical ‘in memory execution’ option of meterpreter is of course one of the better options that we as attackers love to use. However, if you want to store ‘random files’ in memory, or need to execute more complex applications which depend on other files, there is no ‘in memory’ option for that as far as I know. To be more specific: on Linux you can do it with built-in commands, while on Windows you need to install third-party software (list of ram drive software). I decided to dig into it and see if I could achieve this through a meterpreter session. The reasons for wanting a ram disk are multiple, if you are still wondering:

  • store stolen data in memory only, until you can move it
  • execute applications which require multiple files
  • run multiple legitimate files from memory

You might be asking: why not use it to bypass AV? This is of course possible, but you would need to modify the driver for that to work and ask Microsoft to sign it. In my opinion there are enough methods available to bypass AV; I sometimes just want to be able to store multiple files in memory.

Where to start? I decided to start with the ImDisk utility for two reasons:

  • It is open source
  • It has a signed driver

The first reason allows me to better understand what happens under the hood, and the second allows me to use it on Windows versions that require a signed driver. The first thing I tried was to use the bundled tools, but it seems that the command-line interface has a dependency on the control panel DLL file. I tried a quick recompile, but then I thought: why not code my own version? The original version includes, amongst other things, the ability to load and save the ram disk as an image file, and for the moment I won’t be needing that functionality. So I decided to code my own reduced-functionality version of the original client. It would have been easier to just use the original client, but this was more fun and taught me a thing or two about driver communication.

The original source code was very clear, which made it a breeze to hack together some code to talk to the driver (a rough sketch of that device-handle pattern follows the list below). I still need to add a lot more error handling, but for now it does the job and you can use it through meterpreter. Be aware that it still leaves traces on the regular hard disk, as explained in this blog. A short overview of the traces left behind:

  • The dropped driver
  • The registry modifications to load the driver
    • The driver loading does not use a service, thus there is no evidence of a service creation
  • The mounted ram disk
  • Traces of files executed or placed on the ram disk
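
Since the rest of this blog leans heavily on scripting, here is roughly what that device-handle/IOCTL pattern looks like from python with ctypes. The device path and control code below are made-up placeholders (the real names and IOCTL definitions come from the ImDisk source and its C client), so this is a sketch of the pattern rather than a working ImDisk client:

import ctypes
from ctypes import wintypes

GENERIC_READ = 0x80000000
GENERIC_WRITE = 0x40000000
OPEN_EXISTING = 3
INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value

# Placeholder values for illustration only; the real ImDisk control device
# name and IOCTL codes are defined in the ImDisk source headers.
DEVICE_PATH = r"\\.\ExampleControlDevice"
IOCTL_EXAMPLE_QUERY = 0x00222000

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.CreateFileW.argtypes = [
    wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
    wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
]
kernel32.DeviceIoControl.argtypes = [
    wintypes.HANDLE, wintypes.DWORD, wintypes.LPVOID, wintypes.DWORD,
    wintypes.LPVOID, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD), wintypes.LPVOID,
]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

# Open a handle to the driver's control device.
handle = kernel32.CreateFileW(DEVICE_PATH, GENERIC_READ | GENERIC_WRITE,
                              0, None, OPEN_EXISTING, 0, None)
if handle == INVALID_HANDLE_VALUE:
    raise ctypes.WinError(ctypes.get_last_error())

# Send a control code to the driver and read back whatever it returns.
out_buf = ctypes.create_string_buffer(1024)
returned = wintypes.DWORD(0)
ok = kernel32.DeviceIoControl(handle, IOCTL_EXAMPLE_QUERY, None, 0,
                              out_buf, ctypes.sizeof(out_buf),
                              ctypes.byref(returned), None)
if not ok:
    raise ctypes.WinError(ctypes.get_last_error())

print(out_buf.raw[:returned.value])
kernel32.CloseHandle(handle)

A real client would do the same dance, only with the proper ImDisk structures as input and output buffers.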

For me the benefits of having an easy way to execute multiple files from memory outweigh the above-mentioned forensic artefacts. In addition it becomes more difficult to retrieve the original files, unless the incident response team creates a memory image or has access to a pre-installed host agent which retrieves the files from the ram disk. Let’s get practical: here is how to use it through a meterpreter session (I won’t go into details on how to obtain the meterpreter session):

Continue reading “Creating a ram disk through meterpreter”

Discovering the secrets of a pageant minidump

A Red Team exercise is a lot of fun, not only because you have a more realistic engagement due to the broader scope, but also because you can encounter situations which you normally wouldn’t on a regular, narrow-scoped penetration test. I’m going to focus on pageant, which Slurpgeit recently encountered during one of these red team exercises and which piqued my interest.

Apparently he got access to a machine on which the user used pageant to manage ssh keys and authenticate to servers without having to type his key password every single time he connects. This of course raises the following interesting (or silly) question:

Why does the user only have to type his ssh key password once?

Which has a rather logical (or doh) answer as well:

The pageant process keeps the decrypted keys in memory so that you can use them without having to type the decryption password every time you want to use a key.

From an attacker’s perspective this of course raises the question: can you steal these unencrypted keys? Assuming you are able to make a memory dump of the running process, you should be able to get the decrypted ssh keys out of it. In this blog post I’ll focus on how you could achieve this and the pitfalls I encountered along the way.
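
One way to obtain such a memory dump, besides tools like procdump, is the MiniDumpWriteDump API from dbghelp.dll. The python ctypes sketch below only illustrates that step (the process id and output path are placeholders you would fill in yourself), and it is not necessarily the exact approach used in the rest of this post:

import ctypes
from ctypes import wintypes

PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010
GENERIC_WRITE = 0x40000000
CREATE_ALWAYS = 2
MiniDumpWithFullMemory = 0x00000002
INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
dbghelp = ctypes.WinDLL("dbghelp", use_last_error=True)

kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.CreateFileW.argtypes = [
    wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
    wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
]
dbghelp.MiniDumpWriteDump.argtypes = [
    wintypes.HANDLE, wintypes.DWORD, wintypes.HANDLE, wintypes.DWORD,
    wintypes.LPVOID, wintypes.LPVOID, wintypes.LPVOID,
]

def dump_process(pid, path):
    # Write a full-memory minidump of the given pid to path (illustrative helper).
    hproc = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    if not hproc:
        raise ctypes.WinError(ctypes.get_last_error())
    hfile = kernel32.CreateFileW(path, GENERIC_WRITE, 0, None, CREATE_ALWAYS, 0, None)
    if hfile in (None, INVALID_HANDLE_VALUE):
        kernel32.CloseHandle(hproc)
        raise ctypes.WinError(ctypes.get_last_error())
    ok = dbghelp.MiniDumpWriteDump(hproc, pid, hfile, MiniDumpWithFullMemory,
                                   None, None, None)
    kernel32.CloseHandle(hfile)
    kernel32.CloseHandle(hproc)
    if not ok:
        raise ctypes.WinError(ctypes.get_last_error())

# Example: dump_process(1234, r"C:\temp\pageant.dmp")  # 1234 being the pageant.exe pid

Once you have the dump file, the real work is finding the key material inside it, which is what the remainder of the post focuses on.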

Continue reading “Discovering the secrets of a pageant minidump”

Memory Zeroization: frustrating memory forensics

Zeroization of information is a long-standing practice on systems that handle highly sensitive data (usually cryptographic systems), yet it’s something that isn’t done very often by most applications in your regular desktop environment. I have no clue why, although I guess it is because most people assume that if memory is no longer available to their application, it won’t be available to others. This however is a flawed assumption, which is eagerly exploited by investigators using memory forensics techniques. If you want to see the power of memory forensics, take a look at Volatility, which is an excellent memory forensics framework. Just to give you an idea of how powerful memory forensics can be, here is an example taken from the Volatility blog:

Taking screenshots from memory files

As you can see, an investigator (or an attacker) is able to pull the complete layout of your windows from a memory dump; isn’t that impressive? So as you are probably guessing by now, it should actually be best practice to zero memory before it’s released. In an ideal situation your operating system would take care of this for you, but unfortunately it doesn’t. By ideal I mean zeroing it immediately when the programmer calls the *free*() functions.

Dear OS makers, could you implement this by default?

So to try and promote the use of zeroization in regular software, I’ve decided to create a few simple wrappers that zero the memory before it’s released. The message to take away here is:

*always zero your memory before releasing it*

These wrappers are nothing fancy and as usual can be found on my github.

As said before, the wrappers themselves are not that interesting; strictly speaking they are not even needed, since you can also zero memory yourself before calling a function to free the allocated memory. The important thing to take into account when doing any zeroization of memory is to make sure that the function or technique you use is not optimized away by the compiler. Compilers have a nasty habit of optimizing stuff that you don’t want optimized. It is however pretty convenient to just call a free function and not have to worry about zeroing the memory yourself. So the first thing I did was have a look at the general concept of memory allocation, which is more or less something along the following lines:

  • Caller requests 100 bytes
  • Operating system allocates a bit more, say four bytes, so it becomes 104
  • Operating system stores the size of the requested memory in the extra allocated bytes
  • Caller gets a memory block of 100 bytes without knowing that it’s actually 104

That sounds simple enough, so I decided to start by writing a wrapper for the free() functions that retrieves the size of the block to be freed, just like the operating system would. To my surprise, however, it wasn’t as easy as it conceptually sounded at first. I started digging into malloc() and free(), and in my tests it seemed that they are just wrappers around HeapAlloc() and HeapFree(). That didn’t sound too bad at first, until I landed in the wonderful world of heap management. If you want a good read on that, check out this paper which does a very nice job of explaining it.

This is when I decided to discard the method of trying to retrieve the size of the memory block based on the received pointer, and to just add a thin layer of “memory size management” instead. Stupid me, though, because as you will read later it’s actually dead easy to retrieve the size of a memory block. So I started to write some code that just implemented the conceptual method described above. This however meant I would need to wrap not only the *free*() functions but also the *alloc*() functions. This resulted in the following code for the malloc() and free() functions:

/*
	Thanks TheColonial for reminding me of pointer arithmetic and re-educating me on it.
	Old unused code, left here in case someone prefers to do it this way.
	This code also wrapped the allocator to prepend the size of the allocated memory block.
*/
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>
#include <string.h>
#include <windows.h>

void *zmalloc( size_t size ){
	size_t newsize = (sizeof(size_t) + size);
	size_t *newmalloc = 0;
	void *originalmalloc = NULL;

	originalmalloc = malloc(newsize);
	if(originalmalloc == NULL){
		return NULL;
	}
	printf("size %zu\n",_msize(originalmalloc));

	/* store the requested size in the extra bytes and hand the caller
	   a pointer just past that hidden size field */
	(*(size_t *)originalmalloc) = size;
	newmalloc = (size_t *)originalmalloc;

	return (void *)(newmalloc + 1);
}

void zfree(void *memblock){
	size_t *newmemblock = NULL;
	size_t size = 0;

	/* step back to the hidden size field, zero the whole block and free it */
	newmemblock = ((size_t *)memblock)-1;
	memcpy_s(&size,sizeof(size_t),newmemblock,sizeof(size_t));
	size += sizeof(size_t);
	SecureZeroMemory((void *)newmemblock,size);
	free((void *)newmemblock);
}

Now, as the comment suggests, I first failed at properly implementing this since I forgot that pointer arithmetic operates in units of the pointed-to type, not in bytes. So in my first version I ended up with 4*size_t being allocated, which is kind of a waste of space. This was all fine, but when I saw the function definitions of the other memory allocation functions it didn’t really inspire me to continue. So I decided to have one more look at the whole “extract size from memory block pointer” issue.

I often learn a lot of new stuff when working out ideas or playing around, but this time I just felt plain stupid; see for yourself:

http://msdn.microsoft.com/en-us/library/windows/desktop/aa366781(v=vs.85).aspx

Do you see anything that could be remotely useful? YES: most of the memory allocation functions have a size companion that will retrieve the size of the memory block. Which means I can go back to only wrapping the free functions, reducing the code to:

void zfree(void *memblock){
	/* _msize() tells us how big the heap block is, so we can zero it before freeing */
	size_t blocksize = _msize(memblock);

	if(blocksize != (size_t)-1){
		SecureZeroMemory(memblock,blocksize);
	}
	free(memblock);
}

So this was a fun and interesting path that finally ended up back at the original idea of just wrapping the free functions. I hope you enjoyed this entry and that after reading it you’ll all be zeroing your memory before freeing it :)

Java in-memory class loading

So, just when you think hypes don’t affect you, a new hype gets your attention. Lately Java has hit the news as one of the latest risks, and it is being abused for exploitation quite successfully. Luckily we all know that exploiting “bugs” is not the only way to abuse Java; you can also abuse the trust Java places in digitally signed code, something I’ve blogged about before. Nowadays metasploit/SET even has a ready-to-use module for it. If you are wondering what all this has to do with in-memory class loading… well, sometimes when executing a Java attack you want to make it harder for someone to detect your payload, and you also want to leave fewer traces behind. In terms of Java, I think class loading is the thing that comes closest to traditional in-memory execution. So let’s get started on making it harder for an investigator to investigate.

Continue reading “Java in-memory class loading”