Secure: that’s one of those words capable of triggering a (usually negative) physical reaction in most people working in the security industry. Thing is, whenever someone claims something is secure, they usually forget to mention against what kind of threat(s) it is secure. So every once in a while I like to attempt to build something that is secure against a chosen threat model, just for the fun of the mental workout.
This blog will be about the exercise of performing a threat model of a slack bot I might build. It will not contain instructions on how to implement it; it will just be my train of thought while doing a threat model for the solution I want to build.
Most of the time these projects end with me not finishing them, or, if I do finish, with people pointing out all kinds of security issues in the solution. The latter is the main reason I like doing these types of projects, since I’ve come to realize that when you design a secure solution on your own, you will always end up with blind spots, while if you were to look at the same solution without having built it, you’d spot those exact same security issues. Thus you learn a lot from attempting to build a secure solution and having someone else shoot some nice holes in it.
This time I decided to build a simple slack bot that would be capable of receiving a URL to an online YouTube video and downloading it for offline consumption. After some thinking I came to the following definition of the slack bot being secure:
- Hard target to casual and opportunistic attackers
- Hard target for memory corruption vulnerabilities
- When breached, constrain the attacker to pre-defined resources
So basically I want the solution to be secure against a curious user that uses the bot and decides they want to hack it for the lulz. In addition, when the attacker succeeds, I want the attacker to only be able to view / modify the information that I consider expendable. You’ll notice that I’m saying ‘when the attacker succeeds’ and not ‘if the attacker succeeds’. This is because I always assume it will be breached, thus forcing myself to answer the questions: “what’s the impact? can I accept it? if not, what should I mitigate?”. The other reason is of course that I’m a terrible sysadmin, and I expect myself to forget to patch stuff :( Besides the security requirements I also wanted to learn something new, so I decided to develop the bot using go.
So how do you proceed to design something with the above requirements? Normally I just perform a threat model-ish approach whereby I mentally run through the assets, attacks and the corresponding security controls to mitigate those attacks, sometimes with the aid of a whiteboard. This time however I decided to give the more formal drawing of a threat model a go. So I searched around, found this awesome blog and after a short while of (ab)using draw.io I ended up with the following result:

Let’s dive into this diagram and see how to further improve the security controls or security boundaries.
TL;DR Threat modeling is a fun and useful mental exercise that aids in spotting potential attacks you might forget to secure against. Also, it is 2019; we should be using seccomp and apparmor or similar technologies much more frequently.
Don’t be intimidated by the above diagram, basically it is a visualization of how you’d run most of your basic stuff:
The go binary that responds to slack slash commands saves the command to a file. A separate python script parses the URL from the file, then downloads and saves the video. The go binary and the python script each run under their own user with limited privileges, and the files with the content of the slack message, as well as the downloaded videos, are only accessible if the user belongs to a specific group.
The rest of the security controls and threat actors are your basic security stuff: ensuring encrypted connections, input validation and avoiding as much as possible that untrusted data influences your logic. As we can see, the drawing helps us to understand the security boundaries as well as the specific controls that we are applying. For me personally it misses the relation between the threats and the security controls, but that could just be me being a n00b with the more formal methods of performing a threat model.
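The input validation control in particular is best done as a whitelist: parse the untrusted text as a URL and accept it only if it is https and points at a known video host, so the downloader never sees attacker-shaped input beyond a well-formed URL. A minimal sketch; the host list is an assumption for this design:

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
)

// allowedHosts is a whitelist of video hosts the bot will accept;
// everything else is rejected outright.
var allowedHosts = map[string]bool{
	"www.youtube.com": true,
	"youtube.com":     true,
	"youtu.be":        true,
}

// validateVideoURL parses the untrusted slash-command text and only
// returns it if it is a well-formed https URL to a whitelisted host.
func validateVideoURL(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	if u.Scheme != "https" || !allowedHosts[u.Hostname()] {
		return "", errors.New("URL not in whitelist")
	}
	return u.String(), nil
}

func main() {
	for _, raw := range []string{
		"https://www.youtube.com/watch?v=dQw4w9WgXcQ",
		"http://www.youtube.com/watch?v=x", // wrong scheme
		"https://evil.example/watch?v=x",   // wrong host
	} {
		if _, err := validateVideoURL(raw); err != nil {
			fmt.Println("rejected:", raw)
		} else {
			fmt.Println("accepted:", raw)
		}
	}
}
```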
Regardless of that, it immediately becomes clear that the current design does not cover all of the requirements that we spelled out in the beginning, let’s review:
- Hard target to casual and opportunistic attackers
- Hard target for memory corruption vulnerabilities
- When breached, constrain the attacker to pre-defined information
The first and second items are pretty well covered by the current security controls. Only memory-safe languages are used, input is validated and low privilege users are used to mitigate potential errors, thus restricting the attacker to only those files that the low priv user can access. The third point, not so much. If we really look at the third point, the following questions come to mind:
- What happens if a memory corruption vulnerability is found after all?
- What happens if downloading and processing the video is done by a native component that was hidden in some weird dependency?
- Basically, what is the attack surface if the attacker is able to run code as the limited user?
Another interesting point to discuss would be the whole “golang” and “python” split: why not do everything in golang? Basically, since I went down a rabbit hole to create a “secure slackbot” I decided to go all the way. Just like in the real world, where the business sometimes has really challenging requirements, I decided to make my life easier by sticking to python to be able to use youtube-dl, even though there are some golang wrapper packages available. This forced me to think about an inter-process communication mechanism using files; not the best choice, but a fun one if we want to do it securely ¯\_(ツ)_/¯
So let’s see how we can further restrict the two applications so that even if an attacker runs code as the low privileged user, the attacker will only be able to access those files, directories and other resources that we have specified up front. For this I searched around for what other technologies are available (ubuntu / debian) to restrict a process to only those resources it needs, which resulted in the following (probably incomplete) list:
- chroot
- Not intended as a security boundary, can be used as one, has many caveats
- apparmor
- Intended as a security boundary, mandatory access control
- namespaces
- Considered a security boundary, isolates on various levels
- seccomp
- Intended as a security boundary, filters syscalls
- capabilities
- Intended as a security boundary, drops privileges from root processes
One missing technology is grsecurity, which I personally think is pretty awesome! For now, I’ve already gone way deeper into the rabbit hole than expected, so I will not be looking into grsecurity.
So how to choose? First of all, let’s not forget that we want to protect against a mainly opportunistic attacker, so ideally ‘ready made’ public exploits or tricks should fail. We are not attempting to protect against a determined attacker that sticks around and keeps trying to compromise the entire system.
This implies that we will be correctly configuring our logging and alerting on strange activities. Since the bot will run on a machine that doesn’t perform other tasks, it should be easier to spot potentially harmful activity on the machine.
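To make the “spot strange activity” part concrete, one option on an ubuntu / debian box is a couple of auditd rules: watch the message spool, and record any process execution by the bot’s low privileged user. The paths and user name below are assumptions from this design, not a tested configuration:

```
# watch the message spool for writes and attribute changes
-w /var/slackbot/queue -p wa -k slackbot-queue

# record every execve performed by the bot's low privileged user
-a always,exit -F arch=b64 -S execve -F uid=slackbot -k slackbot-exec
```

On a box that normally does nothing else, any hit on the `slackbot-exec` key is worth an alert.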
From the identified technologies, chroot doesn’t qualify: it is often abused as a security boundary, but it was not intended as one. Namespaces do not qualify because, as far as I can tell, they would really overcomplicate our solution; we’d be venturing into container land and that’s a whole other world to secure. Lastly, capabilities are not applicable since our processes will not run with elevated privileges, unless there is some hidden surprise that we encounter during development.
So this means that we will be using seccomp to restrict which syscalls our processes are allowed to execute, and apparmor to restrict the resources that our processes are allowed to access, thereby restricting an attacker as much as possible if the attacker manages to exploit the processes. The next step is to update the threat model to include the seccomp and apparmor boundaries and then perform the entire analysis again, which ideally results in a satisfactory outcome with regards to our requirements for building a secure slack bot.
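To give an idea of what the apparmor side of that boundary could look like, here is a rough profile sketch for the go binary. The binary path and spool directory are assumptions from the design above, not a tested policy; anything not explicitly listed is denied by the profile:

```
#include <tunables/global>

/usr/local/bin/slackbot {
  #include <abstractions/base>

  # inbound HTTPS from slack
  network inet stream,
  network inet6 stream,

  # the only filesystem state the bot needs
  /var/slackbot/queue/ r,
  /var/slackbot/queue/* rw,
}
```

In practice you would first load such a profile in complain mode (`aa-complain`), watch the logs for denials while exercising the bot, and only then switch it to enforce mode.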

Just like with all pentests, the real answer to the question of course comes when we build it and then not only try to exploit it, but also test it from an ‘assume breach’ perspective. The latter means that we, for example, build code execution functionality into the bot and then try to breach the security boundaries that we identified through threat modeling. If it holds up, I’d venture to guess we’ve achieved our ‘secure slack bot’, at least against a threat actor like an opportunistic attacker. Just don’t forget to take the vulnerable code exec functionality out before using it for real ;)
Oh and yes, there are a lot more threat vectors and security controls that we could include, or even describe in much more detail, but I think this is a good start. Hopefully it helps you in performing your own threat model exercises.
References
- https://en.wikipedia.org/wiki/Threat_model
- https://security.stackexchange.com/questions/196881/docker-when-to-use-apparmor-vs-seccomp-vs-cap-drop
- https://medium.com/information-and-technology/so-what-is-apparmor-64d7ae211ed
- https://en.wikipedia.org/wiki/Chroot
- https://en.wikipedia.org/wiki/AppArmor
- https://en.wikipedia.org/wiki/Security-Enhanced_Linux
- https://grsecurity.net/