As the title implies, this time I'm not talking about being able to 'operate at the speed of an attacker' as defenders. I'm talking about whether we sufficiently account for the time factor when we design & build secure components or environments. It seems that when we build, we forget about security as soon as we start to run out of time, even if we talk about security by design. Of course this isn't universally applicable, but I've seen it happen at various companies and thought, well, let me write it down; maybe it helps to order my thoughts.
When projects are defined and a time estimate is provided, that estimate seems to not include the time required to do things securely, unless we explicitly make security a requirement. As you might expect, security is not made an explicit requirement for a lot of projects.
The funny aspect is that the time we (consciously) did not invest at the beginning seems to bite us in the behind later on. Yet we don't seem to be bothered by a painful behind, or even by missing half of our behind.
Maybe all of this is just human nature? We know that smoking is bad, but since the effects are not immediately visible, we fail to grasp the consequences. The same goes for not doing security from the start: we know the consequences can be bad, but we are unable to foresee how bad exactly.
You might be wondering about specific examples to substantiate the above claim. Let's have a look at some examples that, in my opinion, are purely a matter of time and not so much of resources or money. Yes, you could convert all time into resources & money, but in my simple mind, sometimes just allowing activities to take longer will save you a lot of time & money later on. The interesting aspect is that when I used to be on the offensive side, it never crossed my mind that one of the causes might be time related; I always assumed that more resources & money would just fix it.
TL;DR: After writing this post I realise that we just can't seem to find consensus on what the bare minimum security level is that should always be implemented. This eventually results either in people forgetting about security, or in security absolutism / perfectionism, with the end result that we'd rather not implement security by default than run the risk of missing our (often self-enforced) deadline.
Do read on if you are curious about the examples that led me to believe that time is crucial if we want to change our behaviour towards more secure-by-default approaches.
Deny-by-default networking

When we deploy new devices, it often doesn't fit the deployment deadline to review which connections are required and to only allow those, also known as deny-by-default / allow-listing. Yet after several months or years of doing this, we realise that the network has become a jungle with a lot of freedom of movement on a network level for attackers. Now for the pain in the behind: when the risk becomes too big and companies decide to fix this, it takes them much, much more effort, due to legacy, people being used to working in this manner and the business never having been asked to think about security.
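The deny-by-default idea above can be sketched in a few lines: every connection must match an explicit allow rule, and everything else is denied. A minimal sketch, assuming a hypothetical rule set of (source zone, destination zone, destination port) tuples; these names are illustrative, not from any specific firewall product:

```python
# Deny-by-default: only flows explicitly listed in ALLOW_RULES pass.
# Zones and ports below are hypothetical examples.
ALLOW_RULES = {
    # (source zone, destination zone, destination port)
    ("workstations", "dc", 445),
    ("workstations", "proxy", 3128),
    ("servers", "update-server", 443),
}

def is_allowed(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Anything not explicitly allowed is denied."""
    return (src_zone, dst_zone, dst_port) in ALLOW_RULES

flows = [
    ("workstations", "dc", 445),
    ("workstations", "servers", 3389),  # lateral movement attempt
]
for flow in flows:
    verdict = "ALLOW" if is_allowed(*flow) else "DENY"
    print(verdict, flow)
```

The point is not the code, but the up-front time it represents: someone has to sit down and enumerate the required flows before the deadline hits.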
Allow-listing of executables
This one has recently become an eye-opener for me. We are all willing to literally build security teams around our infrastructure, but we are not willing to expand our sysadmin teams so that they can implement & maintain allow-listing? It seems easier to just deploy EDR after something has happened and then tell the security team to deal with it. Which is a nice 'aha!' moment, since the security team is usually not inline! What I mean by that is: if we have the sysadmin team implement & maintain the allow-list, it delays the deployment of the environment; but if we tell the security team to just deal with it, they can do it in parallel, and they can always be told to 'not block, just detect & respond'.
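To make the maintenance burden concrete, here is a toy sketch of hash-based executable allow-listing, the kind of inline control a sysadmin team would have to maintain (real products like AppLocker or similar work at a different level; this only illustrates the principle and the recurring approval work):

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list of SHA-256 hashes of approved executables.
ALLOWED_HASHES: set[str] = set()

def sha256_of(path: Path) -> str:
    """Hash the file contents; identity is content, not filename."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def may_execute(path: Path) -> bool:
    """Deny-by-default: unknown binaries do not run."""
    return sha256_of(path) in ALLOWED_HASHES

def approve(path: Path) -> None:
    """The recurring sysadmin work: every software update changes the
    hash, so every update means re-approving the binary."""
    ALLOWED_HASHES.add(sha256_of(path))
```

Note that `approve` is exactly the part that costs ongoing time, which is why it so often gets traded away for an after-the-fact detect & respond setup.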
Creating new applications
Why would we even have a secure SDLC? We just started prototyping! Let's not have security slow us down; we'll implement it when we are further ahead. Just to name a few candidates, the following subjects thus never become part of the development culture:
- Secure code guidance
- Architecture analysis from a security perspective
- Automated static / dynamic checks
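The last bullet is the cheapest one to start with. A toy static check using only the standard library, flagging calls to `eval`/`exec`, is shown below; a real SDLC would wire a proper linter or SAST tool into CI instead, this is only meant to show how little code the "automated static checks" idea requires:

```python
import ast

# Toy static analysis: flag calls to risky built-ins in Python source.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, function name) for each risky call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # [(1, 'eval')]
```

Hooking something like this into a pre-commit hook or CI pipeline costs an afternoon at prototype stage, versus retrofitting an entire development culture later.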
The pain in the behind then comes when a security test is performed or a breach happens, and we start to implement the parts that we skipped. Suddenly we not only have to repair code, but also improve the deployment infrastructure and change the development culture.
Hardening of appliances

This one, when not properly defined and when done manually, can indeed be a big time sink. Still, it is 2023; shouldn't it all be about 'implement once, execute many times'? Reviewing all options when deploying appliances takes more time than just reading the functional administration guide, but it will make you understand which settings can be a security issue in your environment, even if the vendor says they are safe by default.
Identity & access management
Yes, sometimes it really is difficult, because some ***** vendor created an application that requires the highest privileges by default. Still, if we take the time to understand the application, we can often identify which privileges are exactly required and only assign those; but that would take time we don't have, since the tickets with requests from the business are just piling up. Or: why would we inconvenience people with least-privilege accounts? It's faster and less frustration-prone to hand out accounts with privileges that cover most tasks of most employees, and it also reduces the number of tickets they create. Or, last but not least: why should the Human Resources department think about which actions a certain role is allowed to perform, isn't that an IT department thing? The pain in the behind comes when, after some months or years, we find out that almost every account is part of an attack path towards the highest privileges available, but we can't just take those privileges away, because then everything breaks!
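The role question above is exactly the time investment that gets skipped: someone on the business side has to write down which actions each role needs. A minimal sketch of role-based least privilege, with hypothetical roles and actions:

```python
# Role definitions the business (e.g. HR) would have to take the time
# to write down; IT then grants only these. All names are hypothetical.
ROLE_PERMISSIONS = {
    "hr-employee": {"read_personnel_file", "update_personnel_file"},
    "hr-manager": {"read_personnel_file", "update_personnel_file",
                   "approve_leave"},
}

def can(role: str, action: str) -> bool:
    """Least privilege: deny anything not explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("hr-employee", "approve_leave"))  # False
print(can("hr-manager", "approve_leave"))   # True
```

The convenient shortcut, one broad account type for everyone, is the `ROLE_PERMISSIONS` table collapsed to a single over-privileged entry, and that is what later turns every account into part of an attack path.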