I came across this article this morning: 5 Ways Event Log Management Makes You More Secure. The claim there is that adding five additional tasks will make you more secure:
- Centralized logging makes it possible to review all the logs
- SIEM is French for “locked down”
- Event correlation helps you find bad things
- Search and destroy
- Compliance, covered
I would like to add a few thoughts and some of my own reality to this.
We all agree (and often tell our customers) that monitoring is probably one of the key ways to find targeted attacks. Obviously, monitoring does not prevent an attack from happening, but it lets you find it much earlier. This is true.
My reality shows, however, that most networks are not in a state, from a hygiene perspective, where the noise they generate is at an acceptable level. Often customers buy IPS/IDS only to realize that the number of alerts they get in one day keeps them busy for the rest of the week – just figuring out which alerts they need to look at, let alone really acting on the suspicious ones (I know this is exaggerated, but it comes close to the truth). To me, the problem with event logs today is twofold:
- Even with a reasonable setup, the amount of data generated is huge. As an example: a customer once tried hard to focus on key events only, so they switched off, for example, the successful logon event for their users. While this might initially sound like a smart way to reduce the load, it led them into a difficult situation: one day, somebody was trying to guess a user's password, so the admin saw an unsuccessful logon attempt, another unsuccessful logon attempt, and another, and so on – and then it stopped. Does that mean the attacker gave up, or that the attacker finally got the credentials? While this is a fairly simple and not too sophisticated example, it shows the problem: you need to collect a lot of data and then search for the needle in the haystack. That is a lot of work for very experienced people, and the tools in this space are, to my knowledge, not very sophisticated.
- Event correlation sounds good but is very complex. I agree that event correlation could reveal a lot of possible attacks – if we know what to look for and what to correlate. That is the key problem. To correlate events in a meaningful way, you need to understand the details behind the events of each and every application and then understand what an attack looks like. Even though there are tools for this, the work required to make it happen is significant, and I am unsure of the return.
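The logon-guessing example above also shows how correlation rules depend on which events you keep. As a minimal sketch (assuming a simplified, time-ordered event stream and using the Windows logon event IDs 4624/4625 for illustration), a rule that flags a successful logon following a burst of failures for the same account only works if successful logons are logged at all:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

FAILED, SUCCESS = 4625, 4624       # Windows failed / successful logon event IDs
WINDOW = timedelta(minutes=10)     # sliding window for counting failures
THRESHOLD = 5                      # failures that make a following success suspicious

def suspicious_logons(events):
    """events: iterable of (timestamp, event_id, account), time-ordered.
    Yields (timestamp, account, failure_count) whenever a successful logon
    follows a burst of failures for the same account."""
    failures = defaultdict(deque)
    for ts, event_id, account in events:
        recent = failures[account]
        # drop failures that fell out of the sliding window
        while recent and ts - recent[0] > WINDOW:
            recent.popleft()
        if event_id == FAILED:
            recent.append(ts)
        elif event_id == SUCCESS and len(recent) >= THRESHOLD:
            yield ts, account, len(recent)
            recent.clear()

# six failed attempts in one minute, then a success for the same account
t0 = datetime(2024, 1, 1, 9, 0)
stream = [(t0 + timedelta(seconds=10 * i), FAILED, "alice") for i in range(6)]
stream.append((t0 + timedelta(minutes=2), SUCCESS, "alice"))
hits = list(suspicious_logons(stream))  # one hit: alice, 6 prior failures
```

If the success event is switched off at the source, the rule degenerates into exactly the ambiguous "failures, then silence" situation described above: the same stream without the final 4624 produces no hit, and no hint of whether the attacker succeeded.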
Again, I am not saying that the proposal above does not make sense. It does, but we should not oversimplify the complexity of such an effort or the return we expect. To me, we first need to make sure that we understand the network and the data, do proper threat modelling on them, and understand the security boundaries. Segmenting the network is still one of the key prevention techniques (besides keeping all your software on the latest version). Once we are there, we can understand what we need to monitor and where. Otherwise, such a project is, in my opinion, likely to fail (or to come at a very high cost).