When we look at solutions like SIEMs (Security Information and Event Management), they follow a promising approach: you collect events from different systems and try to correlate them to figure out what is happening and to find anomalies. Actually a good idea. There are a few “howevers”, however. It definitely works if you have a fairly stable and well-defined environment, as the ruleset will remain more or less stable. If not, you run the risk of getting a lot of false positives, and this leads to alert fatigue on the side of your analysts. If you want to avoid that, you need a fairly well-staffed team of engineers to keep up with the rule changes that are needed. HP published a good article on this: 6 ways to screw up a SIEM implementation.
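To make the correlation idea concrete, here is a minimal sketch of the kind of rule a SIEM evaluates (illustrative only, not any vendor's rule language; the threshold and event names are my assumptions): flag a host once it produces several failed logins followed by a success, a classic brute-force indicator.

```python
from collections import defaultdict

# Assumed threshold: how many failed logins before a success looks suspicious.
FAILED_THRESHOLD = 3

def correlate(events):
    """events: iterable of (host, event_type) tuples in time order.

    Returns a list of alert strings for hosts where FAILED_THRESHOLD or
    more failed logins were followed by a successful one.
    """
    failures = defaultdict(int)
    alerts = []
    for host, event_type in events:
        if event_type == "login_failed":
            failures[host] += 1
        elif event_type == "login_success":
            if failures[host] >= FAILED_THRESHOLD:
                alerts.append(f"possible brute force on {host}")
            failures[host] = 0  # a success resets the failure counter
    return alerts
```

Even this toy rule shows the maintenance problem: the moment the environment changes (new lockout policies, new log formats), the threshold and event names have to be re-tuned, or the rule starts producing noise.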
A long time ago at Microsoft we wanted to change the future of the ISA Server and build a bus which would have taken a different route. The assumption was that the real reason behind an event is only known to the vendor of the software firing the event. Therefore, if you consolidate events, you need product knowledge at the central server. If we could add an abstraction layer, that would basically make the whole thing product-agnostic. It was one of those initiatives where the idea behind it was very good, but it required all vendors to play together, integrate into a third-party bus and thus become replaceable (except for the bus 🙂). This obviously did not take off either. So we are back to the SIEM.
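The abstraction-layer idea can be sketched in a few lines (a hypothetical illustration, not the actual design; the field names and adapters are invented): each vendor ships a small adapter that maps its proprietary event into a shared, product-agnostic schema, so the central server needs no per-product knowledge.

```python
# Two imaginary vendors with different proprietary log formats.
def normalize_vendor_a(raw):
    # Vendor A logs look like: {"src": "1.2.3.4", "act": "DENY"}
    return {"source_ip": raw["src"], "action": raw["act"].lower()}

def normalize_vendor_b(raw):
    # Vendor B logs look like: {"client_addr": "1.2.3.4", "blocked": True}
    return {"source_ip": raw["client_addr"],
            "action": "deny" if raw["blocked"] else "permit"}

# The "bus" only knows which adapter to call, nothing about the products.
ADAPTERS = {"vendor_a": normalize_vendor_a, "vendor_b": normalize_vendor_b}

def to_bus(product, raw_event):
    """Translate a vendor event into the common schema."""
    return ADAPTERS[product](raw_event)
```

The catch is exactly the one described above: the scheme only works if every vendor writes and maintains an adapter, which makes the vendors interchangeable while the bus itself is not.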
Here comes the next “however”: as everybody is rushing to implement a SIEM environment, it is rarely questioned whether the approach actually pays off. Is the cost/benefit ratio at an acceptable level? At Swisscom we decided that it is not. We decided to move away from a classical SIEM, as we simply could not afford a big enough team to run it. I am not even talking about the team that then actually handles the events, just the one needed to make sure we find the true positives. We decided to take a different route: make sure that good analysts get the right threat intelligence presented in a fast and efficient way. They can then look at it and make the right decisions. Attackers want to stay below the radar, and therefore we need to make sure they do not understand the details of our radar. At the same time, I do not think you can replace the human (yet) when it comes to anomaly detection. The human brain is still far ahead in this respect!
Let’s see where this leads us.