In my last post we looked at why Zero Trust is not some huge revolutionary vision but simply a reflection of today's reality. The technology is ready; it is ready for you to embark on a journey and start aligning your security architecture and investments with this approach. The biggest change when implementing Zero Trust concerns the perimeter: the network perimeter no longer has the same impact and importance, because the modern perimeter is the identity. Remember, you do not trust anything, not the user, the device, the network, or the application, until they have proven to be trustworthy.
If we agree on the notion of the identity perimeter, what does this mean? From a networking perspective it means that we need to implement a few very important changes:
- Internet breakouts should be as close to the user as possible. This typically guarantees the best performance and latency, and therefore the best user experience. Still, a lot of customers are hesitant and want to route traffic through their central datacenter and then out to the internet or the cloud, because they feel this lets them control the data flow.
- The client network in your offices is a public network. At least technically and policy-wise, it can be treated like the public internet. You might still want to protect it to manage risk, and you might want your users to authenticate because you do not want just anybody on the network, but conceptually you treat it like the public internet, which makes it less complex and reduces cost.
- The biggest change is typically that you need to make your applications accessible from the internet. This makes a lot of people feel uneasy, but if we agree that identity is your perimeter, then the question is: why? Again, it simply reflects reality; nevertheless, it is a huge leap of faith.
What do you need to make this happen? What components are relevant? I guess this picture summarizes it very well:
- As we want to authenticate users, all the information around identity and user authentication is relevant. In other words, you want to understand whether the user's credentials have leaked to the dark web, whether the user showed suspicious behavior beforehand, whether the user's credentials were cached on a (potentially) compromised machine and are therefore at risk, and so on. A lot of intelligence flows into the user risk score, and a lot of collaboration is needed between the different toolsets. For us, this is a close collaboration between Azure Active Directory Identity Protection, Azure Advanced Threat Protection (protecting your on-premises Domain Controllers), Microsoft Cloud App Security, Windows Hello for Business, and Microsoft Defender Advanced Threat Protection.
- You need some kind of policy enforcement engine that can take in all the relevant signals coming from the user, the device, the threat intelligence, and so on, weigh them, and act as defined in your policy based on the risk exposure (in collaboration with your apps, see below). In our case, this is Azure Active Directory Conditional Access.
- If the user's risk is too high, you might want to trigger either an additional verification step (such as an additional factor like the Authenticator app on the mobile phone) or a password reset. This can be done using Azure Active Directory Multi-Factor Authentication and Azure Active Directory Self-Service Password Reset.
- Now you want to mix in signals from the device. You want to understand whether the device is potentially compromised, rooted, or in a compliant state. We collect this information through Microsoft Intune and enrich it with signals from first- and third-party tools such as Microsoft Defender Advanced Threat Protection.
Once there is a session and a user risk score, you decide what you want to do. Do you want to block access, enforce a password reset, limit access, or allow access? This is the job of the policy enforcement engine, Conditional Access, in collaboration with your applications.
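To make the decision logic concrete, here is a minimal sketch of a risk-based access decision. This is an illustration of the general idea, not the actual Conditional Access engine; the risk levels, the device-compliance flag, and the resulting actions are all assumptions for the example.

```python
# Illustrative sketch of a risk-based access decision (NOT the actual
# Conditional Access logic): map user risk and device compliance to an action.

def decide_access(user_risk: str, device_compliant: bool) -> str:
    """Return one of: block, require_mfa, limited_access, allow."""
    if user_risk == "high":
        return "block"                 # too risky, deny the session outright
    if user_risk == "medium":
        if device_compliant:
            return "require_mfa"       # step up with an additional factor
        return "limited_access"        # e.g. browser-only session, no downloads
    # low risk
    if device_compliant:
        return "allow"
    return "require_mfa"               # unmanaged device, verify the user

print(decide_access("medium", device_compliant=True))   # require_mfa
```

The point is that the decision is a function of several signals combined, not a single yes/no check at a network boundary.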
Finally, the applications you want to authenticate to must integrate. We do this natively with Azure AD for thousands of apps that are either SaaS or leverage modern authentication like OpenID Connect or SAML. For legacy applications, we offer Azure AD Application Proxy, which can help ease this pain. If these apps still use legacy authentication protocols, you might want to rethink your strategy: as we assume compromise on your network (remember, you do not trust it anymore), you will not want to run NTLM or LM hashes across this network anyway.
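What modern authentication buys you, compared to NTLM on the wire, is a signed token carrying explicit claims about the user and the intended audience. The sketch below decodes the claims segment of a JWT such as the tokens issued in OpenID Connect flows; the token itself is fabricated for the example, and in real code the signature must be verified before any claim is trusted.

```python
# Illustrative only: inspect the (unverified) claims segment of a JWT,
# the token format used in OpenID Connect. The token here is fabricated.
import base64
import json

def decode_claims(jwt: str) -> dict:
    """Base64url-decode the payload segment of a JWT. No signature check!"""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)    # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fake token in header.payload.signature form for the demo.
claims = {"sub": "user@example.com", "aud": "my-app",
          "iss": "https://login.example.com"}
payload_b64 = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip("=")
token = "eyJhbGciOiJSUzI1NiJ9." + payload_b64 + ".signature"

print(decode_claims(token)["sub"])   # user@example.com
```

Because identity and audience travel inside the token, the app can enforce policy per request regardless of which network the request arrived from.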
Putting it all together technically, this picture reflects this view:
If you add signals and a clear monitoring strategy to it, you will get better security, a higher level of control, and better alerting for your environment, because you reflect today's reality. In particular, you accept the fact that just having your data on-premises within your network perimeter does not necessarily make it more secure. Breaking the network perimeter is typically the easiest part of an attack…
Security Operations will benefit heavily from this as well if set up correctly, and we will look into that next time.