My Primer on Zero-Trust Architecture

The idea of replacing a traditional castle-and-moat security architecture with a Zero-Trust approach to defend the crown jewels sounds a little absurd? Not really. A Zero-Trust Architecture eliminates the inherent trust bestowed on internal networks. Instead, all access to resources is inspected and granted on a least-privilege basis regardless of location. Sounds cool? Read on.

The following contents are based on my interpretation and understanding of the Zero-Trust security model, or Zero-Trust Architecture. I am not an expert in this area; I am just a random guy on the street trying out Zero-Trust.

Castle-and-Moat

This is not an unfamiliar strategy to most. Essentially, the landscape is divided into at least two segments – external and internal (i.e. the castle). What divides them is a moat and a drawbridge that selectively permits traffic to and from the castle.

In the digital sense, the castle refers to your home network, while the external whole wide world is kept separated by your home router (and its firewall) acting as the moat and drawbridge.

The downside of this strategy is that if you let a malicious actor in, or if they are already inside the castle, they can freely access the resources within and take their time picking the various locks protecting treasure chests or rooms while avoiding detection (a.k.a. lateral movement). Therefore, enterprises invest heavily in perimeter defence products like firewalls, data loss prevention, VPN, traffic inspection, etc. To address lateral movement, periodic log reviews are conducted.

So does this strategy still apply to cloud deployments? Yes, but a Zero-Trust Architecture (ZTA) would also be appropriate.

Zero-Trust Architecture

The idea of Zero-Trust was mooted by John Kindervag in a Forrester report back in 2010, and Google shared their successful implementation (i.e. BeyondCorp) in recent years. So ZTA is not exactly a new idea, but implementing it correctly is.

What the zero-trust model does is withhold access to resources until a user, device or even an individual packet has been thoroughly inspected and authenticated. Even then, only the least amount of necessary access is granted. An adage commonly used to describe zero-trust is “never trust, always verify”, an evolution of the old “trust but verify” approach to security.

Zero Trust is based on three (3) core principles:

  1. All resources are accessed in a secure manner regardless of location (i.e. internal, external)
  2. Adopt a least-privileged strategy and strictly enforce access control
  3. Inspect and log all traffic – from any source to any destination

ZTA’s rise in popularity in recent years can also be attributed to advancements in technologies like microsegmentation, step-up multi-factor authentication, machine learning (ML), user and entity behavior analytics (UEBA), unified logging, virtual network functions (VNF), etc. All of these make implementing ZTA a whole lot easier than in the past.

While researching this topic, you might come across similar-sounding terms from various product vendors, like software-defined perimeter (SDP), identity perimeter, etc. They pretty much mean the same thing, just with different approaches.

Zero Trust Architecture in a Nutshell

At its very essence, ZTA is about knowing who is attempting to access what (resource) and whether this (access) is permitted.

So in the context of a client accessing a service/resource provider (e.g. an API), many factors can be considered before granting access, for example:

  • Where is the client accessing from? Within the enterprise network, outside but within the country, or somewhere else in the world?
  • What device are they using? Corporate or personal?
  • Is the device healthy? Has it been patched? Did its posture change since the last time we learnt about it?
  • Has the user accessed this service at this time of day before, and with this client?
  • What was the previous resource accessed by this client?
  • What business transactions were made prior to this?
  • Is the user authorized to access this service?
  • Is the user authorized to access this function?
  • Should we ask for a stronger proof of identity? Step up to 2FA from 1FA?

As you can see, it is no longer a case of simply granting access once identity and authorisation are verified. There is now a strong need to learn more about the request and evaluate it holistically before permitting or denying access.
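
To make this a little more concrete, here is a minimal Python sketch of what gathering a request’s context as signals could look like. The RequestContext fields and signal names are my own illustrative assumptions, not part of any standard or product:

    from dataclasses import dataclass


    @dataclass
    class RequestContext:
        # Hypothetical fields; a real deployment would capture far more signals.
        user_id: str
        device_id: str
        device_is_corporate: bool
        device_patched: bool
        source_network: str  # e.g. "enterprise", "domestic", "foreign"
        auth_level: int      # 1 = single factor, 2 = two-factor
        resource: str        # the service/API being requested


    def collect_signals(ctx: RequestContext) -> dict:
        # Map the raw request context to the kinds of questions listed above.
        return {
            "on_trusted_network": ctx.source_network == "enterprise",
            "corporate_device": ctx.device_is_corporate,
            "device_healthy": ctx.device_patched,
            "strong_auth": ctx.auth_level >= 2,
        }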

With this understanding, I have personally distilled ZTA down to the following four main components:

  1. Centralized log management
  2. Access management service proxy
  3. Resource policies
  4. Trust evaluation engine

Centralized Log Management

Logging and logs have a huge role to play in ZTA.

There is a need for a proper log management solution to capture logs and metrics from web applications, data stores, and various cloud services, all in a continuous, streaming fashion. Beyond ingesting, it should also transform and prepare the data, regardless of format or complexity, for consumption by the trust evaluation engine.
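
As a rough sketch of the “transform and prepare” part, here is a small Python generator that normalizes JSON log lines from different sources into one flat event schema. The field names are assumptions made for illustration; a real pipeline would sit on a proper streaming platform:

    import json
    from datetime import datetime, timezone
    from typing import Iterable, Iterator


    def normalize(raw_lines: Iterable[str]) -> Iterator[dict]:
        # Parse heterogeneous JSON log lines into one flat event schema that
        # the trust evaluation engine can consume.
        for line in raw_lines:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # in practice, route unparsable lines to a dead-letter queue
            yield {
                "timestamp": record.get("time") or datetime.now(timezone.utc).isoformat(),
                "source": record.get("source", "unknown"),
                "user": record.get("user") or record.get("uid"),
                "device": record.get("device_id"),
                "action": record.get("action") or record.get("event"),
            }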

Access management service proxy

Service providers or applications will now need to be Zero-Trust Architecture aware.

These applications could be rearchitected to work with the various ZTA components, or a ZTA-aware proxy can be placed in front of the application to handle incoming requests.

This is easier said than done. There is often a need for a mixture of solutions to address this need.
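
To sketch the proxy option in Python: a thin wrapper that consults a (stubbed) trust evaluation engine before forwarding each request to the application. The header name, trust levels and function names below are all hypothetical:

    def evaluate_trust(request_headers: dict) -> int:
        # Stand-in for a call to the trust evaluation engine.
        return 2 if request_headers.get("X-Device-Managed") == "true" else 1


    def zta_proxy(handle_upstream):
        # Wrap an application handler so every request is scored before it is
        # forwarded, instead of rearchitecting the application itself.
        MINIMUM_TRUST_LEVEL = 2  # in practice, taken from the resource policy

        def wrapper(request_headers: dict, body: bytes):
            if evaluate_trust(request_headers) < MINIMUM_TRUST_LEVEL:
                return 403, b"denied by zero-trust proxy"
            return handle_upstream(request_headers, body)

        return wrapper


    @zta_proxy
    def orders_api(request_headers: dict, body: bytes):
        # The protected application itself does not need to know about ZTA.
        return 200, b"order list"

The same pattern applies whether the enforcement point is a library wrapper like this, a sidecar, or a standalone reverse proxy; what matters is that nothing reaches the application without being evaluated first.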

Resource policies

For the protected applications or resources, basic policy definitions could be put in place to determine the minimum trust level required.

If the request meets the required trust level, it goes through to the resource. Otherwise, it is ignored.
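
A basic policy definition could be as simple as a table mapping each protected resource to its minimum trust level, with a deny-by-default fallback. The paths and levels here are made-up examples:

    # Illustrative policy table: each protected resource declares the minimum
    # trust level a request must reach before it is allowed through.
    RESOURCE_POLICIES = {
        "/payroll/export": {"min_trust_level": 3},
        "/wiki":           {"min_trust_level": 1},
    }


    def is_permitted(resource: str, trust_level: int) -> bool:
        # Deny by default: unknown resources require the highest trust level.
        policy = RESOURCE_POLICIES.get(resource, {"min_trust_level": 3})
        return trust_level >= policy["min_trust_level"]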

Trust evaluation engine

This is where all the magic and complexity is. The logs and metrics get evaluated and a trust score or level is determined.

If a user typically accesses from a personal device in-country but the current request originates from an unknown device in a foreign location, the trust evaluation engine might tag a low trust level to the request. This would in turn result in the request being denied by the access management service proxy and the resource.

Similarly, if a user who typically uses a personal device to access resources on the enterprise network decides to access the same resources from a cafe with that device, the trust evaluation engine might prompt for step-up authentication (e.g. 2FA from 1FA) to tag a high trust level, or tag a medium trust level with the pre-existing authentication.

This evaluation can be based on a weighted metric or a complex machine learning model that continuously evolves based on the real-time dataset generated by the entire organization.
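
As a toy example of the weighted-metric approach, here is a Python sketch that scores the signals gathered earlier and maps the score to an allow, step-up or deny decision. The weights and thresholds are arbitrary assumptions, not recommendations:

    # Made-up weights for the signals collected earlier; a real engine might
    # use UEBA or a machine learning model instead of a fixed weighted sum.
    SIGNAL_WEIGHTS = {
        "on_trusted_network": 0.2,
        "corporate_device": 0.3,
        "device_healthy": 0.3,
        "strong_auth": 0.2,
    }


    def trust_score(signals: dict) -> float:
        # Weighted sum of boolean signals, giving a score between 0 and 1.
        return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))


    def decide(signals: dict) -> str:
        score = trust_score(signals)
        if score >= 0.8:
            return "allow"
        if score >= 0.5:
            return "step-up"  # e.g. prompt for 2FA, then re-evaluate
        return "deny"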

There is no single right way to do this, and no single approach is strictly better than another. It all depends on the resources available, the schedule and the goal.

What does it mean for Cybersecurity?

It is worth noting that when a cyber attack takes place, the point at which the malicious actor enters is usually not where their target files or information reside; it is often simply the weakest point in the network. This is why preventing lateral movement and access across the network is so important.

That is also why extensive logging and review is important for detecting signs of lateral movement in both the traditional and the Zero-Trust security model.

However, ZTA might fare better at mitigating lateral movement across the network, since every request is evaluated.

Various Implementations

There are a few companies and product vendors with solutions covering various parts of a Zero-Trust Architecture.

Some companies/products that I came across or interacted with:

  • Google’s BeyondCorp
  • Zscaler
  • Cyxtera AppGate

The above listing is in no particular order.

Before you get excited about a particular vendor’s product and jump blindly into it, take a step back to consolidate and crystallize your requirements and look at the bigger picture.

Godspeed in implementing the Zero-Trust Architecture.