Q&A with Zero Trust Architecture Writers from NIST
We interviewed Scott Rose and Oliver Borchert from the National Institute of Standards and Technology (NIST) about their publication on Zero Trust Architecture (ZTA), which discusses zero trust principles and how they affect organizations.
We’ve edited the interview and cleaned it up into a Q&A so the reader can jump to the sections that interest them.
Q: What was the catalyst for writing the zero trust architecture document?
Scott: We started thinking about it around 2018 in response to a need federal agencies were echoing. A lot of government agencies, especially ones with branch offices or remote workers such as the Department of the Interior and Fish & Wildlife, wanted to see how they could shift away from hardened perimeters because of how spread out their workers are. They first wanted to see how they could move away from building a trusted infrastructure at every branch office. The Department of the Interior, for example, would need one at every national park, site, and forest; anywhere there was a park ranger, it would need to be secured.
The federal CIO Council, which consists of CIOs and CISOs from various agencies, got together to discuss the BeyondCorp papers, which had been released shortly before. They began holding workshops and discussing ZTA, then reached out to NIST. NIST’s job is to create cybersecurity guidelines and recommendations for these civilian agencies, so we authored the ZTA paper with the goal of creating a conceptual framework for ZTA. It looked to answer:
- What is Zero Trust Architecture?
- What do agencies need to know about ZTA when they start down this path?
- What do they need to have in place?
- What do they need to think about when they start doing procurement and talking to vendors?
- How do you describe what you want [to vendors]?
- How do you mentally map what [a vendor] is offering into what you actually want to do?
The paper gives general deployment models and use cases where zero trust could improve an organization’s overall information technology security posture.
The new reality of federal IT is what’s driving zero trust.
Q: Have you seen a real shift in adoption rate of ZT due to COVID?
Oliver: COVID’s direct impact is that remote work has become the norm. If all of a sudden you have 3,000 people teleworking instead of maybe a hundred, that puts an immense strain on the [VPN and] infrastructure. So everybody is now looking into: “Okay, what has to be changed?”
Scott: I don’t know if the [shift in adoption rate] will slow down. It might slow a little once places start opening back up. But it may not; [organizations] may think, “Hey, this telework stuff is working. Why don’t we just maintain flexibility?” Secondary to a lot of this [telework during the pandemic], one of the other drivers is ransomware. Anytime a large organization gets hit by ransomware, it follows the same script. Somewhere, somebody falls for a phishing email or clicks on a bad link, malware gets downloaded or installed on the machine, there’s some lateral movement, and eventually [bad actors] get the admin privileges they want. Then they start either exfiltrating data or deploying ransomware, and zero trust is supposed to stop that unauthorized lateral movement.
That’s the big thing we’ve never gotten a handle on. The traditional response to ransomware is to have backups and upgrade everything, but that only goes so far. You need to re-engineer the architecture to micro-segment things away. I think that will become a driver as things get going again, because [zero trust] can stop a ransomware attack, or at least limit the blast radius so it only affects a small segment of the network or a small set of resources rather than the most important corporate resources. I think that’s probably the next driver coming, or at least the one that will become the primary focus for a lot of the zero trust work.
Q: Do you have the impression that organizations think ZT is a rip-and-replace process? Or have a slow-adoption mindset?
Scott: On the federal side, rip-and-replace is never an option because of budget and resources; you can’t do that. So we have been seeing these kinds of pilot programs where they’ll take some piece of functionality, say mobile devices, or a new cloud instance where they’re migrating an old on-premises app, and they’ll start there. They’ll say: “Well, we’re doing something new or transitioning this anyway, so here’s our chance to piggyback [zero trust] onto it.”
They’ll do it as part of an upgrade cycle for something, or they’ll look at where zero trust offers the most advantage, say remote workers or branch offices, and start there, since those are the ones that will get the most use out of it.
Q: Does NIST have a ZTA playbook?
Scott: That’s one of the things we’re working on and planning. After this document, the next stage is with the National Cybersecurity Center of Excellence (NCCoE), which is part of NIST. They’re doing a lab demonstration project. Part of the goal there was to get more user experience with [vendor products]. A lot of vendors volunteer to come on site with NIST to build what is basically a mock enterprise, and they run these kinds of test scenarios.
Hopefully that will produce enough material for more than one playbook, because we may need several: one for devices, one for legacy apps, and so on.
Q: How do you two see the evolution of zero trust solutions and their role in the cybersecurity market?
Scott: It’s such a jumble out there. A lot of things in the market right now are agent-based: a device inspection agent, an AV agent, a data agent, and so on. Enterprises are probably hesitant to add yet another agent to all their devices. There’s a choice for everybody; I could be wrong, but I don’t see any one [solution] gaining dominance.
Oliver: The thing is, if one looks closely at zero trust, one notices that we have actually been on the road to zero trust for many years. Zero trust is not one solution, not one product where I flip a switch and now I have zero trust. And if you look at not only the government but also large corporations with industrial IoT, [which] is a completely different game than regular IoT, what we have for zero trust looks a little different. I do not believe there is one one-stop shop for all; there is a fear of vendor lock-in and all these kinds of [barriers]. What one has to consider, and what I have seen since the [ZTA] document came out, is that not only government but also industry is becoming more aware that [zero trust] is something everybody is steering toward, and the field is currently very large.
We have to wait a little and see where the technology goes. Some vendors might over-claim that their product is fully zero trust, while others under-claim, or don’t yet claim to be zero trust at all but are in fact already a nice fit for a zero trust solution.
Moving and changing technologies within the federal government is always a little slower because of budget restrictions and so forth. You also have a lot of old technology floating around that you just cannot replace right away. If you look into the scientific area, many instruments still run on Windows XP or Windows 95. In some instances the vendor might not even exist anymore, but the instrument still works and one needs the data.
So ripping out old equipment and replacing it is, in my personal opinion, a non-starter. And again, it’s a new conversation that needs to happen; other people might disagree with what I just said. One could even say: “We have talked about zero trust for many years; we just called it something different back then.” I believe the idea of zero trust is to look at security from a different angle than just perimeter security. Again, we have to see where it goes.
Q: What are some of the challenges you see enterprises & federal agencies facing when implementing zero trust architecture?
Scott: From what I’ve seen — and Oliver touched on it — the legacy issue.
You’ve got some old stuff that either can’t adapt to new architectures or has to be segmented away with something put in front of it, and then you’ve got more infrastructure.
The other one we’ve seen, or are beginning to see now, largely on the federal side but I’m sure it affects every industry, is auditing and certification. Now that you have these new architectures, how do you prove that they’re better? Or how do you certify that they meet the requirements in the controls that you originally set out?
Especially for agencies and governments we have things like FISMA (Federal Information Security Management Act). The idea of FISMA was that you define a system and you draw boundaries around a wire diagram and say: that is a system. And then you set up a bunch of controls around that system.
Zero trust also takes into account what you’re using a system for, not just what it is. So you can’t just talk about a database; you have to talk about what kind of data is in the database, where in the workflow that database is accessed, who accesses it to write, and who needs it to read. And you set policies around that.
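As a hedged illustration of the kind of policy Scott describes, the sketch below (all names are hypothetical, not from the NIST document) scopes access to a database by the data it holds and by who reads versus writes it in the workflow:

```python
# Illustrative sketch only: a policy scoped to the data and the workflow,
# not just "the database" as a box. Names and fields are invented.
from dataclasses import dataclass, field

@dataclass
class ResourcePolicy:
    resource: str                                # e.g. a database
    data_classification: str                     # what kind of data it holds
    readers: set = field(default_factory=set)    # roles allowed to read
    writers: set = field(default_factory=set)    # roles allowed to write

    def allows(self, role: str, action: str) -> bool:
        """Check a role against the read or write list for this resource."""
        allowed = self.writers if action == "write" else self.readers
        return role in allowed

# Hypothetical payroll database: HR staff read and write, auditors only read.
payroll = ResourcePolicy(
    resource="payroll-db",
    data_classification="PII",
    readers={"hr-staff", "auditor"},
    writers={"hr-staff"},
)
```

The point is that the policy object carries the workflow context (who writes, who only reads, what class of data is inside), so enforcement can be about the access, not just the asset.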
So we’ve got a lot of people in CISO shops that like to come to the auditors and they want to know: How do I certify this?
I’m sure other organizations may have it – the authority to operate (ATO). It’s the official blessing that you’re good. You can start using it because it’s deemed to be secure.
Also, zero trust itself is very dynamic. It changes a lot, you change policies, and the idea is that you’re supposed to adapt to new threats as they emerge. And an ATO is a snapshot in time. So CISOs want to know: how do I maintain that accreditation when I change my system, policies, and upgrade things?
That means technically [the CISO] needs to drag the auditors back in so they can check: “OK, it was version 1.0.1 and now it’s version 1.0.2,” and that’s just a lot of bureaucratic overhead. But enterprises thrive on [the ATO]; that’s what they need. [CISOs] can’t just say: “Yeah, it all works, trust me.”
And so that’s kind of the secondary order [of issues] as we’re rolling out zero trust, because everybody’s saying: OK, now how do I fit that into the current set of rules and regulations that I have to follow in order to actually do business?
So that’s going to be the next set of problems that I kind of see coming down the road.
Q: Do you see policy enforcement points (PEPs) slowly getting merged between coarse-grained centralized access and application-specific policy? Will there be an interplay between application authorization and centralized endpoint authorization?
Scott: Maybe. You can say in an ideal world: yeah, that’d be great.
If not a single source, then at least maybe a single language and system that [the policy] is described in. For applications and endpoints, you could have a central database of, say, JSON objects that define all these different policies, and then applications and endpoints can fetch whatever they need at the time, or the policies can be pushed out on a regular basis. That way you don’t have separate people maintaining separate policies.
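A minimal sketch of that idea, assuming a hypothetical in-memory store (not any real product or a schema NIST specifies): PEPs pull JSON policy objects from one central place instead of each team maintaining its own copy.

```python
# Illustrative only: a central store of JSON policy objects.
# Resource names, fields, and the store itself are invented for the example.
import json

POLICY_STORE = {
    "finance-db": json.dumps({
        "resource": "finance-db",
        "allowed_roles": ["accountant", "auditor"],
        "require_mfa": True,
    }),
}

def fetch_policy(resource: str) -> dict:
    """A PEP fetches the policy it needs at request time (or on a push)."""
    return json.loads(POLICY_STORE[resource])

policy = fetch_policy("finance-db")
```

Because the policies live in one serialized format, the application, endpoint, and network teams are all reading and writing the same objects rather than maintaining parallel rule sets.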
Will that ever happen? I don’t know.
Does it really need to be centralized? Again, I don’t know.
I’ve seen some products and solutions that break everything up, so there isn’t really a central policy engine; instead, each individual PEP has its own little engine attached, and it works something like a hive mind. I don’t judge which is better; it depends on how well it actually works for the organization and what they’re trying to do.
Ideally, it would probably be easier for everybody if there were a centralized, or at least standardized, way of expressing policy. Then the application people, the endpoint people, the access people, and the networking people can at least understand what they’re each trying to do, and understand each other. Then maybe have some way to easily transport these policies, or the requirements of these policies, back and forth.
If that’ll ever happen, I don’t know.
Q: What do you find to be important in authorization decisions?
Scott: Identity is important: any attributes or roles that identity has at the time, which may vary; an identity may not have all the same roles at all times.
We have seen some clever solutions coming along that also take a lot of environmental factors into account, such as local time zone, asking: “What time are these requests coming in?” Sometimes a little bit of network location, other data access policies, and trying to build up a profile.
If you know this user ID on this device, or this user identity in general, has a certain access pattern to a resource, but it changes all of a sudden, that will either raise a red flag or trigger a reauthentication.
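A toy sketch of that red-flag logic, with factors and thresholds that are purely illustrative assumptions (not from the paper): compare a request against the identity’s usual pattern and step up to reauthentication when it deviates.

```python
# Hypothetical example: score a request against a learned profile.
# The factors (hour, network, resource) and thresholds are invented.
def decide(request: dict, profile: dict) -> str:
    """Return 'allow', 'reauthenticate', or 'deny' based on deviations."""
    anomalies = 0
    if request["hour"] not in profile["usual_hours"]:
        anomalies += 1                       # unusual time of day
    if request["network"] != profile["usual_network"]:
        anomalies += 1                       # unusual network location
    if request["resource"] not in profile["usual_resources"]:
        anomalies += 1                       # resource this identity rarely touches
    if anomalies == 0:
        return "allow"
    if anomalies == 1:
        return "reauthenticate"              # red flag: ask for step-up auth
    return "deny"

# Profile built from this identity's historical access pattern.
profile = {"usual_hours": range(8, 18),
           "usual_network": "corp-vpn",
           "usual_resources": {"wiki", "mail"}}
```

A real policy engine would weigh many more signals and do so continuously, but the shape is the same: deviation from the profile triggers a red flag or a reauthentication rather than an immediate grant.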
I think primarily everybody’s focused on identity because it’s one of the hardest ones AND one of the easiest ones, because users do stuff. Devices usually don’t do stuff on their own. So [users] are the ones wanting to do things.
Oliver: I think one of the biggest problems is related to device identity. The printer or the smart light doesn’t have an identity. That’s basically what Scott said. There are some interesting ideas out there that aim to give devices an identity. Devices normally don’t do things on their own, though it can happen, especially if they get hacked and [malware] gets installed or something of that kind.
And then all of a sudden they start changing their predefined behavior. So I think identity is a very important factor. I don’t think everything around identity is solved yet; there’s still some way to go, so we’ll have to see.
Q: In your paper, you give a baseline for what companies should attempt to achieve before trying to implement zero trust. How do you see enterprises achieving that baseline, and are there any tools to help them achieve that?
Scott: There’s a big DHS program in the government for agencies called CDM – Continuous Diagnostics and Mitigation. Their goal is to identify who and what is on the network and what are they doing, that sort of thing. And they actually listed a bunch of tools [for monitoring your network and identifying all the devices] that are part of their project. At the very least, [these tools] have met some of the goals stated in the [DHS] requirements.
You can [also work toward the baseline] with identity management systems, to see who the users and user accounts are, both human and non-human. You go from there knowing it’s not perfect, but at least you know who is actually on the network, their identities, the devices they’re using, and what they’re trying to do, as well as you know your business processes. You can’t hope to have a coherent set of policies until that happens.
So you try to get a firm grasp of what’s on your network and in your infrastructure, and [how it maps to] your business processes. What are [users] trying to do, in what order, and what access do they actually need at any given point in that workflow? Then develop a set of policies from there.
We didn’t actually invent [the baseline]; we just agree with it. There have been other documents on zero trust, such as one from a group called ACT-IAC, and they labeled these the pillars of zero trust: identity, device health, network, analysis, etc. It’s [figuring out] the same things:
- What is the user’s identity?
- What device is being used?
- What is enterprise-owned, and what isn’t?
- Where is the traffic flowing?
A common theme across a lot of the zero trust material is to gain that knowledge: figure out exactly what you’re doing as an enterprise before you start setting policies.
Q: How should people think about the combination of micro-segmentation, software-defined networks, & context-aware proxies?
Scott: I think there will probably be a mix in any kind of enterprise. Different business processes may do different things and take different approaches.
A good approach will have all three. There may be some micro-segmentation, or at least some segmentation, and then some sort of software-defined [networking]. I’d say there will probably be mixing and matching as [organizations] apply things to different existing systems and re-engineer some of them; [organizations] are going to choose one thing over another in different places. But you’ll probably see a mix of everything.
What we’re trying to do at the NCCoE is go along the progression of the low-hanging fruit, what’s easiest. That usually tends to be enhanced identity governance, because it’s just identity at that point; you’re not doing a lot of re-engineering of the architecture or systems, so [organizations] usually try to get [identity governance] done first. Then it’s on to micro-segmentation or software-defined perimeters. If you’re trying to protect a lot of cloud assets, the software-defined stuff seems a little easier to do, while a lot of on-prem environments lean more on segmentation because you can’t do much software-based stuff; with IoT, micro-segmentation may be your only choice. So it’s probably going to be mixing and matching depending on what you’re protecting and what the business process is.
Q: What are you most looking forward to in the field of zero trust?
Scott: One thing I’m interested in, though not very knowledgeable about, is what role AI and machine learning [have in zero trust]: what role they’re going to play and how different enterprises will look when a lot of decisions aren’t being made by administrators anymore.
There may be some kind of AI or some sort of machine learning algorithm that may be able to detect and respond to an attack before a human can look at a log file, for example.
You know, when [a breach] happens, can [AI] actually respond faster, and how many false positives will it trip over before [AI and machine learning in cybersecurity] actually become usable?
There’s also a lot going on with software-defined networking and what’s called intent-based networking (which is just a variation). That’s interesting: being able to change network behavior programmatically so the network looks different for every device, and maybe every user will have a different view of the network based on what they have authenticated to and are authorized for. Then there’s pushing [software-defined networking] out not just to the internet, but to 5G.
NIST has started to look at that: not just faster cell phone reception, but also the fact that a lot of the compute work that used to be done in the cloud on server farms is now getting pushed closer to the edge devices. What does that look like? How much control do you have over a telecom provider’s 5G network? You might actually have a little power there, where you can move stuff into their network to do work. How will that work?
Oliver: There are two things I’m very interested in right now. One is the ongoing demonstration project at the NCCoE, where many vendors are participating in multiple (enterprise) builds. There we explore zero trust, the components in a zero trust environment, and how everything works together, really getting hands-on experience that will lead to some kind of playbook. That’s one very exciting thing currently going on.
The other is in the standards sector, especially regarding IoT devices: how can one bring them into your zero trust sphere? All this stuff like identity, networking, and so on comes into play. So that’s the exciting thing for the foreseeable future.
The field is very much in motion, and exciting things happen pretty much on a weekly basis.