Launched some 20 years ago, the National Institute of Standards and Technology’s information security standard set a go-to benchmark for organizations to secure IT systems and data. Today, NIST’s Special Publication 800-53 endures as not just the foundational controls to which all federal agencies must adhere, but in combination with NIST’s Cybersecurity Framework, as the common lexicon and baseline for information security across industries.

    The introduction of AI into cybersecurity raises many questions about potential new uses and vulnerabilities, the best ways to adopt it, and key considerations for ensuring continued security. These questions, and plenty of others, are all focus areas for NIST as the agency works to provide practical and impactful guidance for organizations looking to integrate AI into their cybersecurity arsenal.

    “At NIST, whenever we look at a new space and when we look at what can we do here, we always start by engaging with the community and talking to the community about how we can help,” said Kat Megas, NIST program manager for cybersecurity, privacy and AI.

    By consulting the user community, Megas got a clear signal about emerging requirements.

    “I asked a lot of CISO community colleagues as I was able to engage — whether it be in roundtables or in different discussions at different conferences — would it be helpful if NIST would do something like use the Cybersecurity Framework, which is a tool you’re all already familiar with, to create this common taxonomy? Would it be helpful for these different references that are out there, whether they be standards or other NIST guidelines, to have those mapped back to this common framework that we all broadly already use?” Megas said. “And the feedback from the community was a resounding yes.”

    Building the blueprint…from the existing blueprint

    It was clear there was no need to start from scratch on NIST guardrails for AI security. Instead, agency leaders are working to develop overlays for NIST 800-53, reviewing the entire catalog to identify and highlight key controls for adoption or adaptation to secure AI systems. The goal is for agencies to leverage the overlays as guidance for implementing AI security.

    Moreover, NIST is looking at the Cybersecurity Framework to help build out a Cyber AI profile that helps agencies recognize the opportunities, risks and impact of AI on their cybersecurity – and to develop strategies accordingly.

    Early on in NIST’s efforts to understand and evaluate the impact of AI, there were three areas that emerged as priorities for addressing risk and impact: cybersecurity of AI systems, AI-enabled cyber defenses and AI-enabled cyberattacks.

    Megas said her engagement with the CISO community revealed a couple of recurring themes in the feedback she was getting. For one, CISOs are highly concerned about AI’s effects on their cybersecurity but struggle to balance their day-to-day demands against digging into specific best practices, plans and strategies. For another, data and discussion around AI and cybersecurity are voluminous, but the lack of a common lexicon further complicates CISOs’ ability to interpret that information and relate it back to their respective cybersecurity strategies.

    “When you think about the Cybersecurity Framework profile for AI, I would think of it as more of a strategy, a planning document,” Megas said. “I talk about CISOs because I often think CISOs look at, how do I allocate my resources? How should I be thinking about integrating and communicating about my cybersecurity strategy?”

    Charting a clear path through AI’s noise and complexity

    The CISO perspective helped clarify needs and better frame the potential solutions.

    “This is where the CSF and the Cyber AI profiles help a lot. It helps you with assessing internally: Is my strategy focused on the right things? Should I be focusing on other things? Do I need to be looking at integrating tools into my portfolio of what I’m doing to manage cybersecurity?” Megas said.

    For NIST, the vision among those working on developing these critical frameworks and guidelines is that they complement each other and provide a familiar path forward as organizations plan for adopting AI for cybersecurity.

    By providing the guidance, use cases and essential considerations – think priorities around trust, risk-mapping and metrics – Megas hopes to provide a playbook of sorts.

    “From a federal agency perspective, I think usage of those overlays coming out of this effort and seeing agencies adopt and use those is paramount,” Megas said. “Something that I anticipate and hopefully would be of use…would be to get feedback from federal agencies on how to evolve it, how we might need to add additional considerations to it after they’ve been using it for a while. It’s also very non-sector specific, so I’d say we’ve been successful if different sectors pick up the profile and adapt it for, let’s say, financial use cases or healthcare use cases. I think those two together would be my measurement, looking back a year from now, of how successful we’ve been.”

    Copyright
    © 2026 Federal News Network. All rights reserved.
