Securing AI: Guidelines and NSA’s AI Security Center

Executive Summary

We are pleased to share positive developments regarding artificial intelligence (AI) security.


Today, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) jointly marked a key AI security milestone with the release of the Guidelines for Secure AI System Development.

The guidelines are the outcome of a collaboration among 23 domestic and international cybersecurity organisations, including organisations from all G7 countries: Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States. Notably, they are the first guidelines of their kind to be agreed upon globally.

On the same topic, the National Security Agency (NSA) has recently introduced the AI Security Center. This new entity is dedicated to overseeing the development and integration of AI capabilities within U.S. national security systems. The centre will establish best practices, evaluation methodologies, and risk frameworks for securely adopting AI.

Together, these initiatives aim to ensure the secure and trustworthy integration of AI technology into critical systems. They serve as noteworthy examples of responsible and secure AI development.

Guidelines for Secure AI System Development

The Guidelines for Secure AI System Development emphasise the importance of adhering to secure-by-design principles and prioritising security outcomes. The guidelines cover all types of AI systems and offer suggestions and mitigations throughout the system development life cycle.

Although AI systems hold immense potential for society, their development must treat security as a priority from the outset. In light of this, the guidelines provide practical considerations and mitigations, offering valuable insights into the AI system development life cycle. They are structured around four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. A brief description of each area follows:

  • Secure Design: Focuses on understanding risks, threat modeling, and considerations for system and model design.
  • Secure Development: Includes guidelines for the development stage, covering supply chain security, documentation, and asset management.
  • Secure Deployment: Addresses protection of infrastructure and models, incident management, and responsible release.
  • Secure Operation and Maintenance: Provides guidelines for the ongoing secure operation of AI systems, including logging, monitoring, and update management (a brief illustrative sketch follows this list).
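
To make the last of these areas a little more concrete, the sketch below shows what structured logging and basic monitoring around a model inference call might look like in Python. It is only an illustrative assumption on our part: the model object, its predict() method, and the logged field names are hypothetical and not prescribed by the guidelines.

    import json
    import logging
    import time
    import uuid

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    logger = logging.getLogger("ai-inference-audit")

    def audited_predict(model, features):
        """Run an inference and emit a structured audit record (illustrative only)."""
        request_id = str(uuid.uuid4())
        started = time.time()
        status = "error"
        try:
            prediction = model.predict(features)  # hypothetical model interface
            status = "ok"
            return prediction
        finally:
            # One JSON log line per request supports monitoring and incident response.
            logger.info(json.dumps({
                "request_id": request_id,                          # correlate events across systems
                "model_version": getattr(model, "version", "unknown"),
                "input_size": len(features),                       # coarse signal for anomaly monitoring
                "status": status,
                "latency_ms": round((time.time() - started) * 1000, 2),
            }))

The point of the sketch is not the specific fields but the habit: every inference leaves an auditable trace that operations teams can monitor and investigate.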

The guidelines also follow a 'secure by default' approach and align closely with the NCSC's Secure development and deployment guidance, NIST's Secure Software Development Framework, and the secure-by-design principles published by CISA.

NSA’s AI Security Center

The National Security Agency (NSA) has established the AI Security Center to oversee the development and integration of AI capabilities within U.S. national security systems. This new entity plays a vital role in shaping the responsible adoption of AI in national security.

The centre collaborates with a range of partners, including industry, national laboratories, academia, the intelligence community, and the Department of Defense.


Its mandate encompasses formulating best practices, assessment methodologies, and risk frameworks to ensure the secure and effective use of AI across the national security enterprise and the defence industrial base.

In summary, the initiative aims to protect AI systems from vulnerabilities, foreign threats, and the theft of innovative capabilities. Its establishment underscores the NSA's commitment to maintaining leadership in AI technology and protecting national security interests.

Closing Comments

Such government-led initiatives play a crucial role in setting security baselines and standards for emerging technologies. These efforts are instrumental for both public and private organisations, helping to ensure responsible and secure AI development.
