AI Risk Management Framework version 1.0

The National Institute of Standards and Technology (NIST) has released version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF). The framework is designed to help organizations design and manage responsible and trustworthy AI, and it contributes both to evolving US policy on AI and to the international debate on AI policy and development. The AI RMF follows in the footsteps of the Cybersecurity Framework released in 2014 and the Privacy Framework released in 2020, taking a similar approach built on core functions, subcategories, and implementation profiles.

AI is a general-purpose technology encompassing a wide range of techniques, data sources, and applications, which makes it uniquely challenging for IT risk management. The AI RMF therefore introduces socio-technical dimensions into the risk management approach, taking in societal dynamics and human behavior across a broad range of outcomes, actors, and stakeholders, and considering both people and the planet.

The AI RMF provides a conceptual roadmap for identifying risks in the AI context and outlines general types and sources of risk relating to AI. It also lists seven key characteristics of trustworthy AI: safe; secure and resilient; explainable and interpretable; privacy-enhanced; fair, with harmful bias managed; accountable and transparent; and valid and reliable.

In addition, the AI RMF provides a set of organizational processes and activities for assessing and managing risk, linking AI’s socio-technical dimensions to stages in the lifecycle of an AI system and to the actors involved. These processes and activities are organized into four core functions, Govern, Map, Measure, and Manage, each of which is further broken down into categories and subcategories.
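
To make that structure concrete, here is a minimal, hypothetical sketch (in Python) of how an organization might track its own AI RMF work. The four core function names come from the framework itself, but the subcategory identifiers, descriptions, and status values below are illustrative placeholders, not the official NIST category text.

```python
from dataclasses import dataclass, field

# Minimal sketch: one record per subcategory an organization chooses to track.
# Identifiers and descriptions are placeholders, not official AI RMF wording.
@dataclass
class Subcategory:
    identifier: str                 # e.g. "GOVERN-1" (placeholder numbering)
    description: str                # short summary of the expected outcome
    status: str = "not started"     # e.g. "not started", "in progress", "complete"
    evidence: list[str] = field(default_factory=list)  # links to policies, test reports, etc.

# Core functions keyed by name, each holding the subcategories being tracked.
ai_rmf_profile: dict[str, list[Subcategory]] = {
    "Govern": [Subcategory("GOVERN-1", "Policies and accountability structures for AI risk are in place")],
    "Map": [Subcategory("MAP-1", "Context, intended purpose, and affected stakeholders are documented")],
    "Measure": [Subcategory("MEASURE-1", "Trustworthiness characteristics are tested and tracked over time")],
    "Manage": [Subcategory("MANAGE-1", "Identified risks are prioritized, treated, and monitored")],
}

# Example: mark one subcategory as in progress and attach supporting evidence.
ai_rmf_profile["Map"][0].status = "in progress"
ai_rmf_profile["Map"][0].evidence.append("docs/system-context.md")
```

An internal profile like this is one way to connect the framework's functions and subcategories to concrete evidence and ownership; the AI RMF itself does not prescribe any particular tooling.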

Unlike the earlier frameworks, the AI RMF does not use implementation tiers and profiles to guide implementation in more detail. However, NIST is also launching a “playbook” that will provide additional suggested actions, references, and documentation for the core functions and subcategories.

The AI RMF is a living document: NIST expects to conduct a full formal review by 2028, which could produce version 2.0. In the meantime, the agency will continue to take comments on the playbook and integrate them semi-annually, potentially issuing versions 1.1 onward.

In conclusion, the AI RMF is a crucial tool for organizations to design and manage trustworthy and responsible AI. It is the product of a highly consultative and iterative process: a voluntary, rights-preserving, non-sector-specific, use-case-agnostic, and adaptable tool for organizations of all types and sizes. The playbook adds suggested actions, references, and documentation for the core functions and subcategories. Overall, the AI RMF adds coherence to evolving US policy on AI and contributes to the ongoing international debate about AI policy and development.

To download the AI RMF, please go to the NIST website.
