Preparing for NCUA and FDIC AI Requirements

Written by Lucas Hathaway | 01 Jul 2024

We've noticed a rising trend among our clients: examiners are bringing up the topic of AI, asking whether AI is incorporated into their systems, what internal AI policies are in place, and how they are assessing and managing AI risk.

To ensure you're prepared during assessments, we've compiled a short list of standards, frameworks, and general AI topics to brush up on so you can stay ahead of examiner expectations.

ISO 42001 - The Standard For AI

ISO/IEC 42001 is the world’s first AI management system standard. It is designed for entities that provide or use AI-based products and services, and it promotes responsible development and use of the technology. Countries around the world look to ISO/IEC 42001 as a cornerstone of safe AI implementation and governance, as it sets out a structured approach to managing the risks and opportunities associated with AI.

The standard includes numerous controls, with some key elements that highlight its focus:

  • Risk Management: Organizations need processes to identify, analyze, evaluate, and monitor risks throughout the entire system lifecycle.
  • Vendor Management: The organization's internal principles and approaches to safe and sound AI must also apply to its vendors.
  • AI Impact Assessment: The AI impact assessment ensures an organization has sufficiently weighed a system’s benefits and costs before implementation (a sketch of what such a record might capture follows this list).
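
As a rough illustration, here's a minimal Python sketch of what such an impact assessment record might capture. The field names, the 1-5 scoring scale, and the go/no-go rule are our own assumptions; ISO/IEC 42001 does not prescribe a specific format.

```python
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    """Minimal AI impact assessment record (illustrative only).

    Field names and the 1-5 scoring scale are assumptions for this
    sketch; ISO/IEC 42001 does not prescribe a specific format.
    """
    system_name: str
    intended_use: str
    benefits: list = field(default_factory=list)         # expected benefits
    harms: list = field(default_factory=list)            # potential costs/harms
    affected_parties: list = field(default_factory=list)
    benefit_score: int = 0   # 1 (low) to 5 (high), assessor's judgment
    harm_score: int = 0      # 1 (low) to 5 (high), assessor's judgment

    def proceed(self) -> bool:
        """Crude go/no-go check: expected benefits must outweigh harms."""
        return self.benefit_score > self.harm_score

# Example: documenting a member-facing chatbot before deployment
assessment = AIImpactAssessment(
    system_name="Member-facing chatbot",
    intended_use="Answer routine account questions",
    benefits=["Faster member service", "Reduced call volume"],
    harms=["Inaccurate answers", "Exposure of member data"],
    affected_parties=["Members", "Call center staff"],
    benefit_score=4,
    harm_score=3,
)
print(assessment.proceed())  # True -> proceed, with mitigations for the harms
```

In practice, a record like this would be paired with documented mitigations for each identified harm, so the assessment shows not just the decision but the reasoning behind it.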

It is a worthwhile initiative for organizations to start researching and implementing the standard now, as it demonstrates to examiners a commitment to quality, ethical practices, and adherence to industry-recognized benchmarks.

NIST AI Risk Management Framework

Developed through extensive industry collaboration, the Artificial Intelligence Risk Management Framework (AI RMF), published by the U.S. National Institute of Standards and Technology (NIST), helps organizations identify the unique risks posed by AI and suggests risk management steps that align with their business goals.

NIST's framework is divided into two parts. The first part focuses on planning, helping organizations analyze AI risks and benefits to identify and define a trustworthy AI system. The second part, described as the “core” of the framework, provides actionable guidance through four functions: Govern, Map, Measure, and Manage. "Govern" aims to create a culture of risk management. "Map" identifies and contextualizes risks. "Measure" analyzes and tracks these risks. "Manage" prioritizes risks and applies appropriate mitigation measures.
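
To make the four functions concrete, the sketch below traces one hypothetical AI risk through a simple register keyed by each function. The class, fields, and status notes are our own shorthand for illustration; the AI RMF itself does not define a data format.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Function(Enum):
    # The four core functions of the NIST AI RMF
    GOVERN = auto()
    MAP = auto()
    MEASURE = auto()
    MANAGE = auto()

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register, tracked per function."""
    description: str
    status: dict = field(default_factory=lambda: {f: "not started" for f in Function})

    def update(self, function: Function, note: str) -> None:
        """Record what has been done for this risk under a given function."""
        self.status[function] = note

# Example: walking one hypothetical risk through all four functions
risk = AIRisk("Chatbot could leak member account data")
risk.update(Function.GOVERN, "Board-approved AI use policy in place")
risk.update(Function.MAP, "Risk tied to the member-facing chatbot context")
risk.update(Function.MEASURE, "Monthly transcript review tracks leakage incidents")
risk.update(Function.MANAGE, "Data-loss filter deployed; residual risk accepted")

for fn, note in risk.status.items():
    print(f"{fn.name:>7}: {note}")
```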

The AI RMF, much like NIST's earlier Cybersecurity Framework, is being introduced early in a rapidly evolving field. That timing gives it a strong chance to shape how organizations understand and implement trustworthy AI practices, making it an influential framework to study.

AI Vendor Management

Artificial intelligence (AI) is swiftly becoming a part of daily operations, bringing both technological advances and potential risks. One significant risk is the opportunity for malicious actors to identify and exploit vulnerabilities, as highlighted in the recent Request for Information (RFI) on AI in financial services issued by the U.S. Treasury. Even if your organization doesn't create or work with AI directly, it is crucial to examine how your vendors are using it, so incorporating questions about AI into your vendor risk questionnaire is essential to stay ahead of examiner inquiries.

Some questions that are worth adding to your vendor risk questionnaire (see the triage sketch after this list) include:

  • Are you currently using or planning to implement AI in any of your products, services, or operations?
  • Does that AI store or have access to sensitive data?
  • What internal policies govern your use of AI?
  • Do you require employee training on the proper use of AI?
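
If you capture these answers in a system of record, a simple triage rule can flag which vendors warrant enhanced due diligence. A minimal sketch follows; the answer keys and the flagging rule are illustrative assumptions, not an industry standard.

```python
# Hypothetical answer keys for the questionnaire above; "yes"/"no" strings
# keep the sketch close to how questionnaire exports usually look.
def needs_deeper_review(answers: dict) -> bool:
    """Flag a vendor for enhanced due diligence (illustrative rule only)."""
    uses_ai = answers.get("uses_ai") == "yes"
    sensitive = answers.get("ai_touches_sensitive_data") == "yes"
    has_policy = answers.get("has_ai_policy") == "yes"
    has_training = answers.get("requires_ai_training") == "yes"
    # Any AI use touching sensitive data, or AI use without documented
    # policies or training, warrants a closer look before the next exam.
    return uses_ai and (sensitive or not has_policy or not has_training)

# Example vendor response
vendor = {
    "uses_ai": "yes",
    "ai_touches_sensitive_data": "no",
    "has_ai_policy": "yes",
    "requires_ai_training": "no",
}
print(needs_deeper_review(vendor))  # True -> schedule enhanced due diligence
```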

Beyond confirming robust security protocols and compliance with relevant regulations, examiners want to see that organizations are proactively mitigating their exposure to third-party AI risks. In practice, that means conducting more thorough vendor risk assessments.

Rivial’s AI Risk Assessment

To manage AI-related risks effectively, conducting a comprehensive risk assessment is essential. Rivial's risk assessment solution helps ensure that your AI risk exposure stays within acceptable limits, protecting your organization and supporting compliance with regulatory requirements.


Whether you're a bank, credit union, or any other organization in a highly regulated industry, Rivial’s risk management solutions enable you to swiftly and accurately assess your AI risks. Schedule a demo to learn more about how we can help you stay ahead in managing AI-related risks!