We've noticed a rising trend among our clients: examiners are bringing up the topic of AI, asking whether AI is incorporated into their systems, what internal AI policies are in place, and how the risks of AI are being assessed and managed.
To help you prepare for assessments, we've compiled the standards, frameworks, and general AI topics worth brushing up on so you can stay ahead of examiner expectations.
ISO/IEC 42001 is the world’s first AI management system standard. It is designed for entities providing or using AI-based products and services, and it supports responsible development and use of the technology. Countries around the world look to ISO/IEC 42001 as a cornerstone of safe AI implementation and governance, as it sets out a structured approach to managing the risks and opportunities associated with AI.
The standard includes numerous controls, covering key areas such as AI policy, impact assessment, data management, oversight of the AI system life cycle, and third-party relationships.
Researching and implementing the standard is a worthwhile initiative for organizations, as it demonstrates to examiners a commitment to quality, ethical practices, and adherence to industry-recognized benchmarks.
Developed through extensive industry collaboration, the U.S. National Institute of Standards and Technology (NIST) published the Artificial Intelligence Risk Management Framework (AI RMF) to help organizations identify the unique risks posed by AI and to suggest risk management steps that align with their business goals.
NIST's framework is divided into two parts. The first part focuses on planning, helping organizations analyze AI risks and benefits and define what a trustworthy AI system looks like. The second part, described as the “core” of the framework, provides actionable guidance through four functions: govern, map, measure, and manage. “Govern” creates a culture of risk management. “Map” identifies and contextualizes risks. “Measure” analyzes and tracks those risks. “Manage” prioritizes risks and applies appropriate mitigation measures.
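To make the four functions more concrete, here is a minimal sketch in Python of a single AI risk moving through them, with govern represented as an accountable owner and an organization-set threshold. The structure, field names, and figures are our own illustrative assumptions, not anything the AI RMF prescribes.

```python
from dataclasses import dataclass, field

# Illustrative only: the fields and scoring scheme below are our own
# assumptions, not defined by the NIST AI RMF.

@dataclass
class AIRisk:
    description: str          # Map: identify and contextualize the risk
    context: str              # Map: where the AI system is used
    owner: str                # Govern: accountable role for this risk
    likelihood: float = 0.0   # Measure: estimated annual probability (0-1)
    impact: float = 0.0       # Measure: estimated dollar impact
    mitigations: list[str] = field(default_factory=list)  # Manage

    def score(self) -> float:
        # Measure: a simple expected-loss estimate
        return self.likelihood * self.impact

# Map: identify a risk in its business context
risk = AIRisk(
    description="Chatbot discloses nonpublic member data",
    context="AI-powered member service chatbot",
    owner="Chief Risk Officer",
)

# Measure: analyze and track the risk
risk.likelihood = 0.10
risk.impact = 250_000

# Manage: prioritize and apply mitigations above a governance-set threshold
if risk.score() > 10_000:  # threshold is an assumed example value
    risk.mitigations.append("Output filtering for sensitive data")
    risk.mitigations.append("Human review of escalated conversations")

print(f"{risk.description}: expected loss ${risk.score():,.0f}")
```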
The AI RMF, much like NIST's earlier Cybersecurity Framework, is being introduced early in a rapidly evolving field. This timing gives it a strong chance to shape how organizations understand and implement trustworthy AI practices, making it an influential framework to study.
Artificial intelligence (AI) is swiftly becoming a part of our daily operations, bringing both technological advancements and potential risks. One significant risk is the potential for malicious actors to identify and exploit vulnerabilities, as highlighted in the recent Request for Information (RFI) issued by the U.S. Treasury. Even if your organization doesn't create or work with AI directly, it is crucial to examine how your vendors are using it. Incorporating questions about AI into your vendor risk questionnaire is therefore essential to stay ahead of examiner inquiries.
Some questions worth adding to your vendor risk questionnaires include whether the vendor incorporates AI into its products or services, whether your data is used to train its models, how AI outputs are validated, and how AI-related incidents are detected and disclosed.
Beyond maintaining robust security protocols and complying with relevant regulations, examiners want to see that organizations are proactively mitigating their exposure to third-party AI risks. As a result, organizations will need to conduct more thorough vendor risk assessments.
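As a sketch of how questions like those above could feed a vendor risk rating, the snippet below weights and scores hypothetical questionnaire answers. The questions, weights, and escalation threshold are illustrative assumptions, not a prescribed checklist.

```python
# Illustrative vendor AI questionnaire scoring; the questions, weights,
# and review threshold are assumptions, not a prescribed checklist.

AI_QUESTIONS = {
    "Does your product or service incorporate AI or machine learning?": 3,
    "Is our data used to train or fine-tune your models?": 3,
    "Do you have a documented AI governance policy?": 2,
    "How are AI outputs validated before reaching customers?": 2,
    "How are AI-related incidents detected and reported to clients?": 2,
}

def vendor_ai_risk(answers: dict[str, bool]) -> int:
    """Sum the weights of questions whose answers raise a concern.

    `answers` maps each question to True when the response is concerning
    (e.g., AI is used but no governance policy exists).
    """
    return sum(weight for q, weight in AI_QUESTIONS.items() if answers.get(q))

answers = {
    "Does your product or service incorporate AI or machine learning?": True,
    "Is our data used to train or fine-tune your models?": True,
    "Do you have a documented AI governance policy?": True,  # concern: none exists
}

score = vendor_ai_risk(answers)
print(f"Vendor AI risk score: {score}")
if score >= 5:  # assumed escalation threshold
    print("Escalate for enhanced due diligence")
```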
Managing AI-related risks effectively starts with a comprehensive risk assessment. Rivial's risk assessment solution helps ensure that your AI risk exposure stays within acceptable limits, protecting your organization and keeping you compliant with regulatory requirements.
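To illustrate what “exposure within acceptable limits” can mean in quantitative terms, here is a generic Monte Carlo sketch that compares simulated annual losses against a risk appetite. The scenarios, frequencies, loss ranges, and appetite figure are made-up assumptions for illustration, not Rivial's methodology.

```python
import random

# Generic Monte Carlo sketch of annualized AI risk exposure; the
# frequencies, loss ranges, and appetite are made-up assumptions.

random.seed(42)

SCENARIOS = [
    # (description, annual event frequency, min loss $, max loss $)
    ("AI model produces biased lending decisions", 0.2, 50_000, 500_000),
    ("Vendor AI service leaks customer data",      0.1, 100_000, 1_000_000),
]

RISK_APPETITE = 150_000  # board-approved annual loss tolerance (assumed)
TRIALS = 100_000

def simulate_annual_loss() -> float:
    """Simulate one year: each scenario may occur, with a random loss."""
    total = 0.0
    for _, freq, lo, hi in SCENARIOS:
        if random.random() < freq:
            total += random.uniform(lo, hi)
    return total

losses = [simulate_annual_loss() for _ in range(TRIALS)]
expected = sum(losses) / TRIALS
print(f"Expected annual loss: ${expected:,.0f}")
print(f"Within appetite: {expected <= RISK_APPETITE}")
```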
Whether you're a bank, credit union, or any other organization in a highly regulated industry, Rivial’s risk management solutions enable you to swiftly and accurately assess your AI risks. Schedule a demo to learn more about how we can help you stay ahead in managing AI-related risks!