AI Governance
Aug 20, 2024

How to Effectively Apply Third-Party Risk Management Principles to Generative AI

Financial services are racing to integrate GenAI. But with new regulations and growing complexity, how can organizations effectively manage AI risks and ensure transparency?

It's difficult to find a financial services organization that's not engaging with (or actively debating) generative artificial intelligence (AI). With masters of the universe such as Jamie Dimon, CEO of JPMorgan Chase, making visionary proclamations ("AI could be as transformative as electricity or the internet"), pressure continues to build in boardrooms and senior leadership ranks, with questions and demands on how to effectively use AI to transform operations.

On the technology front, early winners are consolidating, using their infrastructure footholds and the mass exposure of their consumer products to get ahead. Google, Amazon Web Services (AWS), and Microsoft (in partnership with OpenAI) have already established a presence in financial services through their cloud integrations. Meanwhile, Meta and Apple are leveraging their widespread personal devices and consumer apps to make inroads into the financial services sector.

This consolidation leaves financial services organizations with a narrow view of how and where they should focus their risk management efforts.

Navigating regulatory and risk management challenges

In June 2023, the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) jointly published interagency guidance on risk management for third-party relationships. Although the agencies acknowledged requests for more specific AI guidance, they chose a broad, principles-based approach.

As a result, risk managers are closely examining and dissecting this guidance as they work to implement AI across their organizations' internal and, eventually, customer-facing use cases. The components under closest examination cover the unique vendor landscape, existing vendor entrenchment, and the impact on financial services' strategic risks:

  • Addressing the need for 'independent testing and objective reporting of results and findings', especially in a field where the required machine learning expertise is scarce, both internally and across vendor partners
  • Choosing an acceptable 'conformity assessment or certification by independent third parties' in a space where financial services regulators have published little specific guidance, and broader government bodies and trade associations are only recently beginning to publish and battle-test principles and risk management control recommendations
  • Effectively implementing contract provisions that allow for 'periodic, independent audits of the third party and its relevant subcontractors, consistent with the risk and complexity of the third-party relationship', given that the large technology partners hold vast internal and political power, and the black box nature of AI naturally inhibits efforts for explainability, demonstrated evidence, and traceability
  • Proactively inserting 'ongoing monitoring and independent reviews' into the technology lifecycle to address changing risk or material issues, particularly as these risks evolve while AI models learn and mature
  • Establishing a process to provide evidence of risks to the vendor to facilitate remediation of identified issues or course correct the operational processes
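The guidance components above can be operationalized as a simple assessment record. The following is a minimal sketch, not anything prescribed by the interagency guidance itself: the theme names, status values, and `VendorAIAssessment` structure are all illustrative assumptions about how a risk team might track coverage per vendor.

```python
from dataclasses import dataclass, field

# Hypothetical mapping of the five interagency guidance themes discussed above.
# Names and statuses are illustrative, not prescribed by the guidance.
GUIDANCE_THEMES = [
    "independent_testing",    # independent testing and objective reporting
    "conformity_assessment",  # third-party conformity assessment or certification
    "audit_provisions",       # contractual rights to periodic independent audits
    "ongoing_monitoring",     # ongoing monitoring and independent reviews
    "remediation_process",    # evidence-based remediation with the vendor
]

@dataclass
class VendorAIAssessment:
    vendor: str
    # Each theme maps to a status: "satisfied", "partial", or "gap"
    themes: dict = field(default_factory=dict)

    def open_gaps(self):
        """Return the guidance themes not yet fully satisfied for this vendor."""
        return sorted(t for t in GUIDANCE_THEMES
                      if self.themes.get(t, "gap") != "satisfied")

# Example: a vendor with testing in place but audit rights still under negotiation
assessment = VendorAIAssessment(
    vendor="ExampleCloudLLM",  # hypothetical vendor name
    themes={"independent_testing": "satisfied",
            "audit_provisions": "partial"},
)
print(assessment.open_gaps())
```

A record like this makes gaps explicit per relationship, so risk managers can prioritize which guidance themes still need contractual or operational work.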

Emerging guidance on AI-specific cybersecurity risks

More recently, in March 2024, the U.S. Department of the Treasury released guidance to the financial services industry focused on cybersecurity risks, a priority for U.S. regulators, senior management, and board members of these organizations.

'Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector' tackles a number of third-party risk management concerns, in particular how to decipher 'explainability for black box AI solutions'. The Treasury acknowledges this lack of explainability, specifically around safety, privacy, and consumer protection concerns, and points to the research and development community as an avenue for answers.

And while the Treasury alludes to frameworks to support longer-term assessment, it's clear that a mix of experience, talent, research, independence, and evidence will be part of the solution. That mix is already producing breakthrough AI research each day that's highly useful for financial services organizations.

AI researchers at Dynamo AI divide explainability into four assessable components, with risk-mitigating controls alongside each:  

  1. Transparency: Making the development and training process of AI systems clear and understandable to users, developers, and stakeholders
  2. Interpretability: Enabling people to comprehend how an AI system arrives at its decisions or predictions, and what factors influence its outputs
  3. Accountability: Ensuring the processes used to train and deploy AI systems can be monitored and held accountable for their decisions and actions, especially under the lens of user privacy, fairness, and bias for critical domains, such as healthcare, finance, and law
  4. Trust: Building trust between users and the outputs of AI systems in terms of safety, truthfulness, and helpfulness
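The four components above lend themselves to a scorecard-style review. The sketch below is illustrative only: the component names follow the list above, but the 0-2 evidence scale and threshold are assumptions about how an assessor might flag weak areas.

```python
# Illustrative sketch: turning the four explainability components into an
# assessable scorecard. The 0-2 scale (0 = no evidence, 1 = partial,
# 2 = demonstrated) and the threshold are assumptions, not a standard.
EXPLAINABILITY_COMPONENTS = ("transparency", "interpretability",
                             "accountability", "trust")

def explainability_gaps(scores, threshold=2):
    """Flag components scoring below `threshold` on the 0-2 evidence scale."""
    unknown = set(scores) - set(EXPLAINABILITY_COMPONENTS)
    if unknown:
        raise ValueError(f"unknown components: {unknown}")
    return [c for c in EXPLAINABILITY_COMPONENTS
            if scores.get(c, 0) < threshold]

# Example: a model with documented training data but opaque decision-making
print(explainability_gaps({"transparency": 2, "interpretability": 0,
                           "accountability": 1, "trust": 1}))
```

Scoring each component separately keeps the discussion concrete: a vendor might score well on transparency (published training documentation) while still failing on interpretability, which would otherwise be masked by a single aggregate "explainability" rating.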

Overcoming barriers to effective AI oversight

Each of these four components is crucial to deploying AI safely and effectively. However, there are significant roadblocks to achieving each in practice. Companies have become increasingly secretive about their model development processes, infamously shifting toward closed-source, black-box models starting in 2021. This not only makes AI transparency extremely challenging, if not impossible, to achieve, but also calls into question interpretability, accountability, and trust.

Where does that leave financial services in terms of effective independent oversight of AI? Dynamo AI identifies several key strategies emerging as organizations seek to balance competition, innovation, and risk management:

  • Continuously evaluate your AI technology stack and partners, assessing for areas of risk. This is particularly relevant when determining whether a technology vendor also provides risk management metrics or controls on the AI being deployed.
  • Incorporate methods to intake and assess new research on AI, its risks, and control methods. This may involve in-house initiatives within risk and audit divisions, as well as ensuring vendors have dedicated research teams or capabilities to review and adapt to the latest AI deployment and risk management strategies.
  • Establish effective controls for AI as part of the pre-deployment, post-deployment, and ongoing monitoring strategy. Controls implemented pre-deployment may need to feed back into any established AI governance function, as the risks identified (or mitigated) during this phase may inform the overall risk profile of the use case and its impact. Ongoing monitoring should be constant and embedded in the AI stack, with clear outputs accessible across a variety of technical and risk management skill sets.
  • Require specific control reports and expectations from vendors to help inform stakeholders about expected AI testing and validation. Aligning this with your process, risk, and control self-assessment is a recommended way to bring together the collective set of stakeholders who all require knowledge of AI risk management.
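To make the "constant, embedded" ongoing-monitoring point above concrete, here is a minimal sketch of a post-deployment control. Everything specific in it is an assumption: the metric (fraction of model outputs flagged by an upstream risk control), the rolling window size, and the escalation threshold would all be set by the organization's own governance function.

```python
import statistics

# Hypothetical sketch of an embedded post-deployment control: track a
# rolling rate of flagged model outputs and escalate to the governance
# function when it drifts past a threshold. Metric, window size, and
# threshold are illustrative assumptions, not prescribed values.
class OngoingMonitor:
    def __init__(self, window=100, alert_threshold=0.05):
        self.window = window
        self.alert_threshold = alert_threshold
        self.flags = []  # 1 if an output was flagged by a risk control, else 0

    def record(self, flagged: bool):
        """Record one model output's risk-control result."""
        self.flags.append(1 if flagged else 0)
        self.flags = self.flags[-self.window:]  # keep only the rolling window

    def flagged_rate(self) -> float:
        return statistics.fmean(self.flags) if self.flags else 0.0

    def needs_escalation(self) -> bool:
        # Escalate only once the window is full, so a single early flag
        # does not trigger a governance alert
        return (len(self.flags) == self.window
                and self.flagged_rate() > self.alert_threshold)
```

A control like this produces exactly the kind of clear, continuous output the strategy calls for: a single rate that technical teams can tune and risk teams can read, with a defined escalation path into the governance function.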

Dynamo AI provides financial services organizations with the tools needed to assess and demonstrate AI compliance. Schedule your free demo.