Artificial Intelligence – Is your compliance program (getting) ready?
Chances are AI – extractive or generative – is already being used or built in your organization, or a third party you work with leverages it in products you use. While guidelines and guardrails are not yet sufficiently or consistently defined across global supervisory regimes, compliance organizations can demonstrate proactive risk management by integrating AI into their institution's governance framework.
Supervisors are focused on AI. The European Union adopted the Artificial Intelligence Act earlier this year, with implementation dates ranging from six to 36 months after the Act enters into force. The EU Act classifies AI systems by risk level and generally places the highest regulatory burden on the owners and developers of those systems. In June 2024, Secretary of the Treasury Janet Yellen delivered remarks at the Financial Stability Oversight Council (FSOC) Conference on Artificial Intelligence and Financial Stability, alerting the industry that AI, given both the opportunity and the significant risk it poses, is rising to the top of the agenda for the Treasury Department and FSOC. Following the conference, Treasury issued a formal Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector. Written comments are due by August 12, 2024.
As Milton Friedman, economist and Nobel laureate, said, “When everybody owns something, nobody owns it, and nobody has a direct interest in maintaining or improving its condition.” To reduce cost, complexity, and administrative burden, cross-functional collaboration is key. Compliance professionals should anticipate that the use, creation, governance, and oversight of AI will affect numerous functions across the organization. Identifying key stakeholders, establishing their responsibilities, and leveraging your existing governance framework will help simplify a complex undertaking.
How to get off the ground and avoid over-engineering and over-spending:
Educate – Compliance, risk, legal, technology, and business partners will need a baseline understanding of what AI is (what it’s not) and why supervisors are interested in it.
Integrate – Identify which existing board and management committees have a nexus to AI and partner cross-functionally to suitably allocate responsibility.
Update – While creating a standalone AI policy is one approach, consider cataloging relevant corporate policies and identifying opportunities to plug into them with minimal disruption, e.g., Standard of Conduct, Records Retention, Privacy, Data Governance, Information Security, Third-Party Risk, etc.
Assess – Conduct a compliance and risk talent review. Does your organization need to build new capabilities to effectively oversee AI and its use cases?
Build – Evaluate your organization’s maturity against available guidance from key supervisors and build a roadmap that fosters transparent discussion with those supervisors about where you are and where you’re going.
Proactive adoption of principles and standards that safeguard against the misuse or unethical use of data through generative AI demonstrates thoughtful oversight and management of risk. Compliance officers are uniquely positioned to help their organizations build a foundation from which they can grow and adapt as regulations come into force and best practices are shaped.