
A wise uncle once said to his nephew in a movie, “With great power comes great responsibility.” That adage has never been truer than it is now, with the emergence and popularity of AI.

However, organizations are starting to reflect on and question who ultimately bears this responsibility from an enterprise risk management perspective. Is it:

  • The Chief Risk Officer (CRO) and Enterprise Risk Management (ERM)?
  • The Chief Audit Executive (CAE) and Internal Audit (IA)?
  • The Chief Legal Officer (CLO) and Legal?
  • A combination of leaders and groups, including the Chief Information Officer (CIO) and Chief Information Security Officer (CISO)?
  • Or even a newer position such as the Chief Artificial Intelligence Officer (CAIO)?

All organizations and risk professionals will most likely reach this impasse at some point and will have to face, head-on, the question of who is responsible, along with the equally important question of how to identify, understand and address the related risks of AI.

In some cases, organizations will tackle the questions above simultaneously or interchangeably. Unfortunately, some have already started down this journey and have experienced the pain points, challenges and risks of AI firsthand. The case of Samsung employees using ChatGPT in an attempt to improve how they do their jobs is one example. In two reported incidents, Samsung programmers uploaded source code to ChatGPT so it could review and optimize the code. In a third, another Samsung employee fed ChatGPT an internal meeting recording to create a presentation. At the time, Samsung allowed these employees to use ChatGPT without considering the potential risks, likely viewing it as an innovative way to empower employees in their job duties. As a result, however, confidential and proprietary company information was shared with ChatGPT and its developer, OpenAI. Furthermore, Samsung appears to have no recourse to remove the information, which is now part of ChatGPT’s training data and can potentially be accessed by both OpenAI and anyone using the software.

ChatGPT and OpenAI

The popularity of ChatGPT is unprecedented, especially in how it puts AI in the hands of the everyday user. It was released in November 2022 and gained over one million users within its first five days. It grew to over 100 million users by January 2023, becoming the fastest-adopted software platform in history. By contrast, some of the other most popular software and social media platforms took months or years to reach the same adoption milestones. OpenAI is the San Francisco-based AI research lab behind both the popular DALL-E text-to-image generator and the ChatGPT AI chatbot. “GPT” stands for generative pre-trained transformer, a type of large language model (LLM) built on deep neural networks. What differentiates ChatGPT from other similar AI chatbots is that it was trained using both supervised learning and reinforcement learning from human feedback (RLHF), leveraging various sources of web content.
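
To make RLHF slightly more concrete, the sketch below illustrates the core idea behind its reward-model step: human labelers rank pairs of responses, and a small model is trained to score the preferred response higher than the rejected one. This is a toy numpy illustration with made-up feature vectors, not OpenAI’s actual training pipeline.

```python
# Toy illustration of the reward-model step in RLHF: nudge a model to score
# the human-preferred ("chosen") response higher than the "rejected" one
# using a pairwise logistic (Bradley-Terry) loss. The feature vectors are
# random stand-ins for real response embeddings; this is not OpenAI's
# actual training pipeline.
import numpy as np

rng = np.random.default_rng(0)
chosen = rng.normal(size=(8, 4))    # hypothetical features of preferred responses
rejected = rng.normal(size=(8, 4))  # hypothetical features of rejected responses

w = np.zeros(4)   # parameters of a tiny linear "reward model"
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(200):
    gap = chosen @ w - rejected @ w              # reward(chosen) - reward(rejected)
    # Gradient of the loss -log(sigmoid(gap)), averaged over preference pairs
    grad = -((1.0 - sigmoid(gap))[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

print("mean reward gap after training:", float((chosen @ w - rejected @ w).mean()))
```

The chat model is then fine-tuned to maximize this learned reward signal, which is what steers ChatGPT toward responses that human labelers rate as helpful.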

With over 100 million users and more joining each day, organizations must start to consider their potential risk exposure if employees use ChatGPT or other similar AI platforms. They need to consider not only employees using these platforms for work purposes, but also what information employees might be sharing through personal use that could present a risk to the organization. Uploading meeting notes to create a presentation might seem harmless at first, but even knowing who attended those meetings could pose a risk to the organization if anyone who uses ChatGPT can surface that information.

Users of ChatGPT have also found ways to put it directly to malicious use. Drafting phishing emails, creating malware for low-cost, efficient cyber-attacks, and collecting intelligence on users and organizations to support virtual or even physical attacks are a few examples of the malicious behavior seen so far. What users and organizations also might not realize is that ChatGPT and OpenAI may be learning more about the users than the users are learning about the capabilities of the software. The more questions and information users feed into ChatGPT, the more data the tool gathers. That information is then stored on OpenAI’s servers and cycled through the platform’s training models, available for anyone to leverage.
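
Mechanically, that flow is easy to see: every prompt an employee types or pastes is transmitted in full to the provider’s servers. Below is a minimal sketch using OpenAI’s public chat-completions API; the meeting-notes string and the summarization request are invented for illustration.

```python
# Minimal sketch of what a ChatGPT-style API call actually sends.
# Assumes an OPENAI_API_KEY environment variable; the meeting notes are
# a made-up example of the kind of text an employee might paste in.
import os
import requests

meeting_notes = "Q3 roadmap review. Attendees: <names>. Key decisions: <details>."

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user",
             "content": f"Turn these notes into a presentation outline:\n{meeting_notes}"},
        ],
    },
    timeout=30,
)

# The full contents of meeting_notes have now left the organization and sit
# on the provider's servers, subject to its retention and training policies.
print(response.json()["choices"][0]["message"]["content"])
```

From a risk perspective, the key point is that once the request is sent, the organization depends entirely on the provider’s policies for what happens to that text.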

AI Impact and Concerns

The questions of who is ultimately responsible and what exactly they are responsible for have come to the forefront recently among leaders in technology.

An open letter was published on March 29, 2023, by the Future of Life Institute, an organization with the mission of “steering transformative technology towards benefitting life and away from extreme large-scale risks.” The letter, which currently has nearly 30,000 signatures, calls on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. Signatories include Twitter CEO Elon Musk and Apple co-founder Steve Wozniak. Additionally, on March 30, 2023, the nonprofit research group Center for AI and Digital Policy (CAIDP) filed a 46-page complaint with the Federal Trade Commission (FTC) urging the agency to investigate OpenAI and suspend its commercial deployment of large language models such as ChatGPT. In the complaint, CAIDP accuses OpenAI of violating the section of the FTC Act that prohibits unfair and deceptive business practices, as well as the agency’s guidance for AI products.

Regulatory Responses and Impact on Risk and Risk Management

From a regulatory perspective, the Blueprint for an AI Bill of Rights, published in October 2022 by the U.S. Office of Science and Technology Policy, attempts to provide a framework for how organizations and developers can address and manage the risks of consumer-facing AI. Meanwhile, the European Union (EU) intends to regulate AI under the proposed EU AI Act, which would be the first AI law enacted by a major regulator. Italy has banned ChatGPT and informed OpenAI that it must explain how it plans to sufficiently comply with the EU General Data Protection Regulation (GDPR). There are hundreds of pending AI policy initiatives across almost 70 countries. Additionally, some organizations are determining how to frame their own internal policies specific to AI models. For example, Google has applied its code of conduct guiding principle “don’t be evil” to the development and rollout of its own AI chatbot, Bard, which was initially released in a limited capacity on March 21, 2023, in response to ChatGPT.

Just as the genesis of the internet paved the way for advanced cyber-attacks and other related risks, AI will be the genesis of another generation of new risks, many of which we are not yet prepared for or even able to conceptualize. From the digital divide, to more sophisticated malware and ransomware cyber-attacks, to deepfakes and beyond (as seen in the recent example of a fake Joe Rogan podcast created with the help of ChatGPT), we will continue to see unprecedented emerging risks. However, we must still ground ourselves in the reality of AI’s current limitations rather than getting lost in the hype and hysteria that often surround predictions of what the future holds for AI. Rodney Brooks, Panasonic Professor of Robotics (emeritus) at MIT, offers some considerations about the seven deadly sins of predicting the future of AI, and Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley, reminds us to stop calling everything “artificial intelligence.”

In many ways, trust is at the center of how organizations will address AI in the context of enterprise risk management. Trust, however, is fragile: it is easily broken, and it is costly, or in some cases impossible, to regain. Building on those core, human elements of trust, organizations will need to navigate uncharted territory within their enterprise risk management programs and all other risk and compliance functions to create a truly integrated enterprise risk management platform, one that can identify, mitigate and adapt to a multitude of risks and considerations, including:

  • Operational
  • Marketing
  • Performance
  • Societal
  • Financial
  • Culture and Change
  • IT
  • Diversity, Equity and Inclusion
  • Cybersecurity
  • Reputational
  • Privacy
  • Legal
  • Processes and Controls
  • Sustainability and Climate Change
  • Economic
  • Ethics

Internal audit, enterprise risk management, and all other first- and second-line-of-defense functions within organizations will need to innovate in how they work together to stay apprised of new AI technology and how they approach its use and impact. This collaboration will further accelerate organizations’ journeys to becoming more data-driven and digital, helping them discover untapped sources of potential and value. External auditors will also need to adapt their risk assessments and audit procedures quickly (both in how they audit AI and how they leverage AI to further their own digital audit endeavors), especially as regulations and guidance continue to evolve. Even at the board level, organizations will need to think innovatively and strategically to challenge their current ways of operating, approaching and adapting to risk, which could mean acquiring new skill sets and/or board members to stay in lockstep with the progression of AI and its impact on the organization.

Conclusion

AI certainly offers exciting possibilities and unlimited potential, especially for organizations looking to gain competitive advantages and innovate new ways of delivering products and services. While the concepts of AI and some of the related technologies have been around for a while, we are only now beginning to understand the real-world applications and how AI will fundamentally transform all organizations. The related risks are unprecedented and are still unfolding as we take each new step toward enhancing AI technologies and capabilities. Many organizations will involuntarily find themselves in an unprepared, reactionary state or, in the worst case, playing damage control when they face AI head-on. As with other risks, such as cybersecurity, the question may not be whether AI risks can be prevented, but when they will materialize and how prepared an organization is to identify, mitigate, resolve and adapt to them.

The conversation has started here at The Risk Institute. Let’s continue to advance it together. We are a consortium of forward-thinking organizations and academics focused on risk and enterprise risk management at The Ohio State University Max M. Fisher College of Business. We operate at a unique intersection of risk leaders, academic researchers, students and industry professionals across all industries and functions.

Please also visit the Translational Data Analytics Institute, which brings together Ohio State faculty and students with industry and community partners to create interdisciplinary, data-intensive solutions to grand challenges.

Please stay connected and reach out!

— Noah Jellison,
Executive Director, The Risk Institute

— Isil Erel,
Professor of Finance,
David A. Rismiller Chair in Finance,
Academic Director, The Risk Institute