The evolving landscape of AI regulation
"By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."
— Eliezer Yudkowsky
The exponential growth of — and spotlight on — artificial intelligence over the past several years is fascinating on many fronts. It seems like every day a new artificial intelligence (AI) tool is announced with new and far-reaching capabilities. Some of the more distinctive and innovative AI tools to emerge by mid-2023 include:
- Robots that can recognize and respond to human emotion
- AI tech that can diagnose diseases earlier and more accurately
- AI assistants that can go to meetings for you
- Self-driving cars and autonomous delivery via cars or robots
- Chatbots handling everything from customer service to job applications to medical screenings
- AI financial advisors that build personalized investment portfolios
- AI-rendered public figures and celebrities, from Eric Adams, mayor of New York City, to AI K-pop groups
The rise of artificial intelligence as a massive industry brings with it questions about how AI is developed, how organizations can harness its benefits, and what major risks it can pose. In the context of risk management specifically, there are many potential risks both for organizations whose employees use AI and for organizations trying to harness or restrict AI tools at the enterprise level.
The Risk Institute explored this in an earlier piece titled AI's Evolving Impact on Risk and Risk Management, and hosted a follow-up event with industry and academic leaders that explored AI as both a tool of opportunity and a big question mark in regard to leading practices, regulations and threats to security. In this piece, we seek to explore the regulatory landscape and outlook for AI.
The term "artificial intelligence" was coined in 1955 as experts from Harvard, Dartmouth, IBM and Bell Telephone Laboratories embarked on a study "[based on] the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." The concept of logical and systematic reasoning has existed since at least the 18th century. It seems, however, we have long had a desire to automate our lives.
As new AI, machine learning and predictive tools have recently emerged in virtually every industry, from medicine to retail to fast food, questions are also emerging about how these new technologies should be regulated. Historically, an argument can be made that government regulation tends to lag far behind innovation; for instance, modern car seat belts were invented in the late 1950s, but the first law requiring people to actually wear them did not go into effect until the mid-1980s. New AI innovations have the potential for unprecedented impact on our safety, health and daily lives, and regulation is a hot topic for industry leaders, governing bodies and average citizens around the world.
"If this technology goes wrong, it can go quite wrong."
While artificial intelligence has been in and out of headlines often throughout the past decade, the November 2022 release of ChatGPT thrust AI into the spotlight, and we have not looked away. ChatGPT, a natural language processing, chatbot-style tool that allows users to converse with the software, ask questions and generate content, has been adopted by professionals and enterprises across a wide variety of roles and industries. Its use has allowed for increased efficiency and learning, but it has also sparked questions about the potential for cheating, plagiarism and the spread of proprietary knowledge and false information.
Despite (or perhaps because of?) ChatGPT's success, OpenAI CEO Sam Altman has been one of the most vocal industry experts on the need for government regulation of artificial intelligence.
"If this technology goes wrong, it can go quite wrong," he stated.
In May 2023, Altman not only testified before Congress in a well-publicized hearing, but in the weeks that followed he also traveled around the world, speaking with dozens of global leaders in 10 countries to voice his concerns about the future of AI and call for global cooperation to make it safer.
In his congressional testimony, Altman referenced the great potential harms of artificial intelligence (e.g., manipulating people's beliefs and behaviors, weaponized drones selecting targets on their own, creating novel biological agents). Members of Congress mostly steered the discussion back to the impact on election integrity and job preservation. These areas are, of course, crucially important to the U.S. economy and culture, and they should be addressed. Altman's concern, however, is getting "stuck" on these current-state risks. AI is developing, changing and evolving so quickly that today's risks may soon be outpaced by even more serious ones. Altman seemingly warned that the point at which AI becomes capable of more dangerous activities may arrive sooner than anyone thinks:
"We spent most of the time today talking about current risks, and I think that's appropriate, and I'm very glad we have done it," Altman told Congress. "As the systems do become more capable, and I'm not sure how far away that is, but maybe not super far, I think it's important that we also spend time talking about how we're going to confront those challenges."
Global regulation of AI
Around the world, governments are attempting to enact sweeping AI regulations. The European Union's "AI Act" has been in the works since 2021 and has been dubbed the "world's first comprehensive AI law." The AI Act remains in the proposal stage, and amendments adopted in June 2023 launched a new round of negotiations among member states. As of October 2023, discussions are ongoing, and a more complex tiered approach to different types of AI has been proposed.
Asia is taking a different, and arguably more business-friendly, approach. The Association of Southeast Asian Nations (ASEAN) "AI guide" is intended to guide domestic regulation in a region with a complex patchwork of national laws. It aligns closely with the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework, a voluntary set of guidelines released in January 2023 for public- and private-sector organizations developing new AI products and services.
The United Kingdom houses the largest number of AI startups in Europe, and its Competition and Markets Authority (CMA) published guiding principles in September 2023 aimed at both safeguarding consumers and fostering competition. The U.K. is hosting the Global AI Safety Summit in November and is expected to highlight its balanced approach to safety and trust alongside innovation and competition, aiming to "take the lead" on research and development of AI technology.
China began enforcing regulations on AI in 2021 and has introduced new rules each year since, most recently around generative AI. These rules focus mainly on information control but also contain measures to protect worker rights and prohibit price discrimination, potentially providing a frame of reference for regulatory efforts in other countries.
Regulation in the U.S.
Americans vary widely in their use of AI tools and their perceptions of AI's impact. Yet whether they are daily users or have never touched AI, and whether they see it as an amazing new branch of technology or as a threat capable of dooming humanity, most Americans appear to support government regulation of AI.
Members of Congress launched an AI Caucus in 2017, though it attracted relatively little attention until the 2022 launch of ChatGPT.
"We've all been stunned by ChatGPT," said Don Beyer, a state representative from Virginia.
Beyer is one of the more technologically savvy members of Congress, currently pursuing a master's degree in machine learning, and even he expressed surprise at the power of a tool like ChatGPT. The AI Caucus now boasts 45 members, and they recently introduced a bipartisan bill to expand access to AI research resources for universities, nonprofits and government agencies.
Late in 2022, the White House Office of Science and Technology Policy published a "Blueprint for an AI Bill of Rights," a set of suggestions for the responsible design and use of artificial intelligence. That document spoke more to the ethics of AI but signaled more federal action to come. On October 30, 2023, President Biden issued a sweeping executive order on AI, focused equally on ensuring Americans' safety, security and privacy and on promoting innovation, competition and American leadership in AI globally. Some of the most notable inclusions:
- A requirement that AI developers working on "any foundation model that poses a serious risk to national security, national economic security, or national public health and safety" notify the federal government when training the model and share the results of their safety tests.
- The commitment to promoting a fair, open and competitive AI ecosystem where small developers and entrepreneurs are granted access to technical assistance and resources.
- A focus on streamlining the visa process to encourage highly skilled immigrants to study, stay and work in the U.S.
U.S. states have taken varied approaches to regulating AI and AI use cases. So far in 2023, 25 states, plus Puerto Rico and the District of Columbia, have introduced bills involving AI; 15 of these resulted in adopted resolutions or enacted legislation. The majority of bills address the potential for bias or discrimination when AI and machine learning are used in applications for jobs, housing, benefits and goods. Other bills include formal encouragements for the federal government to regulate AI. In North Dakota, legislation was enacted "defining a person as an individual, organization, government, political subdivision, or government agency or instrumentality, and specifying that the term does not include environmental elements, artificial intelligence, an animal or an inanimate object."
Ethics seem to be at the heart of both state and federal approaches to regulating the AI industry. For organizations interested in the ethics of AI, Ohio State researchers published a related article in May 2023, based on a multi-year study, partially funded by the Risk Institute, on ethical AI implementation in organizations.
Self-regulation?
Regardless of where governments choose to situate themselves on the broad spectrum of AI regulatory approaches, widespread regulation is realistically still years away in both the U.S. and the E.U. As a result, on July 26, 2023, the leaders of companies at the forefront of AI development (Google, Microsoft, OpenAI and Anthropic) announced the formation of an industry-led body to guide the regulation of AI development, with a focus on safety standards for these rapidly advancing technologies.
The Frontier Model Forum is open to members that develop "frontier" models: cutting-edge, large-scale models that exceed the capabilities of the most advanced existing models.
"Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control," said Brad Smith, vice chair and president of Microsoft.
Many are critical of this attempt at self-regulation, with some policymakers pointing to tech companies' reluctance to regulate social media: would allowing the companies developing AI to also write the blueprint for AI regulation amount to a conflict of interest? Given the government's limited knowledge of the AI industry and the historically slow timeline for developing and implementing regulations, however, is something better than nothing?
It is also notable that while governments are generally concerned with ground-level regulation, penalties, bias and individuals' experiences with AI, the tech companies are more concerned about hypothetical future AI and machine learning models that outpace humans' ability to control them. The phrase "human extinction" has been thrown around recently in the context of AI, as has the concept of "technological singularity," which has many definitions but broadly refers to a hypothetical future point at which technological innovation and machine learning surpass humans' ability to innovate and learn, irrevocably transforming human life for the worse.
Monitor AI's evolution
So, what's next? At the Risk Institute's June 2023 event on AI's Evolving Impact on Risk & Risk Management, we explored the audience's most common questions and topics of interest around AI. Attendees were also surveyed about AI use within their organizations and their thoughts on the growth and risks of AI tools and technologies.
The key takeaway is that, for now, there are no clear answers on if, when or how to regulate AI. What remains is the ability to lean in, learn and stay current on what is happening in the field. Organizations and individuals would be well-advised to stay informed, whether or not they currently use any AI technology; all indicators suggest it will soon be nearly impossible not to interact with an AI tool in some facet of daily life. For organizations in all industries, it will also be important to keep up to date on the regulations that state, federal and international governments are proposing and putting into effect.
AI might bring about life-saving innovations and incredibly powerful tools to boost business and productivity, but that promise is tempered by the potential for weaponization, discrimination and bias.
Regardless of opinions, thoughts and knowledge about AI today, the only thing certain is that things will continue to change. The Risk Institute and Ohio State will continue to track these changes and find ways to connect our knowledge with practitioners and the business community.
Learn more and register to attend our upcoming event on November 2, 2023, featuring Rehgan Avon, co-founder and CEO of AlignAI. The discussion will explore a common theme and challenge related to AI: balancing speed with safety.
Additionally, the Executive Education Department at Fisher will host a program focused on understanding the power and governance of AI.