Continuing the Conversation: A Refresher on COE’s 2023 AI Event

In a world of rapid change and technological innovation, it can be difficult to keep up with the constant shifts in the corporate world. One of the most discussed changes is the emergence of artificial intelligence. The AI revolution has left businesses struggling to determine how to implement the technology successfully and ethically.

In December 2023, the Center for Operational Excellence dove into this topic at an event featuring Fisher College of Business Professor Aravind Chandrasekaran, Moritz College of Law Professor Dennis Hirsch, and a panel of industry leaders from Nationwide, JPMorgan Chase, and Narwal. With a focus on the ethics and uses of AI, they addressed the question: What are the pros and cons of such a powerful, effective, and even dangerous tool? 

The conversation continues this December at an upcoming event featuring Quantum Health President Shannon Skaggs and Fisher College of Business Professor Nate Craig. Through a combination of academic and industry insights, this year’s event will explore the role of artificial intelligence in the workforce and what it will take to maintain uniquely human experiences.

To set the scene for this year’s Human-Centered AI: Continued Insights from Academia to Industry event, here are five key takeaways from last year’s discussion: 

  1. The question of AI ethics is complex, calling for an interdisciplinary look. 

    During his presentation, Professor Chandrasekaran asked the fundamental question of whether we are “using AI in the right way.” The answer that emerges is far from black and white. It may all be legal, but is it all ethical? No single expert can simply determine how much AI should be integrated into the workforce; the question calls for voices and opinions from many industries and academic disciplines. 

  2. There are many risks associated with the use of artificial intelligence. 

    The risks associated with emerging AI technologies are unprecedented and often misunderstood. While these technologies are effective, it is important to understand their implications before incorporating them into business practices. 

    Some risks, as analyzed further in the lecture, pertain to privacy, bias, manipulation, and procedural unfairness. For example, many people are concerned about how their personal information could be used. Professor Hirsch described an example in which Target inferred the pregnancy status of customers to better market to them, which sparked public outcry over the invasion of customer privacy. 

    As AI technology is developed and incorporated into the workplace, the list of risks and anxieties will continue to evolve. 

  3. Companies are investigating the ethics of these resources, though the United States currently has no comprehensive data ethics laws or regulations.

    While data ethics laws remain a distant possibility in the United States, many companies are taking the matter into their own hands to preempt controversy and problematic business practices. 

    Many companies have been self-motivated to invest in data ethics research. Their motivations include maintaining trust and a good reputation with users and business partners, improving relationships with employees, creating competitive advantage, and demonstrating corporate values.

  4. Companies are responding well to Responsible AI Management (RAIM). 

    In a survey with 75 responses, companies were asked how much value RAIM creates for their organizations. While the data likely carries some selection bias, the results depict RAIM as a highly useful resource: zero percent of respondents said RAIM has no value, and over two-thirds indicated that it holds at least moderate value for their company.

  5. Ohio State researchers are leading the way on developing responsible AI management practices.

    Ohio State continues to lead research on how responsible AI can best be implemented and governed. The Center on Responsible Artificial Intelligence and Governance (CRAIG) is a group at Ohio State University involved in shaping AI ethics, and it is continuing research and applying for grants to expand its work. The professors are working to create the first Industry-University Cooperative Research Center (IUCRC) focused on responsible artificial intelligence use. The result would be increased funding for this line of work, along with more research and knowledge about this nuanced topic. 

In conclusion, the topic of artificial intelligence is incredibly complex. It requires an interdisciplinary look at technology, ethics, and the workplace. There is significant research to be done and significant opportunity for companies to mitigate risks and improve processes. 


COE members are invited to attend this year’s event, Human-Centered AI: Continued Insights from Academia to Industry, on December 6. Additional details and registration are available on our Upcoming Events page.

Missed the 2023 AI event or want to rewatch it in advance of this year’s event? A recording of the AI: Insights from Academia to Industry event is available on COE’s members-only Digital Content Archive.

Don’t have an account? COE members can create one here.