
The Future of the Workplace: Practical Considerations for Using Agentic AI

By Rebecca Jacob

May 13, 2026

We’ve entered the next phase of AI: agentic AI. Unlike traditional AI tools that respond to prompts, agentic AI is proactive. It can function as a teammate or colleague, independently taking action and continuing to work with little human oversight. Applications already span customer service, logistics, finance and healthcare, signaling that this technology will play a major role in our future careers. As Rob Reynolds, vice president of data and AI at Kyndryl, noted during a recent Risk Institute event, agentic AI will play a major role in how organizations and the workforce function, but it also has flaws and limitations that all employees, current and future, need to understand.

One of the most important takeaways from the discussion with Reynolds was that agentic AI systems are not inherently reliable. They can hallucinate, confidently fabricating information. Other concerns include misinterpretation of goals and cascading failures, in which one mistake triggers a chain of errors. Even more worrisome than these drawbacks is the accountability question: if agentic AI makes a mistake, who is to blame? The platform, the software engineers or someone else?

To address these concerns, organizations must build guardrails: clearly defining the limits of AI autonomy, requiring human oversight for high-stakes decisions, and implementing ethical and risk-based governance frameworks. Reynolds described the oversight model as “human in the loop”: a person reviews and approves major tasks, while the agent remains useful without 24/7 supervision. Clear guidelines are also crucial to keeping an agent within its intended scope.
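A minimal sketch of what such a “human in the loop” guardrail might look like in code, assuming a simple policy that routes a predefined set of high-stakes actions to a human approval queue while letting low-stakes actions run autonomously. All names here (`ApprovalGate`, `HIGH_STAKES`, the action strings) are illustrative assumptions, not part of any specific framework:

```python
# Illustrative "human in the loop" guardrail: low-stakes actions execute
# automatically; high-stakes actions wait in a queue for human approval.
from dataclasses import dataclass, field

# Hypothetical list of actions an organization deems high-stakes.
HIGH_STAKES = {"send_payment", "delete_records", "sign_contract"}

@dataclass
class ApprovalGate:
    pending: list = field(default_factory=list)  # actions awaiting human review

    def submit(self, action: str, payload: dict) -> str:
        """Agent calls this for every action it wants to take."""
        if action in HIGH_STAKES:
            self.pending.append((action, payload))
            return "pending_approval"              # held until a human signs off
        return self._execute(action, payload)      # agent proceeds autonomously

    def approve_next(self) -> str:
        """A human reviewer releases the oldest pending action."""
        action, payload = self.pending.pop(0)
        return self._execute(action, payload)

    def _execute(self, action: str, payload: dict) -> str:
        # Stand-in for the real side effect (API call, database write, etc.).
        return f"executed:{action}"

gate = ApprovalGate()
print(gate.submit("summarize_report", {"id": 42}))   # executed:summarize_report
print(gate.submit("send_payment", {"amount": 900}))  # pending_approval
print(gate.approve_next())                           # executed:send_payment
```

The design choice this illustrates is the one Reynolds describes: autonomy is the default only inside a defined scope, and anything outside that scope is escalated to a person rather than blocked outright.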

As a student preparing to enter the workforce, I see an important challenge here. Many of us are already comfortable using AI tools in our academic work and even for simplifying daily tasks. However, reliance on these systems can become a weakness. When agentic AI acts on our behalf, it becomes even easier to defer judgment rather than apply critical thinking. The real value will not be in simply using AI but in learning to question and prompt it. Employers will increasingly expect us to understand when agents can be relied on and when they need supervision.

Ultimately, agentic AI presents both risks and opportunities. It has the ability to reshape productivity and workforces, but it also demands a thoughtful approach to ethics and decision-making.