
AI Agents Are Undermining SOX, And CISOs Are On The Hook

Feb 3, 2026

For more than 20 years, the Sarbanes-Oxley Act (SOX) has governed financial reporting. Its entire architecture was built around a human-centered world: People initiate transactions, people review exceptions, people make judgments, and people follow the controls that keep financial statements accurate. The idea that nonhuman actors might one day participate directly in financial workflows was not even a consideration.

That era is ending.

AI agents have moved far beyond the role of “co-pilots.” They now prepare journal entries, resolve exceptions, classify transactions, enrich financial datasets and initiate ERP workflows. They are not merely supporting the accounting process; they are contributing to it. And unlike traditional automation, AI agents reason, adapt and change their behavior based on prompts, context and model updates. They make decisions at a speed and scale that were unimaginable when SOX was written.

This shift is quietly collapsing the boundaries between security, engineering, finance and internal audit. For the first time, SOX, a statute historically owned by finance, is coming under the purview of CISOs who must assume responsibility for governing identity, privilege and digital behavior for AI agents.

As a result, AI adoption is forcing organizations to rethink the processes they’ve relied on for decades, especially compliance. Frameworks like SOX were designed for predictable, human-operated systems with periodic oversight. But AI agents introduce a level of complexity that demands a new compliance posture: one built on continuous control validation, identity governance for AI agents and auditable change management for prompts, models and permissions. SOX must evolve accordingly.

AI Is Reshaping Financial Controls

Once embedded inside the processes that drive financial reporting, AI agents transform data, adjudicate exceptions and determine the sequence of actions in complex workflows. A human accountant reviewing dozens of reconciliations in a day can spot anomalies instinctively. An AI agent reviewing thousands in minutes has no intuition at all, yet its decisions may flow directly into reported results.

The challenge is not that AI introduces new types of errors. Humans also make mistakes. It’s that AI behavior is fluid. A prompt adjustment, a model retraining or even an unexpected change in upstream data can cause an agent to behave differently from one week to the next. A control validated in Q1 can be silently invalidated by Q2.
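That kind of silent drift can be caught mechanically. The sketch below is a minimal, hypothetical illustration (all names and thresholds are assumptions, not any vendor's API): it compares how an agent distributed its decisions across categories in two periods and flags any category whose share shifted beyond a tolerance.

```python
from collections import Counter

# Hypothetical sketch: compare an agent's decision mix across two review
# periods and flag behavioral drift. Category names and the 10% threshold
# are illustrative assumptions.
def category_shares(decisions):
    """Fraction of decisions falling into each category."""
    counts = Counter(decisions)
    total = len(decisions)
    return {cat: n / total for cat, n in counts.items()}

def detect_drift(baseline, current, threshold=0.10):
    """Return categories whose share of decisions shifted by more than `threshold`."""
    base = category_shares(baseline)
    curr = category_shares(current)
    drifted = {}
    for cat in set(base) | set(curr):
        delta = abs(base.get(cat, 0.0) - curr.get(cat, 0.0))
        if delta > threshold:
            drifted[cat] = delta
    return drifted

# Q1: the agent routed 10% of items to manual review; by Q2 that jumps to 30%
# after a model update -- the control's behavior has changed between audits.
q1 = ["auto"] * 90 + ["manual_review"] * 10
q2 = ["auto"] * 70 + ["manual_review"] * 30
print(detect_drift(q1, q2))  # both categories flagged with a 0.2 shift
```

A check like this is the "continuous control validation" posture in miniature: it runs every period, not once per audit cycle.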

Machine-Driven Decisions Jeopardize Reporting

SOX’s control environment assumes predictable actors. Humans understand role boundaries, act with intent and produce evidence an auditor can evaluate. AI agents do none of these things. They operate based on statistical inference, not rules. They follow privileges, not job descriptions. And while they generate logs of what they did, they do not generate explanations for why they did it.

This undermines several foundational SOX concepts. Segregation of duties can collapse if an agent is granted credentials that allow it to cross functional boundaries. Management review controls become unreliable if the underlying data was produced through opaque reasoning that cannot be reproduced. Periodic testing becomes insufficient because an agent’s behavior may drift long before the next audit cycle.
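The segregation-of-duties point lends itself to an automated check. Below is a minimal sketch, with invented privilege names and conflict pairs (assumptions, not a standard): it flags any agent whose granted credentials let it perform both sides of a duty that SOX expects to be split between actors.

```python
# Hypothetical sketch: detect when a single AI agent's privileges cross a
# segregation-of-duties boundary. The duty pairs below are illustrative.
CONFLICTING_DUTIES = [
    {"create_journal_entry", "approve_journal_entry"},
    {"modify_vendor_master", "release_payment"},
]

def sod_violations(agent_privileges):
    """Return the conflicting duty pairs an agent could perform alone."""
    privs = set(agent_privileges)
    return [pair for pair in CONFLICTING_DUTIES if pair <= privs]

# An agent granted broad access "to get it working" can both create and
# approve entries -- a violation no human role would be allowed to hold.
agent = {"create_journal_entry", "approve_journal_entry", "read_ledger"}
print(sod_violations(agent))
```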

Even identity governance, long a security staple, becomes an issue when AI is introduced into SOX reporting. Many organizations still deploy agents with shared credentials, unclear ownership or broad access intended to “get them working.” Those shortcuts create blind spots that threaten financial integrity.

CISOs' New Role In SOX Compliance

Whether they planned for it or not, CISOs now own the systems and practices that determine whether AI agents can be trusted inside SOX workflows. Finance and internal audit may define the controls, but AI requires that security teams enforce them in practice.

CISOs are inheriting this responsibility because they already govern the AI elements SOX is colliding with: identity, privilege, drift detection, data integrity and behavioral monitoring. Security leaders are uniquely positioned to understand how production systems evolve and how easily controls can be bypassed when nonhuman actors operate with broad or unmonitored access.

More importantly, CISOs sit between engineering, which builds and deploys AI agents, and the governance functions that must attest to the accuracy of their outputs. In the AI-SOX convergence, CISOs become the connective tissue between the teams that deploy AI and the teams that certify the financial outcomes.

Blending Security, AI And Finance

As AI enters financial workflows, CISOs must lead in three critical areas to keep SOX controls intact.

First, they must ensure that AI agents are treated as privileged identities, not as background scripts. That means clear ownership, life cycle management, least-privilege access and continuous monitoring for behavioral drift, all concepts familiar to security but new to financial governance. An AI agent adjusting journal entries is no different from a new employee with access to the general ledger: Both must be governed, monitored and auditable.
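What "treating an agent as a privileged identity" might look like in practice can be sketched as follows. This is an illustrative model, not a product feature: the field names, owner address and privilege strings are assumptions. The essentials are a named accountable owner, an expiry that forces re-certification, and deny-by-default access.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: an AI agent registered like any privileged identity --
# a named human owner, a credential expiry, and explicit least-privilege
# grants. Anything not granted is denied.
@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                      # accountable human, never a shared account
    expires: date                   # credentials must be re-certified
    privileges: frozenset = field(default_factory=frozenset)

    def may(self, action: str, today: date) -> bool:
        if today > self.expires:    # lapsed identities lose all access
            return False
        return action in self.privileges  # deny by default

recon_bot = AgentIdentity(
    agent_id="recon-bot-01",
    owner="jdoe@example.com",
    expires=date(2026, 6, 30),
    privileges=frozenset({"read_ledger", "draft_reconciliation"}),
)
print(recon_bot.may("post_journal_entry", date(2026, 2, 3)))  # False: never granted
```

The same record gives auditors what they need from a human user: who owns it, what it can do, and when its access lapses.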

Second, CISOs must help engineering bring software discipline to AI workflows. Prompts, model versions, plug-ins and data pathways must be versioned, reviewed and controlled with the same rigor as production code. In a SOX context, a prompt change is effectively a change to a financial control, and CISOs can push organizations to treat it that way.
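One simple way to bring that software discipline to prompts is to fingerprint each deployed prompt-plus-model combination, so any change is detectable and must pass through change management. A minimal sketch, with illustrative inputs:

```python
import hashlib
import json

# Hypothetical sketch: version a prompt the way production code is versioned.
# Hashing the prompt text together with the model version makes any change
# to either one visible as a fingerprint mismatch.
def prompt_fingerprint(prompt_text: str, model_version: str) -> str:
    payload = json.dumps(
        {"prompt": prompt_text, "model": model_version}, sort_keys=True
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

approved = prompt_fingerprint("Classify each transaction...", "model-v3")
running  = prompt_fingerprint("Classify each transaction...", "model-v4")
# A mismatch means the control changed without a reviewed change record.
print(approved == running)  # False: the model update altered the fingerprint
```

In a SOX context, the approved fingerprint belongs in the change-management record, and a periodic job compares it against what is actually running.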

Third, CISOs must work directly with finance and internal audit to create transparency around how AI-driven decisions are made. An agent’s behavior must be explainable enough that auditors can validate its reasoning and certify its outputs. This requires implementing the appropriate instrumentation, logging and governance structures.
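The instrumentation piece can be as simple as a structured decision record emitted for every agent action. The sketch below is illustrative (field names and values are assumptions): it captures what the agent saw, what it decided and which prompt version was active, which is the minimum an auditor needs to reproduce and evaluate the decision.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: a structured, machine-readable audit record for each
# AI-driven decision, tying the output back to its inputs and to the
# prompt/model version that produced it.
def decision_record(agent_id, action, inputs, output, prompt_version):
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,                  # what the agent saw
        "output": output,                  # what it decided
        "prompt_version": prompt_version,  # which control version was active
    }, sort_keys=True)

rec = decision_record(
    "recon-bot-01", "classify_exception",
    {"invoice": "INV-1042", "amount": 1200.00},
    {"category": "timing_difference"},
    "prompt-v7",
)
print(rec)
```

Written to append-only storage, records like this turn "logs of what the agent did" into evidence an auditor can actually test.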

In each case, the CISO is not replacing finance or audit; rather, the CISO is enabling these teams to operate safely in a world where critical tasks are performed by digital actors.

As AI agents take on real responsibility inside workflows like SOX, identity controls must become an extension of financial governance to lower audit risk and build trust in systems that move at machine speed. Unlike periodic, human-centric control models, AI agents demand continuous verification of identity, privilege and behavior at every step to ensure that automated decisions are both defensible and auditable.

[Forbes Technology Council]
