On 30 April 2026, the Australian Prudential Regulation Authority (APRA) sent a letter to every entity it regulates. The headline phrase was “step change”. Translated into plain English, that is regulator language for “this is no longer optional”.

For readers outside Australia, APRA is the prudential regulator overseeing $9.8 trillion in financial assets. When APRA writes a letter to industry, it is not background noise. It is the supervisory perimeter being moved. APRA Member Therese McCarthy Hockey signed the letter herself.

What APRA actually said

The letter draws on a targeted supervisory review APRA ran across all its regulated industries late last year. The findings, in order:

  1. AI use is accelerating across every regulated industry. Entities are moving from experimentation to operationally embedded and customer-facing applications. Governance has not kept up.
  2. Boards have strong interest in AI’s benefits, but many lack the technical literacy required to provide effective challenge to management on AI risks and oversight.
  3. Concentration risk is heightened. Some entities are dependent on a single provider for multiple AI use cases, with gaps in contingency planning.
  4. AI is increasingly embedded inside broader software platforms and developer tooling. Transparency over how models are trained, updated or constrained is declining, and with it the entity’s ability to assess and manage the risk.
  5. AI risks cut across operational resilience, cyber and information security, privacy and procurement. Existing change and assurance approaches are fragmented and may not provide sufficient assurance for AI.

APRA also flagged frontier models, citing Anthropic’s Claude Mythos as an example of capability that materially raises the probability, speed and scale of cyber attack. Regulators do not usually name vendors. APRA did.

The letter does not introduce new prudential standards. It tells regulated entities to apply the standards they already have. CPS 230 on operational risk management. CPS 234 on information security. The supporting standards on governance and data risk. APRA’s position is that these standards already cover AI risk. The gap is execution.

“While we are not proposing to introduce additional requirements at this stage, we expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it.”

Therese McCarthy Hockey, APRA Member, 30 April 2026

What APRA did not say, and every board should hear

The most important sentence in the letter is the one most readers will skim past. “While we are not proposing to introduce additional requirements at this stage.” That is a line from a supervisor, not from a policy paper.

It signals two things. First, APRA does not need to write new standards to act. The existing standards apply to AI today, and AI failures will be assessed against them in supervision. Second, when APRA observes a material gap, it has options that do not require fresh regulation. Targeted supervisory review. Capital overlays under the operational risk framework. Notification expectations under the Financial Accountability Regime (FAR). Public enforcement where the failures are severe enough.

The signal is the absence of new rules, not the presence of them.

What is genuinely new versus continuation

Most of what APRA describes is not new. CPS 234 has required threat assessment and incident response since 2019. CPS 230 has required boards to identify and manage critical operations since July 2025. Boards have been required to oversee material risk for as long as any of us have been doing this work.

Two things are new.

The first is that APRA has explicitly named the supervisory gap on AI. That gap exists because most organisations have been treating AI as a tool problem. It is not. It is a position problem. AI agents now make decisions, access sensitive data, and take action inside critical operations. The control framework has not caught up to the level of authority these systems already carry.

The second is the naming of frontier model risk in a regulated-industry letter. APRA has effectively put Australian boards on notice that a high-capability AI model can materially increase cyber risk regardless of whether the entity itself has chosen to deploy it. The threat does not require board approval to land on the doorstep.

What APRA-regulated entities should be doing this quarter

Five things, in order. None of them are theoretical.

  1. Get an honest AI inventory in front of the board. Not what was approved through the AI policy. What is actually running. Embedded inside SaaS platforms. Inside developer tooling. Inside vendor-managed services. Most boards I have spoken to in the last twelve months have an inventory that misses two-thirds of the surface. A minimal sketch of what such an inventory can look like as structured data follows this list.
  2. Map every AI use case to a named accountable executive under FAR. Not a committee. A name. APRA assesses accountability against people, not governance forums.
  3. Stress-test the supervisory question on concentration risk. If a primary AI provider went dark for thirty days, which critical operations stop, slow, or degrade? CPS 230 already requires this analysis for critical operations. It now needs to apply to AI dependencies inside those operations.
  4. Run adversarial testing on AI agents already in production. Not a model evaluation. Not a vendor benchmark. Live, controlled adversarial work designed to surface what an agent will do when conditions change. In most of what we observe in this work, the agents themselves never escalate or self-report.
  5. Brief the board on what they cannot challenge. APRA’s point about technical literacy is not a training problem. It is an information asymmetry problem. The fix is not a board education programme. It is changing how AI risk is presented in board papers, what evidence accompanies the recommendation, and where the independent challenge sits.
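To make the first three items concrete, here is one way the inventory can be held as structured data rather than as a policy document. This is a minimal sketch, not a compliance artifact: the fields, vendor names, people and thresholds are all hypothetical illustrations, and a real register would be populated from discovery across SaaS platforms, developer tooling and vendor-managed services rather than from self-attestation. The point is the shape. When every use case carries a named accountable executive and the critical operation it sits inside, the FAR question and the concentration question fall out of the same register.

```python
# Hypothetical sketch of an AI use-case register. Field names, vendors and
# people are illustrative only; nothing here is prescribed by APRA.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class AIUseCase:
    name: str                   # what the AI actually does
    provider: str               # the vendor or platform the use case depends on
    accountable_executive: str  # a named person under FAR, not a committee
    critical_operation: str     # the CPS 230 critical operation it sits inside
    board_approved: bool        # approved through the AI policy, or discovered later?

def concentration_report(inventory: list[AIUseCase]) -> None:
    """Group use cases by provider, then flag single providers that underpin
    multiple critical operations, and use cases the board has never seen."""
    by_provider = defaultdict(list)
    for uc in inventory:
        by_provider[uc.provider].append(uc)

    for provider, use_cases in by_provider.items():
        ops = {uc.critical_operation for uc in use_cases}
        unapproved = [uc.name for uc in use_cases if not uc.board_approved]
        if len(ops) > 1:
            print(f"CONCENTRATION: {provider} underpins {len(ops)} critical operations: {sorted(ops)}")
        if unapproved:
            print(f"UNAPPROVED: {provider} use cases never surfaced to the board: {unapproved}")

# Illustrative entries only.
inventory = [
    AIUseCase("claims triage assistant", "VendorA", "J. Smith", "claims processing", True),
    AIUseCase("code review copilot", "VendorA", "J. Smith", "software change management", False),
    AIUseCase("contact-centre summariser", "VendorB", "R. Patel", "customer service", True),
]

concentration_report(inventory)
```

Run against a real inventory, the same grouping answers the thirty-day question in item 3: everything listed under a single provider is what stops, slows or degrades when that provider goes dark.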

Beyond APRA

APRA’s directives bind its regulated entities. The underlying practices are not specific to that perimeter. They apply wherever the same risks live.

This includes ASIC-regulated entities making AI-assisted decisions in financial services, organisations handling customer data through AI-enabled processes, ASX-listed companies with continuous disclosure obligations and material AI exposure, and government departments and agencies running AI inside policy delivery and citizen services. Really, any organisation operating AI-dependent critical functions, regardless of regulatory status.

“If you are reading this and your organisation is not APRA regulated, the lever is different. The exposure is the same.”

A regulator can describe expectations. It cannot supply the judgment, the taste or the operational knowledge that makes those expectations real inside a particular business with its particular customers, history and risk appetite. The interesting work begins where the letter ends. It is the work of figuring out what good actually looks like here, in your organisation, with these people, given what they have got right and wrong before. That is not a compliance exercise. It is the work that comes with the territory.

The letter is not the end of supervision. It is the start of it.

Source: APRA, “APRA calls for a step change in AI-related risk management and governance”, 30 April 2026. apra.gov.au