Level 40, 140 William St, Melbourne VIC 3000 +61 3 7064 5507 contact@cyberimpact.com.au

Independent AI and cyber risk advisory.

Cyber Impact is a Melbourne based advisory firm. We work with executives, boards, regulators and CISOs across Australia on AI in production, cyber under regulator scrutiny, and the data governance that has to sit underneath both.

Trusted across APRA regulated and ASX listed organisations.

AI safety research featured on national television

Research that shapes the work.

Cyber Impact’s AI safety research has been featured on Channel 7, Sky News, The Australian, and across national media. The findings have immediate implications for any organisation deploying autonomous AI in production, and for the executives and boards expected to attest to it.

That research feeds directly into how the firm advises on AI safety, AI compliance, and AI governance. It is not theory. It is what we have seen autonomous systems do, why the existing guardrails are not enough, and what regulators, auditors and insurers will accept as proof.

The full account, including primary source transcripts and ongoing media coverage, is published on the founder’s personal site.

See the research archive at markvos.com.au

About Cyber Impact.

Cyber Impact is an independent advisory firm. Our practitioners have carried operational accountability for cyber and AI risk across financial services, government, and enterprise, including ANZ, Iress, Serco, EY, and PwC. We have stood up security operating models, fronted regulators, and walked into more than a hundred boardrooms with the answer to the question that gets asked when something has actually gone wrong.

The work is direct, evidence led, and built around what is actually deployed, not what was supposed to be. Every engagement leaves the client with evidence a regulator, an auditor, or an insurer will recognise.

Independent advice. The work serves the client first. Discreet by default.

The firm is founded and led by Mark Vos. His personal profile, AI safety research, book, keynotes, and media coverage are at markvos.com.au.

About the firm
Cyber Impact advisory team

We don’t do theory. We solve complex problems with evidence, pragmatism, and hard won experience.

30+
Years of senior CISO leadership
12+
Months of adversarial AI safety testing
100+
Advisory engagements delivered
90%
Of engagements delivered in under 12 weeks

Four exposures we see in every engagement.

The pattern is almost always the same. AI is in production. The control language was written for a pre-AI estate. The board is being asked to attest to something nobody has fully mapped.

  • No AI register

    Most organisations cannot produce a current, complete list of the AI systems running across critical operations, vendor platforms, and shadow deployments. The first question a regulator asks goes unanswered.

  • Foundational governance not extended

    Data governance, cyber, privacy, and third party risk capabilities exist. They were written for a pre-AI estate. The control language no longer matches what is actually deployed.

  • Decision integrity unprovable

    AI is making decisions that affect customer outcomes in lending, advice, claims, and onboarding. When the decision is wrong, the entity carries the regulatory and reputational consequence, not the vendor.

  • Regulatory obligations unmet

    AI agents sit inside critical operations. Tolerance levels, scenario testing, material service provider obligations, information asset protection, and Privacy Act ADM rules all apply. APRA CPS 230 and CPS 234 are the most cited examples. Most existing frameworks predate the maturity of the AI footprint.

Agentic AI is the version of this problem regulators are already asking about. We’ve published the research.

When AI and cyber become a board level problem, organisations call Cyber Impact.

Independent advisory led by senior practitioners. We work with executives, boards, CROs and CISOs navigating AI in production and cyber under regulator scrutiny.

AI Compliance for APRA Regulated Entities

  • AI agent register tied to CPS 230 critical operations and CPS 234 information assets
  • AI Governance framework aligned to ISO/IEC 42001 and the entity’s risk appetite statement
  • Control review against APRA, ASIC, AUSTRAC, OAIC and SOCI obligations
  • Adversarial testing of priority systems
  • Documented decision authority and audit trails for customer affecting AI
  • Board Risk Committee paper, evidenced and audit ready

AI Safety Assessment

  • Discovery of current state AI usage, tooling, and policy gaps
  • AI Governance baseline and control framework
  • Targeted threat modelling and risk assessment
  • Technical guardrails for generative, copilot, agentic, and application level AI
  • C-Suite and board education sessions

AI Enablement

  • Where AI should be creating value, and why you have not moved on it
  • AI Governance foundations sized to the maturity of the organisation
  • The first one or two initiatives worth doing
  • The minimum governance to deploy without compliance exposure
  • A named executive owner and a written action list
  • Commercial case tied to a specific outcome
Ongoing service

AI Governance as a Service

  • Live AI agent register, maintained as the estate changes
  • Monthly drift testing, decision integrity sampling, and adversarial probing of priority systems
  • Quarterly Board Risk Committee pack with findings, remediation, and exposure trajectory
  • Material service provider reattestation on the cadence the regulator expects
  • Named partner accountable for the relationship, supported by the firm’s specialist team
  • Monthly retainer, scaled to the size and complexity of the AI estate

Data Governance & Privacy

  • Data classification, ownership, and lineage across critical operations
  • Privacy Act compliance, including the December 2026 ADM (automated decision making) rules
  • Privacy Impact Assessments and OAIC engagement
  • Cross border data flow assessment and standard contractual clauses
  • Data minimisation, retention, and lawful basis review for AI training and inference
  • Subject Access Request response, breach notification, and incident remediation

Fractional CISO & GRC

  • Senior CISO capability without the full time hire
  • Board reporting and regulator engagement
  • Security operating model and target state architecture
  • End to end GRC oversight: ISO 27001, Essential Eight, NIST, APRA CPS 230 / 234
  • Incident response, crisis management, and tabletop exercises

Third Party Security Reviews

  • Independent review of vendors, partners, and material service providers
  • Scoped to contractual rights, data exposure paths, and the regulatory obligations the entity carries
  • Evidence the regulator, auditor, or insurer will recognise
  • Findings written for boards, not for vendors
  • Remediation prioritised by exposure, not procurement convenience

What clients say.

Trusted by leaders who value direct, evidence led advice on AI and cyber risk.

Cyber Impact delivered a board ready IT strategy that supported growth and security. Commitment, clarity, and hands on leadership made a real difference.
Damon Tudor, CEO, URBNSURF
Cyber Impact combines deep technical skill with sharp business sense. They know how to align security with business goals and speak the language of execs and regulators.
John Heaton, CTO, Alex.Bank
Whether it’s a big bank or a scale up, Cyber Impact brings strong cyber instincts, challenge where it counts, and practical solutions that hold up under pressure.
David Wakeley, Chairman, URBNSURF, ParaFlare & Auto UX; Non Executive Director, Australia Bank

Executive Insights.

Evidence led writing on AI safety, cyber strategy, and what boards aren’t being shown. Written for executives, directors, and CISOs accountable for AI in production and cyber under regulator scrutiny.

Want to know where your AI and cyber exposure actually sits?

Book a private briefing with the Cyber Impact team. Discreet, off the record, no obligation. We’ll surface the exposures the board hasn’t been shown yet.

Book a Briefing