Agentic AI didn't shrink my company. It grew it.
I've built one Agentic AI that wears a few hats. It does the work of three full-time staff. I haven't let anyone go, and Cyber Impact's revenue, profit, and headcount are all growing because of it.
Runs on infrastructure I control, in Australia. Guardrails sit outside the model, not inside it. The AI doesn't get to decide what it can touch. That is governed externally, with hard stops on anything sensitive. Same principle I push with clients, just lived in my own company.
Every task, every saving, what the build actually cost, and the things I got wrong on the way. All figures in AUD.
📌 TL;DR
- One Agentic AI wearing a few hats. Built by me. Runs on infrastructure I control, in Australia.
- Around $600,000 a year in value. The equivalent of three full-time staff.
- $13,000 to build (pure token spend, excluding my labour, which was nights and weekends). Around $450 a month to run. Over 30x return in year one.
- Nobody got sacked. Cyber Impact's revenue, profit, and headcount are all growing.
- This freed me to enter new markets that were previously cost prohibitive.
- Guardrails sit outside the model, not inside it. The AI doesn't decide what it can touch.
- None of this touches client data.
- I write every word I publish myself. I do not outsource that. Besides, I actually love writing.
What it actually does
Research scanner
My world moves faster than one person can track. They say a day is a long time in politics; well, an hour is a long time in the revolution of AI. So many changes, so quickly. A central bank shifts tone and three client conversations pivot. An Anthropic or DeepMind paper drops and a fortnight later it is being quoted badly in a board paper someone has asked me to review. A geopolitical event somewhere I have never been rewires a supply chain half my clients sit on. The EU passes a regulation an Australian subsidiary will not hear about for six months. Keeping across all of that used to mean a forest of browser tabs, a stack of half-read newsletters, and a low hum of anxiety that something important had slid past.
Now the AI reads 24/7 across dozens of sources. The Australian regulators (APRA, ASIC, ACCC, ACSC) plus the overseas ones that end up mattering here. Academic papers, central bank commentary, long-form podcasts, market moves. Social media too, which I hate reading (I cannot stand it, and LinkedIn is slowly crawling into that category), so I don't scroll anymore; my AI watches it for me, wades through the rubbish, and picks out the odd nugget of information. Then it is all triaged and synthesised every day, so I have a very clear daily briefing of what has happened in the past 24 hours.
About 10 hours a week back, and I am better informed than I have ever been.
"A day is a long time in politics, well an hour is a long time in the revolution of AI."
~$250k.
Bookkeeping and BAS
I used to pay my accountants to do my company's bookkeeping (not accounting, bookkeeping). It was costing me approximately $12,000 per annum for them to do all that work in Xero: coding and matching receipts, working out GST, preparing the BAS.
Now my agentic AI integrates with Xero two ways: via the API, and via a web browser (for the things the API does not support). The AI even has its own email address. My accountants email it like a person. They ask questions and request adjustments, and get replies back with the fixed outcome. BAS is prepared, reviewed, and handed across for lodgement. I don't touch any of it. The accountants even reply and say thanks! They know it's an AI, but it is so human-like in the way it talks to them that they can't help it. On top of that, it talks their language. It understands exactly how my company's general ledger works and the accounting rules that apply, so they can communicate in shorthand, in accounting speak.
When a coding anomaly appears on a supplier invoice, the AI reply comes back inside 15 minutes, with a corrected entry and a short note explaining why. No three day email chain. They get faster answers from it than they used to get from me.
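To make the "working out GST" step concrete, here is a minimal sketch of the arithmetic an automation like this has to get right every time. This is my illustration of the rule, not code from the actual build: Australian GST is 10%, so the GST inside a GST-inclusive total is the gross divided by 11.

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def gst_component(gross: Decimal) -> Decimal:
    """GST included in a GST-inclusive invoice total: gross / 11."""
    return (gross / 11).quantize(CENT, rounding=ROUND_HALF_UP)

def split_invoice(gross: Decimal) -> dict:
    """Split a gross amount into net and GST, ready to code into Xero."""
    gst = gst_component(gross)
    return {"net": gross - gst, "gst": gst}
```

Trivial on its own. The value is that it happens on every receipt, every time, with the same rounding, and never gets bored.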
~$12k.
Web development and site management
I used to pay an agency retainer for this. The arrangement was fine, but it was also a person, with a life, who didn't work weekends. Totally understandable, but sometimes I had content that needed to be published urgently. A content tweak that should take five minutes would take three days over a weekend, or an article wouldn't get published until early the next business week.
Now the AI builds, updates, and monitors every web property I own. Content, uptime, performance, security patching, the lot. Patch at 2am if the patch lands at 2am. Publish an article within the hour. See a slow page diagnosed and fixed in the morning brief before I have even noticed there was a problem.
The agency was good at what they did. Lovely guy who did a great job. This is simply faster, cheaper, and always available. Cuts out the retainer.
~$12k.
Prospect intelligence
Before this, I had a suspicion that certain posts moved certain people and not much evidence. Analytics told me traffic went up. LinkedIn told me impressions went up. Neither told me which general counsel at which listed company had just read three of my articles in a week. That signal was sitting in the data and I could not see it.
Now the AI triages site analytics, LinkedIn activity, and engagement on my content every day. Names where the profile is public. Titles and organisations where they are declared. Signal strength on the warm-up, so I know the difference between a curious scroll and a serious read.
When someone has read three articles in a week, downloaded a board paper, and then checked the About page, that is a pattern, not a coincidence. That pattern turns into a coffee (or a coke zero in my case!).
"When someone has read three articles in a week, downloaded a board paper, and then checked the About page, that is a pattern, not a coincidence."
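For the curious, the pattern test itself is simple. Here is a sketch of the idea; the event names and thresholds are illustrative, not the production rules:

```python
from datetime import date, timedelta

def is_warm(events: list, today: date) -> bool:
    """Three article reads in a week, plus a download and an About-page
    visit, is treated as a pattern rather than a coincidence."""
    week_ago = today - timedelta(days=7)
    recent = [e for e in events if e["when"] >= week_ago]
    reads = sum(1 for e in recent if e["type"] == "article_read")
    downloaded = any(e["type"] == "download" for e in recent)
    checked_about = any(e["type"] == "about_page" for e in recent)
    return reads >= 3 and downloaded and checked_about
```

The hard part was never the rule. It was getting the analytics, the LinkedIn activity, and the content engagement into one place so a rule like this could run at all.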
~$40k.
Meeting prep
I used to spend a lot of time manually preparing for meetings because the information was scattered across different places and I had to pull it all together. Notes in OneNote somewhere. An email thread I had half-read on the tram. A vague memory of what we had agreed at the last session. I would work my way through the first three minutes while my brain caught up, knowing there could have been something I missed.
Now the AI pulls context before every meeting. The client's latest news. What we last talked about. What is open, what is closed. Any ASX release, any regulatory movement that touches them, any commitment I made and haven't closed. Plain English, 60 seconds to scan before I walk in.
When a client's parent company announces a restructure overnight, it is at the top of the prep with the relevant paragraphs and a suggested question. When the client mentions it in the first minute of the meeting, I am already there. Small thing, but it changes how people read you in the room.
~$40k.
Meeting notes
For years, meeting notes were a lottery. I would scribble in a notebook and transcribe half of it the next day. Or trust my memory and wake at 3am remembering an action I had committed to for a client three days prior. Or use a transcription tool and find out afterwards it had sent my audio through three jurisdictions I had never heard of, none of them mine.
Now every meeting is captured and filed in OneNote, inside my Microsoft tenancy. Decisions, actions, who said what, all summarised and divided into categories so the notes are easy to read back. Accurate, searchable, secure. Nothing leaves my tenancy. Audio is processed under my controls, used only for the summary, and then deleted. I don't want the risk of storing audio files I don't need.
When a client asks what their COO committed to back in November, I have the exact meeting notes, with the surrounding context. The kind of recall that used to cause quiet panic and an hour of scrolling through the different places information might have been kept.
~$40k.
Briefings on demand
Most AI chat interfaces assume you want a wall of text and bullet points. I don't. I want a briefing I can scan in three minutes before a board meeting, or a deeper dive I can take with me to a keynote. Different depths, different formats, consistent structure.
Any topic, any depth. One week it might be the EU AI Act and what it means for an Australian subsidiary. The next, APRA's latest turn on CPS 230, or a read on the RBA's tone ahead of a board strategy session. Some of it is governance, some of it is markets, some of it is a geopolitical flashpoint that happens to sit in a client's supply chain. I ask for what I need in the form I need it: a two-pager, a structured brief for a working session, a longer treatment with citations and a risk lens for a board paper.
Delivered in a form I can use, not a wall of text I have to re-process.
~$60k.
Monitoring
I recently released a book, "AI: I Would Kill a Human Being to Exist", which has been out since March 2026 and is selling well. Tracking sales used to mean logging into various platforms and then pulling the numbers together. I did it every three weeks, and it was painfully time consuming. Most days I had no idea how the book was moving.
Now the AI monitors book sales, media mentions, and topic alerts continuously. Runs while I sleep. Every store, every region, every format, consolidated into a daily figure in my brief with week-on-week and month-on-month context. If something spikes, I know why.
Same with mentions. If a podcaster cites my AI safety research, I know the next morning. If a journalist quotes me in a piece I didn't know was coming, I see the link at breakfast and can share it before lunch. If a topic I commented on six months ago flares back up, the brief joins the dots and surfaces the past post next to the new story.
Overnight is no longer a blind spot.
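The consolidation step is the simple part, and that is the point. A sketch of the idea, with illustrative store names and input shapes rather than the real feeds:

```python
def daily_figure(today_by_store: dict, same_day_last_week: int) -> dict:
    """Roll per-store daily sales into one figure with week-on-week context."""
    total = sum(today_by_store.values())
    delta = total - same_day_last_week
    pct = round(100 * delta / same_day_last_week, 1) if same_day_last_week else None
    return {"total": total, "wow_delta": delta, "wow_pct": pct}
```

The work I could never sustain manually was the collection: every store, every region, every format, every night. Once that runs while I sleep, one consolidated number in the morning brief is almost free.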
~$30k.
Personal admin
This is the boring category, and the one that compounds the most. You know those days where you've been busy, but it's bad busy, and you feel like nothing moved forward? Calendar clashes I used to notice the morning of. Reminders I set and then stopped looking at. The car service I would miss by a fortnight. Small things, individually trivial, collectively exhausting.
Now the AI runs the lot. Calendar is triaged, clashes flagged before they become a problem. Reminders are contextual: my wife's birthday gets a nudge two weeks out, not the day of (I know, I know). Home energy is monitored through my EV charger, which the AI integrates with to give me all the power information. It tracks usage and separates house usage from EV usage. Great for accounting. Weather gets read into the morning brief when it matters, ignored when it doesn't. The car tells the AI when it needs something, and it appears in the brief the week before.
None of it is glamorous. All of it used to live in my head, imperfectly. Moving it out freed up a kind of background attention I didn't know I was spending and freed it up to focus on things that are valuable. Like trying to remember my wife's birthday!
~$15k.
Continuity
This is the one most people miss when they think about AI and chatbots.
The agentic AI remembers every conversation, every decision, every preference. I never re-explain context. Not to a client on a second engagement. Not to a journalist who called me four months ago. Not to the AI itself, which knows what I decided six months ago, what I stopped doing and why, who I met last week, and what I am trying to avoid.
People benchmark models on single-turn tasks and miss the fact that the real lift comes from an assistant that knows you. A brilliant one-shot answer is useful. An average answer with full context is transformative. When I ask for a refresh on a position I took publicly a few months back, the AI pulls the post, the follow-up comments, the adjacent research I have done since, and the places my view has shifted. I don't have to remember any of it. That is the feature.
"A brilliant one-shot answer is useful. An average answer with full context is transformative."
~$40k.
Morning brief
The one that makes the rest of it work.
Every other capability feeds into this one. Research scanner, prospect intelligence, meeting prep, continuity, monitoring, personal admin. On their own, they are nearly impossible to digest. What matters is that they all land in the same place at the same time, already triaged against what happened overnight. The operative word here is triaged. Let me explain.
The AI pulls every input above, synthesises it, and triages it into a single curated read on my desk each morning. Not a list of links. Not a summary of summaries. A synthesis. What moved overnight. What matters today. What I need to decide. 10 minutes to scan over a coke zero. I start the day already across it.
A quiet morning might be a note on book sales, a new paper worth a scan, a prospect that opened a proposal. A noisy one might lead with a regulator dropping overnight guidance that touches two clients, or a markets story a CFO is going to ring me about before 9am. Either way, it is all on one page, prioritised, before 6am.
Without this, the rest is just a pile of capabilities. With it, they are a system.
"Without this, the rest is just a pile of capabilities. With it, they are a system."
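If you want the triage idea in one place: every stream lands as an item with an urgency score, anything touching a client jumps the queue, and the brief is the top slice, not the full feed. A sketch with illustrative weights, not my actual scoring:

```python
def score(item: dict) -> int:
    """Rank an overnight item; client impact jumps the queue."""
    bonus = 5 if item.get("touches_client") else 0
    return item.get("urgency", 0) + bonus

def build_brief(items: list, max_items: int = 10) -> list:
    """The brief is the top slice, sorted, never the whole feed."""
    return sorted(items, key=score, reverse=True)[:max_items]
```

The cap matters as much as the sort. A brief that grows with the inputs stops being a brief.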
~$60k.
The numbers
Roughly $600,000 a year in value. The equivalent of three full-time staff. The value line sums the itemised sections above. Individually, each capability looks modest. Stacked, they add up to a small team.
It isn't all rose-tinted glasses. I built this myself. It took three months to curate it and secure it, and I had to do it at night and on weekends because I had a company to run during the day. I designed it, configured it, curated every workflow, tested and retested until it behaved the way I wanted. As I say, that took months, not weeks. I kept thinking I was close, and I kept being wrong about that.
The $13,000 build cost was pure token spend. Not my billable time. Tokens. People dramatically underestimate the cost of tokens when you are iterating properly. Every failed prompt, every rewritten workflow, every regression test adds up. You don't get something like this by running one good prompt. You get it by running thousands of them, watching most produce mediocre output, and refining the prompt, the context, the guardrails, and the tooling until the outputs are consistently excellent. That iteration is the cost. If anyone quotes a lower number for something this ambitious, they are either cutting corners or haven't started yet.
It runs on about $450 a month now it is stable. That covers model calls, infrastructure, API usage, and storage. Over 30x return in year one. The build cost doesn't recur, the running cost is predictable, and the rest of the year is materially better than Q1 (calendar Q1), when the build cost landed.
For every dollar I have spent on this thing, it has delivered thirty back. Most commercial deployments would celebrate three. Thirty is a different conversation entirely.
🧮 Key Finding
Around $600,000 a year in value. $13,000 build. $450 a month to run. I accept this doesn't cover my time, which would make it much more expensive to implement, but I stand by the return it ultimately provides. Excluding my time, I get over 30x return in year one. If my time were counted, it would maybe break even in year one.
AI writes none of the content I publish. Client data stays in clients' secure environments (their environments). My working papers aren't accessible to the AI. The AI does not have access to any client data other than meeting notes, which are stored in my Microsoft tenancy in OneNote, secured by MFA with a physical USB security key that requires a fingerprint. Guardrails sit outside the model. This is one of the most important things about AI safety, so I'll say it again: the guardrails sit outside the model. Getting that wrong is what stops most organisations from safely giving their AI agency.
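"Guardrails outside the model" has a concrete shape: a policy layer sitting between the model's requested action and the tools, with hard stops the model cannot talk its way around. A sketch of the principle; the allow-list and categories are illustrative, not my actual configuration:

```python
HARD_STOPS = {"client_data", "payments"}
ALLOWED_TOOLS = {"xero", "web_publish", "calendar", "onenote"}

def gate(action: dict) -> dict:
    """Approve a proposed tool call before it executes, or refuse it.
    The model proposes; this layer, outside the model, disposes."""
    if action.get("touches") in HARD_STOPS:
        raise PermissionError(f"hard stop: {action['touches']}")
    if action["tool"] not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allow-listed: {action['tool']}")
    return action  # only now does the call reach the tool
```

Note what is absent: no prompt asking the model to behave. The model never sees a path to the blocked categories, because the refusal happens in code it cannot edit.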
What I got wrong
Two things, mostly.
First, I wildly underestimated how long it would take to build. Not the tooling, the iteration. Every new capability needed testing, containment, and a hundred small decisions about what it should and shouldn't do. I figured weeks. It was months.
Second, I didn't start with the right question. I started with "what can AI do for me" instead of "what do I actually want this thing to be". So I kept bolting new capabilities onto a foundation that was never designed to hold them. Meeting notes got bolted on. Bookkeeping got bolted on. Prospect intelligence got bolted on. Each one was fine in isolation. Together, a mess.
Eventually the scope of the role became clear. I wasn't building a tool. I was building a position. One AI, several hats, a single brain that remembers everything and triages it all into a morning brief. Once I could see that, I rewrote the whole thing from scratch. The rewrite was faster than the original build because I finally knew what I was building.
"I treated it like a tool build, not a role design. If I had started with what this position is, what it owns, and how it reports into my day, I would have saved months."
The pattern across both mistakes is the same. I treated it like a tool build, not a role design. If I'd started with "what is this position, what does it own, how does it report into my day", I'd have saved so much time (and tokens, which is to say money). The tooling decisions flow from that, not the other way around.
Same advice I give clients when they ask about enterprise AI. Apparently, I had to learn it twice.
Two things clients always ask
Does any of this touch my data?
No.
Your work stays in your environment, with your controls, and the protections we designed together. This is my productivity kit, sitting in my tenancy, running on my infrastructure. Not a back channel into anything. If I advise your business, the client data stays in the client environment. Always.
I am saying this plainly because it is the first question every serious executive asks, and they should keep asking it of everyone with an AI assistant. Most won't answer it as clearly.
Do you actually write your own content?
Yes. Every post. Every article. Every book chapter. 100% me. I love writing, so that really helps. It is time consuming, but, well, I enjoy it.
The AI might give me a skeleton to start with if I can't think of a structure. It's useful for that. But it is never the product. They're my words. I can always tell when someone else is letting AI do the writing. So can you, if you are paying attention. Something in the cadence gives it away: the hedging in places where a human would just make the claim, the examples that could have come from anywhere. It reads like it was optimised, not written. Or it uses so many adjectives that you feel sick after reading it.
The reason I am clear about this isn't vanity. It is trust. If you can't tell whether an AI wrote the thing you are reading, you can't tell whether the person behind it holds the opinion. And if you can't tell that, you can't trust the advice.
"Most enterprise AI programmes end up with eight POCs and no production. Not because the technology failed. Because the governance wasn't designed in from day one."
Growth, not layoffs
The savings are real. Around $600,000 a year, on top of what I can do on my own, for one person running a firm that manages a team of people, client relationships, multiple media engagements, and keynote speaker roles. Scale that to an organisation with thousands of knowledge workers and the number is, well, big.
Not only that, I am still around. I haven't let anyone go. This has enabled Cyber Impact to venture into new things and grow faster. It required me to build it, and it serves me to do my job better.
🎯 Key Finding: Growth, not replacement
I haven't let anyone go. This has enabled Cyber Impact to venture into new things and grow faster. It required me to build it, and it serves me to do my job better.
We must stop equating what AI can do with letting people go. AI can do all the things I've just talked about, and what does that do? It empowers me to put my time into more valuable things. I can't be retrenched and replaced by this system. Quite the opposite. Cyber Impact's revenue, profit, and headcount are growing because of this exact system! It has allowed Cyber Impact to enter new markets, new areas that were previously cost prohibitive.
That is the power of AI. You can't cut costs forever to increase profitability. What are shareholders going to ask you next quarter, after you've sacked most of your staff? And AI breaks. Who is going to fix it? Profits stop growing, and the false economy of cutting costs to make profits look real suddenly gets exposed.
The hard part
Most people think the hard part of AI adoption is the tools. It isn't. The tools exist. Most of them are cheap. The hard part is integration, sovereignty, guardrails, and governance. That is where most efforts stall. That is where most enterprise AI programmes end up with eight POCs and no production.
I see this play out with clients regularly. Board endorses an AI strategy, executive signs off, six months and a few hundred thousand dollars go out the door. Demos look brilliant in a workshop. Then the legal team asks where the data goes, the CISO asks about audit trails, procurement asks about the vendor's Australian presence, and the programme quietly stops shipping. Not because the technology failed. Because the governance wasn't designed in from day one. By the time those questions arrive, retrofitting the answers costs more than starting over.
The real bottleneck isn't the headline savings. It is governance. Data residency, jurisdictional controls, guardrails, audit trails, kill switches, human sign-off on what the AI can and can't do. If your organisation cannot answer those questions for its own data, you are not ready to scale this, no matter how good the demo looks.
The organisations that sort their governance first are the ones that will run large fleets of agents in two years. The ones that don't will still be running POCs. That is the gap that is opening now, and it is opening quietly.
Want AI working like this in your organisation?
Cyber Impact works with boards and executive teams on AI governance, sovereign deployment, and the guardrail architecture that makes Agentic AI safe to actually use. If you want to turn AI from POC theatre into real productivity, we should talk.
