Agentic AI Is Already in Your Business: Here's What That Actually Means
69% of Australian organisations already use AI agents. Only 22% have adequate governance. New joint government guidance published this week explains the risks.
The statistic that should focus Australian business leaders right now comes from Deloitte's 2026 State of AI in the Enterprise report: 69% of Australian organisations are already using autonomous AI agents, systems that don't just answer questions but take actions, make decisions, and execute tasks without waiting for a human to approve each step. Only 22% of those organisations say their governance frameworks can keep pace with that deployment. This week, the Australian Signals Directorate (ASD) joined security agencies from the US, UK, Canada, and New Zealand to publish the first joint government guidance on agentic AI, titled, pointedly, "Careful Adoption of Agentic AI Services."
What an AI agent actually does
Unlike a chatbot, which answers questions and stops there, an AI agent can act. It can book a meeting, send an email, update a spreadsheet, query a database, place an order, or file a report, and it can chain those actions together to complete multi-step tasks. Hand an agent access to your email, calendar, and accounting software and it could, in principle, draft a quote, send it to a client, and log the interaction in your CRM without you touching it.
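To make the chaining concrete, here is a minimal sketch of the loop at the heart of an agent: a plan of tool calls executed one after another, with no human approval between steps. Every name in it (the tools, the client, the addresses) is a hypothetical illustration, not any vendor's API.

```python
# Minimal agent loop: the model proposes actions ("tool calls") and the
# harness executes them in sequence until the task is done.
# All tool names and values below are hypothetical placeholders.

TOOLS = {
    "draft_quote": lambda client: f"Quote for {client}: $1,200",
    "send_email": lambda to, body: print(f"email to {to}: {body}"),
    "log_to_crm": lambda client, note: print(f"CRM[{client}]: {note}"),
}

def run_agent(plan):
    """Execute a multi-step plan; each step names a tool and its arguments."""
    for tool_name, kwargs in plan:
        TOOLS[tool_name](**kwargs)  # note: no human approval between steps

# The quote-a-client task from the paragraph above, as a chained plan.
quote = TOOLS["draft_quote"]("Acme Pty Ltd")
run_agent([
    ("send_email", {"to": "billing@acme.example", "body": quote}),
    ("log_to_crm", {"client": "Acme Pty Ltd", "note": "quote sent"}),
])
```

The loop is the point: once the plan is handed over, every step runs on the agent's judgment alone.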
That capability is genuinely useful. A sole trader who puts an agentic AI tool to work on client correspondence and invoice follow-up could recover several hours every week. A 30-person firm could automate supplier communication, internal reporting, or onboarding paperwork. The productivity case is real.
But the risk profile is different from a chatbot's. A chatbot that hallucinates gives you a wrong answer. An agent that hallucinates, or is manipulated by malicious input, could take a wrong action at scale: send an email you didn't authorise, delete a file, or expose data to the wrong recipient. The margin for error is narrower, and consequences arrive faster.
Why governance is the gap
Deloitte's Australian data reveals a pattern that should concern anyone deploying these tools. Only 12% of Australian organisations say generative AI is genuinely transforming their operations, less than half the 25% global average. Organisations are running pilots, acquiring subscriptions, and deploying tools, but the gap between "we have it running" and "we have it running safely and effectively" remains wide.
Agent governance means being clear about what a system can access, what actions it can take without human approval, what it logs, and who reviews those logs. In most deployments β particularly in smaller businesses β that thinking gets skipped entirely, either because the tool makes it easy to get started without it, or because no-one in the business is assigned to think about it.
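Writing the policy down is the whole point, and it can be written as data that code enforces. The sketch below is one illustrative shape for such a policy; the field names and the deny-by-default rule are assumptions, not taken from any particular agent product.

```python
# A governance policy written down as data, so every action can be
# checked against it. Field names and values are illustrative.

AGENT_POLICY = {
    "data_access": ["calendar", "crm"],           # what the agent may read
    "auto_actions": {"draft_email"},              # allowed without approval
    "approval_required": {"send_email", "update_invoice"},
    "log_destination": "logs/agent-actions.log",  # every action is recorded
    "log_reviewer": "ops@yourfirm.example",       # the named weekly reviewer
}

def is_permitted(action, approved_by_human=False):
    """Gate each proposed action against the written policy."""
    if action in AGENT_POLICY["auto_actions"]:
        return True
    if action in AGENT_POLICY["approval_required"]:
        return approved_by_human
    return False  # anything not written down is denied by default

assert is_permitted("draft_email")
assert not is_permitted("send_email")                      # needs sign-off
assert is_permitted("send_email", approved_by_human=True)
assert not is_permitted("delete_file")                     # never listed
```

The useful property is the last branch: an action no-one thought to write down is refused rather than silently allowed.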
KPMG Australia notes that while generative AI augments humans by providing information and guidance, agentic AI can fully automate what humans do by taking actions on their behalf. That shift in capability is also a shift in accountability: when an AI answers wrongly, you correct it. When an AI acts wrongly, you often clean it up after the fact.
The ASD's guidance is explicit about the consequences. Organisations that give agentic AI broad access to sensitive data and critical systems before establishing controls are accepting risks they often can't see. The guidance identifies five categories of concern (privilege escalation, design and configuration flaws, unpredictable behaviour, structural vulnerabilities, and accountability gaps) and notes that these are not theoretical problems for future deployments. They are present in systems running now.
What the government guidance actually recommends
The joint guidance from the ASD, CISA, the NSA, and their Five Eyes counterparts is pragmatic and readable. The core advice: start with low-risk, non-sensitive use cases; apply the principle of least privilege (give the agent only the access it needs, nothing more); monitor what it does; and plan for the possibility that it will behave unexpectedly.
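Least privilege, translated into code, means the agent's credential names exactly the resources one use case needs and nothing else. A minimal sketch follows, assuming hypothetical scope names; real systems express the same idea through OAuth scopes, IAM roles, or restricted API keys.

```python
# Least privilege as a scoped credential: the agent can touch only the
# resources listed, and nothing else. Scope names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCredential:
    scopes: frozenset  # the complete set of resources this credential grants

    def check(self, resource):
        if resource not in self.scopes:
            raise PermissionError(f"agent has no scope for {resource!r}")

# A low-risk starter use case: read the calendar, draft (never send) email.
cred = AgentCredential(scopes=frozenset({"calendar:read", "email:draft"}))

cred.check("calendar:read")       # within scope, proceeds silently
try:
    cred.check("email:send")      # not granted, so this fails loudly
except PermissionError as err:
    print(err)
```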
The guidance also recommends folding agentic AI into your existing security and governance frameworks rather than treating it as a separate discipline. Zero trust, defence-in-depth, and incident response planning all apply. If those frameworks don't exist yet, this is the prompt to build them before the agent does.
For smaller businesses, the practical translation is: before giving any AI system access to email, financial systems, or customer data, write down what it's allowed to do and what it's not. Review the logs weekly. Don't let the tool expand its own permissions without someone explicitly approving the change. Build in a way to switch it off quickly if something goes wrong.
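Those controls are small enough to sketch: an append-only action log for the weekly review, an approved-action list only a human edits, and a kill switch that halts the agent immediately. The file paths and kill-switch mechanism below are illustrative assumptions, not a feature of any specific tool.

```python
# Three controls: an append-only action log, a human-edited approved-action
# list, and a kill switch. Paths and mechanisms are illustrative.

import json
import time
from pathlib import Path

KILL_SWITCH = Path("agent.disabled")         # create this file to halt the agent
LOG_FILE = Path("logs/agent-actions.jsonl")  # reviewed weekly by a named person

def execute(action, approved_actions):
    if KILL_SWITCH.exists():
        raise RuntimeError("agent disabled by kill switch")
    if action not in approved_actions:
        # The agent cannot add to approved_actions itself; only a human
        # changes that list, outside this code path.
        raise PermissionError(f"{action!r} is not on the approved list")
    LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    with LOG_FILE.open("a") as log:
        log.write(json.dumps({"ts": time.time(), "action": action}) + "\n")
    print(f"executed: {action}")

execute("draft_email", approved_actions={"draft_email"})
# To stop everything at once: KILL_SWITCH.touch()
```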
Before deploying an AI agent: four questions to answer first
1. What data does this agent need access to, and what could it access if misconfigured?
2. Who in your business is responsible for reviewing what the agent does each week?
3. What's your procedure if the agent takes an action you didn't intend?
4. Have you read the full permissions you accepted during setup?

Answering these questions before deployment costs an hour. Not answering them tends to cost considerably more.
Australian organisations are not behind on adopting agentic AI: 69% are already there. The gap is in the controls. The Deloitte data and the government guidance arrived in the same week, pointing to the same conclusion: investment in these tools is running well ahead of investment in understanding them. That imbalance is manageable now, while deployments are still small and correctable. It becomes much harder to address after an agent has acted on bad data, exposed client information, or run up costs no-one approved.