Shadow AI Is Already in Your Business: What to Do About It
49% of employees use unsanctioned AI tools at work. In Australia, 92% of leaders say that introduces risk. Here's what to do before a shortcut becomes a breach.
Around half the staff in a typical Australian business are using AI tools their employer never approved. A 2025 BlackFog survey found that 49% of employees admit to using unsanctioned AI tools at work. In Australia specifically, 72% of leaders say they see AI applications being used by staff in non-IT functions, and 92% believe that unauthorised use introduces risk. Very few have an answer for it.
This is "shadow AI" β generative AI used outside any governance framework, formal approval, or disclosure. The cost of entry is a browser tab, which makes it quieter than shadow IT and expanding faster than most policies can keep up with. IT Brief Australia reports enterprise genAI usage rose 50% in the three months to May 2025, with more than half of all AI adoption inside organisations now sitting in the shadow category.
The data leaking out is not hypothetical
The clearest warning sign is what staff are actually pasting in. One-third of employees have shared research or internal data sets through unsanctioned AI tools. More than a quarter have shared staff information: names, payroll data, performance reviews. Around 23% have shared financial statements or sales figures. Anything that leaves the business this way does not come back.
Samsung's semiconductor division is the canonical example, used in a UNSW BusinessThink analysis of the governance problem. In March 2023, within 20 days of allowing engineers to use ChatGPT, three separate incidents saw employees paste in source code, a test sequence for identifying defective chips, and the transcript of a confidential internal meeting. Because the public tool retained user input to train on, that information effectively moved to OpenAI's servers. Samsung banned the tool, then later reintroduced governed access, conceding that controlled use was safer than outright prohibition.
For a smaller Australian business the risk profile is different but not smaller. A sole trader pasting a client's personal information into a free chatbot to draft a proposal has, under the Privacy Act, likely breached obligations to that client. A bookkeeper running a free transcription tool over a meeting that included wage details may have exposed employee data. These are actual patterns of use, not abstract threats.
Speed is winning against security, especially at the top
Shadow AI is hard to stop because it works. It speeds up drafting, summarising, coding, and research. Staff who use it get more done, and they know it.
Sixty percent of employees in the BlackFog survey said using unsanctioned AI tools was worth the security risk if it helped them meet a deadline. The numbers get worse up the org chart: 66% at director or senior vice-president level, and 69% at president or C-level, said speed outweighs privacy or security. The people most likely to cut corners on AI policy are often the people responsible for writing it.
That makes shadow AI a management problem as much as a technical one. An AI use policy that nobody in leadership follows is a fiction. A ban on tools staff obviously need creates an incentive to hide the usage, not to stop it.
A practical starting point
Before writing a policy, do an inventory. Ask everyone, including directors, what AI tools they have used for work tasks in the last month, with no penalty for honesty. Officially approve two or three tools so staff have sanctioned options with better data handling than the free consumer versions. Then write a one-page acceptable use rule covering what categories of information must never be pasted into an AI tool: client data, staff data, financial records, and anything covered by a confidentiality clause. Revisit the list every quarter.
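For teams that want the rule to be more than a poster on the wall, even a rough automated check helps. The sketch below is a minimal, hypothetical Python example: it scans a block of text against a few illustrative patterns for the banned categories before anyone pastes it into a chatbot. The category names and regexes here are placeholders, not a vetted data-loss-prevention ruleset; a real deployment would need validated patterns (for instance, checksum-verified TFNs) and coverage tuned to the business.

```python
import re

# Illustrative "never paste" patterns matching the acceptable-use
# categories above. These are hypothetical placeholders: format
# checks only, no checksum validation, and far from exhaustive.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "possible TFN (9 digits)": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "possible ABN (11 digits)": re.compile(r"\b\d{2}[ -]?\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "payroll / HR keyword": re.compile(r"\b(salary|payroll|wage|performance review)\b", re.IGNORECASE),
    "confidentiality marker": re.compile(r"\bcommercial[- ]in[- ]confidence\b|\bconfidential\b", re.IGNORECASE),
}

def flag_blocked_content(text: str) -> list[str]:
    """Return the names of any blocked categories found in the text."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Can you tidy up Jane's performance review? Her email is jane@example.com."
    hits = flag_blocked_content(draft)
    if hits:
        print("Hold on - this text matches banned categories:", "; ".join(hits))
    else:
        print("No banned categories detected (which is not proof it is safe).")
```

A check like this is a speed bump, not a control: it will miss plenty and occasionally flag harmless text, but it nudges staff toward the approved path at the exact moment the shortcut is tempting.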
The regulatory vacuum will not save you
Some businesses are waiting for regulation to force a decision. That wait looks like it will be long. Australia's National AI Plan, released in December 2025, dropped the mandatory guardrails for high-risk AI that had been proposed the previous year. The government's position now is that existing laws (the Privacy Act, Australian Consumer Law, anti-discrimination legislation, sector-specific rules) already apply to AI use and will continue to.
That does not mean there are no rules. The Privacy Act applies to any business with annual turnover above $3 million, and regardless of size to health service providers and businesses that trade in personal information. Amendments that come into effect on 10 December 2026 add new transparency obligations around automated decision-making. The Office of the Australian Information Commissioner has already opened its first compliance sweep. If an employee leaks personal data through a shadow AI tool, the consequences sit with the business, not the tool.
IBM's 2025 Cost of a Data Breach Report put numbers on the outcome. Breaches involving shadow AI cost an average of US$4.63 million, roughly US$670,000 more than breaches without shadow AI involvement. Sixty-three percent of breached organisations had no AI governance policy at all.
Shadow AI is already inside your business. The choice is not whether to allow it; that decision was made by whichever staff member opened a browser tab this morning. The choice is whether to see it, shape it, and make the fastest path to getting a job done also the safest one.