Observations
The Vendor Arithmetic and Its Flaws
Microsoft prices Microsoft 365 Copilot at AU$26.91 per user per month (ex GST).1,2 For a 200-licence deployment (the minimum volume Microsoft is pushing hard in enterprise agreement renewals), this translates to AU$64,584 in base licensing, rising to approximately AU$71,688 annually once GST and Universal Support are included. The vendor's headline claim – 1.2 hours saved per week per user – produces seductive spreadsheet logic: 200 staff saving 1.2 hours weekly, at a burdened hourly rate of AU$90, yields roughly AU$994,000 in annual productivity value (over a 46-week working year) against an AU$71,688 licensing cost.
However, this arithmetic contains three embedded fallacies that IBRS encounters repeatedly in business cases.
- Fallacy 1: The licence is the cost. Real deployment costs extend well beyond the subscription fee. Training and change management (AU$25,000–$45,000), technical integration (AU$5,000–$15,000), and ongoing governance (AU$8,000–$12,000) push the first-year investment to AU$109,688–$143,688 – up to double the licence-only figure. These costs do not disappear in year two; ongoing training and governance consume a further 10–15 per cent of the licensing cost annually.
- Fallacy 2: Time saved equals value created. As the first paper in this series detailed, task-level time savings do not automatically translate into organisational productivity. The Verification Tax – the cognitive overhead of checking AI-generated output – consumes a substantial portion of any drafting gain. The Federal Reserve Bank of St. Louis quantified this: individual users report saving 5.4 per cent of their work hours, but aggregate organisational productivity increases by only 1.1 per cent.3 The gap is not a rounding error. It is a structural feature of prompt-based GenAI.
- Fallacy 3: The gains are universal. Research consistently demonstrates that AI productivity gains are heterogeneous. Lower-performing staff see significant improvement (up to 34–43 per cent in controlled experiments), while high-performing experts see marginal or even negative impacts.4 A business case that applies a uniform productivity multiplier across 200 staff is, at best, aspirational.
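Taken together, the three fallacies can be folded into a simple sensitivity check. The sketch below is illustrative only, not IBRS methodology: it uses the figures quoted above plus two labelled assumptions – a 46-week working year, and the St. Louis Fed gap (1.1 per cent aggregate versus 5.4 per cent individual) applied as a crude realisation ratio standing in for the Verification Tax.

```python
# Hedged sensitivity model for a 200-licence Copilot business case.
# Cost and savings figures come from the text above; the 46 working
# weeks and the realisation-ratio proxy are illustrative assumptions.

LICENCES = 200
LICENCE_COST = 71_688            # AU$/year incl. GST and Universal Support
HOURLY_RATE = 90                 # AU$ burdened rate
CLAIMED_HOURS_SAVED = 1.2        # hours/user/week (vendor claim)
WORKING_WEEKS = 46               # assumption, not from the source

# Fallacy 1: the licence is not the cost - add deployment overheads (low, high).
overheads = {"training": (25_000, 45_000),
             "integration": (5_000, 15_000),
             "governance": (8_000, 12_000)}
tco_low = LICENCE_COST + sum(lo for lo, _ in overheads.values())
tco_high = LICENCE_COST + sum(hi for _, hi in overheads.values())

# Fallacy 2: apply the St. Louis Fed gap (5.4% individual vs 1.1% aggregate)
# as a crude realisation ratio for organisation-level value.
realisation = 1.1 / 5.4

gross_value = LICENCES * CLAIMED_HOURS_SAVED * HOURLY_RATE * WORKING_WEEKS
realised_value = gross_value * realisation

print(f"First-year TCO: AU${tco_low:,} - AU${tco_high:,}")
print(f"Gross claimed value: AU${gross_value:,.0f}")
print(f"Realised value (verification-tax adjusted): AU${realised_value:,.0f}")
```

Even on these generous inputs, the realised value (roughly AU$202,000) clears the first-year TCO range only modestly – a far thinner margin than the headline arithmetic suggests, before the heterogeneity of Fallacy 3 is even considered.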
High Failure Rates
These fallacies matter because the stakes are not theoretical. Projects that succeed in introducing AI to the workforce share common characteristics:
- They target specific, well-defined workflows.
- They are deeply integrated into existing processes (automated, semi-automated, and manual).
- They invest heavily in workforce preparation.
By contrast, the majority of Copilot and GenAI proof-of-concept deployments that delivered lacklustre results treated AI adoption as a technology procurement exercise.
The Shadow AI Imperative
There is, however, a compelling and often overlooked reason to formalise a Copilot programme – and it has nothing to do with productivity claims. Research from MIT's Project NANDA found that while only 40 per cent of organisations have official AI subscriptions, employees at over 90 per cent of organisations regularly use personal AI tools for work tasks.5 A separate Harmonic Security analysis of 22.4 million enterprise AI prompts found that 16.9 per cent of sensitive data exposures occurred through personal free-tier accounts where organisations have zero visibility.6
In short, your staff are already using AI. They are using it on personal devices, through personal accounts, to access sensitive data, enterprise documents, and private customer information. And in many cases, the tools they use retain full access to that information. IBM's 2025 Cost of a Data Breach Report found that shadow AI incidents cost US$670,000 more per breach than standard incidents.7
The business case for a managed Microsoft 365 Copilot deployment is therefore not purely about productivity. It is about risk mitigation: bringing AI usage inside a governed, auditable perimeter before an unmanaged breach forces the conversation.
Reframing: From Purchase to Programme
The fundamental error in most Copilot business cases is the framing. When Copilot is positioned as a technology purchase, the ROI discussion defaults to a licence-cost-versus-time-savings calculation. This framing fails because it ignores the organisational preconditions for value creation.
IBRS recommends reframing the investment around three pillars:
- Organisational digital maturity, not tool deployment. The evidence is unambiguous: training determines adoption, not enthusiasm. IBRS's own research across more than 1,700 Australian workers demonstrates that formal, structured training is the primary driver of AI uptake – not pester power from tech-savvy staff, not consumer AI experience, and not government mandates. Staff who receive three or more forms of training report 75 per cent higher confidence in the tool. Councils must invest in contextual, role-based training using actual council documents, not generic vendor demonstrations.
- Sector-specific use case development, not generic deployment. Success requires identifying which specific council functions will benefit from Copilot, rigorously testing those use cases, and measuring outcomes before scaling. Planning approvals, financial reporting, citizen correspondence, contract management, and meeting administration each require distinct prompting approaches, verification protocols, and measurement criteria. Enterprise-wide deployment before sector-specific validation is a leading cause of failures in Microsoft 365 Copilot and GenAI proofs of concept.
- A formal ideation and benefits realisation structure. Saved time has zero financial value unless it is deliberately redirected toward strategic priorities. This requires a Benefits Realisation Register, quarterly measurement cycles, and named accountability for benefit capture at the departmental level. The CFO's question should not be "how much time did Copilot save?" but "what strategic outcomes were achieved with the time that was freed?" This will be explored in more detail in a future paper.
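To make the register concrete, the sketch below models a single entry. The field names, roles, and figures are hypothetical illustrations, not an IBRS template or a council's actual data; the point is that each benefit needs a named owner, a measured baseline, and a stated destination for the freed time.

```python
# Hypothetical sketch of one Benefits Realisation Register entry.
# Fields, roles, and figures are illustrative, not an IBRS template.
from dataclasses import dataclass

@dataclass
class BenefitEntry:
    use_case: str          # e.g. "citizen correspondence drafting"
    owner: str             # named departmental accountability
    baseline_hours: float  # hours/quarter spent before Copilot
    measured_hours: float  # hours/quarter after, from the quarterly review
    redirected_to: str     # the strategic outcome the freed time funds

    def hours_freed(self) -> float:
        # Saved time only counts when measurement confirms it.
        return max(0.0, self.baseline_hours - self.measured_hours)

entry = BenefitEntry(
    use_case="citizen correspondence drafting",
    owner="Manager, Customer Services",
    baseline_hours=420.0,
    measured_hours=350.0,
    redirected_to="backlog of planning-approval correspondence",
)
print(f"{entry.use_case}: {entry.hours_freed():.0f} hours/quarter "
      f"redirected to {entry.redirected_to}")
```

Note that `hours_freed` answers the CFO's second question, not the first: the entry is incomplete without `redirected_to`, because unredirected time has no financial value.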
Running a Copilot Proof of Concept (POC) Is Not Practical
Organisations wishing to run a Microsoft 365 Copilot POC face significant challenges. Adding Microsoft 365 Copilot to an organisation's Microsoft 365 tenancy requires an investment in licensing that cannot easily be revoked or reallocated among users. This licensing model creates lock-in and a bottleneck for flexible experimentation – the kind needed to test not only the technology, but also the best use cases and different workplace personas.
It is therefore important to settle the use cases, metrics, ideation programme, benefits realisation framework, and a fully costed enterprise deployment plan before starting a Microsoft 365 Copilot deployment. The POC then becomes the first wave of a phased rollout: each subsequent phase expands usage (more people, more licences) and passes a gated review of realised benefits before proceeding.
The Counterargument – and Why It Falls Short
The counterargument is familiar: "We cannot afford to wait while competitors and other councils move ahead." This argument commits the Appeal to Novelty fallacy. Don’t fall for it!
Deploying Copilot without organisational readiness does not create a first-mover advantage; it creates a first-mover liability.
Organisations that rush to scale AI before building capability foundations are overwhelmingly likely to see lacklustre Copilot usage and fail to achieve the benefits the technology offers. In turn, this undermines organisational confidence in future AI adoption.
The correct response is not inaction. It is a disciplined, staged investment that builds the organisational capability to extract value from Copilot – or any successor platform. The licence is the entry fee. The maturity is the investment. And this must be the heart of any ROI calculations.
Next Steps
- Reframe the Copilot business case from a licence-cost-versus-time-saved calculation to a capability investment with phased returns, and present the total cost of ownership (TCO), including training, governance, and change management.
- Audit current shadow AI usage across the organisation to quantify the risk exposure that a managed Copilot programme would mitigate.
- Identify five to ten sector-specific use cases (planning, finance, citizen services) and design small-scale pilots with measurable success criteria before committing to enterprise-wide deployment.
- Invest at least 30 per cent of the programme budget in training, change management, and data hygiene, rather than concentrating expenditure on licence procurement.
- Establish a Benefits Realisation Register with named departmental accountability, quarterly measurement cycles, and clear criteria for licence reallocation if adoption targets are not met.
- Define falsification criteria up front: if fewer than 40 per cent of assigned licences show active use of three or more features per week by day 90, the pilot should be restructured or suspended.
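The falsification criterion above is mechanical enough to script against usage telemetry. The sketch below is a hypothetical check: the data shape and field names are assumed for illustration, and this is not a Microsoft 365 reporting API call.

```python
# Illustrative day-90 falsification check for a Copilot pilot.
# `weekly_features_used` maps each assigned licence to the number of
# distinct Copilot features its holder used in the review week.
# The data shape is a hypothetical assumption, not telemetry from
# any Microsoft 365 reporting API.

def pilot_passes(weekly_features_used: dict[str, int],
                 min_active_share: float = 0.40,
                 min_features: int = 3) -> bool:
    """Return True if enough licences show genuine active use."""
    if not weekly_features_used:
        return False
    active = sum(1 for n in weekly_features_used.values()
                 if n >= min_features)
    return active / len(weekly_features_used) >= min_active_share

# Example: 200 licences, only 70 of which used 3+ features this week.
usage = {f"user{i}": (4 if i < 70 else 1) for i in range(200)}
print(pilot_passes(usage))  # 70/200 = 35% < 40% -> restructure or suspend
```

The thresholds are parameters rather than constants so the gated reviews in each rollout phase can tighten them as adoption matures.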
Footnotes
1. ‘Microsoft 365 Copilot Plans and Pricing’, Microsoft, February 2026.
2. For the sake of brevity, where this advisory paper refers to Copilot, it means Copilot for Microsoft 365, which should not be confused with Copilot Pro. Copilot Pro is an offering for personal, family, and small business plans; it cannot integrate with the Microsoft Graph or enterprise data leakage protection, making it unsuitable for working with enterprise email, Teams, and SharePoint. Copilot for Microsoft 365 is the version required for enterprise usage.
3. Federal Reserve Bank of St. Louis, cited in multiple secondary analyses of AI productivity measurement, 2025.
4. ‘Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality’, Harvard Business School, 2023.
5. ‘The GenAI Divide: State of AI in Business 2025’, MIT Project NANDA, MIT Media Lab, 2025.
6. ‘What 22 Million Enterprise AI Prompts Reveal About Shadow AI in 2025’, Harmonic Security, 2025.
7. ‘Cost of a Data Breach Report 2025’, IBM, 2025.


