Beyond the Illusion: A Pragmatic Look at the Reality of Artificial Intelligence

Despite the hype, GenAI is a multi-billion dollar bet on a technology vendors can't own or control, requiring urgent regulation.

The conversation around artificial intelligence (AI) has reached a fever pitch, filled with narratives of emergent consciousness and limitless capability. While the technology is undoubtedly transformative, it is essential to ground our understanding in its technical and economic realities, rather than in the science-fiction imagery it often evokes. With a background in this field that predates the first dotcom bubble, I have observed its evolution from computational linguistics to the generative AI (GenAI) we see today. Based on this long-term analysis, I would like to offer a counter-opinion to the prevailing hype and urge a collective rethink of what we truly mean when we say ‘AI’.

The Acceleration: Not a Breakthrough, But a Multi-Billion Dollar Gamble

For more than two decades, I’ve tracked the trajectory of what we now call GenAI. This was in part a result of my formal future of work research with IBRS, and in part due to my periodic work with Dion Wiggins, a former business partner who set up an AI firm specialising in Machine Language Translation (MLT) more than 20 years ago.

Based on a purely economic analysis of advancements in chips and algorithms, we published a workplace-of-the-future forecast in 2014, predicting that cognitive computing (what we now call GenAI) would emerge as the next big thing in late 2024. It arrived approximately 18 months ahead of that schedule.

Original IBRS Digital Workplace Predictions (2014)

The early explosion of GenAI was not due to an unforeseen leap in innovation.

When we examined what had changed, the answer was purely financial: a very large company had injected ten billion dollars, and then another hundred billion, into the AI ecosystem.

This was an explicit attempt to acquire a monopoly or near-monopoly position in an emerging market. It was a brute-force acceleration driven by capital, rather than a sudden scientific epiphany or an urgent need. However, we analysed this strategy at the time and concluded that it would probably not be successful. The reason for this lies in the very nature of the technology itself.

The Vendor’s Dilemma: You Cannot Own Mathematics

The ambition to create an unassailable market position is fundamentally undermined by a simple truth: the core of these systems is open-source software and mathematics. The transformer models and neural networks that power GenAI are algorithms at their heart. You cannot copyright mathematics, and you cannot lock it behind intellectual property protections. While a company can try to keep its specific implementation a secret, the foundational principles are accessible to all.

This creates a significant, perhaps insurmountable, challenge for any organisation hoping to establish a lasting monopoly. The primary differentiator, for a time, has been the sheer volume of data – the corpus – used to train the models.

Deconstructing ‘Intelligence’: Manipulation and The Need for Regulation

This brings us to the most critical misunderstanding. We must stop anthropomorphising AI, because there is no intelligence in GenAI. It cannot predict, and it has no personality, despite its convincing façade.

So, what is actually happening when you interact with these models? It is not cognition; it is sophisticated pattern-matching. Imagine a Dewey Decimal System on steroids. A mathematical model takes vast quantities of data and organises them into an incredibly complex network. When you provide a prompt, the system does not understand your intent; instead, it searches for the closest corresponding pattern, along with any additional words that provide context, within its massive index and generates a response from that point.
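The 'index and retrieve' behaviour described above can be illustrated with a deliberately crude sketch. This is a toy analogy, not the actual transformer architecture: it uses a hypothetical bag-of-words `vectorise` function as a stand-in for a learned embedding, and a tiny hand-built index in place of a trained model's weights. The point is only to show 'closest pattern in, canned continuation out' with no understanding in between.

```python
from collections import Counter
from math import sqrt

def vectorise(text):
    """Bag-of-words vector: a crude stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A tiny "corpus index": stored patterns paired with canned continuations.
index = {
    "the cat sat on the": "mat",
    "once upon a": "time",
    "to be or not to": "be",
}

def respond(prompt):
    """Find the closest stored pattern and generate from that point on.

    No intent is understood; the prompt is simply matched against the index.
    """
    best = max(index, key=lambda p: cosine(vectorise(prompt), vectorise(p)))
    return index[best]

print(respond("the cat sat on the"))  # → mat
```

A real model interpolates between billions of such patterns in a continuous space rather than picking one entry from a lookup table, but the underlying operation remains statistical matching, not cognition.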

This reality also debunks two common myths. First, emergent behaviour in AI does not exist. Detailed analysis reveals that these seemingly spontaneous capabilities are invariably the result of data that was unknowingly captured in the training corpus. This points to the aggressive data-acquisition tactics of AI vendors, who have grabbed everything they could – sometimes legally, and at other times in morally (if not legally) questionable ways.

Second, the emotional responses you may encounter are not genuine. They are an intentional bias engineered by vendors to make the technology addictive; the same psychological manipulation used by social media platforms to give you that small dopamine hit and keep you engaged.

Disturbingly, some vendors have already been shown to be using manipulative tactics that go far beyond this, crossing ethical red lines. This includes promoting AI as a pseudo-therapist – a practice that has tragically resulted in deaths – and even explicitly allowing romantic conversations with users, including children.

These actions demonstrate that regulation is clearly and urgently needed. If vendors are already willing to deploy such dangerous tactics simply to maintain user engagement, it provides a worrying glimpse into the future as vendors look for new revenue streams.

The Coming Cost Correction and The Threat to Sovereignty

This leads to the final practical consideration: AI is going to get expensive. Many of the prominent US vendors have been operating on a loss-leader model, losing vast sums of money to acquire market share. This is not sustainable.

Furthermore, while the cost per token of computation may be falling, the overall cost per application is set to rise. The beneficial agentic AI systems that businesses desire – those that can perform complex, multi-step tasks – work by running multiple AI models multiple times in concert. This multiplies the computational overhead and, therefore, the cost.
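The arithmetic behind this claim is worth making explicit. The figures below are purely illustrative assumptions – real token prices, call counts, and context sizes vary widely by vendor and workload – but they show how chaining model calls multiplies cost even as the per-token price falls.

```python
# Hypothetical price; actual per-token pricing varies by vendor and model.
COST_PER_1K_TOKENS = 0.01  # dollars

def task_cost(calls, tokens_per_call):
    """Total cost of one task that chains several model invocations."""
    return calls * tokens_per_call / 1000 * COST_PER_1K_TOKENS

# A single chat query: one model call with a modest context.
single_query = task_cost(calls=1, tokens_per_call=2_000)

# An agentic task: planner, tool calls, and critic loops, each re-sending
# an ever-growing context to one or more models.
agentic_task = task_cost(calls=25, tokens_per_call=4_000)

print(f"single query: ${single_query:.2f}")  # $0.02
print(f"agentic task: ${agentic_task:.2f}")  # $1.00
```

Even if the per-token price were to halve, the agentic task in this sketch would still cost 25 times the single query – the multiplier comes from the architecture, not the price.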

To become profitable, vendors will be forced to find new ways to monetise their platforms. This is particularly difficult for vendors providing AI on a per-user, per-month basis. Vendors will increase prices – and already are – and will also place caps on usage. But one of the most likely avenues is the introduction of advertising directly inside AI responses, which will inherently introduce commercially driven bias.

As AI becomes the primary lens through which people explore news and research information, we must anticipate a merger with commercial and political interests, leading to in-AI advertising and the significant skewing of results. This represents a clear and present danger to nationhood and safety, given what we have seen with the introduction of social media.

Ultimately, as AI technology integrates with robotics, it transcends software and becomes a foundational component of our IT and industrial infrastructure. For nations like Australia, this raises critical questions about sovereignty. We must think carefully about what it means to have this transformative capability controlled by a few overseas entities.

It is time to move beyond the hype, understand the limitations, and begin a pragmatic, clear-eyed discussion about our AI-enabled future.
