VENDORiQ: AI Innovation Recap – October 2025

A recap of important vendor announcements this month, such as product launches, M&A activity, and changes in licensing agreements.

Overview

Given the rapid evolution of the AI market, IBRS is providing monthly high-level summaries of the critical trends you need to be aware of. Each month, we track the major AI announcements, cut through the hype, and provide crucial insights for your strategic decision-making.

October saw a split in AI strategies from the major hyperscale AI vendors, the start of what is being called the ‘AI browser wars’, and a return to sanity with smaller, ‘fit-for-purpose’ generative AI models.

The Platform and Agent Divergence: OpenAI vs. Anthropic

The past month’s announcements from OpenAI and Anthropic reveal a fundamental divergence in their strategic approaches to agentic AI. 

OpenAI, with its launch of AgentKit and Apps in ChatGPT, is constructing a consumer-facing platform. This strategy mirrors the ‘app store’ model, providing a visual, integrated environment (AgentKit) that lowers the barrier for developers to build and distribute public-facing apps. The objective is to create a B2C marketplace that positions ChatGPT as the central operating system for a new generation of third-party consumer agents.

Anthropic’s release of Agent Skills, by contrast, signals a dedicated B2B enterprise strategy. This is not a user-friendly, visual platform but a code-first, SDK-based framework. ‘Skills’ are designed to be auditable, version-controlled, and reusable enterprise assets. This approach prioritises control, security, and integration with existing corporate systems. Anthropic is positioning Claude not as a public marketplace but as a robust, governable ‘reasoning engine’ for companies to build their own internal, mission-critical agents, particularly in regulated fields that demand an ‘ask-before-act’ posture.

The two strategies are likely to sit side-by-side in the market, with many organisations leveraging both. However, as AI services become increasingly embedded in workflows (through low-code tools and integration within core business solutions), the Agent Skills approach will likely deliver the deeper long-term business impact.

Specialisation and the Economics of AI Orchestration

Recent model releases indicate a significant market shift away from monolithic, ‘do-everything’ models towards a ‘great unbundling’ of specialised AI. 

This is a trend IBRS predicted shortly after the launch of the major hyperscale LLMs. Based on past experience with smaller, highly specialised, fit-for-purpose models dating back to the mid-2000s, and on open research into performant models, it was clear that smaller models would be the future for business-oriented AI.

The value of Anthropic’s Haiku 4.5 is primarily economic; its combination of high speed and low cost makes it a viable agentic router. It can affordably handle the 90 per cent of simple tasks in a complex workflow, such as routing queries, extracting data, or summarising text, reserving the expensive, high-reasoning models only for the critical 10 per cent of the task. This development changes the unit economics of scalable AI, making complex, multi-agent systems financially feasible.
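The routing economics described above can be sketched in a few lines. This is an illustrative model only: the task mix, per-task prices, and the `route` heuristic are hypothetical, standing in for whatever classifier or planner an organisation would actually use to decide when to escalate to a high-reasoning model.

```python
# Sketch of the agentic 'router' pattern: a cheap, fast model handles
# routine steps; an expensive, high-reasoning model handles the hard
# minority. All costs and task categories below are hypothetical.

CHEAP_COST_PER_TASK = 0.001    # hypothetical $ per routine task
PREMIUM_COST_PER_TASK = 0.03   # hypothetical $ per high-reasoning task

ROUTINE_KINDS = {"route_query", "extract_data", "summarise"}

def route(task_kind: str) -> str:
    """Send routine steps to the cheap model; escalate everything else."""
    return "cheap" if task_kind in ROUTINE_KINDS else "premium"

def workflow_cost(tasks: list[str]) -> float:
    """Total cost of a workflow under the routing policy."""
    return sum(
        CHEAP_COST_PER_TASK if route(t) == "cheap" else PREMIUM_COST_PER_TASK
        for t in tasks
    )

# A ten-step workflow: nine routine steps, one step needing deep reasoning.
tasks = (["route_query"] * 3 + ["extract_data"] * 3
         + ["summarise"] * 3 + ["plan_strategy"])

blended = workflow_cost(tasks)                     # 9 cheap + 1 premium
premium_only = len(tasks) * PREMIUM_COST_PER_TASK  # everything escalated
```

Under these illustrative numbers, the blended workflow costs $0.039 against $0.30 for running every step on the premium model, which is the roughly order-of-magnitude saving that makes multi-agent systems financially feasible at scale.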

Similarly, Cognition’s SWE 1.5 is specialised for a specific psychological threshold in programming. Its extreme speed, reportedly 13 times faster than some rivals, is not just an iterative improvement; it crosses the flow-state barrier for developers. A sub-five-second response time keeps the user in a state of creative iteration, whereas a 20-second wait breaks it. This demonstrates a co-design of the model, inference hardware, and agent harness to optimise for a specific user experience.

Finally, OpenAI’s GPT OSS Safeguard unbundles a different function: policy. Decoupling safety rules from the model weights allows an enterprise to load its own proprietary compliance policy at inference time. This transforms safety from a slow, expensive model-training problem into a fast, flexible configuration problem, removing a significant barrier to enterprise adoption.
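The shape of this decoupling can be illustrated with a minimal sketch. The point is architectural: the organisation's policy travels with the request at inference time rather than being trained into the weights. The prompt format, policy wording, and the stub classifier below are all hypothetical; the stub merely stands in for a call to a safeguard model.

```python
# Sketch of 'policy at inference time': the compliance policy is plain-text
# input supplied with each request, not baked into model weights.
# Prompt format and policy wording are hypothetical; stub_safeguard_model
# is a stand-in for a real safeguard-model call.

def build_safeguard_prompt(policy: str, content: str) -> str:
    """Combine the organisation's own policy with the content to check."""
    return (
        "You are a content-policy classifier.\n"
        f"POLICY:\n{policy}\n\n"
        f"CONTENT:\n{content}\n\n"
        "Answer ALLOW or BLOCK with a one-line reason."
    )

def stub_safeguard_model(prompt: str) -> str:
    """Stand-in for the model: blocks if a term banned by the policy appears."""
    banned = ["account number"]  # derived from the hypothetical policy below
    content = prompt.split("CONTENT:\n", 1)[1]
    return "BLOCK" if any(b in content.lower() for b in banned) else "ALLOW"

policy = "Block any content containing customer account numbers."
verdict = stub_safeguard_model(
    build_safeguard_prompt(policy, "Please share the account numbers for Q3.")
)
```

Because the policy is just an input, updating a compliance rule becomes a configuration change deployed in minutes, rather than a retraining cycle measured in weeks.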

The AI Browser Wars Begin

The AI features introduced in OpenAI’s Atlas, Perplexity’s Comet, and Microsoft’s Edge Copilot signal a paradigm shift from information retrieval to task execution. The browser is evolving from a neutral conduit to an active synthesiser. This creates a new layer of abstraction between the user and the internet, with different browsers specialising in different user intents. Atlas is positioned as a ‘task automator’ for executing multi-step actions. Comet is a ‘research annotator’ for synthesising and citing complex information. Edge is a ‘corporate integrator’ for connecting web context to the Microsoft 365 ecosystem.

The most significant second-order effect of this shift is the disintermediation of the long-tail web. As browsers become adept at providing comprehensive summaries, the user’s incentive to click through to the original source, and view its advertising, is substantially diminished. This poses a fundamental challenge to the economic model that supports a vast portion of the open internet.

IBRS predicts the AI browser wars will fizzle out within the next two years. There is no barrier to competition in the browser market or to AI integration, and most existing browsers can be supplemented with AI services through plug-ins. The real battle will be how organisations maintain privacy and data sovereignty as AI weaves its way into browsers. With the adoption of SaaS core solutions, the browser has become a major point of information leakage, posing serious business risks and legal implications.

AI Video: Competition Shifts to Workflow Integration

In the AI video domain, competition is no longer about raw fidelity but about directorial control and workflow efficiency. Google’s Veo 3.1 (with its ‘first/last frame’ control) and OpenAI’s Sora 2 (with ‘storyboards’) are evolving from shot generators into scene editors. Sora continues to excel at single-shot cinematic physics, while Veo is focusing on granting the user more granular control over narrative composition.

However, the open source LTX2 model demonstrates a more mature understanding of the actual creative process. Its two-model system, ‘fast’ for rapid ideation and ‘pro’ for polished review, is explicitly designed to mirror the iterative loop of a professional studio. By building a tool that aligns with the pipeline (ideate, iterate, align, deliver), LTX2 aims to reduce post-production friction, a more valuable proposition for creators than fidelity alone.

Platform Neutrality as a Defensive Strategy in Image Generation

The integration of Google’s Nano Banana and other third-party models into Adobe Photoshop is a calculated strategic move. Adobe appears to have recognised that it could not win the generative model race against tech giants. Instead, it has pivoted to win the platform war. By making the underlying model a simple drop-down choice, Adobe effectively commoditises its rivals and neutralises their threat. This reinforces the primary value of its platform: the established, sophisticated tools, layers, and user base of Photoshop. It has abstracted away the model competition, making it a non-issue for its subscription-paying professionals.

In parallel, Microsoft’s MAI Image 1 represents a classic dual-sourcing strategy. It provides Microsoft with an in-house, high-end creative asset, hedging against its significant strategic and financial reliance on its partner, OpenAI, and its DALL-E models.

Governance and Digital Identity: A Pragmatic First Step

YouTube’s ‘Likeness Detection’ policy is a notable first-pass governance solution for the deepfake era. The critical insight is its legal framing: the tool funnels complaints through YouTube’s privacy process, not its copyright system. This is a pragmatic choice. It allows YouTube to address the immediate, pressing problem of creator backlash against unauthorised AI likenesses by using a simple, consent-based standard (“I do not consent to this use of my face”). It simultaneously avoids entanglement in the far more complex legal doctrines of ‘fair use’ and ‘parody’, which are central to copyright law. While this creates a new, large-scale moderation challenge, it solves the immediate political problem without setting a binding, complex legal precedent.

However, YouTube still needs to address its current algorithmic copyright kill switch, unapproved use of AI on creators’ content, and a myriad of moderation-related issues. Without serious competition or market pressure to demand change, such efforts will remain off the table.

The Fragmentation of the AI Developer Workflow

Finally, the AI coding market is segmenting into distinct, complementary roles that mirror a human software team. These tools are not necessarily in direct competition; they serve different stages of the development lifecycle. GitHub Copilot acts as the ‘pair programmer’, integrated into the IDE to handle the micro-task of real-time, line-by-line code completion. Anthropic’s Claude Code, now on the web, functions as the ‘code architect’, using its large context window to address the meso-task of understanding, debugging, and refactoring entire codebases. Lastly, agentic systems like Cognition’s SWE 1.5 are ‘task executors’ (akin to a junior developer), built to handle the macro-task of autonomously executing a feature request from a single prompt. The future developer workflow will likely integrate all three.
