VENDORiQ: ServiceNow Unveils AI Experience – Positioning the Platform as the Unified Orchestration Layer for Enterprise AI

ServiceNow has announced the release of AI Experience, a unified interface that will span the entire ServiceNow platform, in a move that seeks to position the platform as the unified orchestration layer for enterprise AI.

The Latest

On Sept. 30, 2025, ServiceNow announced the release of AI Experience, a unified interface designed to be the ‘intelligent entry point’ for enterprise AI. Built on ServiceNow’s generative AI toolset, Now Assist, AI Experience will extend across any workflow. With this release, ServiceNow is offering customers flexibility in model deployment: the choice to use ServiceNow’s native large language models (LLMs) or to plug in third-party models (such as Azure OpenAI, Google Gemini, or Anthropic’s Claude) on a workflow-by-workflow basis, affirming that governed multi-model access is now a strategic necessity for top-tier enterprise platforms. Enhancements to Control Tower in this release focus on providing a central, comprehensive hub for governing and managing a proliferating ecosystem of AI assets, particularly AI agents.
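
To make the idea of workflow-by-workflow model choice concrete, the sketch below shows one hypothetical way such a mapping could be expressed. The workflow names, provider labels, and the resolve_model helper are illustrative assumptions, not ServiceNow APIs.

```python
# Hypothetical sketch: per-workflow model selection, assuming a simple mapping
# of workflow names to either a native model or a third-party model.
# None of these identifiers are ServiceNow APIs; they are illustrative only.

WORKFLOW_MODELS = {
    "itsm_incident_summarisation": {"provider": "native", "model": "now-llm"},
    "csm_case_deflection": {"provider": "azure_openai", "model": "gpt-4o"},
    "hr_policy_qna": {"provider": "anthropic", "model": "claude-sonnet"},
}

DEFAULT_MODEL = {"provider": "native", "model": "now-llm"}


def resolve_model(workflow: str) -> dict:
    """Return the model configured for a workflow, falling back to the native default."""
    return WORKFLOW_MODELS.get(workflow, DEFAULT_MODEL)


if __name__ == "__main__":
    print(resolve_model("csm_case_deflection"))  # third-party model for this workflow
    print(resolve_model("unknown_workflow"))     # falls back to the native model
```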

The release also includes AI voice agents for hands-free support and troubleshooting, AI web agents that complete tasks across third-party apps without integrations (similar to robotic process automation), and an AI data explorer that connects insights from multiple sources for seamless analysis. AI Lens turns any interface into AI-powered actions, while an AI-powered configure, price, quote (CPQ) solution will enhance ServiceNow’s CRM capability.

Why it’s Important

ServiceNow’s AI Experience is a strategic move aimed at addressing a major challenge in the modern enterprise: the fragmentation of work caused by siloed applications, disconnected AI bots, and disjointed workflows. It is an example of where agentic AI will be of the greatest value: sitting within automated processes and deeply embedded in business process workflows. However, unlike traditional ‘workflow and integration’ tools that are rigidly defined, this agent is intended to log and act on customer interactions, leverage agentic reasoning [1] to redirect workflows and gather information on an as-needed basis, and deliver personalised and contextually aware support. Such agents will handle ‘mundane tasks’ such as scanning tickets, auto-generating quotes, and searching for data with minimal staff input, freeing employees to focus on analysing the results, making complex decisions, and helping their customers.
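
As a conceptual illustration of that pattern (not ServiceNow’s implementation), the sketch below shows an agent that logs an interaction, gathers missing context on demand, and redirects the workflow based on the request, with a human retained for review. All structures and names are hypothetical.

```python
# Conceptual sketch of an agent embedded in a workflow: it logs the interaction,
# decides whether more information is needed, and routes the next step accordingly.
# The ticket structure and helper names are hypothetical, not ServiceNow constructs.

from dataclasses import dataclass, field


@dataclass
class Ticket:
    id: str
    description: str
    customer_tier: str | None = None
    log: list[str] = field(default_factory=list)


def handle_ticket(ticket: Ticket) -> str:
    ticket.log.append(f"Agent received ticket {ticket.id}")

    # Agentic step: gather missing context on an as-needed basis.
    if ticket.customer_tier is None:
        ticket.log.append("Missing customer tier; querying customer record")
        ticket.customer_tier = "standard"  # placeholder for a data lookup

    # Agentic step: redirect the workflow based on what the request is about.
    if "quote" in ticket.description.lower():
        ticket.log.append("Drafting quote for human review")
        return "draft_quote_for_review"

    ticket.log.append("Routing to support queue with summary attached")
    return "route_to_support"


if __name__ == "__main__":
    t = Ticket(id="INC0012345", description="Customer requests a quote for 50 licences")
    print(handle_ticket(t))
    print(t.log)
```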

The ServiceNow approach, and similar approaches from other workflow automation and low-code vendors, will ultimately become the dominant, long-term method for deploying ‘customised’ (mission-specific) agentic AI solutions. This runs directly counter to the current excitement around ‘agentic platforms’ and specialised products. The AI revolution will quickly see AI services become embedded within existing ERP and business platforms, and where mission-specific services are needed, existing workflow platforms will be the most effective and easiest-to-govern tools. In short, agentic AI is a capability to be embedded in existing platforms, not a product in its own right.

Evidence of ServiceNow’s understanding of where agentic AI will sit in the future can be seen in two key decisions:

  • AI Experience allows organisations to integrate third-party LLMs alongside native ServiceNow models. With this feature, ServiceNow is embracing a leading industry trend toward model-agnostic, best-of-breed AI orchestration. This functionality allows organisations to select and deploy the best-fit specialised model for the distinct needs of each individual workflow, providing optionality and a clearer path to demonstrable return on investment (ROI). It mirrors the direction taken by other major vendors, such as UiPath, which enables its automation robots to embed and call various external LLMs, and Salesforce, which leverages its Agentforce suite as the orchestration layer to deploy leading third-party models.
  • ServiceNow has enhanced Control Tower to act as the central governing hub for its multi-model environment, much as Salesforce uses the Einstein Trust Layer. Its core function is to ensure that when organisations use a mix of native and third-party LLMs, they maintain a unified security and regulatory posture. Control Tower maps and continuously assesses third-party models, enabling better governance and monitoring of the compliance and security risks that can arise from sending proprietary data outside the core ServiceNow boundary for processing by external LLMs (a minimal sketch of this governance concept follows this list).
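
A minimal sketch of that governance idea, assuming a simple in-memory model registry rather than Control Tower itself: external models are recorded with an assessment status, every call is logged for audit, and a request that would send restricted data to an external model is blocked. All identifiers are hypothetical.

```python
# Minimal sketch of a central governance check for a multi-model environment.
# This is an illustrative stand-in for the concept, not Control Tower functionality.

MODEL_REGISTRY = {
    "now-llm": {"external": False, "assessed": True},
    "azure-openai-gpt-4o": {"external": True, "assessed": True},
    "unvetted-startup-llm": {"external": True, "assessed": False},
}

AUDIT_LOG: list[dict] = []


def authorise_call(model: str, data_classification: str) -> bool:
    """Allow a model call only if the model is registered and, when external, assessed."""
    entry = MODEL_REGISTRY.get(model)
    allowed = bool(entry) and (not entry["external"] or entry["assessed"])

    # Example of a stricter rule: never send restricted data to an external model.
    if allowed and entry["external"] and data_classification == "restricted":
        allowed = False

    AUDIT_LOG.append({"model": model, "classification": data_classification, "allowed": allowed})
    return allowed


if __name__ == "__main__":
    print(authorise_call("azure-openai-gpt-4o", "internal"))    # True
    print(authorise_call("unvetted-startup-llm", "internal"))   # False: not yet assessed
    print(authorise_call("azure-openai-gpt-4o", "restricted"))  # False: restricted data stays inside
    print(AUDIT_LOG)
```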

These two key parts of ServiceNow’s strategy directly address positioning AI as a capability, not a product. First, AI needs to be open and replaceable, much like any other API. Second, AI needs to be governable and auditable, also like an API. Put simply, ServiceNow renders AI into little more than API service calls, which from an architecture perspective is the correct way to think about these services.

Who is Impacted

  • C-Suite
  • CIO and CTO
  • Enterprise Architects
  • IT Service Management (ITSM) and Customer Service Management (CSM) Leads
  • Security and Governance, Risk, and Compliance (GRC) Teams
  • ServiceNow Administrators

Next Steps

  • Evaluate Agentic and Workflow-Enhancing Capabilities: If you are a ServiceNow customer, assess how the new AI agent capabilities and CPQ functionality can address existing enterprise pain points, focusing on quantifiable impact rather than feature adoption alone.
  • Strategically Leverage Multi-Model Optionality: Model agnosticism provides flexibility and future-proofs investments against the inevitable changes in the frontier AI landscape. Develop a framework for model-agnostic, best-of-breed AI orchestration by aligning the specific LLM provider with the unique requirements of each workflow.
  • Enhance Governance: The capability of the larger platforms to integrate third-party LLMs alongside native models requires a deliberate, risk-aware strategy. Critically consider data sensitivity, regional compliance, and data sovereignty risk when selecting models for different geographic or regulatory contexts. The principle should be to select the best-performing model that still adheres to the strictest necessary governance posture (a minimal selection sketch follows this list). It is important to note that tools such as Control Tower and Einstein Trust Layer do not eliminate the need for risk owners to manage assessments and responses via IRM/GRC tools.
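
A minimal sketch of that selection principle, assuming a hypothetical candidate list with residency, certification, and performance attributes; the model names, regions, and scores are placeholders, not recommendations.

```python
# Hypothetical sketch: pick the best-performing candidate model that still satisfies
# the data-sovereignty and compliance constraints of a given workload.
# Providers, regions, and scores are placeholders for illustration only.

CANDIDATE_MODELS = [
    {"name": "frontier-model-a", "regions": ["us", "eu"], "certified": True, "score": 0.92},
    {"name": "frontier-model-b", "regions": ["us"], "certified": False, "score": 0.95},
    {"name": "native-platform-llm", "regions": ["us", "eu", "au"], "certified": True, "score": 0.85},
]


def select_model(required_region: str, requires_certification: bool) -> str:
    """Return the highest-scoring model that meets residency and certification requirements."""
    eligible = [
        m for m in CANDIDATE_MODELS
        if required_region in m["regions"] and (m["certified"] or not requires_certification)
    ]
    if not eligible:
        raise ValueError("No model satisfies the governance constraints")
    return max(eligible, key=lambda m: m["score"])["name"]


if __name__ == "__main__":
    # An Australian workload handling regulated data ends up on the native model here.
    print(select_model(required_region="au", requires_certification=True))
    # A US workload without certification requirements can use the top performer.
    print(select_model(required_region="us", requires_certification=False))
```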
1. Understanding ‘Reasoning’ in Generative AI: A Misaligned Analogy to Human Thought, IBRS, May 2025
