Why it Matters
Risk Remediation for Agentic AI
The increasing deployment of agentic AI within enterprise environments necessitates a robust focus on the technology's inherent security and privacy risks. The decentralised and autonomous (in practice, directed randomness) nature of these systems introduces new vectors for compromise.
Recent industry incidents, though not always directly attributable to agentic AI in its most mature form, illustrate the fundamental challenges. For instance, inadvertent data leakage has occurred when developers or users fed sensitive information into public large language models, demonstrating a lack of controlled data flow. Similarly, prompt injection attacks show how malicious input can manipulate AI behaviour, leading to unauthorised data access or system manipulation, as seen where chatbots were coerced into revealing internal system logic or sensitive training data. These scenarios underscore the imperative for vendors to implement preventative guardrails.
Features such as integrated data classification (e.g., ServiceNow Vault Console) and granular access controls for autonomous entities (e.g., Machine Identity Console) are critical. An AI control tower concept, emphasising enterprise-wide visibility and lifecycle governance, is a sound starting point for monitoring agentic behaviours, ensuring compliance, and providing auditability.
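The shape of such a preventative guardrail can be sketched in a few lines: classify a payload's sensitivity before any agent processes it, then check the agent's machine identity against the data classes it is cleared for. This is a minimal, hypothetical illustration, not ServiceNow's implementation; the rule sets, agent names, and permission map are all assumptions invented for the example.

```python
# Hypothetical guardrail sketch: classify data, then gate the agent call on
# its machine identity. All names and rules here are illustrative only.
import re

# Assumed classification rules mapping sensitivity labels to regex patterns.
SENSITIVITY_RULES = {
    "restricted": [r"\b\d{16}\b", r"password\s*[:=]"],   # card numbers, credentials
    "confidential": [r"\bsalary\b", r"\bmedical\b"],
}

# Assumed machine-identity permission map: which data classes each agent may see.
AGENT_PERMISSIONS = {
    "summariser-agent": {"public", "internal"},
    "hr-agent": {"public", "internal", "confidential"},
}

def classify(text: str) -> str:
    """Return the first matching sensitivity label, defaulting to 'internal'."""
    for label, patterns in SENSITIVITY_RULES.items():
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            return label
    return "internal"

def guardrail(agent_id: str, payload: str) -> bool:
    """Allow the call only if the agent's identity is cleared for the data class."""
    label = classify(payload)
    allowed = label in AGENT_PERMISSIONS.get(agent_id, set())
    # Emitting an audit record supports the control-tower visibility requirement.
    print(f"audit: agent={agent_id} class={label} allowed={allowed}")
    return allowed
```

The key design point is that classification and identity checks run before the model call, so a leak is blocked rather than merely detected after the fact.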
ServiceNow Agentic Orchestration
The concept of orchestrated agentic AI represents a logical progression in generative AI deployment. IBRS has long stated that generative AI will evolve from prompting, to agentic systems, to smaller, specialised agentic services embedded within structured workflow orchestration, and vendors such as ServiceNow and UiPath are moving in that direction.
Instead of singular, isolated AI instances, the orchestrated agentic approach involves multiple specialised AI agents working collaboratively under a central management layer. Each agent performs a specific task, and the orchestrator supervises their interactions, data flows, and adherence to overall objectives. This methodology enables more complex automation scenarios and greater business system resilience. However, embedding agentic AI into automated workflows also amplifies (and to some degree simplifies) the need for integrated security, as the attack surface expands across inter-agent communications and the orchestration layer itself. The emphasis on securing these multi-agent environments, as evidenced by the Zurich release’s security features, reflects an understanding of this emerging architectural trend.
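The orchestration pattern described above can be sketched as a central dispatcher through which every inter-agent call passes, giving one place to apply policy and logging. This is a hedged, minimal sketch of the architectural idea; the class, agent names, and task strings are invented for illustration and do not represent any vendor's API.

```python
# Minimal sketch of orchestrated agentic AI: specialised agents registered
# with a central orchestrator that routes tasks and records an audit trail.
# All names are illustrative, not drawn from ServiceNow's products.
from typing import Callable

class Orchestrator:
    def __init__(self) -> None:
        self.agents: dict[str, Callable[[str], str]] = {}
        self.audit_log: list[str] = []

    def register(self, name: str, agent: Callable[[str], str]) -> None:
        self.agents[name] = agent

    def dispatch(self, name: str, task: str) -> str:
        # Central supervision point: every inter-agent call passes through
        # here, so security checks and logging can be applied uniformly.
        if name not in self.agents:
            raise KeyError(f"no agent registered as {name!r}")
        self.audit_log.append(f"{name} <- {task}")
        result = self.agents[name](task)
        self.audit_log.append(f"{name} -> {result}")
        return result

# Illustrative specialised agents (stand-ins for model-backed services).
def triage_agent(task: str) -> str:
    return "priority:high" if "outage" in task else "priority:low"

def routing_agent(task: str) -> str:
    return f"assigned:{'network-team' if 'high' in task else 'service-desk'}"

orch = Orchestrator()
orch.register("triage", triage_agent)
orch.register("routing", routing_agent)

priority = orch.dispatch("triage", "database outage reported")
assignment = orch.dispatch("routing", priority)
```

Because the orchestrator mediates all agent-to-agent data flow, it is also the natural enforcement point for the integrated security the paragraph above calls for, and the component whose compromise would be most damaging.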
Who’s Impacted?
- CISO (Chief Information Security Officer): Directly responsible for assessing the security implications of agentic AI deployments, ensuring data protection, and establishing governance frameworks for autonomous systems.
- Head of Data/Data Teams: Involved in the classification, protection, and responsible use of enterprise data, especially as it is processed by and exposed to AI agents within workflows.
Next Steps
- Evaluate the vendor’s stated security and governance capabilities for agentic AI within the context of the current organisational risk posture and compliance requirements.
- Investigate the architectural implications of orchestrated agentic AI for your enterprise, considering how a multi-agent system might be designed and secured.
- Review the proposed AI control tower framework to determine its operational efficacy in providing visibility, compliance, and governance across deployed AI agents. Even if you are not a ServiceNow customer, the concept of a central control panel for agentic AI should be considered within the broader scope of an enterprise AI strategy.