VENDORiQ: Are Your AI Agents a Security Achilles’ Heel? Microsoft’s To the Rescue?

Microsoft's new AI security tools, like Entra Agent ID, aim to embed security into AI agent development, tackling risks like prompt injection and data poisoning head-on.

The Latest

At the recent Build event, Microsoft announced security enhancements for artificial intelligence (AI) agent development. Key among these is the introduction of Entra Agent ID, designed to embed identity into AI agents created using Microsoft Copilot Studio and Azure AI Foundry. This system intends to manage agent permissions and govern their access to resources through policy enforcement. In addition, Azure AI Foundry will incorporate new capabilities to secure and govern AI workloads, integrating with Microsoft Purview for data security and compliance, Microsoft Defender for AI threat protection, and Microsoft Entra for identity management.

Why it Matters:

The drive to secure AI agents directly responds to the growing recognition that AI systems, particularly autonomous agents with access to organisational data and systems, present novel attack surfaces. As AI agents become increasingly integrated into business processes, their potential for exploitation by malicious actors increases, making robust security measures critical.

AI systems can become attack vectors in several ways. 

Infrastructural Attacks

  • Privileged access: This should be a key concern. When an AI agent possesses permissions to access sensitive data or execute transactions, its compromise can lead to significant breaches or operational disruptions. 
  • AI infrastructure: Attackers target the AI infrastructure itself, launching attacks to cripple AI-dependent operations or (potentially far worse) extracting proprietary models and related sensitive data and operational intelligence.

Model Attacks

  • Data poisoning: Attacks can corrupt an AI model’s training data, leading it to make incorrect decisions or exhibit biased behaviour that an attacker can predict or trigger.
  • Model evasion: Techniques allow attackers to craft inputs that are misclassified by the AI, potentially bypassing security filters or causing unintended actions.
  • Prompt injection: Is a concern for agents based on large language models. Carefully crafted inputs can trick the AI into ignoring its original instructions and executing malicious commands, potentially exfiltrating data or performing unauthorised actions. 

Cyber criminals have various avenues to monetise such attacks. 

Compromised AI agents could facilitate data theft, with sensitive corporate information, customer data, or intellectual property being sold on dark markets. If an agent has transactional capabilities, it could be manipulated to authorise fraudulent payments. Attackers might also deploy ransomware through a compromised agent, encrypting critical data it has access to and demanding payment for its release. The disruption caused by disabling or corrupting AI systems could itself be leveraged for extortion. 

Compromised AI could be used as a tool for further attacks, such as generating compelling phishing emails, creating deepfakes for sophisticated social engineering campaigns, or serving as a beachhead within a corporate network. 

The announcement of new tools, such as the AI Red Teaming Agent by Microsoft, underscores the industry’s awareness of these threats, aiming to help organisations proactively identify and mitigate such vulnerabilities. The integration of security tools into development platforms, such as Azure AI Foundry, aims to make security an integral part of the AI development lifecycle, rather than an afterthought.

Who’s Impacted?

  • Chief Information Officers (CIOs): Responsible for the overall technology strategy, including the secure adoption and deployment of AI. They need to understand the new risks and ensure appropriate governance.
  • Chief Information Security Officers (CISOs): Directly responsible for mitigating cyber threats. The security of AI agents, with their potential access to sensitive data and systems, falls squarely within their remit.
  • Security Operations (SecOps) Teams: Will need to monitor, detect, and respond to threats targeting AI systems, requiring new tools and understanding of AI-specific attack patterns.
  • Data Governance and Compliance Officers: Must ensure AI agents handle data according to regulatory requirements and internal policies, especially concerning sensitive information.

Next Steps

  • Evaluate current AI security posture: Organisations should assess the security implications of their existing and planned AI agent deployments, particularly those with access to sensitive data or critical systems.
  • Investigate new security tooling: Examine offerings such as Entra Agent ID and enhanced Azure AI Foundry security capabilities to understand how they can be integrated into the existing security architecture for AI.
  • Prioritise non-human identity and access management (IAM) and governance for AI agents: Treat AI agents as distinct identities requiring robust authentication, authorisation, and lifecycle management, similar to human or service accounts.
  • Implement AI-specific threat modelling: Conduct threat modelling exercises that consider attack vectors unique to AI, such as prompt injection, data poisoning, and model evasion. Microsoft’s introduction of tooling like an ‘AI Red Teaming Agent’ suggests a growing focus on adversarial testing.
  • Integrate security into the AI development lifecycle (DevSecOps for AI): Ensure that security considerations are incorporated into every stage of AI agent development, from design and training to deployment and operation.

Trouble viewing this article?

Search

Register for complimentary membership where you will receive:
  • Complimentary research
  • Free vendor analysis
  • Invitations to events and webinars
Delivered to your inbox each week