VENDORiQ: Microsoft’s Copilot Gets a Sandbox – What Does ‘Computer Use’ Mean Now?

Microsoft's new sandboxed Copilot extends its reach from data synthesis to autonomous execution, creating governance and compliance challenges, especially around paywalled data and security.

The Latest

Microsoft has announced a significant extension to its Microsoft 365 Copilot agent: Researcher with Computer Use. The new capability gives Copilot a secure, sandboxed virtual computer running within Windows 365.

This virtual computer provides the agent with two key tools: a full web browser and a command-line terminal. According to Microsoft, this allows the agent to move beyond passive retrieval from the open web and interact with public, interactive, and gated (paid or otherwise protected) content. A key feature is its ability to interactively request credentials from a user for a specific session to access subscription-based resources. The user provides the credentials, grants access, and then hands control back to the agent. This capability is currently limited to organisations in the Microsoft Copilot Frontier Program, Microsoft’s early-access channel.

Why it Matters

This announcement signals a deliberate move by Microsoft to transition Copilot from a knowledge synthesiser into a more autonomous executing agent. The initial ‘Researcher’ was limited to synthesising an organisation’s internal data (via the Microsoft Graph) with information from the open web. This new feature attempts to breach the next barrier: the vast repository of high-value business intelligence locked behind paywalls. For example, IBRS clients would be able to run the new Researcher on IBRS’s massive, proprietary knowledge library and internal documents, going beyond what the IBRS AI can provide on its own.

The use of an isolated sandbox is a significant architectural choice. On one hand, it is a critical security control, designed to prevent malicious web content from accessing the user’s local machine or corporate network. On the other hand, it creates a new, albeit temporary, environment that security teams must govern. The core questions become: what data can enter this sandbox from the corporate environment, and what data (or AI-derived insights) can leave it?

The manual, session-based credential-passing mechanism is a noteworthy detail. It avoids the significant security pitfall of creating a centralised, AI-accessible password vault for third-party sites. However, it still relies on the user to make a sound judgment about what the agent should be allowed to access, and it creates a blind spot for privileged access management (PAM) of user credentials.

The inclusion of a command-line terminal, not just a browser, is the most powerful and potentially concerning aspect. This suggests capabilities far beyond simple web-page rendering: the agent could run scripts for complex data extraction, interact with APIs, or perform tasks at machine speed and scale.

Furthermore, this feature moves enterprise AI into a new legal grey area. Actively directing an AI to access and synthesise third-party licensed content may have implications for a company’s subscription agreements, which often explicitly prohibit automated scraping, redistribution, or the creation of derivative works.

Ultimately, the utility of this powerful tool remains fundamentally constrained by an organisation’s existing information governance. An AI that can cross-reference internal strategy documents with paid external market analysis is only as safe as the access controls on those internal documents.

Who’s Impacted?

  • Chief Information Officers (CIOs), Chief Financial Officers (CFOs), Human Resources Executives and Heads of Digital Workplace: You must evaluate whether the potential productivity gains for knowledge workers justify the significant new governance and compliance overhead. This is a tool for a controlled pilot, not a general rollout.
  • Chief Information Security Officers (CISOs): Your focus must be on data governance, access control, and potential new exfiltration pathways. You must understand the auditing and logging capabilities of this sandbox to ensure it meets corporate policy.
  • Legal and Compliance Officers: You must now assess the terms-of-service (ToS) implications of using an automated agent to access data from paid, third-party intelligence sources. This creates a new category of risk for data use and intellectual property.

Next Steps

  • Audit internal access controls first: Before considering any tool of this nature, ensure your organisation’s internal data permissions are correct. ‘Overpermissioning’ is the primary risk for all generative AI.
  • If you are in the Frontier Program: Designate a specific, low-risk test group. Work with your CISO and legal teams to define clear parameters for use, particularly which gated sites are permitted.
  • Audit third party data subscriptions: Review the terms of service for your key intelligence providers (e.g., market research, financial data) to understand their policies on automated access or ‘scraping’.
  • Demand technical details: Engage with Microsoft to seek detailed documentation on the security architecture of the ‘Computer Use’ sandbox, its data-handling policies, and how it interacts with tenant-wide data loss prevention (DLP) controls.
