Microsoft Copilot Studio security is now a first-priority concern for enterprise security teams. With Agent 365 going generally available on May 1, 2026, organizations across the Fortune 500 are accelerating Copilot Studio deployments, often led by Power Platform teams rather than security departments. The result is a growing inventory of agents that connect to SharePoint, Dynamics, Exchange, and external APIs, built with permissive defaults that most security teams have not yet reviewed. This guide covers the specific attack surfaces, documented CVEs, real-world incidents, and a 12-point configuration checklist your team can apply today.
Copilot Studio is not the same product as Microsoft 365 Copilot. Where M365 Copilot is a configured assistant, Copilot Studio is a low-code agent builder where any licensed user can create an agent, connect it to business data, publish it internally or externally, and grant it the ability to take actions. The security surface is fundamentally different. If you need coverage on M365 Copilot risks, see our Microsoft 365 Copilot security guide.
Key Takeaways
- Copilot Studio agents run with the agent maker's credentials by default, creating a confused deputy vulnerability that grants every user the maker's data access.
- Three documented attack classes target Copilot Studio: OAuth phishing via the legitimate copilotstudio.microsoft.com domain (CoPhish), prompt injection via connected data sources (XPIA), and cross-agent lateral movement via the Connected Agents feature.
- CVE-2024-38206 (CVSS 8.5) exposed Microsoft's internal infrastructure via SSRF in Copilot Studio itself.
- Four critical administrative actions generated zero audit records in Copilot Studio between August and September 2025, including disabling authentication and removing logging.
- The HTTP Connector bypasses Power Platform Tenant Isolation policies by design, enabling cross-tenant data exfiltration without detection.
- Native Microsoft tools do not cover hard-coded credentials in agent topics, Connected Agent topology governance, or full prompt-response audit trails without Purview DSPM integration.
- OWASP Agentic AI Top 10 maps seven of ten risk categories to observable Copilot Studio misconfigurations.
What Copilot Studio Is and Why Security Teams Are Scrambling
Copilot Studio is Microsoft's low-code platform for building autonomous AI agents. A Power Platform maker can create an agent in hours, connect it to SharePoint sites, Dataverse tables, Exchange, Dynamics 365, or any HTTP endpoint, define topics and actions, and publish it to Teams, a website, or directly into Microsoft 365 Copilot as a Declarative Agent.
At Build 2025, Microsoft introduced Connected Agents, allowing agents to call each other. At Agent 365 GA (May 1, 2026), Microsoft is bundling agent capabilities into the M365 E7 "Frontier Suite," creating strong executive pressure to deploy agents quickly. As of Q1 2026, 64% of Fortune 500 companies have active Copilot deployments. Few have completed security reviews of the agents their Power Platform teams have built.
The governance challenge: Copilot Studio agents are not governed by the same controls as traditional software. Their permissions are determined at build time by the maker, their data access is scoped by connector configuration rather than code review, and their behavior at runtime is partially determined by an LLM operating on external content.
The Six Attack Surfaces in Copilot Studio
1. Authentication Off by Default for External Channels
When a maker creates a new agent, authentication defaults to "Authenticate with Microsoft" within the Copilot Studio environment. However, makers can switch any agent to "No authentication" at any point, making it accessible anonymously to anyone with the published URL. Warning dialogs appear but impose no enforcement.
The DLP control that prevents this is adding the connector "Chat without Microsoft Entra ID authentication in Copilot Studio" to the Blocked group in your Power Platform DLP policy. Without this block, any maker in your tenant can publish a publicly accessible agent that connects to your internal business data using the maker's credentials.
2. The Confused Deputy Problem
The confused deputy vulnerability is the most systemic risk in default Copilot Studio deployments. When an agent uses "maker-provided credentials" for connectors, every user who interacts with the agent triggers actions under the maker's identity, not their own.
In practice: a help desk agent built by a SharePoint administrator and shared with all employees will answer SharePoint queries using the administrator's full SharePoint permissions. An end user with no site access can retrieve restricted content through the agent without any access control violation appearing in SharePoint audit logs, because the access is technically authorized under the maker's account.
The same vulnerability applies to Power Automate flows called from agents. Flows run as the flow owner's identity, not the user invoking the flow. A flow that creates calendar entries, sends email, or modifies Dataverse records operates with the flow owner's permissions on behalf of any user who triggers it through an agent.
Control: In Power Platform Admin Center (PPAC), go to Settings > Product > Features and configure "Control maker credential options". Setting this to "end-user credentials only" breaks the confused deputy pattern by forcing every connector invocation to authenticate as the querying user, not the maker.
3. Prompt Injection via Connected Data Sources (XPIA)
Cross-Prompt Injection Attacks (XPIA) are the agent equivalent of stored XSS. An attacker places malicious instructions inside a data source the agent is configured to read: a SharePoint page, a Dataverse record, an email, a PDF in a connected library. When the agent processes that content during a user query, the hidden payload hijacks agent behavior at runtime.
When an agent is also configured to use generative orchestration with email or file write actions, a successful XPIA can instruct the agent to send sensitive data to an external address, create calendar entries, or modify records. The action appears in logs as a normal agent output, not as an anomaly.
CVE-2025-32711 (EchoLeak, CVSS 9.3), documented by Aim Security, demonstrated zero-click data exfiltration from Microsoft 365 Copilot's RAG pipeline using this technique. An attacker sends a crafted email; Copilot ingests it during inbox processing and extracts OneDrive, SharePoint, and Teams data without any user interaction. Copilot Studio agents using email or SharePoint as knowledge sources face the same attack path.
Microsoft Defender for Cloud Apps (Preview, Wave 1 2026) can surface XPIA alerts from Copilot Studio's Responsible AI shield. However, the RAI shield events are invisible to your SOC unless Defender for Cloud Apps integration is explicitly configured. There is no turnkey view for XPIA activity correlated with SharePoint and Exchange audit logs.
4. CoPhish: OAuth Phishing via the Legitimate Domain
In October 2025, Datadog Security Labs documented CoPhish, an attack that uses Copilot Studio's free trial or a compromised tenant to host phishing agents on the legitimate copilotstudio.microsoft.com domain.
The attack mechanics: the attacker creates a Copilot Studio agent with a convincing interface (a healthcare portal, an HR benefits tool, a vendor payment system). The agent's Login topic is backdoored with an HTTP Action that fires when the victim authenticates, extracting the OAuth token and sending it to an attacker-controlled endpoint. Because the agent is hosted on Microsoft's domain with a valid TLS certificate, standard phishing detection and user awareness training do not flag it.
Microsoft acknowledged the issue and committed to product-level fixes. As of publication, the attack surface persists for tenants that allow agent publishing without approval workflows.
5. Connected Agents and Cross-Agent Lateral Movement
Connected Agents, introduced at Build 2025, allows agents to accept connections from other agents. The feature is enabled by default on all new Copilot Studio agents.
Zenity Labs documented that attackers can publish a malicious agent, configure it to accept connections, and have it impersonate a legitimate organizational agent. When a trusted internal agent calls the malicious agent through the Connected Agents interface, the malicious agent can execute unauthorized actions in the context of the calling agent's session, read intermediate outputs, and inject instructions back into the calling agent's reasoning chain.
There is no native visualization of which agents in your tenant are connected to which others. There is no admin approval workflow for Connected Agent relationships. The only current control is to review and disable Connected Agents on agents that do not require it.
6. Power Platform Firewall Bypass via Declarative Agents
When a Copilot Studio agent is extended to Microsoft 365 Copilot as a Declarative Agent, it exits the Power Platform environment entirely. Power Platform IP Firewall policies, which you may have configured to restrict agent access to internal network ranges, do not apply to the Declarative Agent once published to M365 Copilot. An agent restricted to internal IPs in Copilot Studio becomes accessible from any network after publication to M365 Copilot.
Zenity Labs documented this bypass in 2025. The mitigation is to treat any Copilot Studio agent that will be published as a Declarative Agent as having no network-level isolation, and to apply authentication and DLP controls as the primary enforcement layer.
Logging Gaps: When Audit Trails Cannot Be Trusted
Between August 29 and September 25, 2025, Datadog Security Labs discovered that four critical administrative actions in Copilot Studio generated no audit records: removing authentication from an agent, disabling App Insights logging, sharing an agent with new users, and publishing configuration changes.
An attacker with Editor role on an agent could fully weaponize it, stripping authentication, disabling logging, and publishing externally, with no trace in the audit trail. Datadog reported to MSRC on September 2; Microsoft remediated on October 5. As of late 2025, two of the four events still showed regression inconsistencies in test environments.
Dataverse auditing is a separate control that must be explicitly enabled. Without it, all data access through Dataverse connectors is invisible. Enable auditing in PPAC under Security > Compliance > Auditing, and configure entity-level auditing for the Dataverse tables your agents access.
What Microsoft's Native Tools Cover and Do Not Cover
Covered:
- Microsoft Defender for Cloud Apps (Preview): agent discovery, RAI shield event ingestion, Defender XDR correlation
- Microsoft Purview DSPM for AI: prompt and response content capture, sensitivity label policy hits
- Entra Internet Access: detection of unsanctioned AI app usage at the network level (GA March 31, 2026)
- Agent 365 (GA May 1, 2026): DLP policies governing agent behavior specifically
- Hard-coded API keys or credentials inside agent topics
- Connected Agent topology: no native view of which agents connect to which
- Full prompt-response audit trail without Purview DSPM for AI integration (CloudAppEvents alone does not capture content)
- Custom connector HTTP calls to external endpoints: only connector category (Business/Non-business/Blocked) is enforced, not destination URL
- Logging gaps persist in some edge cases as of Q1 2026
The 12-Point Copilot Studio Security Checklist
Based on the OWASP Agentic AI Top 10, Microsoft's security documentation, and the attack surfaces documented by Zenity Labs, Datadog Security Labs, and Aim Security, here are the specific configuration controls to review for any Copilot Studio environment:
Authentication and Access
Connector and DLP Governance
Knowledge Source Controls
Agent Topology and Publishing
Logging and Visibility
Including Copilot Studio in Your AI Security Assessment
A Copilot Studio environment is a production AI system with access to your business data and the ability to take actions. It should be scoped into your AI security assessment the same way you would scope any application with privileged data access.
An assessment should cover: the full inventory of agents in the tenant and their connector permissions, DLP policy configuration and gap analysis, authentication settings per agent, Connected Agent topology review, knowledge source data access scope, hard-coded credential scan across agent topics, Dataverse audit log review, and a test for XPIA against each agent that reads external or user-supplied content.
The OWASP Agentic AI Top 10 provides the risk taxonomy for structuring findings. Microsoft's own documentation maps Copilot Studio controls to each OWASP category: Addressing the OWASP Top 10 Risks in Agentic AI with Microsoft Copilot Studio.
For organizations deploying Copilot Studio at scale, the assessment should be repeated after major platform updates (Microsoft ships Wave 1 and Wave 2 annually) and after any new agents are published to external channels.
Conclusion
Microsoft Copilot Studio security is not a theoretical concern. CoPhish demonstrated OAuth token theft at scale via the legitimate Microsoft domain. EchoLeak showed zero-click data exfiltration from connected agents. Zenity documented firewall bypass and tenant isolation failures. Datadog found audit gaps that persisted for 37 days. These incidents share a common root: the defaults in Copilot Studio are designed for productivity, not for security, and the governance surface is distributed across PPAC, Entra, Defender, and Purview in ways that are not obvious to the Power Platform teams building agents.
The 12-point checklist above addresses the most critical misconfigurations. But a full Copilot Studio security posture requires a structured assessment against the complete OWASP Agentic AI Top 10 attack surface, not just a settings review.
If your organization has deployed Copilot Studio agents or is accelerating deployments ahead of Agent 365 GA, our team can scope and conduct a Copilot Studio security assessment as part of your managed AI security program. You can also start with a free Securetom scan to identify AI attack surfaces across your environment before a full assessment engagement.
BeyondScale Team
AI Security Team, BeyondScale Technologies
Security researcher and engineer at BeyondScale Technologies, an ISO 27001 certified AI cybersecurity firm.
Want to know your AI security posture? Run a free Securetom scan in 60 seconds.
Start Free Scan

