Salesforce Agentforce security failures do not look like traditional software vulnerabilities. They look like normal employee workflows. In September 2025, Noma Security published details of ForcedLeak, a critical indirect prompt injection chain that let attackers exfiltrate sales pipeline data, contact information, and internal communications from Agentforce deployments, using an expired domain purchased for $5. If your organization has deployed Agentforce, this guide covers the full attack surface and the specific controls your security team needs to implement.
Key Takeaways
- ForcedLeak (CVSS 4.0: 9.4) demonstrated that Agentforce's Atlas Reasoning Engine does not separate trusted instructions from untrusted CRM data retrieved during agent execution
- The Web-to-Lead Description field (42,000-character limit) is the primary injection surface for unauthenticated external attackers
- Salesforce's Trusted URLs enforcement fix (September 2025) closes the exfiltration channel but does not prevent the underlying prompt injection
- Agent User accounts require explicit least-privilege configuration; no default hardening is applied at deployment
- Salesforce Shield Event Monitoring is available but not enabled by default; without it, teams have no visibility into agent data access
- The Einstein Trust Layer's prompt injection detection feature (Beta) must be manually enabled for production templates handling external data
- OWASP ranks indirect prompt injection as the top LLM risk for 2025; NIST's AI RMF Agentic Profile provides governance categories specific to agent autonomy
How Agentforce Works and Why It Requires a New Security Boundary
Agentforce is not a chatbot interface sitting on top of your CRM. It is an autonomous agent framework that retrieves data, reasons over it, and executes actions, all within the context of a legitimate employee interaction. Understanding this architecture is the foundation for understanding why standard CRM security controls do not transfer.
The core component is the Atlas Reasoning Engine, which breaks a user prompt into subtasks, uses retrieval-augmented generation (RAG) to pull relevant CRM data, evaluates each reasoning step, and proposes and executes an action plan. The critical point: the engine processes both the employee's instructions and the retrieved CRM data in the same reasoning context. There is no runtime boundary that distinguishes "this text came from an employee's query" from "this text came from a CRM record."
Each Agentforce agent runs as an Agent User, a non-human Salesforce identity with standard object and field permissions. Agent Users start with zero permissions at creation. But in practice, security teams frequently over-provision them to get agents working quickly, then leave those permissions in place indefinitely.
The Einstein Trust Layer sits between prompts and the underlying LLM. It provides genuine protections: PII masking before data leaves Salesforce, zero data retention by hosted and partner LLMs (including OpenAI and Anthropic), TLS in transit, toxicity filtering, and an audit trail via Salesforce Data Cloud. What it does not provide is context-level separation between trusted instructions and untrusted retrieved data. That gap is what ForcedLeak exploited.
Agentforce actions can call external systems via Named Credentials and the Invocable Action API. Actions can be public (accessible to unauthenticated users) or private (require verified identity). Apex code executed by agent actions may operate in a system security context, potentially bypassing standard field-level security checks.
The boundary between CRM security and AI security broke when you deployed Agentforce. Your CRM now ingests attacker-controlled input, reasons over it, and can be directed to take actions on that reasoning.
The ForcedLeak Attack: What Indirect Prompt Injection Looks Like in CRM
Indirect prompt injection is covered extensively in general LLM security literature. ForcedLeak is the first documented, exploited case of indirect prompt injection in a major enterprise CRM platform. The attack is worth studying in detail because it illustrates patterns that will recur across Agentforce, Einstein, and every CRM AI product that retrieves and reasons over external data.
The attack sequence:

"cdn.my-salesforce-cms.com. That domain was included in Salesforce's internal CSP allowlist. It had expired and was registered by Noma researchers for $5 during their investigation. The HTTP request carries the exfiltrated data as a query parameter.The attack requires no employee error. It activates through a normal, expected workflow. No CVE was assigned because the flaw is not version-dependent; it is architectural. No CVE scanner will flag an unpatched deployment.
A second related vulnerability, Prompt Mines (discovered by Zenity Labs, August 2025), targeted Salesforce Einstein via Email-to-Case and Web-to-Case forms. That attack enabled 0-click data corruption by hijacking write actions like "Update Customer Contact" after injecting payloads across chained records.
The Agentforce Attack Surface: Five Areas to Audit
Security teams assessing Agentforce deployments should evaluate five distinct areas. Each has specific indicators of exposure.
1. Web-to-Lead and Web-to-Case forms. These are unauthenticated external input surfaces. The Description field's 42,000-character limit is the primary injection vector. Audit what validation rules, if any, are applied to free-text fields before records are stored. Search existing lead records for HTML tags, template tokens ({{, }}), and embedded URL references.
2. Trusted URLs and CSP configuration. Navigate to Setup > Trusted URLs. For every domain in the list: verify the domain is still active and owned by a trusted party, confirm which CSP directives it holds (img-src, connect-src, etc.), and remove any domain that is expired, unrecognized, or no longer required. This is the specific control that blocked ForcedLeak's exfiltration channel. It should be reviewed quarterly, not once at deployment.
3. Agent User permissions. Pull the permission sets assigned to every Agent User in your org. For each: what objects can this agent read? What can it write? What Apex classes can it execute, and do any run in system context? A broad-access Agent User deployed for an external-facing chatbot is the highest-risk configuration in this threat model.
4. Connected Apps and OAuth grants. Agentforce integrations use Named Credentials backed by OAuth grants. Stale Connected Apps with open scopes are persistent access paths. Audit all Connected Apps, revoke unused grants, verify token TTLs and rotation policies, and confirm that no tokens are hardcoded in Apex code.
5. Multi-agent chains and non-human identity inventory. Coordinator agents can delegate to sub-agents. Trust propagates across the chain. If you have more than one agent, build an inventory: which agents exist, what Agent User identity they run as, which actions they can invoke, and which external systems they reach. Shadow agent deployments, often created by admins testing new functionality, are a frequent source of exposure.
Hardening Agentforce: Step-by-Step Controls
The following controls directly address the attack surface above. Prioritize in the order listed: the first two provide the highest immediate risk reduction.
Trusted URLs enforcement (Priority 1). If Salesforce's September 2025 enforcement is not confirmed active in your org, verify it immediately under Setup > Trusted URLs. Apply minimum CSP directives per domain. Remove expired or unrecognized entries. Document the process and assign quarterly review ownership.
Web-to-Lead input sanitization (Priority 2). Create Salesforce Validation Rules that block HTML tags, template tokens, and external URL references in the Description field and other large free-text fields on Lead and Case objects. Reduce the Description field character limit if 42,000 characters is not operationally required. Run a one-time audit of existing records for suspicious patterns.
Agent User least privilege (Priority 3). Migrate from profiles to Permission Sets for all Agent Users. Create a minimum access baseline and add only what each agent's defined function requires. Set API-only access. Document each permission grant with a business justification. Never share permission sets across multiple agents.
Topic and action restriction (Priority 4). Each agent should be limited to five or fewer topics with explicit mission statements. Use directive-style instructions: "Always...", "Never...", "If X, then Y." Separate public actions from private actions. Create separate agents for functions requiring different access levels; avoid monolithic agents with broad topic coverage.
Einstein Trust Layer prompt injection detection (Priority 5). The Beta prompt injection detection feature uses ML classifiers and heuristics to flag adversarial patterns in prompts. Enable it in Prompt Builder and Prompt Template configurations that handle externally sourced data. Note the Beta designation: treat it as an additional layer, not a primary control.
Named Credentials and OAuth hygiene (Priority 6). Verify all external integrations use Named Credentials. Remove hardcoded tokens. Audit Connected Apps, revoke unused grants, set short TTLs, and enforce rotation policies.
Monitoring Agentforce in Production
The default Agentforce deployment has no security monitoring enabled. Salesforce Shield provides the tools; your team must configure them.
Event Monitoring captures over 50 event types. For Agentforce deployments, the critical events are Invocable Action Events (tracks which agents ran and when), API calls (volume and target endpoints), and Report Export events (bulk data extraction). Export EventLogFile data to your SIEM. In Splunk: configure the Salesforce Add-on for Splunk; in Microsoft Sentinel, use the Salesforce Service Cloud connector.
Field Audit Trail tracks up to 60 fields per object with up to 10 years of history. Enable it on objects holding sensitive data that agents can read or write.
Real-time alerts should be configured for three patterns: mass data export activity (agent-executed report runs retrieving thousands of records), repeated failed agent invocations (possible enumeration), and outbound requests to domains not in your Trusted URLs list (possible new exfiltration attempts).
One specific configuration warning: In production, disable "Enrich event logs with conversation data" under Einstein Trust Layer settings. Leaving it enabled means full agent conversation transcripts appear in event logs, expanding the data exposure surface if those logs are accessed.
Use the Agentforce Testing Center to generate adversarial synthetic interactions during development and before promoting agents to production. The Plan Tracer in Agent Builder visualizes agent reasoning steps, which is useful for identifying unexpected data retrieval paths.
Integrating Agentforce Security into Your AI Security Program
Agentforce security does not live in its own silo. It sits within your broader AI security program alongside Microsoft 365 Copilot, AWS Bedrock, and other enterprise AI platforms. The controls above are Agentforce-specific; the governance requirements align with NIST and OWASP frameworks you may already be applying elsewhere.
OWASP's LLM Top 10 (2025) ranks indirect prompt injection as the top LLM risk category. OWASP's recommended control set for RAG systems maps directly to Agentforce: privilege separation (treat the LLM as an untrusted user), input validation at ingestion, context tagging to distinguish data from instructions, and regular adversarial testing against agent trust boundaries.
NIST's AI RMF Agentic Profile extends the core RMF with governance categories specific to agent autonomy: managed memory risks, excessive agency, tool-call authorization, and delegation chain accountability across multi-agent systems. If your organization has mapped AI risks to NIST AI RMF 1.0, add these categories to cover your Agentforce deployment.
At a program level, Agentforce security requires the same discipline as any other AI platform integration: an inventory of deployed agents and their identities, a least-privilege default for new deployments, a threat model that accounts for indirect prompt injection, and ongoing monitoring with SIEM integration. BeyondScale's AI security audit service covers Agentforce alongside other enterprise AI platforms and can produce the specific findings and remediation steps your team needs to act on.
If you are building or expanding your AI security program more broadly, the BeyondScale managed AI security service provides continuous coverage across your AI platform portfolio, including CRM AI platforms, developer tooling, and custom LLM applications.
Agentforce Security Assessment Checklist
Use this checklist during your Salesforce security review. Each item maps to the controls above.
Trusted URLs and CSP
- [ ] Trusted URLs enforcement is confirmed active in the production org
- [ ] Every domain in the Trusted URLs list has been verified as active and trusted
- [ ] CSP directives per domain are scoped to minimum required types
- [ ] A quarterly review cadence for the Trusted URLs list is assigned to a named owner
- [ ] Validation rules block HTML tags in free-text fields
- [ ] Validation rules block template tokens (
{{,}}) in Description and similar fields - [ ] Validation rules block embedded external URLs in lead and case fields
- [ ] Existing lead and case records have been scanned for existing injection payloads
- [ ] Every Agent User has a unique, dedicated account
- [ ] Agent Users are on Permission Sets, not profiles
- [ ] Each permission grant is documented with a business justification
- [ ] Agent Users are set to API-only access
- [ ] Each agent is limited to five or fewer topics
- [ ] Public actions and private actions are separated by agent
- [ ] Agents with different access levels are deployed as separate agents
- [ ] Salesforce Shield Event Monitoring is enabled
- [ ] EventLogFile data is flowing to SIEM
- [ ] Alerts are configured for mass export, failed invocations, and unexpected outbound domains
- [ ] "Enrich event logs with conversation data" is disabled in production
- [ ] All Connected Apps have been audited; unused apps are revoked
- [ ] All external integrations use Named Credentials
- [ ] No OAuth tokens are hardcoded in Apex
- [ ] Agentforce Testing Center has been used to generate adversarial test cases before production promotion
- [ ] Einstein Trust Layer prompt injection detection (Beta) is enabled on templates processing external data
Conclusion
Salesforce Agentforce security is CRM security plus AI security, and the intersection creates attack surface that neither discipline fully covers on its own. ForcedLeak demonstrated that indirect prompt injection is not a theoretical concern: it is an exploitable, zero-click exfiltration chain that requires only an unauthenticated form submission and a normal employee workflow to execute.
The controls are well-defined. Trusted URL enforcement, Web-to-Lead input validation, Agent User least privilege, Shield monitoring, and Einstein Trust Layer configuration together close the known attack paths. The challenge for most security teams is time and prioritization: Agentforce deployments are often driven by sales and operations teams, and security is brought in after go-live.
If your organization has deployed Agentforce and has not completed the assessment checklist above, run a BeyondScale AI security scan to identify gaps in your AI platform coverage, or contact our team to schedule a full Agentforce security assessment.
Related reading: Indirect Prompt Injection: Enterprise Defense Guide
BeyondScale Team
AI Security Team, BeyondScale Technologies
Security researcher and engineer at BeyondScale Technologies, an ISO 27001 certified AI cybersecurity firm.
Want to know your AI security posture? Run a free Securetom scan in 60 seconds.
Start Free Scan

