NHIs — non-human identities — now outnumber human users 82-to-1 in enterprises that have deployed AI agents, according to Rubrik Zero Labs research published in November 2025. That ratio will only grow as organizations expand their agentic deployments through 2026.
Every AI agent in your environment authenticates to at least one external system. Often more. That authentication is an NHI: an API key, OAuth token, service account credential, or session token that identifies the agent as a trusted entity. Each NHI is an attack surface. And unlike a human identity, it cannot be challenged with MFA, cannot recognize a phishing attempt, and does not clock out at the end of the day.
This guide covers the NHI security problem from a red-team perspective: how attackers exploit AI agent credentials, what the four root-cause failure modes look like in practice, and what a rigorous NHI security audit produces for an agentic deployment.
Key Takeaways
- AI agents create NHIs at machine speed, outpacing any manual governance process by design
- 97% of NHIs in enterprise environments carry excessive privileges — a statistic that gets worse, not better, as agentic deployments scale
- Four failure modes account for most NHI compromises: over-permissioned scopes, long-lived secrets, zombie credentials, and missing kill switches
- A compromised agent API key can yield full environment access in seconds, not hours
- NHI governance requires dedicated tooling and lifecycle processes that most IAM stacks were not designed to provide
What Are Non-Human Identities and Why Agentic AI Explodes the Problem
A non-human identity is any credential that authenticates a machine or automated process rather than a person. The category includes service accounts, API keys, OAuth tokens, certificates, and session tokens — anything a system uses to prove its identity to another system.
NHIs are not new. Enterprises have managed service accounts for decades. What is new is the rate of NHI creation that agentic AI introduces.
A single AI agent deployed to automate a business workflow might create or consume credentials across:
- The LLM API it uses for reasoning (OpenAI, Anthropic, or a hosted model endpoint)
- The vector database it queries for context
- The CRM or ERP system it reads for customer data
- The email or Slack integration it uses to send outputs
- The code repository it commits to if it has development capabilities
- The cloud storage bucket it uses to persist artifacts
- Other agents it delegates subtasks to in a multi-agent architecture
The governance gap is severe. A 2025 survey found that 89% of enterprises have incorporated AI agents into their identity infrastructure, but identity and security teams are only beginning to extend their lifecycle management processes to cover agent credentials. The result is a large and growing inventory of ungoverned NHIs sitting in production environments.
How AI Agents Create, Consume, and Leak Credentials at Machine Speed
The NHI lifecycle for an AI agent looks nothing like the human identity lifecycle that IAM processes were designed to handle.
Creation is informal. A developer building an agent prototype creates an API key through a service's web console, stores it in an environment variable or .env file, and deploys the agent. The key is never logged in the organization's secrets management system. No expiration is set. No owner is recorded beyond the developer's mental note.
Scope is inherited, not granted. When an agent is built on a general-purpose framework like LangChain, CrewAI, or LangGraph, it often inherits the credentials of the service or developer account that created it. That service account may have broad permissions accumulated over time — not because the agent needs them, but because nobody scoped the credentials down before handing them to the agent.
Rotation is skipped. A 2025 analysis found that 71% of NHIs are not rotated within recommended timeframes. For AI agents, the rotation problem is worse: agents that are always-on cannot easily be rotated without downtime, so teams defer rotation indefinitely.
Leakage is a constant risk. Agents that have code-generation or repository-access capabilities can accidentally commit credentials to version control. Agents with logging enabled may write credentials to log files that are less protected than the secrets store. Agents running in shared compute environments may be vulnerable to cross-agent credential inspection. Research consistently finds that 62% of secrets are duplicated across multiple storage locations — each duplicate is an additional exposure point.
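The repository-leakage path above is what automated secret scanners hunt for. As a minimal sketch of the idea — real scanners such as gitleaks or trufflehog ship hundreds of provider-specific rules plus entropy checks, and the patterns below are illustrative only:

```python
import re

# Illustrative detection rules only; production scanners use far richer rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_style_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "generic_assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_snippet) pairs found in a blob of text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Running a scan like this against every commit — and against agent log output, which is often forgotten — catches the duplicated-secret problem before an attacker does.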
The Four NHI Failure Modes in Agentic Deployments
From red-team assessments of agentic AI deployments, four failure modes account for the majority of exploitable NHI vulnerabilities.
1. Over-Permissioned Scopes
What it looks like: An agent's API key grants read-write access to a database when the agent only reads from it. An OAuth grant includes all available scopes because the developer selected them all during setup. A service account has storage.admin when storage.objectViewer would suffice.
Why it happens: Agentic frameworks often request maximum permissions to avoid runtime errors. Developers grant broad access during prototyping and never restrict it before production. The agent "works," so nobody revisits the permission model.
Why it matters: Over-permissioned scopes are the direct enabler of lateral movement. When an attacker compromises an agent's NHI, they inherit everything that NHI can do. A read-only NHI limits the blast radius. A fully-permissioned NHI provides a launchpad for full environment compromise.
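One practical way to surface over-permissioned scopes is to diff what a credential is granted against what it has actually exercised in audit logs. A minimal sketch — the scope names below mimic GCP IAM but are placeholders, and the 90-day usage window is an assumed policy:

```python
def excess_scopes(granted: set[str], used: set[str]) -> set[str]:
    """Scopes the NHI holds but has never exercised in the observation window."""
    return granted - used

# Hypothetical agent credential: granted scopes vs. scopes seen in 90 days of audit logs.
granted = {"storage.objects.get", "storage.objects.create", "storage.admin"}
used = {"storage.objects.get"}
```

Anything in the excess set is a candidate for removal; in red-team terms, it is privilege an attacker inherits for free.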
2. Long-Lived Secrets
What it looks like: API keys with no expiration date. OAuth tokens that refresh indefinitely. Service account passwords that were set at initial configuration and never rotated.
Why it happens: Rotation requires coordination between the team managing the secret and the system consuming it. For AI agents, this coordination is easy to skip — the agent is automated, always-on, and nobody is actively managing its lifecycle.
Why it matters: Long-lived secrets give attackers unlimited time to exploit a compromised credential. A key stolen via a repository exposure today remains valid for a breach months later. Research finds that 97% of NHIs carry excessive privileges, and those privileges persist indefinitely when secrets are not rotated.
3. Zombie Credentials
What it looks like: API keys for agents that have been decommissioned but whose credentials were never revoked. Service accounts belonging to AI systems that were replaced, but the old accounts still have active access. OAuth grants made during a pilot that was cancelled, but the grant was never withdrawn.
Why it happens: Agent decommissioning is even less formalized than agent provisioning. A developer deletes the agent code and considers the job done. The credential that agent used — still active in an external system — is forgotten.
Why it matters: Zombie credentials are fully valid attack vectors with no legitimate corresponding usage. Any access against a zombie credential is anomalous by definition, but without an inventory of what NHIs should and should not be active, security teams cannot detect the anomaly. Attackers who find zombie credentials in exposed configuration files or code repositories get valid access to production systems with no friction.
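The inventory gap described above can be closed with a simple reconciliation: diff the credentials that exist in external systems against the agents that are actually deployed. A sketch, assuming you can export both lists (the hard part in practice is building those exports):

```python
def find_zombies(issued_credentials: dict[str, str], active_agents: set[str]) -> set[str]:
    """Credential IDs whose owning agent no longer exists in the deployment registry."""
    return {
        cred_id
        for cred_id, agent_id in issued_credentials.items()
        if agent_id not in active_agents
    }

# Hypothetical data: credential ID -> owning agent, and the current agent registry.
issued = {"key-1": "support-agent", "key-2": "pilot-agent", "key-3": "support-agent"}
active = {"support-agent"}
```

Every ID this returns is a credential with no legitimate consumer — revoke first, investigate second.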
4. Missing Kill Switches
What it looks like: No mechanism to revoke an agent's credentials quickly if the agent behaves anomalously or if a compromise is detected. Agent credentials spread across multiple systems with no central revocation point. No documented incident response procedure for "AI agent credential compromise."
Why it happens: Kill switch design requires thinking about failure modes before deployment. Most agentic projects focus on capability development, not on what happens when the agent's credentials are stolen.
Why it matters: Response speed matters in NHI incidents. An AI agent operating with compromised credentials can execute actions at machine speed — a human attacker moving manually through a network might take hours to exfiltrate significant data; an agent can do it in seconds. Without a kill switch, the window between detection and containment is defined by manual revocation processes that cannot keep pace.
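A kill switch does not need to be elaborate; it needs to exist before the incident. The sketch below shows the core pattern: a central registry that maps each agent to revocation callables, so one call tears down every credential the agent holds. The revoker callables are placeholders for real provider SDK calls (cloud IAM key deletion, OAuth token revocation endpoints, and so on):

```python
class KillSwitch:
    """Central revocation point: one call revokes an agent's credentials everywhere."""

    def __init__(self):
        self._revokers = {}  # agent_id -> list of revocation callables

    def register(self, agent_id: str, revoker) -> None:
        """Record a revocation action for this agent (one per credential/system)."""
        self._revokers.setdefault(agent_id, []).append(revoker)

    def kill(self, agent_id: str) -> list[bool]:
        """Fire every registered revocation; return per-credential success flags."""
        results = []
        for revoke in self._revokers.get(agent_id, []):
            try:
                revoke()
                results.append(True)
            except Exception:
                results.append(False)  # in a real system: log, alert, and escalate
        return results
```

The discipline that matters is registering a revoker at the moment each credential is provisioned, so the kill path is never out of date.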
Real Attack Chain: From Stolen Agent API Key to Full Environment Compromise
Here is how a realistic NHI compromise unfolds in an agentic deployment.
Stage 1: Discovery. An attacker runs an automated secret scanner against public code repositories or exposed deployment artifacts. They find a hardcoded API key in a repository for a customer support AI agent. The key was committed by a developer during a debugging session three months ago.
Stage 2: Scope enumeration. The attacker uses the key to call the API's metadata endpoints. They discover the key has read access to the customer database, write access to the support ticketing system, and — because the agent was built with a general-purpose service account — read access to internal S3 buckets that contain unrelated internal documents.
Stage 3: Lateral movement. The agent's service account is in the same identity provider as the company's internal tools. The attacker queries the identity provider's group memberships and discovers the service account has been added to a dev-tools group that grants access to the CI/CD pipeline. They use the agent's credentials to access the CI/CD system and retrieve additional secrets from the build environment.
Stage 4: Escalation. The CI/CD environment contains deployment credentials for production infrastructure. The attacker now has production database access, cloud storage access, and enough lateral reach to move to additional services. The initial vector — a single agent API key — has yielded full environment compromise.
The total elapsed time from discovery to escalation: under thirty minutes in a real assessment. No human attacker intervention was required after the initial key was found.
NHI Security Audit Checklist: 12 Controls to Validate
When BeyondScale conducts an NHI audit of an agentic deployment, we validate twelve controls spanning the lifecycle described in this guide: provisioning and least-privilege scoping, secret rotation and expiry, credential inventory and decommissioning, and monitoring and revocation readiness.
Most organizations we assess pass fewer than half of these controls in their initial agentic deployments.
Governance Framework: NHI Lifecycle Management for AI Agent Fleets
Sustainable NHI security requires a lifecycle process, not one-time remediation. The lifecycle should cover five phases.
Provisioning. Every NHI request for an AI agent should go through an approval workflow that specifies the agent's purpose, required permissions, owner, and expiration. The workflow should enforce least-privilege scoping and reject requests for broad permissions without justification. Credentials should be provisioned programmatically through a secrets manager, not manually copied.
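The approval workflow can enforce its rules programmatically rather than relying on reviewer attention. A minimal request validator, assuming a request schema with `purpose`, `owner`, `expires_at`, and `scopes` fields and a deny-list of broad scopes (both are illustrative, not a standard):

```python
BROAD_SCOPES = {"*", "admin", "owner"}  # assumed deny-list; tune to your providers

def validate_request(req: dict) -> list[str]:
    """Reject NHI requests missing required metadata or asking for broad scopes."""
    errors = []
    for field in ("purpose", "owner", "expires_at", "scopes"):
        if not req.get(field):
            errors.append(f"missing {field}")
    for scope in req.get("scopes", []):
        if scope in BROAD_SCOPES or scope.endswith(".admin"):
            errors.append(f"broad scope not allowed: {scope}")
    return errors
```

An empty error list means the request can proceed to automated provisioning through the secrets manager; anything else bounces back to the requester with the specific gap named.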
Discovery and inventory. Continuously scan your environments for NHIs that exist outside the governed provisioning process. This includes secret scanning of code repositories, monitoring of identity provider accounts for service accounts not in the approved inventory, and API gateway logging for calls made with unrecognized credentials.
Rotation. Automate credential rotation on a defined schedule. For most AI agent NHIs, monthly rotation is the minimum acceptable policy; weekly is better. High-privilege NHIs should use ephemeral JIT credentials rather than rotating persistent keys.
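The ephemeral JIT pattern mentioned above replaces a static key with a short-lived token the agent re-requests as needed. A toy sketch of the shape — a real implementation would sit behind an authenticated token service, and the 15-minute TTL is an assumed default:

```python
import secrets
import time

def mint_ephemeral_token(agent_id: str, ttl_seconds: int = 900) -> dict:
    """Issue a short-lived credential; the agent re-requests instead of holding a static key."""
    return {
        "agent_id": agent_id,
        "token": secrets.token_urlsafe(32),
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(token: dict) -> bool:
    """A token past its expiry is dead weight, not an attack surface."""
    return time.time() < token["expires_at"]
```

With this model, a leaked credential is worthless within minutes, which converts the long-lived-secret failure mode from a standing risk into a narrow race.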
Monitoring. Establish behavioral baselines for each agent's NHI usage: typical call volumes, accessed resources, time-of-day patterns, and source IPs. Alert on deviations. Integrate NHI monitoring into your SIEM so that anomalous agent activity is correlated with other signals.
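The baseline-and-deviation idea can start as simply as a z-score over recent call volumes. A sketch, assuming you can pull a per-agent history of hourly API call counts from your gateway logs (the three-sigma threshold is a conventional starting point, not a tuned value):

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag call volumes more than `threshold` standard deviations above baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return (current - mu) / sigma > threshold
```

A compromised agent operating at machine speed tends to blow past a volume baseline immediately, so even this crude signal catches the attack chain described earlier — provided the alert feeds a kill switch rather than a ticket queue.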
Decommissioning. When an agent is retired, trigger an automatic decommissioning workflow that revokes all associated NHIs, removes the agent's service account from all groups and policies, and archives the NHI record for audit purposes. Decommissioning should be a mandatory step in your agent retirement process, not an optional cleanup task.
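The decommissioning workflow above can be sketched as a single function that revokes credentials, strips group memberships, and emits the archive record, assuming in-memory stand-ins for the credential inventory and identity-provider groups (real code would call the provider APIs):

```python
def decommission_agent(agent_id: str, inventory: dict, groups: dict) -> dict:
    """Revoke an agent's credentials, remove its group memberships, return an audit record."""
    record = {
        "agent_id": agent_id,
        "revoked_credentials": inventory.pop(agent_id, []),
        "removed_groups": [g for g, members in groups.items() if agent_id in members],
    }
    for members in groups.values():
        members.discard(agent_id)
    return record
```

Wiring this into agent retirement as a mandatory step — rather than a cleanup chore — is what prevents today's decommissioned agent from becoming next quarter's zombie credential.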
How BeyondScale Audits NHI Posture in Agentic AI Systems
BeyondScale's AI security audit process includes a dedicated NHI posture assessment for organizations with agentic deployments. Our approach covers:
Discovery phase. We enumerate all NHIs associated with your agent fleet, including credentials you may not know exist. We scan code repositories, review IAM policies and service account assignments, query secrets managers for scope and rotation status, and interview development teams to surface informal credential usage.
Red-team validation. We test the twelve controls above against your actual environment, identifying which controls pass, which fail, and what the exploitability of each failure is in your specific configuration.
Blast radius mapping. For each identified NHI, we map the maximum blast radius: what an attacker could access if that NHI were compromised, including lateral movement paths through shared identity infrastructure.
Remediation roadmap. We produce a prioritized remediation plan that addresses the highest-risk NHIs first and establishes the governance processes needed to prevent the same issues from recurring as your agent fleet grows.
Integration guidance. We advise on tooling selection for NHI lifecycle management — including secrets managers, cloud PAM solutions, and agent-specific identity platforms — and provide implementation guidance for integrating NHI governance into your existing security stack.
The organizations that have deployed AI agents without addressing NHI security are accumulating identity debt at machine speed. The question is not whether a compromised agent credential will become an incident — it is whether you have the visibility and controls to detect and contain it before it becomes a breach.
Ready to understand your NHI attack surface? Book an AI security assessment and we will map every credential your AI agents hold, assess the blast radius of each one, and deliver a prioritized remediation plan — before an attacker does it for you.
Or start with an automated scan of your AI systems via Securetom to get an immediate baseline of your agentic security posture.
Further reading: Multi-Agent Systems Architecture Patterns | OWASP Agentic Top 10 Guide | What Are AI Agents: Enterprise Guide | AI Security Audit Guide
BeyondScale Team
AI Security Team, BeyondScale Technologies
Security researcher and engineer at BeyondScale Technologies, an ISO 27001 certified AI cybersecurity firm.