3 May 2026 · Identity · Security

Non-Human Identity: The Problem Nobody in Enterprise AI Is Talking About

When an AI agent calls an API, rotates a secret, or delegates a task — whose identity does it use? Why OAuth/OIDC wasn't built for agents, and what non-human identity frameworks are emerging to fix it.

There’s a quiet assumption baked into most enterprise AI agent deployments: that the identity model designed for humans will work fine for agents. It won’t.

Here’s why, and what I think the path forward looks like.


The problem in one sentence

OAuth was designed to let a human delegate access to an application, and OIDC to answer the question: is this human who they say they are? Autonomous agents ask a different question: is this agent authorised to take this action, on behalf of this task, right now?

Those are fundamentally different questions, and the gap between them is where enterprise AI security incidents will come from in the next three years.


What we use today

Most enterprise AI implementations authenticate agents using one of three approaches:

Service principals. A static identity tied to an application registration. Every agent runs under it. The problem: you can’t distinguish between your log analysis agent and your deployment agent — they look identical to your audit logs.

Managed identities. Better. Azure, GCP, and AWS all support identities tied to compute resources rather than secrets. But managed identities are still resource-scoped, not task-scoped. An agent running on a VM has the same identity whether it’s doing something benign or something catastrophic.

API keys in environment variables. Still depressingly common. No expiry, no scope, no auditability. Every penetration tester’s favourite.


What’s actually needed

The mental model that maps closest to what we need is workload identity federation — the idea that an agent’s identity should be tied to what it is and what it’s doing, not just where it’s running.

Concretely, this means:

Short-lived credentials per task. An agent should get a credential when a task starts, and that credential should expire when the task ends. Microsoft Entra ID’s workload identity federation can do this today for Azure workloads — but wiring it into an LLM agent pipeline takes deliberate architecture decisions most teams aren’t making.

Scope-pinned tokens. The credential for a “read logs” task should not be able to trigger deployments. This sounds obvious. In practice, most agent deployments use a single service principal with permissions scoped to “whatever we needed during development.”
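The enforcement half of scope-pinning is a check at every action boundary. A minimal, illustrative version (in practice the granted scopes would come from validated token claims, not a function argument):

```python
def authorize(task_id: str, granted_scopes: frozenset[str],
              required_scope: str) -> None:
    """Refuse any action the task's token wasn't scoped for.

    granted_scopes would normally be read from the token's claims
    after signature validation; it is passed in directly here to
    keep the check self-contained.
    """
    if required_scope not in granted_scopes:
        raise PermissionError(
            f"task {task_id} holds {sorted(granted_scopes)}, "
            f"needs {required_scope!r}"
        )
```

A “read logs” task holding only `logs.read` then fails loudly when it tries to call anything deployment-shaped, which is exactly the behaviour a single over-permissioned service principal can never give you.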

Attestation chains. When Agent A delegates to Agent B via A2A, Agent B’s actions should be traceable back to the originating task and user. Today, most A2A implementations don’t carry this context through the delegation chain.
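One way to picture the missing context is a delegation record that every hop must carry forward. This is a sketch of the data shape only (the names are hypothetical; in a real system the chain would travel as signed token claims, not a plain object):

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class DelegationContext:
    """Context that should ride along every agent-to-agent delegation."""
    originating_user: str
    root_task_id: str
    chain: tuple[str, ...]  # agent identities, in delegation order

    def delegate_to(self, agent_id: str) -> "DelegationContext":
        """Extend the attestation chain without losing the origin."""
        return replace(self, chain=self.chain + (agent_id,))
```

When Agent B acts, the audit record shows `alice → task-42 → agent-a → agent-b`, not just “agent-b did something”.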


The emerging landscape

A few things worth tracking:

Microsoft’s Entra Workload ID is the most mature enterprise offering. It supports federated credentials, conditional access for non-human identities, and integration with Azure AI services.

SPIFFE/SPIRE is the CNCF standard for workload identity in cloud-native environments. More commonly used in Kubernetes-heavy stacks. If you’re running agents on K8s, this is worth understanding.
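The core artefact in SPIFFE is the SPIFFE ID, a URI of the form `spiffe://<trust-domain>/<workload-path>` that SPIRE issues to workloads via attestation rather than shared secrets. A tiny parser makes the shape concrete (the example domain below is made up):

```python
from urllib.parse import urlparse


def parse_spiffe_id(spiffe_id: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust domain, workload path)."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id!r}")
    return parsed.netloc, parsed.path
```

An agent identified as `spiffe://prod.example.com/agents/log-reader` is distinguishable from its deployment-triggering sibling at the identity layer — which is precisely what a shared service principal cannot offer.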

The IETF’s WIMSE working group (Workload Identity in Multi System Environments) is actively developing standards for exactly this problem. It’s early, but it’s the right group to watch.


What I’d do today

If you’re building an enterprise agent system right now:

  1. One service principal per agent role, minimum. Not one for everything.
  2. Use Azure Managed Identity or equivalent. No secrets in environment variables.
  3. Enable diagnostic logging on every identity. Agents should leave an audit trail that maps to task IDs, not just timestamps.
  4. Scope permissions to the tightest reasonable boundary. Revisit every six months.
  5. Treat the orchestrator’s identity as the highest-risk component. It has the broadest permissions by definition.
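Point 3 — audit trails keyed to task IDs — can start as simply as emitting structured records like the following. The field names are illustrative, not a standard:

```python
import json
import time


def audit_record(agent_id: str, task_id: str,
                 action: str, scope: str) -> str:
    """Emit one structured audit line keyed by task ID, not just timestamp."""
    return json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "task": task_id,
        "action": action,
        "scope": scope,
    }, sort_keys=True)
```

The win is queryability: “show me everything task-42 touched, across every agent” becomes a filter on one field instead of a forensic reconstruction from timestamps.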

None of this is novel security thinking. It’s just applying established principles to a new class of principal that most security teams haven’t had to think about before.

The hardest part isn’t the technology. It’s explaining to a security review board why your AI agent needs its own identity separate from the application it runs inside. That conversation is worth having early.


Working on agent identity in your organisation? I’ve been deep in this at Shell for the past year — happy to compare notes on LinkedIn.