Mar 23, 2026 | 5 min

Introducing The Agentic Pulse: What We’re Learning About AI Agents (and Why It Matters)

Over the past year, something fundamental has shifted in enterprise environments. AI is no longer confined to generating content or assisting with isolated tasks. It is beginning to operate inside systems in ways that resemble real work: retrieving data, executing workflows, writing code, and, increasingly, making decisions.

This shift is exactly why we built The Agentic Pulse. It is an ongoing effort to track how AI agents are actually being deployed inside enterprises, how their identities and permissions are evolving, and where new categories of risk are emerging as AI agents continue to gain more access and autonomy.

At a glance, this shift in how AI is being used looks like a natural progression. Organizations adopt a chatbot to improve knowledge access, introduce coding assistants to accelerate development, and experiment with workflow automation to reduce manual effort. Each step, in isolation, feels incremental and manageable.

But when you examine these AI initiatives collectively, across environments and use cases, a different pattern emerges. AI agents are no longer simply responding to prompts. They are authenticating into services, interacting with internal and external systems, and taking actions that have real operational consequences. In many cases, they are doing so with limited visibility and minimal oversight.

At that point, it becomes difficult to describe them as tools in the traditional sense. They begin to behave like identities operating at machine speed. And once you recognize that, the problem changes. The Agentic Pulse is our attempt to define that new problem and make it measurable.

The Emergence of an Identity Problem No One Designed For

Modern security programs are built on a relatively stable model. Humans have identities that are governed through identity providers and access controls. Services, applications, and workloads have non-human identities that are provisioned, scoped, and monitored. Over time, organizations have developed processes to manage both.

AI agents do not fit cleanly into either category. They are often created by business users, yet can operate with capabilities that resemble backend services. They may inherit user permissions, but execute tasks independently and at scale. In production environments, they are deployed as part of infrastructure, yet behave in ways that are far less deterministic than traditional software.

As a result, organizations are deploying AI agents that function as identities without consistently treating them as such. This creates a gap that is not just technical, but conceptual. The controls that exist today were not designed for entities that combine access, autonomy, and adaptability in this way.

This gap is already visible in practice. In many environments, there is no reliable inventory of AI agents. Their permissions are often implicit or inherited rather than explicitly defined. Their level of autonomy is poorly understood, and ownership is frequently ambiguous. At the same time, the barrier to creating and deploying AI agents continues to fall, enabling rapid, decentralized adoption across the organization.

The result is a familiar pattern in a new form: systems with meaningful access and authority are proliferating faster than they can be governed.

Making Sense of AI Agent Risk: Access and Autonomy

As we analyzed how AI agents are being deployed in real environments, the landscape initially appeared fragmented. Agents varied widely in their purpose, architecture, and level of sophistication. A consistent structure emerged, however, when we focused on two underlying dimensions:

  • Access defines what an agent can interact with: services, applications, data, APIs, and infrastructure. It determines the potential impact of any action the agent takes, whether intentional or erroneous.
  • Autonomy defines how independently the agent can operate. It reflects the degree to which the agent can make decisions, chain actions together, and execute tasks without human intervention.

These two dimensions are independent but deeply interrelated in how they shape risk. An agent with limited access but high autonomy may act unpredictably, but its impact is constrained. An agent with broad access but low autonomy may be powerful, but remains largely controlled. It is when both access and autonomy increase together that risk becomes difficult to reason about and harder to contain.

This model, risk as a function of access and autonomy, is the foundation for how we think about AI agents. It provides a way to move beyond surface-level classifications and instead evaluate the properties that actually determine risk in practice.
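The access × autonomy model can be sketched as a simple mapping from the two dimensions to a risk label. The binary "low"/"high" inputs and the quadrant labels below are a deliberate simplification for illustration, not a formal scoring methodology:

```python
def risk_quadrant(access: str, autonomy: str) -> str:
    """Map an agent's access and autonomy ('low' or 'high') to a risk label.

    The labels mirror the quadrants discussed in the article; the mapping
    itself is an illustrative simplification.
    """
    quadrants = {
        ("low", "low"): "Low Risk",         # assistive agents, contained errors
        ("low", "high"): "Moderate Risk",   # autonomous but restricted
        ("high", "low"): "Elevated Risk",   # human-triggered, broadly permissioned
        ("high", "high"): "Critical Risk",  # failures can cascade unchecked
    }
    return quadrants[(access, autonomy)]

print(risk_quadrant("high", "high"))  # → Critical Risk
```

The point of the sketch is the asymmetry it encodes: raising either dimension alone moves an agent one step up, but raising both together moves it into the quadrant where risk becomes hardest to contain.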

Understanding Agent Risk: Access × Autonomy

Agent Risk Matrix

Risk increases as agents gain both autonomy and access. The most dangerous quadrant combines high autonomy with broad system permissions.

  • High Access • High Autonomy (Critical Risk): Fully autonomous agents with broad access pose the highest risk, as errors and exploits can cascade across systems without intervention.
  • High Access • Low Autonomy (Elevated Risk): Human-triggered agents with broad system access. Risk is driven by over-permissioning and misconfiguration.
  • Low Access • High Autonomy (Moderate Risk): Autonomous but restricted agents. Mistakes may occur frequently but have limited impact.
  • Low Access • Low Autonomy (Low Risk): Assistive agents with minimal scope. Errors are contained and observable.

The Three Types of AI Agents in the Enterprise

While access and autonomy provide a unifying model, most enterprise agents today fall into three broad categories. Each represents a different stage of adoption and introduces a distinct set of security challenges.

Agentic Chatbots
  • Where they run: Managed AI platforms (ChatGPT, Copilot, Gemini)
  • How they get access: API keys, OAuth tokens, uploaded knowledge
  • Level of autonomy: Low to moderate (human-triggered)
  • Primary risk pattern: Overexposure of data and systems through sharing and misconfigured access

Local Agents
  • Where they run: Employee endpoints (laptops, dev environments)
  • How they get access: Inherit user identity and permissions
  • Level of autonomy: Moderate to high (multi-step task execution)
  • Primary risk pattern: Invisible access sprawl and user-driven over-permissioning

Production Agents
  • Where they run: Cloud infrastructure / backend systems
  • How they get access: Dedicated service identities and credentials
  • Level of autonomy: High (event-driven, autonomous workflows)
  • Primary risk pattern: Compounding risk from autonomy, delegation, and external input

This categorization is useful not because it simplifies the landscape, but because it highlights how differently risk manifests depending on where and how agents operate.

Agentic chatbots are the most accessible and widely adopted. They are often introduced without friction and integrated into daily workflows quickly. However, their risk lies in how access is configured and shared. In practice, chatbot creators frequently become de facto access administrators, granting visibility into systems and data without centralized oversight. What appears to be a simple assistant often becomes a shared interface to a privileged identity.

Local agents introduce a more complex challenge. Because they inherit the user’s identity, their access is effectively unconstrained within the bounds of that user’s permissions. They operate across systems, execute commands, and adapt dynamically to different tasks. From a security perspective, they are difficult to track and even harder to govern. Their configurations are distributed, their behavior blends with normal user activity, and their access decisions are made at the individual level.

Production agents represent the most advanced and most critical stage of adoption. These agents operate within infrastructure, often autonomously, and are triggered by events rather than direct human input. They rely on dedicated identities, which enables more structured access control but also introduces new challenges related to identity management, delegation, and trust boundaries. Their ability to process external inputs further increases their exposure, making them particularly susceptible to cascading failures.

Why We Built The Agentic Pulse

The Agentic Pulse was created in response to this shift. As we examined real-world deployments, it became clear that existing frameworks were insufficient to describe what was happening. Organizations were adopting AI agents rapidly, but lacked a coherent way to measure, compare, and prioritize the risks associated with them.

Our goal with The Agentic Pulse is to provide that clarity. It is designed to track how AI agents are being deployed, how their identities and permissions are configured, and how their level of autonomy is evolving over time. More importantly, it aims to translate these observations into a practical understanding of risk, one that security teams can use to make informed decisions.

Rather than focusing on hypothetical scenarios or future possibilities, The Agentic Pulse is grounded in what we are already seeing across environments. It reflects the current state of adoption, including the trade-offs organizations are making, intentionally or not, as they integrate AI agents into their operations.

Rethinking How We Secure AI Agents

One of the most important insights from this work is that securing AI agents does not require limiting their usefulness. In many cases, it requires focusing on a different control surface.

Autonomy is often treated as the primary concern because it is the most visible aspect of agent behavior. However, autonomy is also what makes these systems valuable. Attempting to reduce it can undermine the very benefits organizations are trying to achieve.

Access, on the other hand, is both more controllable and more consequential. By enforcing least privilege, scoping permissions appropriately, and continuously reviewing what agents can interact with, organizations can significantly reduce risk without limiting functionality. This approach aligns with established identity security principles, but must be applied in a context where identities are more dynamic and less clearly defined.
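Continuously reviewing what agents can interact with can start as something as simple as a diff between the scopes an agent has been granted and the scopes it has actually exercised. The scope names below are hypothetical, and real deployments would source the "observed" set from audit logs:

```python
def unused_scopes(granted: set[str], observed: set[str]) -> set[str]:
    """Return scopes the agent holds but has never exercised —
    candidates for revocation under least privilege."""
    return granted - observed

# Hypothetical scopes for illustration
granted = {"read:crm", "write:crm", "read:wiki", "admin:billing"}
observed = {"read:crm", "read:wiki"}  # e.g. derived from audit logs

print(sorted(unused_scopes(granted, observed)))
# → ['admin:billing', 'write:crm']
```

A recurring review like this narrows access without touching autonomy at all, which is exactly the trade the paragraph above argues for.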

In practice, this means treating AI agents as first-class identities within the security architecture. It requires visibility into where they exist, clarity about what they can access, and mechanisms to ensure that access remains aligned with each agent's intended purpose over time.

Where This Is Going

AI agents are quickly becoming part of the operational fabric of the enterprise. They are moving from assistive tools to active participants in workflows, from isolated experiments to integrated systems that support business-critical functions.

As this transition continues, the gap between adoption and security will become more pronounced. The challenge is not simply that organizations are moving quickly, but that the underlying model for understanding and governing these systems is still evolving.

The Agentic Pulse exists to help close that gap. By providing a clearer view of how agents are being used and where risk is emerging, it offers a foundation for more effective security strategies in an environment that is changing rapidly.

For organizations adopting AI agents today, the most important question is no longer what these systems can do. It is what they represent within the broader identity landscape and whether that representation is understood, governed, and secured appropriately.

That is the question The Agentic Pulse is designed to answer.
