Dec 01, 2025 | 3 min

Securing Agentic AI: Defining Permissions for Unpredictable AI Agents

A Comprehensive Guide to Agentic AI Security and Permission Controls

In the session “Securing Agentic AI: Defining Permissions for Unpredictable AI Agents,” Token Security CEO Itamar Apelblat and Webflow CISO Ty Sbano covered how Artificial Intelligence (AI)-driven autonomy is reshaping identity security, why traditional access models are failing, and how organizations can build trust and guardrails for this new identity class.

The Rise of Agentic AI and Its Security Implications

What Is Agentic AI?

Agentic AI systems can plan, reason, and act independently, handling business-critical tasks end to end.

How Agentic AI Differs From Other Systems

  • Rule-Based Automation: Fixed scripts; agentic AI adapts and chooses its own steps.
  • Generative AI: Creates content; agentic AI acts on it autonomously.
  • Human Workflows: Humans decide and act; agentic AI does both on its own.

Agentic AI vs Traditional AI vs Generative AI

This is how the most common AI models stack up. 

| Capability | Traditional AI / Rule-Based Automation | Generative AI | Agentic AI |
| --- | --- | --- | --- |
| Autonomy Level | Low: follows fixed rules | Moderate: creates content but can’t act | High: plans and acts independently |
| Type of Decision-Making | Deterministic, rule-based | Pattern-based, no autonomous planning | Dynamic, context-aware, multi-step |
| Governance Complexity | Low: predictable | Medium: needs moderation and policies | High: needs guardrails and trust checks |
| Security Risk | Low: limited attack surface | Medium: hallucinations, misuse, leakage | High: new agentic attack vectors |

Itamar opened the discussion with a key question: What is an AI agent, and how does its identity differ from a human or workload?

AI agents mix human-like reasoning with machine-scale autonomy, making them powerful but unpredictable.

“Agent identities are a hybrid,” Itamar said. “They have the creativity of humans but the continuous action of machines. That’s why we need to treat them as a new class of identity.”

Why More Autonomy = More Unpredictability in Agentic AI

As agentic AI acts independently, unpredictability grows, raising the risk of errors and cascading failures. 

Risks: Loops, goal overshooting, unsafe API chains, permission bypasses, context drift.

Examples: a LangChain agent mis-editing files, a HuggingFace agent selecting the wrong tools, or over-permissioning that enables harmful actions.
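
One practical guardrail against loops and goal overshooting is a hard execution budget on the agent’s planning loop. The sketch below is a minimal illustration, assuming a hypothetical `run_step` callable and fixed step and tool-call limits; none of these names come from the session.

```python
# Minimal sketch of an execution budget for an agent loop (hypothetical API).
# The agent is halted once it exceeds a fixed number of steps or tool calls,
# limiting the blast radius of runaway loops and goal overshooting.

MAX_STEPS = 20
MAX_TOOL_CALLS = 50

def run_agent(task, run_step):
    tool_calls = 0
    for _ in range(MAX_STEPS):
        result = run_step(task)                  # one plan/act iteration
        tool_calls += result.get("tool_calls", 0)
        if result.get("done"):
            return result                        # goal reached within budget
        if tool_calls > MAX_TOOL_CALLS:
            raise RuntimeError("Tool-call budget exceeded; halting agent")
    raise RuntimeError("Step budget exceeded; halting agent")
```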

Moving from Action-Based to Intent-Based Permissions

Ty noted that traditional identity and access management (IAM) frameworks built for humans and deterministic services no longer fit agent behavior.

“We’ve built identity on a transactional model,” he said. “Itamar did this. He logged in. He did that. But in the agentic world, we have to think about motivation, intent, and context.”

Access must shift from rule-based to intent-aware, using fine-grained permissions, behavioral analytics, and contextual reasoning for Zero Trust in AI.
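
As a rough illustration of what intent-aware access could look like in practice, the sketch below checks not only the requested action but also the agent’s declared intent and a behavioral signal before granting access. The policy structure, agent names, and `anomaly_score` field are assumptions for illustration, not a specific product API.

```python
# Sketch of an intent-aware authorization check (illustrative only).
# A request is allowed only if the action is in scope, the declared intent
# matches an approved purpose, and the runtime context looks normal.

ALLOWED = {
    "invoice-agent": {
        "actions": {"read_invoice", "create_payment_draft"},
        "intents": {"monthly_billing_run"},
    }
}

def authorize(agent_id, action, intent, context):
    policy = ALLOWED.get(agent_id)
    if policy is None:
        return False                              # unknown agent identity
    if action not in policy["actions"]:
        return False                              # action outside scope
    if intent not in policy["intents"]:
        return False                              # purpose not approved
    if context.get("anomaly_score", 0.0) > 0.8:
        return False                              # behavioral analytics veto
    return True

# Same action, different behavioral context: allowed, then denied.
print(authorize("invoice-agent", "read_invoice", "monthly_billing_run",
                {"anomaly_score": 0.1}))   # True
print(authorize("invoice-agent", "read_invoice", "monthly_billing_run",
                {"anomaly_score": 0.95}))  # False
```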

Designing Permission Frameworks for Agents

Strong permission models are essential for agentic AI, defining access, tool use, and autonomy to limit blast radius and keep behavior predictable.

Common models:

  • Role-Based Access Control (RBAC): Role-based, simple, but rigid.
  • Attribute-Based Access Control (ABAC): Context-driven, flexible for dynamic workflows.
  • Policy-Based Access Control (PBAC): Policy-driven and highly granular.
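
To make the contrast concrete, here is a minimal sketch of how the same agent request might be evaluated under RBAC versus ABAC. The roles, attributes, and helper names are illustrative assumptions, not a reference implementation.

```python
# Illustrative contrast between RBAC and ABAC checks for an agent identity.

# RBAC: access follows from the role alone.
ROLE_PERMISSIONS = {"support-agent": {"read_ticket", "post_reply"}}

def rbac_allows(role, action):
    return action in ROLE_PERMISSIONS.get(role, set())

# ABAC: access also depends on attributes of the subject, resource, and context.
def abac_allows(subject, resource, action, context):
    return (
        action in ROLE_PERMISSIONS.get(subject["role"], set())
        and resource["classification"] != "restricted"          # data attribute
        and subject["environment"] == resource["environment"]   # environment attribute
        and context["business_hours"]                           # contextual attribute
    )

print(rbac_allows("support-agent", "read_ticket"))  # True: role is enough
print(abac_allows(
    {"role": "support-agent", "environment": "prod"},
    {"classification": "internal", "environment": "prod"},
    "read_ticket",
    {"business_hours": True},
))  # True only because every attribute check also passes
```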

When Identities Multiply: Accountability and Lifecycle Management

As organizations adopt AI, they generate many new agent identities, making visibility and lifecycle management major challenges.

“We see agents that were created, tested, and then forgotten,” Itamar said. “They still have access, but no one knows who owns them or what they’re doing.”

Ty agreed: “We’re all becoming managers of agents. We need to know who’s responsible for them and govern that responsibly.”

Both emphasized the need for stronger frameworks as AI usage grows.
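
One simple way to operationalize that visibility is an inventory of agent identities that records an accountable owner and flags stale entries. The sketch below is an assumed, minimal data model rather than any specific tool.

```python
# Minimal sketch of an agent identity inventory used to surface orphaned agents.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class AgentIdentity:
    name: str
    owner: Optional[str]      # accountable human or team, if any
    last_used: datetime

def find_orphaned(agents, max_idle_days=30):
    """Return agents with no owner or no activity within the idle window."""
    cutoff = datetime.utcnow() - timedelta(days=max_idle_days)
    return [a for a in agents if a.owner is None or a.last_used < cutoff]

inventory = [
    AgentIdentity("billing-agent", "finance-team", datetime.utcnow()),
    AgentIdentity("poc-summarizer", None, datetime.utcnow() - timedelta(days=90)),
]
for agent in find_orphaned(inventory):
    print(f"Review access for {agent.name}: owner={agent.owner}, "
          f"last used {agent.last_used:%Y-%m-%d}")
```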

Governance Models for Agentic Systems

Governance keeps agentic AI safe by defining what agents can access and execute so they stay within scope.

RBAC vs ABAC vs PBAC Governance Models

| Model | How It Works | Best For | Strengths | Limitations |
| --- | --- | --- | --- | --- |
| RBAC | Role-based permissions | Stable, fixed functions | Simple and auditable | Too rigid for dynamic agents |
| ABAC | Attribute-based access | Context-heavy environments | Flexible and fine-grained | Harder to configure; error-prone |
| PBAC | Policy-driven access rules | Large or fast-changing ecosystems | Highly adaptable; risk-aware | Needs strong governance and upkeep |

Security’s Balancing Act: Innovation vs Control

A key theme was balance. Ty described the rush to adopt AI at Vercel and Webflow: “It was ‘AI or die.’ But we had to ask what secrets we’re protecting and our comfort with data exposure.”

His guidance: don’t block AI. Instead, set boundaries. “If you stop your teams from innovating,” he warned, “they’ll do it elsewhere without visibility.”

Compliance, Context, and the Future of Identity

They discussed frameworks like ISO 42001, though Ty noted “great security doesn’t always mean compliance.”

Ty and Itamar agreed that organizations should use contextual data from IAM, Software-as-a-Service solutions, and monitoring tools to better understand agent behavior.

“It’s a shared responsibility,” Itamar said. “We need to connect authentication, authorization, and intent.”

Observability and Auditing Agent Actions

As agentic AI gains autonomy, observability is essential. Logs and real-time tracing show actions, while auditing ties behavior to policy for accountability.
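
As a rough sketch of what that observability layer could emit, the example below writes a structured audit record for each agent action and ties it to the policy decision. The field names and the reason string are illustrative assumptions.

```python
# Sketch of structured audit logging for agent actions (illustrative fields).
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("agent_audit")
logging.basicConfig(level=logging.INFO)

def audit_action(agent_id, action, resource, allowed, reason):
    """Emit one audit record tying an agent action to its policy decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "resource": resource,
        "decision": "allow" if allowed else "deny",
        "reason": reason,
    }
    logger.info(json.dumps(record))

audit_action("invoice-agent", "create_payment_draft", "vendor/1234",
             allowed=False, reason="intent not approved for this agent")
```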

Key Takeaways:

  • AI agents are autonomous and unpredictable.
  • Intent-based permissioning is the future of IAM.
  • Visibility and lifecycle management are urgent.
  • Track all agents, owners, and data access.
  • Accountability must evolve as everyone manages agents.
  • Security should enable innovation with clear guardrails.

Conclusion

Agentic AI is a powerful tool, but it comes with risk. Controls, real-time auditing, and strong permissioning are the foundation of safe, scalable agentic AI.

FAQ: Agentic AI Security & Permissioning

1. What is Agentic AI, and how is it different?

Agentic AI is a form of AI focused on autonomous decision-making and action. Unlike generative AI or automation, agentic AI can plan, reason, and act on its own.

2. Why do autonomous AI agents need special permissioning?

Without scoped permissions, agents can exceed their intended scope or be manipulated into unsafe actions.

3. What are the major security risks?

Risks include prompt injection, API misuse, recursive loops, output poisoning, multi-agent collusion, goal overshooting, and context drift.

4. How can organizations monitor and audit agent actions?

AI observability detects anomalies, supports investigations, and verifies agent behavior.

5. Which governance models best secure Agentic AI?

RBAC suits stable role-based limits, ABAC suits context-based control, and PBAC suits detailed policy enforcement.

Watch the Full Discussion

This conversation only scratches the surface of securing the next generation of digital identities. Watch the full recorded webcast on BrightTalk: Securing Agentic AI: Defining Permissions for Unpredictable AI Agents
