3 Key Takeaways on AI Security from CISOs of T-Mobile and Cisco

Three enterprise CISOs share what AI agent security looks like in practice — from nonhuman identity gaps to sub-60-second credential revocation. Watch the session on demand.

AI agents are already inside your organization. The question isn't whether to govern them — it's whether you're moving fast enough. 

We brought together Mark Clancy (SVP Cybersecurity, T-Mobile), Jason Lish (Global CISO, Cisco), and Ryan Knisley (former CISO, Walt Disney and Costco) for a candid conversation on what enterprise AI agent security actually looks like in practice. Here's what stood out.

1. Agents operate outside security controls built for humans

Agents are already being built and deployed by business teams, without security's involvement and without appearing anywhere in your identity lifecycle. They inherit credentials from the people who built them and use OAuth tokens and API keys to integrate directly with applications, bypassing your network gateways entirely. Traditional monitoring won't flag agents because they never traverse the controls you've built to catch threats.

As Mark Clancy put it, security's job is no longer to gate AI adoption but to enable it without losing visibility. Most enterprise security controls are designed around human identities, which means your existing monitoring, access policies, and response playbooks may not apply to agents at all. T-Mobile's answer has been to eliminate standing credentials altogether. By moving to passwordless infrastructure and OAuth-based enforcement points, every agent must authenticate with its own distinct identity — giving security teams something to monitor and revoke.
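T-Mobile's actual implementation isn't public, but the pattern described above — each agent holding its own identity and only short-lived tokens, so revocation takes effect within seconds — can be sketched in a few lines. Everything here (the `AgentRegistry` class, the 60-second TTL) is a hypothetical illustration of the idea, not anyone's production design:

```python
import secrets
import time

class AgentRegistry:
    """Toy registry: each agent has its own identity and short-lived tokens.

    Because no credential is long-lived, revocation is just deleting the
    agent's entry -- every outstanding token dies with it within the TTL.
    """

    TOKEN_TTL = 60  # seconds; short enough that revocation bites fast

    def __init__(self):
        self._agents = {}  # agent_id -> {token: expiry}

    def register(self, agent_id):
        self._agents[agent_id] = {}

    def issue_token(self, agent_id):
        # Each token is bound to one agent identity, never a human's.
        token = secrets.token_urlsafe(16)
        self._agents[agent_id][token] = time.monotonic() + self.TOKEN_TTL
        return token

    def is_valid(self, agent_id, token):
        expiry = self._agents.get(agent_id, {}).get(token)
        return expiry is not None and time.monotonic() < expiry

    def revoke(self, agent_id):
        # Kill the agent identity and all of its tokens in one step.
        self._agents.pop(agent_id, None)

registry = AgentRegistry()
registry.register("billing-agent")
tok = registry.issue_token("billing-agent")
print(registry.is_valid("billing-agent", tok))  # True
registry.revoke("billing-agent")
print(registry.is_valid("billing-agent", tok))  # False
```

The point of the sketch: because the agent authenticates as itself rather than inheriting a builder's credentials, security gets a single kill switch per agent — the property the panel kept coming back to.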

2. Behavioral monitoring has to happen at runtime

Post-event log analysis doesn't work when agents act at machine speed and scale. Jason Lish framed it clearly: the risk isn't about what access an agent has — it's about how it behaves, how fast it acts, and how broadly it can scale those actions. By the time an alert surfaces and someone begins investigating, the data is already gone. At machine speed, investigation and response are too slow. The only effective approach is enforcement at runtime: intercepting actions that violate policy before they execute, not after.
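What "enforcement at runtime" means mechanically is a gate that sits between the agent and the action. The sketch below is a hypothetical minimal version — the policy table, the `RuntimeGate` class, and the rate ceiling are all invented for illustration, not a description of any vendor's product:

```python
class PolicyViolation(Exception):
    pass

# Hypothetical policy: a per-agent allow-list plus a rate ceiling.
POLICY = {
    "support-agent": {"allowed": {"read_ticket", "post_reply"}, "max_actions": 30},
}

class RuntimeGate:
    """Checks every action *before* it executes -- runtime enforcement,
    as opposed to reviewing logs after the data is already gone."""

    def __init__(self, policy):
        self.policy = policy
        self.counts = {}  # agent_id -> actions taken so far

    def execute(self, agent_id, action, fn, *args):
        rules = self.policy.get(agent_id)
        if rules is None or action not in rules["allowed"]:
            raise PolicyViolation(f"{agent_id} may not perform {action}")
        n = self.counts.get(agent_id, 0) + 1
        if n > rules["max_actions"]:
            raise PolicyViolation(f"{agent_id} exceeded its rate ceiling")
        self.counts[agent_id] = n
        return fn(*args)  # only now does the action actually run

gate = RuntimeGate(POLICY)
gate.execute("support-agent", "read_ticket", lambda: "ticket #42")  # allowed
try:
    gate.execute("support-agent", "delete_db", lambda: None)  # blocked pre-execution
except PolicyViolation as err:
    print(err)
```

The rate ceiling is the detail worth noticing: it targets the "how fast and how broadly" dimension of agent risk that access-based controls alone never see.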

3. Governance should accelerate adoption, not slow it down

Security has historically been seen as a roadblock, which is why users go around IT to procure and deploy their own tools. AI governance can't follow the same pattern. Rather than acting as a "no and slow" function, security needs to be the enabler that makes confident AI adoption possible. Cisco took this seriously and rebuilt its AI governance model as a self-service motion, giving employees immediate answers on what tools exist and how to use them securely. The goal is to remove friction from the business while maintaining oversight. Governance as a phase gate is already obsolete.

AI agent risk is already running inside your environment. The organizations getting ahead of it share one thing in common: they stopped being reactive and started building for the threat they have today. That means identity architecture designed in from the start, real-time controls, and governance built to accelerate adoption rather than slow it down.

Missed the live session? Watch the recording →
