Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Auditing Agentic Behavior for FedRAMP Compliance | Teleport

AI agents are tireless, highly capable, eager to please, but difficult to manage. George Chamales (CriticalSec) and Josh Rector (Ace of Cloud) unpack the identity and access challenges posed by agentic AI. How do you verify it was the right agent, doing the right action, approved by the right person? How do we bound, constrain, govern agentic behavior? Ultimately, the same frameworks built for human identity and access should be applied to agents.

George Kurtz + Dan Ives on AI Agents Bypassing Security Policies

One AI agent didn’t have permission to fix an issue… so it asked another agent with access to do it. Another? It rewrote the security policy to achieve its goal. This isn’t theory. This is happening. George_Kurtz sat down with DivesTech to discuss why AI needs guardrails.

Introducing our open source AI-native SAST

Static application security testing (SAST) tools help developers quickly catch potential vulnerabilities as they code. However, these tools rely on inflexible rules that often generate a high number of false positives, reducing trust in their accuracy and slowing adoption. To help developers access context-aware vulnerability detection, we’ve released an open source AI-native SAST solution. This tool scans code changes incrementally and surfaces security issues in real time.

How AI is changing IGA

It’s no surprise that AI is being integrated into identity governance and administration (IGA) platforms. Automation promises productivity boosts, risk detection can be in real-time and cloud environments allow greater scalability. What’s more, the pace of AI means IGA is quickly moving beyond slower, more rigid, rule-based approaches.

The AI Supply Chain is Actually an API Supply Chain: Lessons from the LiteLLM Breach

The recent supply chain attack involving Mercor and the LiteLLM vulnerability serves as a massive wake-up call for enterprise security teams. While the security industry has spent the last year fixating on prompt injections and model jailbreaks, this breach highlights a far more systemic vulnerability. The weakest link in enterprise AI is not necessarily the model itself. It is the middleware connecting the models to your data.

Your Convenient AI Agent Is a Backdoor to Your Files #agenticai #promptinjection

People are installing powerful AI agents on everyday laptops without realising those tools can access files, emails and operating system functions. Once prompt injected, that agent can behave like a malicious version of its user, which turns convenience into a direct path for deletion, exfiltration and loss of control.

Every Tech Revolution Follows This Pattern (AI Is No Different)

AI adoption is happening faster than any technology cycle in history. Information security and risk management are being sacrificed for speed and every single technology revolution has followed the same pattern. In this episode of Razorwire Raw, Jim Rees draws on decades of experience through the internet boom, virtualisation revolution and cloud computing adoption to explain what's actually happening with AI right now. Each cycle has been faster than the last, and each time, security gets left behind.

What is the OWASP Top 10 for LLM Application Security

Initially published by the Open Worldwide Application Security Project (OWASP) in 2023, the Top 10 for LLM Application Security list seeks to bridge the gap between traditional application security and the unique threats related to large language models (LLMs). Even where the vulnerabilities listed have the same names, the Top 10 for LLM Application Security focuses on how threat actors can exploit LLMs in new ways and potential remediation strategies that developers can implement.

AI Workload Baseline and Drift Detection: Defining "Normal" Agent Behavior

Security teams deploying AI agents into Kubernetes know they need behavioral baselines. The concept is straightforward: define what “normal” looks like for each agent, then detect when behavior drifts in ways that suggest compromise. The problem is that AI agents are designed to change. A model update alters inference latency. A prompt revision shifts tool-calling sequences. A new MCP integration adds API destinations nobody flagged during the last security review.