
Agentic AI in security operations: Friend, risk, or both?

Agentic AI is forcing a hard question on every security leader: when your SOC is full of autonomous “doers” instead of just dashboards and scripts, is that your new best friend or a brand‑new risk surface you barely understand? The honest answer is both, and the way you design, govern, and deploy these systems will decide which side wins.

AI Security and Trust: Why SOC Teams Don't Trust AI

92% of security leaders say something is actively reducing their trust in AI within the SOC. These aren't skeptics; they're people who have already adopted AI and believe in its ability to enhance security operations. We know from the 2026 AI SOC Leadership Report that AI is already widely adopted in the SOC, with 94% of organizations using it in some capacity.

Credential management for AI agents

The proliferation of credentials outside centralized visibility and control is known as “credential sprawl,” and attackers are eager to take advantage of it. Unfortunately, credential management is a broad problem that only grows in complexity as organizations add new tools, employees, and partners.
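One common countermeasure to credential sprawl is to force every agent lookup through a single, audited resolver instead of letting secrets live in code, configs, or prompts. The sketch below is a minimal illustration of that pattern, not anything from the article: the class name, the environment-variable provisioning, and the `triage-agent` identifier are all hypothetical stand-ins for a real vault or secrets manager.

```python
import os

class CredentialResolver:
    """Single choke point for agent credential access: every lookup is
    recorded, so nothing is fetched outside centralized visibility."""

    def __init__(self, audit_log):
        # audit_log collects (agent_name, credential_name) tuples.
        self.audit_log = audit_log

    def get(self, agent_name, credential_name):
        # Record who asked for what -- the visibility that sprawl destroys.
        self.audit_log.append((agent_name, credential_name))
        value = os.environ.get(credential_name)
        if value is None:
            raise KeyError(f"credential {credential_name!r} not provisioned")
        return value

# Demo: provision via the environment (in practice, a secrets manager),
# then have every agent fetch through the resolver, never hardcode.
log = []
resolver = CredentialResolver(log)
os.environ["TRIAGE_API_TOKEN"] = "example-token"  # demo value only
token = resolver.get("triage-agent", "TRIAGE_API_TOKEN")
```

The point of the choke point is less the lookup itself than the audit trail: when a new tool, employee, or partner is added, its credential use is visible in one place rather than scattered across scripts.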

How to Detect Shadow AI

In 2026, the gap between AI adoption and AI oversight has become a primary boardroom concern. While generative AI has supercharged productivity, it has also introduced Shadow AI: the unmanaged, invisible use of unauthorized AI apps and autonomous agents that operate outside the view of traditional IT security. In this guide, you'll learn why Shadow AI is exponentially harder to detect than Shadow IT and, more importantly, how to build a modern detection framework.

AI SOC vs. white box AI: Why black boxes fail in the real world

There’s a growing wave of “AI SOC” startups promising autonomous everything. They’ll triage your alerts, investigate threats, and even run your playbooks. Push a button, let the machine handle the mess, and enjoy the magic. It sounds great until the moment something breaks. Then everyone, not just security, asks the same question: “What exactly did it do?” And that’s when these systems turn into a liability.

Introducing early access for Case Review Agents: AI decisioning for high-stakes identity decisions

Every day, your review team makes hundreds of decisions that determine who gets access to your platform. These decisions carry a lot of weight. Get them right, and you protect your business while delivering a seamless user experience. Get them wrong, and you either block legitimate users or open the door to fraud. As your business scales, these decisions get harder to manage. Case volume climbs, fraud tactics shift, and regulatory expectations evolve.

The Partnerships Taking on AI Security: Daniel Bernard, CrowdStrike Chief Business Officer

The previous episode of the Adversary Universe podcast explored the “vuln-pocalypse” and the implications of advanced AI models accelerating vulnerability discovery and exploitation. Now, we’re diving into how companies are working together to face these evolving security risks. CrowdStrike Chief Business Officer Daniel Bernard spends much of his time talking with partners and customers about how to address their growing concerns: Is their business protected? Do they know which vulnerabilities are in their environment? What do they do about them?

Donuts and Beagles: Fake Claude site spreads backdoor

A malicious imitation of Anthropic's Claude site leads to DLL sideloading and a backdoor.

As we reported on social media recently, Sophos X-Ops has been investigating reports of a fake Claude AI website distributing malware. Like other researchers, we thought this might be a PlugX-like campaign, given that the attack chain shares several characteristics with observed PlugX attacks.
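DLL sideloading works because the Windows loader, by default, checks an application's own directory before system paths when resolving a DLL by name, so a malicious DLL dropped next to a legitimate signed executable gets loaded in its place. The toy model below illustrates that search-order mechanic only; the `version.dll` name and paths are hypothetical examples, not details from the Sophos investigation.

```python
def resolve_dll(dll_name, app_dir, system_dirs, filesystem):
    """Simplified model of the default Windows DLL search order:
    the application's directory is consulted before system paths,
    which is exactly what sideloading abuses."""
    for directory in [app_dir, *system_dirs]:
        candidate = f"{directory}/{dll_name}"
        if candidate in filesystem:
            return candidate
    return None  # load fails if the DLL exists nowhere on the path

# Benign install: the DLL exists only in System32, so it resolves there.
fs = {"C:/Windows/System32/version.dll"}
benign = resolve_dll("version.dll", "C:/FakeApp", ["C:/Windows/System32"], fs)

# After a dropper plants a same-named DLL beside the signed EXE,
# the planted copy wins the search and runs with the EXE's trust.
fs.add("C:/FakeApp/version.dll")
hijacked = resolve_dll("version.dll", "C:/FakeApp", ["C:/Windows/System32"], fs)
```

This is also why a common detection heuristic is to flag DLLs in application directories whose names shadow well-known system DLLs.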

Meet GitGuardian's AI Assistant: Natural Language Queries Across All Your Incidents

See how the GitGuardian Assistant helps teams investigate, understand, and remediate secret incidents directly from the GitGuardian workspace. In this preview, Mathieu and Dwayne walk through how the assistant uses incident context, workspace details, and GitGuardian documentation to answer questions, suggest next steps, and help manage incidents through natural language. It can explain threat patterns, assess scope and impact, recommend remediation steps, assign incidents, update tags, and propose changes to incidents.