Your AI Just Became the Insider Threat | CrowdStrike Global Threat Report 2026

Hackers can reach your critical systems in just 27 seconds. In 2025, AI-powered cyberattacks surged 89% as adversaries weaponized the same AI tools organizations use every day. From eCrime groups to China-nexus actors, North Korean operatives, and Russian intelligence services, AI is accelerating and reshaping global threat activity. In this video, you'll learn how adversaries are not just using AI: they are weaponizing your AI against you.

What a Rogue Vacuum Army Teaches Us About Securing AI

If you’re like me, you’ve been enthralled by the recent story, expertly written by Sean Hollister at The Verge, about how Sammy Azdoufal built a remote control for his DJI Romo vacuum with a PlayStation controller and ended up in control of 7,000+ robovacs all over the world. On the surface, it sounds like vibe coding gone slightly sideways. I mean, really, what could a vacuum possibly do? Turns out… a lot.

The 89% Problem: How LLMs Are Resurrecting the "Dormant Majority" of Open Source

AI coding assistants are quietly resurrecting millions of abandoned open source packages. For the last decade, developers relied on a simple heuristic for open source security: Prevalence = Trust. If a package was downloaded millions of times a week (lodash, react, requests), we assumed it was "safe enough" because thousands of eyes were on it. If it was obscure, we approached with caution.
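The old heuristic, plus the "dormancy" signal that LLM-suggested packages often trip, can be sketched as a simple triage rule. The thresholds and function names below are illustrative assumptions, not from the article:

```python
from datetime import datetime, timedelta

# Illustrative thresholds -- assumptions for this sketch, not from the article.
DORMANT_AFTER = timedelta(days=2 * 365)     # no release in two years
POPULAR_WEEKLY_DOWNLOADS = 1_000_000

def triage(last_release: datetime, weekly_downloads: int,
           now: datetime) -> str:
    """Classify a dependency: the legacy prevalence heuristic,
    extended with a dormancy check for resurrected packages."""
    dormant = (now - last_release) > DORMANT_AFTER
    popular = weekly_downloads >= POPULAR_WEEKLY_DOWNLOADS
    if dormant:
        # An assistant may have surfaced an abandoned package;
        # prevalence alone no longer vouches for it.
        return "review"
    return "trusted" if popular else "caution"
```

Under this sketch, a package untouched since 2018 gets flagged for review no matter how popular it once was, which is exactly the case the prevalence heuristic misses.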

Why AI Features Don't Equal Better Vulnerability Management

AI is becoming table stakes in vulnerability and exposure management. In this candid webinar conversation, Chris Ray, Field CTO at GigaOm, and Will Gorman, CTO and leader of AI initiatives at Nucleus Security, challenge the assumption that more AI automatically leads to better outcomes.

AI Agent Sandboxing & Progressive Enforcement: The Complete Guide

Your CISO just got word that engineering is deploying AI agents into production Kubernetes clusters next quarter. Not chatbots—autonomous agents that generate and execute code, call external APIs through MCP tool runtimes, access internal databases, and make decisions without human review. The question lands on your security team: “How are we securing these?”
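One way to picture "progressive enforcement" (the term from the guide's title; the staging logic below is my assumption, not the guide's method) is a policy that first audits agent actions to learn a baseline, then flips to blocking anything outside the observed allowlist:

```python
from enum import Enum

class Mode(Enum):
    AUDIT = "audit"      # log everything, block nothing
    ENFORCE = "enforce"  # block actions outside the learned allowlist

class ProgressivePolicy:
    """Hypothetical sketch: tighten enforcement once enough
    agent behavior has been observed to form a baseline."""

    def __init__(self, promote_after: int = 100):
        self.mode = Mode.AUDIT
        self.allowlist: set[str] = set()
        self.observed = 0
        self.promote_after = promote_after

    def record(self, action: str) -> bool:
        """Return True if the action is permitted."""
        if self.mode is Mode.AUDIT:
            self.allowlist.add(action)   # learn the baseline
            self.observed += 1
            if self.observed >= self.promote_after:
                self.mode = Mode.ENFORCE
            return True
        return action in self.allowlist
```

In a real deployment the enforcement point would be the MCP tool runtime or an admission layer in the cluster; the class above only illustrates the audit-then-enforce progression.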

AI-Aware Threat Detection for Cloud Workloads: 4 Attack Chains Most Security Stacks Miss

Your security stack was built for workloads that follow predictable code paths. AI agents don’t. They interpret prompts, generate code on the fly, invoke tools dynamically, and escalate privileges in ways no developer anticipated — all as part of normal operation. The signals that indicate a compromise in a traditional container are indistinguishable from an AI agent doing its job. And most detection tools can’t tell the difference. This isn’t a theoretical gap.
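As a toy illustration of chain-based detection (the event names and chain are invented for this sketch), the idea is to correlate a sequence of individually benign signals rather than alert on any single event, since each step alone is indistinguishable from an agent doing its job:

```python
ATTACK_CHAIN = ["tool_invocation", "privilege_escalation", "outbound_transfer"]

def matches_chain(events: list[str], chain: list[str] = ATTACK_CHAIN) -> bool:
    """True if the events contain the chain as an ordered subsequence.
    Each step alone looks like normal agent behavior; the ordered
    combination is the detection signal."""
    it = iter(events)
    # `step in it` advances the iterator, so steps must appear in order.
    return all(step in it for step in chain)
```

Production detections would key on richer telemetry (process lineage, network destinations, identity context), but the ordered-subsequence check captures why stacks that evaluate events in isolation miss these chains.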