Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

The Exploitability Intelligence Gap: What Security Teams Can Know Before CISA KEV

In this webinar, Nucleus Security CEO Steve Carter and Product Marketing Lead Tally Netzer break down the growing “exploitability intelligence gap” and what it means for modern vulnerability and exposure management programs. Drawing from six months of research and real-world vulnerability data, they explore how attacker timelines have compressed, why traditional reactive workflows are struggling to keep pace, and where organizations are missing critical signals before exploitation begins.

Ep. 57 - Russia's Proxy Bridge: BlackCat, Scattered Spider, and the Kremlin

In Part 4 of our Russian intelligence series, host Tova Dvorin and Adrian Culley map the proxy bridge between Western teenage hackers and Moscow. BlackCat (ALPHV) ransomware-as-a-service is the operational hinge: Scattered Spider breaks in, BlackCat encrypts, and the FSB watches the dashboard. Hear how the Kremlin earns plausible deniability, why a $115M extortion stream self-funds Russian intelligence, and what MI6's new "hybrid shadow war" warning means for defenders simulating Rust-based ransomware in their own networks.

AI Is Replacing Security Dashboards (Headless Cloud Security Explained)

AI is changing cloud security—and dashboards might be next to go. In this video, we introduce headless cloud security: a new model where AI agents, not humans, operate security systems. Instead of dashboards and manual triage, security becomes API-driven, automated, and built for autonomous execution. This shift redefines DevSecOps, cloud security, and AI security workflows—moving humans from operators to orchestrators.

2026 Public Sector Cyber Attacks and Data Breaches

In 2026, the public sector continues to face a steady stream of cyber attacks, with data breaches exposing sensitive information, disrupting essential services, and undermining public trust. From municipal governments to federal agencies, public sector organizations of all sizes face challenges from threat actors exploiting outdated systems, human error, and expanding digital footprints. These incidents are more than isolated security failures.

Turn Busywork Into Real Work With Egnyte's AI

It’s Friday afternoon, and you need a quick team update. Five minutes, tops, right? You ping Slack. A few people reply, a few don’t. So, you schedule a “quick sync” to get everyone on the same page. Two hours later, you’ve spent your afternoon chasing updates instead of doing actual work. And you’ll do it all over again next week. Now picture this. You’re collecting product demo videos for an agency.

Remote Penetration Testing in 2026: A CTO & CISO Guide

Your presence here, reading this, suggests that something is nagging at you. Maybe it’s the Ivanti headline you saw last week, or the fact that half your engineering team works from cafés, co-working spaces, and home offices you’ve never set foot in. Maybe it’s the upcoming audit and that one checklist item about remote access controls you’ve been putting off. No, you’re not being paranoid. We have numbers that justify your growing anxiety.

AI Agent Incident Response in Cloud-Native Environments: A Playbook for Modern SOCs

It’s 2 a.m. and the SOC has a Tier 3 page. A customer-service agent on the production cluster has just wired refund payments to seven addresses outside the approved disbursement list. The runbook is unambiguous: isolate the pod, image the disk, image the memory, root-cause within 48 hours.

AI Agent Security Performance: Framework for Evaluating Latency, Throughput, and Observability Overhead

Every AI workload security PoC reaches the same conversation. Platform engineering pushes back: the AI team won’t accept extra latency on inference. The security engineer hunts for benchmarks and finds a contradiction. Langfuse publishes 15% overhead. AgentOps publishes 12%. The security vendor quotes 1–2.5%. None of them is lying; they measure different layers.

How to Harden AI Agents in Cloud Environments: The 9 Capabilities Your Stack Must Provide

Most “hardening” advice for AI agents is a checklist of things to configure before the agent runs. CIS Kubernetes Benchmark gates. Pod Security Standards baselines. NetworkPolicy templates. None of it’s wrong — it’s just one of four phases, the one your stack already covers. The other three are Observe, Enforce, and Reconcile. They’re where AI agents actually get breached, and they’re where most stacks have nothing.