Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Splunk report: Agentic AI takes centre stage in CISOs' path to digital resilience

Nearly all CISOs report they are now responsible for AI governance and risk management, and they cite the growing sophistication of threat actor capabilities as their greatest risk. The vast majority say AI enables more security events to be reviewed.

Endpoint AI Agents Don't Ask Permission. For Better or Worse, They Operate Like Employees

The next major security problem enterprises will face won’t originate in the cloud. It will emerge on endpoints, where agentic AI is already operating with autonomy, authority, and access to sensitive data.

AI isn't replacing SOC teams. It's elevating them.

AI has radically transformed the way SOC teams operate, but how is it affecting the people behind the work? For our recent Voice of Security 2026 report, we surveyed over 1,800 global security professionals to find out. We wanted to understand not only AI’s impact on security careers, but how teams really feel about these shifts. The results show that despite rising workloads and widespread burnout across security teams, sentiment toward AI is largely positive.

CrowdStrike 2026 Global Threat Report: The Evasive Adversary Wields AI

As cyber defenses become stronger, adversaries continue to evolve their tactics to succeed. In 2025, the year of the evasive adversary, the threat landscape was defined by attacks that targeted trusted relationships, demonstrated fluency with AI tools, and incorporated tradecraft tailored to exploit security blind spots.

Introducing the AIDA Orchestration Agent: Always-On Human Risk Management Has Arrived

Social engineering remains the most reliable way into an organization—and attackers are getting better at it every day. According to the 2025 Verizon Data Breach Investigations Report, up to 68% of breaches involve social engineering. AI has only widened the gap. More than 95% of cybersecurity professionals say AI-generated phishing is harder to detect, and Microsoft reports that AI-generated phishing emails are 4.5x more successful than manually created ones.

Protecting Against Prompt Injection at the Data Layer, Not the Prompt Layer

Most teams try to fix prompt injection in the prompt itself. They add guardrails. They rewrite system messages. They stack more instructions on top of instructions. It feels productive. It is also fragile. Prompt injection is not just a prompt problem. It is a data problem. And if you treat it like a wording problem instead of a data control problem, you will keep playing defense. Let’s unpack why.
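To make the data-layer framing concrete, here is a minimal sketch (not from the article) of what a data-side control might look like: documents are labeled with trust at ingestion time, and untrusted content is segregated before it ever reaches a prompt. The pattern list and the `<untrusted-data>` delimiter are illustrative assumptions; a real deployment would rely on provenance and trust labels, not regexes alone.

```python
import re

# Hypothetical patterns that often signal injected instructions in
# retrieved content. Illustrative only -- pattern matching is a weak
# signal; the point is that the check happens at the data layer.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def classify_document(text: str, source_trusted: bool) -> dict:
    """Attach a trust label at ingestion time (the data layer),
    so downstream prompt assembly can segregate untrusted content."""
    flagged = any(p.search(text) for p in SUSPECT_PATTERNS)
    return {
        "text": text,
        "trusted": source_trusted and not flagged,
        "flagged": flagged,
    }

def build_context(docs: list[dict]) -> str:
    """Trusted documents pass through verbatim; untrusted ones are
    wrapped in a data-only delimiter the model is instructed to treat
    as content, never as instructions."""
    parts = []
    for d in docs:
        if d["trusted"]:
            parts.append(d["text"])
        else:
            parts.append("<untrusted-data>\n" + d["text"] + "\n</untrusted-data>")
    return "\n\n".join(parts)
```

The design choice this illustrates: the guardrail lives where the data enters the pipeline, so it holds no matter how the prompt wording changes later.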

AI Data Governance Framework: A Step-by-Step Implementation Guide

AI data governance is the structured framework that ensures sensitive data remains protected when artificial intelligence systems are used. Traditional data governance focuses on data at rest: it manages databases, access controls, storage policies, and compliance documentation. AI fundamentally changes this environment, which makes understanding AI data and privacy crucial. When organizations use large language models, AI agents, or retrieval-based systems, data flows dynamically.
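One way to picture governance for data in motion is a policy gate that checks a record's classification before it can be included in an LLM request. This is a minimal sketch under assumed labels ("public", "internal", "confidential"); real frameworks map classifications to regulatory categories such as PII or PHI and log every decision.

```python
from dataclasses import dataclass

# Assumed classification labels -- illustrative, not a standard taxonomy.
ALLOWED_FOR_LLM = {"public", "internal"}

@dataclass
class Record:
    content: str
    classification: str  # e.g. "public", "internal", "confidential"

def redact_for_llm(records: list[Record]) -> list[str]:
    """Governance gate for data in motion: records whose classification
    permits LLM processing pass through unchanged; all others are
    replaced with a redaction marker so sensitive content never leaves
    the governed boundary."""
    out = []
    for r in records:
        if r.classification in ALLOWED_FOR_LLM:
            out.append(r.content)
        else:
            out.append("[REDACTED: " + r.classification + "]")
    return out
```

The gate sits in the data path rather than in the model, which is the shift the blurb describes: controls must follow the data as it flows, not just guard it at rest.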

Introducing Forescout VistaroAI | The First Skills-Based Agentic AI for Cybersecurity

Meet Forescout VistaroAI, the first skills-based agentic AI for cybersecurity. Forescout VistaroAI thinks like a security expert, not a chatbot. It uses cybersecurity-specific, preprogrammed skills to analyze anomalies, interpret posture changes, and automatically highlight affected assets. It eliminates the need for prompt engineering, providing role-based automation with human-in-the-loop control. The result is faster, more accurate decisions, and clearer starting points for real investigations.