
Whole-of-state cyber defense: How AI-driven security helps US states protect what matters most

Short answer: because attackers exploit fragmentation faster than governments can respond. This shift toward collective cyber defense is a cornerstone of the new federal vision. The March 2026 National Cyber Strategy for America explicitly calls for a "new level of relationship between the public and private sectors" and demands "unprecedented coordination across government" to protect the American people.

Datadog MCP Server, Experiments, Bits AI Security Analyst, and more | This Month in Datadog

April’s This Month in Datadog spotlights the Datadog MCP Server, which gives AI agents secure, real-time access to Datadog telemetry, and Datadog Experiments, which lets you design, launch, and analyze experiments to see the full impact of product changes on the user journey. Plus, we cover how to:

- Accelerate Cloud SIEM investigations with Bits AI Security Analyst
- Remediate vulnerabilities in your codebase with Bits AI Dev Agent for Code Security
- Explore Datadog with natural language using Bits Assistant

Types of AI agents: From simple reflex to autonomous systems

AI agents fall into five foundational categories: simple reflex, model-based reflex, goal-based, utility-based, and learning agents. Each is defined by how much environmental awareness and decision-making complexity the system can handle, from fixed condition-action rules to feedback-driven self-improvement.
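The first two categories above can be sketched in a few lines. This is an illustrative toy, not an implementation from the article; the rule tables and percept names are invented for the example.

```python
def simple_reflex_agent(percept: str) -> str:
    """Simple reflex agent: a fixed condition-action table where the
    current percept alone determines the action."""
    rules = {
        "obstacle": "turn",
        "clear": "forward",
    }
    return rules.get(percept, "wait")


class ModelBasedReflexAgent:
    """Model-based reflex agent: keeps internal state (a partial model of
    the world), so its action can depend on history, not just the
    current percept."""

    def __init__(self) -> None:
        self.just_saw_obstacle = False  # remembered environmental state

    def act(self, percept: str) -> str:
        if percept == "obstacle":
            self.just_saw_obstacle = True
            return "turn"
        if percept == "clear" and self.just_saw_obstacle:
            self.just_saw_obstacle = False
            return "forward-carefully"
        return "forward"
```

The remaining categories layer on top of this pattern: goal-based agents choose actions by searching for states that satisfy a goal, utility-based agents rank those states by a utility function, and learning agents adjust the rules themselves from feedback.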

AI Agents are moving your sensitive data: Nightfall built a solution where DLP fails

Somewhere in your environment right now, an AI agent is reading files, querying a database, and passing output through a channel your DLP has never seen. It's running under a legitimate user credential, inside a sanctioned tool, and it will not trigger a single alert. When it's done, there will be no record of what it accessed or where that data went. This is not an edge case. It is the default state of most enterprise environments in 2026.

This Is How Red Teams Actually Use AI Security Data #aisecurity #redteam #threatintelligence

The volume of AI security research is now too high for any human to track properly by hand. The practical answer is using AI to filter AI, reducing hundreds of articles and reports into a daily shortlist so analysts spend their time on signal instead of noise.

The Research Behind Detecting And Attributing LLM-Generated Passwords - Gaëtan Ferry

GitGuardian Senior Cybersecurity Researcher Gaëtan Ferry’s latest research shows that AI-generated passwords are leaving fingerprints in the wild. In this interview, he explains how he used Markov chains, a century-old statistical model, to detect patterns in passwords generated by modern LLMs, attribute them to model families, and identify 28,000 likely LLM-generated passwords across public GitHub. The findings are a warning for teams adopting AI coding agents.
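To make the Markov-chain idea concrete, here is a minimal sketch of scoring passwords with a character-bigram model. This is not Ferry's actual methodology or training data; the corpus, smoothing choice, and scoring function are all assumptions for illustration. The core intuition is the same: passwords drawn from a similar generator yield systematically different transition likelihoods than unrelated strings.

```python
import math
from collections import defaultdict


def train_bigram_model(passwords):
    """Count character-bigram transitions across a training corpus,
    then convert counts to smoothed log-probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        padded = "^" + pw + "$"  # ^ and $ mark start/end of password
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    alphabet = {c for row in counts.values() for c in row} | set(counts)
    model = {}
    for a, row in counts.items():
        total = sum(row.values()) + len(alphabet)  # add-one smoothing
        model[a] = {b: math.log((row.get(b, 0) + 1) / total) for b in alphabet}
    return model, alphabet


def score(model, alphabet, pw):
    """Average per-transition log-likelihood of a password under the model.
    Higher means the password looks more like the training corpus."""
    padded = "^" + pw + "$"
    floor = math.log(1 / (len(alphabet) + 1))  # penalty for unseen transitions
    logps = [model.get(a, {}).get(b, floor) for a, b in zip(padded, padded[1:])]
    return sum(logps) / len(logps)
```

Attribution works by training one model per suspected generator (e.g., per LLM family) and assigning each password to the model under which it scores highest.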

System Prompts Are Not Security Controls: A Deleted Production Database Proves It

On April 25th, a Cursor AI coding agent running Anthropic's Claude Opus 4.6, one of the most capable models in the industry, deleted the production database for PocketOS, a software platform used by car rental businesses across the country to manage their entire operations. The deletion took 9 seconds.

Detection Engineering with LimaCharlie and Claude Code

Detection engineering is fundamentally a translation problem: rules need to be converted between formats, IOCs need to be converted into detection logic, and noisy alerts need to be converted into precise suppressions. That translation work is what consumes analyst time, and it's what Claude Code handles well.
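One of those translations, turning a flat list of IOCs into detection logic, can be sketched generically. The rule structure below is illustrative only: it is not LimaCharlie's actual D&R rule schema, and the field names and operators are assumptions for the example.

```python
def iocs_to_rule(name, domains):
    """Translate a flat list of domain IOCs into a simple OR-of-matchers
    detection rule (illustrative schema, not a real product format)."""
    return {
        "name": name,
        "detect": {
            "op": "or",
            "rules": [
                {"op": "ends with", "path": "event/DOMAIN_NAME", "value": d}
                for d in domains
            ],
        },
    }


def matches(rule, event):
    """Evaluate the sketch rule against a DNS-style event dict."""
    domain = event.get("DOMAIN_NAME", "")
    return any(domain.endswith(r["value"]) for r in rule["detect"]["rules"])
```

The same pattern covers the other two translations the article names: converting rules between formats is a walk over a structure like this one emitting another syntax, and suppressions are the inverse, wrapping a noisy rule in an additional exclusion matcher.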