
How to Monitor MCP Usage: A 10-Step Security Checklist for 2026

What you need to know: MCP can evade traditional DLP, IAM, and SIEM controls because agent traffic looks like authorized API calls, sensitive data is semantically transformed before it leaves the perimeter, and exfiltration happens through tool invocations rather than file transfers.
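The gap described above sits at the tool-invocation layer: controls that watch files and network flows never see what an agent asked a tool to do. As a minimal sketch of invocation-level auditing (every name here is hypothetical and illustrative, not part of any real MCP SDK — production servers would hook this into their own middleware and ship records to a SIEM), an agent host could log each tool call before and after execution:

```python
import json
import time

def audit_tool_call(tool_name, arguments, invoke, log):
    """Wrap a single MCP-style tool invocation so the call and the
    size of its result are recorded. Hypothetical helper for
    illustration; 'invoke' stands in for the real tool handler."""
    record = {
        "ts": time.time(),
        "tool": tool_name,
        "args": json.dumps(arguments, default=str),
    }
    result = invoke(**arguments)
    # Result size is a crude exfiltration signal: a "search" tool
    # returning megabytes deserves a second look.
    record["result_bytes"] = len(str(result))
    log.append(record)
    return result

# Usage: wrap a toy "search_files" tool and inspect the audit trail.
log = []
result = audit_tool_call(
    "search_files",
    {"query": "quarterly revenue"},
    invoke=lambda query: f"3 matches for {query!r}",
    log=log,
)
print(log[0]["tool"])
```

The point of the sketch is where the hook lives: at the invocation boundary, where tool name, arguments, and result size are all visible, rather than at the file or network layer where agent traffic looks like any other authorized API call.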

Cyberhaven Analyst Plugin: AI-Assisted Security Investigation in Claude Code and Codex

Security teams have a data problem. Not a shortage of data, but a growing data surfacing problem. The signals are there, the incidents are logged, and the classifications exist. But getting from raw data to a prioritized action plan still requires close to an hour of manual querying, tab-switching, and context reconstruction, every single time. The Cyberhaven Analyst Plugin changes that.

Plenary Session on Data Protection in the Age of AI at CII CIO Awards & Conclave

In this panel discussion, titled "Data Protection in the Age of AI," our Founder & CEO, Mr. Anirban Mukherji, joined several distinguished speakers to focus on critical aspects of data privacy and cybersecurity. The session explored how artificial intelligence impacts data management and the necessity of robust data privacy and security measures. The experts discussed the importance of responsible AI practices to navigate the evolving digital landscape effectively.

AI Agents in the Cloud: A Risk Management Framework for Security Leaders

Your risk committee meets Thursday. The agenda has a new item: AI agent risk posture. You open the register. The fraud detection agent shipped in March is on it. So is the customer service agent. Neither row is useful — “likelihood: medium, impact: high, control: service account scoped via IAM.” Three months ago that was approximately right. Last week the platform team added two MCP connections, the model was upgraded, and the agent now touches data classes the entry never anticipated.

What's happening to DevOps Security?

As 2026 rolls on, our capacity to prompt ourselves silly appears to be limitless. We’ve already seen the financial, legal, and reputational damage to Deloitte as they partly refunded the Australian government for a 237-page audit report containing LLM-generated hallucinations like fabricated academic references, fake footnotes, and a false quote attributed to a judge.

Stop Blaming AI for Bad System Design | Fix MCP Security

Every few weeks, a new story surfaces: an AI agent deletes a production database, an autonomous coding tool racks up a five-figure cloud bill, or a chatbot exfiltrates internal documents through a prompt injection attack. The reaction is predictable. “AI is dangerous.” “LLMs can’t be trusted.” “We need better guardrails on the model.” But if you look at the root cause of these incidents, the model is rarely the problem. The system around it is.

Are banks ready for AI-powered cyber threats?

A recent American Banker article, “Knock on wood: Are banks doing enough to cope with Mythos?” raises a timely and uncomfortable question about advanced AI models like Anthropic’s Claude Mythos. As highlighted in the article, INETCO CEO Bijan Sanii points out a critical truth: The conversation is being fueled by the emergence of AI technology capable of identifying software vulnerabilities at a speed and scale that was previously unimaginable.

Snyk Embeds Anthropic's Claude to Advance AI-Powered Security for Software Development

BOSTON, May 7, 2026 — Snyk, the AI security company, today announced it is leveraging Anthropic's Claude models to advance software security in an era of AI-powered development. Starting today, Snyk has integrated Claude into the Snyk AI Security Platform — powering automated vulnerability discovery, prioritization, and developer-ready fixes across code, dependencies, containers, and AI-generated artifacts. The threat driving that integration is real and accelerating.

Agentic AI in security operations: Friend, risk, or both

Agentic AI is forcing a hard question on every security leader: when your SOC is full of autonomous “doers” instead of just dashboards and scripts, is that your new best friend or a brand‑new risk surface you barely understand? The honest answer is both, and the way you design, govern, and deploy these systems will decide which side wins.