Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

OpenClaw (Moltbot) Personal Assistant Goes Viral - And So Do Your Secrets

In early 2026, Moltbot, a new AI personal assistant, went viral. GitGuardian detected more than 200 leaked secrets related to it, including some from healthcare and fintech companies. Our contribution to Moltbot: a skill that turns secret scanning into a conversational prompt, letting users ask, "Is this safe?"
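The conversational-scan idea can be illustrated with a minimal sketch. The patterns and the `is_this_safe` interface below are hypothetical stand-ins, not GitGuardian's actual skill; real scanners use hundreds of validated detectors plus entropy and context checks:

```python
import re

# Hypothetical detectors for illustration only -- a production scanner
# validates candidates against many more patterns and live checks.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def is_this_safe(text: str) -> str:
    """Answer the conversational question 'is this safe?' for a snippet."""
    findings = [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
    if not findings:
        return "No known secret patterns detected."
    return "Unsafe: possible " + ", ".join(findings)

print(is_this_safe('aws_key = "AKIAABCDEFGHIJKLMNOP"'))
```

Wrapping a check like this behind a chat prompt is what turns a scanning engine into an assistant skill: the user pastes text, the skill answers in plain language.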

Threat hunting to detection engineering: Analyzing real malware with Claude Code, LimaCharlie, and Linux

Claude Code, originally just auto-complete on steroids for IDEs, shows a lot of promise for becoming a major tool in the DFIR, detection engineering, and security analyst's toolbox. Whether through its support for MCP, its agent skills, or its general ability to quickly figure out how to accomplish a given task, it is rapidly becoming more than a code generation tool. This is the first of a three-part series.

Productivity at a Price: The Rising Cost of AI Convenience

Humans have always sought to streamline productivity through the most convenient solutions available, prioritizing speed to stay ahead and gain an edge over the competition. From the assembly line to the cloud, the goal remains the same: do more with less friction. Today, that convenience is synonymous with AI. While these tools have revolutionized how we work, the reality remains that rapid innovation always comes with a hidden cost.

Voice of Security 2026: AI is everywhere yet manual work persists

AI adoption in security has soared. But for many teams, manual work and burnout remain stubbornly high. To understand why, and what security teams must do next, we partnered with Sapio Research to survey more than 1,800 security leaders and practitioners worldwide for our Voice of Security 2026 report. We wanted to learn how teams are using AI and automation, how the role of security is evolving, and how professionals believe AI will impact their careers. The data is revealing.

Stop Staring at JSON: How GenAI is Solving the API "Context Crisis"

There is a moment that happens in every SOC (Security Operations Center) every day. An alert fires. An analyst looks at a dashboard and sees a URL: POST /vs/payments/proc/77a. And then they stop. They stare. And they ask the question that kills productivity: "What does this thing actually do?" Is it a critical payment gateway? A test function? Does it handle credit card numbers or just transaction IDs?
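One way to close that gap is to attach human-readable context to every endpoint so an analyst (or an LLM summarizing the alert) never sees a bare path. The sketch below is purely illustrative: the catalog entry for the endpoint is invented, and in practice this context would be generated from API specs, traffic analysis, or a GenAI summary of the service code:

```python
from dataclasses import dataclass, field

@dataclass
class EndpointContext:
    purpose: str
    criticality: str
    data_handled: list = field(default_factory=list)

# Hypothetical catalog -- the description of this endpoint is made up
# for illustration; real context comes from specs or code analysis.
CATALOG = {
    "POST /vs/payments/proc/77a": EndpointContext(
        purpose="processes a payment authorization",
        criticality="critical",
        data_handled=["transaction IDs", "tokenized card references"],
    ),
}

def explain(endpoint: str) -> str:
    """Turn a raw endpoint string into an analyst-readable summary."""
    ctx = CATALOG.get(endpoint)
    if ctx is None:
        return f"{endpoint}: no context on file (the 'context crisis')"
    return (f"{endpoint}: {ctx.purpose}; handles "
            f"{', '.join(ctx.data_handled)}; criticality={ctx.criticality}")

print(explain("POST /vs/payments/proc/77a"))
```

The design point is simple: the enrichment lookup happens at alert time, so the analyst's first view already answers "what does this thing actually do?"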

MCP & AI Agent Security: Addressing the Growing Data Exfiltration Vector

The security landscape is shifting. For the past two years, security teams have focused primarily on what users type into chatbots by monitoring interactions with ChatGPT, Gemini, and Claude. But a new risk vector is emerging, one that operates largely outside traditional security controls: AI agents accessing corporate data autonomously through the Model Context Protocol (MCP).

From IDE to CLI: Securing Agentic Coding Assistants

Today we’re excited to announce that Zenity now protects the most powerful, enterprise-critical coding assistants - Cursor, Claude Code, and GitHub Copilot - from build time to runtime. As AI becomes a first-class developer tool, Zenity gives security teams the visibility and control they need to safely embrace coding assistants everywhere they’re used: in IDEs, CLIs, or the cloud.

Semantic Guardrails for AI/ML - Protegrity AI Developer Edition

In this installment of our AI Developer Edition Set-up series, Dan Johnson, a software engineer at Protegrity, introduces semantic guardrails. Learn how to protect your LLM and chatbot workflows from malicious prompts and insecure AI responses. As AI becomes central to enterprise operations, controlling the context of conversations is a major challenge. Semantic guardrails provide a safety layer that ensures your AI stays on topic and never leaks sensitive PII.
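The shape of a semantic guardrail can be sketched in a few lines: validate the conversation topic on the way in, and redact sensitive data on the way out. This is a minimal illustration with an assumed topic allowlist and two toy PII regexes, not Protegrity's implementation; production guardrails use semantic classification rather than keyword and pattern matching:

```python
import re

# Hypothetical topic allowlist for a customer-support chatbot.
ALLOWED_TOPICS = {"billing", "account", "support"}

# Toy PII detectors -- real guardrails use far richer detection.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN format
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def guard_input(prompt: str, topic: str) -> str:
    """Reject prompts that fall outside the approved conversation topics."""
    if topic not in ALLOWED_TOPICS:
        raise ValueError(f"off-topic request ({topic}); refusing to answer")
    return prompt

def guard_output(response: str) -> str:
    """Redact PII from the model's response before it reaches the user."""
    for pat in PII_PATTERNS:
        response = pat.sub("[REDACTED]", response)
    return response

print(guard_output("Your SSN 123-45-6789 is on file for jane@example.com."))
```

Placing `guard_input` before the LLM call and `guard_output` after it gives the safety layer the article describes: the model stays on topic and sensitive values never reach the user verbatim.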