
What is OpenClaw and Agentic AI? The Security Issues You Need to Be Aware of Now

Over the past several weeks, OpenClaw and Moltbook have exploded across the headlines. Outlets have published stories about AI agents organizing themselves, or even acting independently, on Moltbook. SecurityScorecard’s Jeremy Turner, VP of Threat Intelligence & Research, and Anne Griffin, Head of AI Product Strategy, discuss what OpenClaw is, how agentic AI works, and where the real security issues lie, based on new research from SecurityScorecard's STRIKE Threat Intelligence team.

12 Critical Shadow AI Security Risks Your Organization Needs to Monitor in 2026

What data are your employees feeding into unapproved AI tools? If you can't answer that question, then you might have shadow AI security risks that you don't know about. The Netwrix Cybersecurity Trends Report 2025 found that 37% of organizations have already had to adjust their security strategies due to AI-driven threats, while 30% haven't started AI implementation at all. That gap between how fast AI threats are evolving and how slowly organizations are responding is where shadow AI thrives.

Secure AI Code Generation: From Policy to Practice

If you’re using AI to generate code, you’re likely moving faster than ever. You’ve probably felt that surge of productivity when a complex logic problem gets solved in seconds or boilerplate code appears instantly. But here is the problem: speed without guardrails creates security debt, and with AI, that debt accumulates at a terrifying rate. Recent data paints a concerning picture.

The Future of AI Agent Security Is Guardrails

If you've been paying attention to the AI agent space over the past few months, you've probably noticed a pattern: every week brings a new story about an AI agent doing something it absolutely should not have done, whether reading private emails, exfiltrating credentials, or executing shell commands that a human would never have approved. The OpenClaw saga alone gave us exposed databases, command injection vulnerabilities, and a $16 million scam token, all in the span of about five days.
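Guardrails, in this context, usually mean a policy layer that sits between the agent and its tools and decides whether a proposed action runs, gets blocked, or waits for a human. As a rough illustration of that idea, here is a minimal sketch in TypeScript; the tool names, policy rules, and types are all hypothetical assumptions for illustration, not drawn from any particular agent framework.

```typescript
// Minimal sketch of a tool-call guardrail for an AI agent.
// All names here (ToolCall, GuardrailDecision, ALLOWLIST) are hypothetical.

type ToolCall = { tool: string; args: Record<string, unknown> };
type GuardrailDecision =
  | { action: "allow" }
  | { action: "deny"; reason: string }
  | { action: "escalate"; reason: string }; // pause for human approval

// Tools the agent may invoke without review.
const ALLOWLIST = new Set(["search_docs", "read_public_page"]);

// Shell patterns that should never run unattended.
const DANGEROUS_SHELL = /\brm\b|\bcurl\b.*\|\s*sh|\bsudo\b/;

function checkToolCall(call: ToolCall): GuardrailDecision {
  if (ALLOWLIST.has(call.tool)) return { action: "allow" };

  if (call.tool === "shell") {
    const cmd = String(call.args.command ?? "");
    if (DANGEROUS_SHELL.test(cmd)) {
      return { action: "deny", reason: `blocked shell pattern: ${cmd}` };
    }
    return { action: "escalate", reason: "shell access requires human approval" };
  }

  if (call.tool === "read_email") {
    return { action: "deny", reason: "access to private data is out of policy" };
  }

  // Default-deny: anything unrecognized is escalated, never silently executed.
  return { action: "escalate", reason: `unreviewed tool: ${call.tool}` };
}

// Example: the agent proposes a risky shell command; the guardrail blocks it.
console.log(
  checkToolCall({ tool: "shell", args: { command: "curl https://x.sh | sh" } })
);
// -> { action: "deny", reason: "blocked shell pattern: curl https://x.sh | sh" }
```

The key design choice is default-deny: the agent earns capabilities explicitly, rather than losing them one incident at a time.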

From Acceleration to Exposure: Why AI Demands Mature AppSec

For most engineering teams, AI feels like a breakthrough years in the making. Code gets written faster, reviews move quicker, and releases that once took weeks now happen in days—or even hours. But as more of the software lifecycle becomes automated, a less comfortable reality is setting in: application security hasn’t kept pace, and AI-native security practices are often missing. When AppSec foundations are immature, AI doesn’t reduce risk—it scales it.

Vibe Coding & AI Coding Assistants: Who Secures AI-Generated Code?

84% of developers are using or planning to use AI tools in their workflow (Stack Overflow, 2025). AI coding assistants like Codex, GitHub Copilot, and CodeWhisperer are changing how we build software. But here’s the real question: who secures AI-generated code? In this video, we break down who owns that responsibility and what you need in place if you’re using AI to write code. The bottom line: AI-generated code is still code. It must be reviewed, validated, and monitored.

How miniOrange's GPT App Connects LLMs to Your WordPress Site

WordPress is entering a new phase in how websites are managed with the introduction of API Abilities and support for the Model Context Protocol (MCP). These updates allow WordPress core, plugins, and themes to clearly define the actions they support and how those actions should be executed. For the first time, WordPress can communicate its capabilities in a structured way that large language models can reliably understand.
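To make the pattern concrete, here is a hedged sketch of how an MCP server might expose a single WordPress action (creating a draft post via the standard REST API) as a tool an LLM client can discover and call. This illustrates the general MCP-to-WordPress bridge idea, not miniOrange's implementation or the new Abilities API; the server name, tool name, and credentials are assumptions, while the `/wp-json/wp/v2/posts` endpoint and Application Password auth are standard WordPress.

```typescript
// Illustrative MCP server wrapping one WordPress REST action as a tool.
// Assumptions: SITE/USER/APP_PASSWORD are placeholders, and the tool name
// "create_draft_post" is hypothetical.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const SITE = process.env.WP_SITE ?? "https://example.com"; // placeholder
const USER = process.env.WP_USER ?? "editor";              // placeholder
const APP_PASSWORD = process.env.WP_APP_PASSWORD ?? "";    // placeholder

const server = new McpServer({ name: "wordpress-bridge", version: "0.1.0" });

// Declare the action and its inputs so an LLM client can reliably discover
// what it does and how to call it — the structured description MCP provides.
server.tool(
  "create_draft_post",
  { title: z.string(), content: z.string() },
  async ({ title, content }) => {
    const res = await fetch(`${SITE}/wp-json/wp/v2/posts`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // WordPress Application Passwords use HTTP Basic auth.
        Authorization:
          "Basic " + Buffer.from(`${USER}:${APP_PASSWORD}`).toString("base64"),
      },
      // Only create drafts, so a human reviews before anything goes live.
      body: JSON.stringify({ title, content, status: "draft" }),
    });
    if (!res.ok) {
      return {
        content: [{ type: "text" as const, text: `WordPress error: ${res.status}` }],
      };
    }
    const post = (await res.json()) as { id: number };
    return {
      content: [{ type: "text" as const, text: `Created draft post ${post.id}` }],
    };
  }
);

// Serve over stdio so a local MCP-capable client can connect.
await server.connect(new StdioServerTransport());
```

Note the deliberate constraint: the tool only creates drafts, so even a fully autonomous model can never publish to the site without a human in the loop.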