
How to Make AI Security Foundational to Your Data Security Stack

Most organizations treat AI security as a finishing touch: a policy written after an incident, or a product category evaluated after the core stack is already in place. That sequencing is the problem. AI has fundamentally changed how sensitive data moves inside an organization, through prompts, agents, summarization tools, and third-party models that operate entirely outside traditional security perimeters.

Zenity Joins CoSAI: Why Agentic AI Standards Need Practitioners at the Table

The agentic AI security standards your enterprise will adopt in the next 18 months are being written right now, inside working groups most CISOs have never heard of. The Coalition for Secure AI (CoSAI), an OASIS Open Project with more than 45 sponsor organizations, including Google, Microsoft, NVIDIA, IBM, and Meta, is producing the frameworks, reference architectures, and secure design patterns that will define how autonomous agents operate inside enterprise environments.

Composable AI Agents and the SOC That Runs Itself

Picture a SOC that investigates its own alerts, hunts threats across customer tenants, isolates compromised endpoints, and writes its own detection rules. Envision the same SOC attacking itself every morning to find the gaps it missed, all before your analysts arrive for the day. This is not a roadmap item, but an operational reality on LimaCharlie. It’s what agentic AI security looks like on a platform built to support it.

Announcing Justification Coach: AI-Powered Guidance for Better Access Requests and Stronger Audits

Today, we’re introducing Justification Coach, a new AI-powered capability that helps users write better access request justifications in real time, so admins get the context they need for audits and investigations without having to chase people down after the fact.

New KnowBe4 Agent Risk Manager Addresses Pervasive AI Agent Risk

By Roger A. Grimes and Matthew Duren

AI agents can deliver incredible productivity gains, but their operational complexity makes effective threat modeling harder than ever, including for developers, administrators, and especially end users. At the same time, both developers and non-developers are increasingly vibe-coding, or using AI to generate functional software from natural language prompts.

Everyone Is Securing the Wrong Layer of AI

The AI security market is crowded. Vendors are racing to protect prompts, harden models, detect jailbreaks, and scan for data leakage at the LLM layer. The investment is real. The intent is good. And most of it is missing the point. Here is the problem: agents do not just think. They act. They call APIs. They trigger workflows. They write to databases, send emails, move money, and modify production systems.

How GitGuardian and CyberArk MCP Servers Cut Secrets & Vault Sprawl with AI Automation

Join the GitGuardian and CyberArk teams for a demo-first session on how Model Context Protocol (MCP) servers can help you tame secrets sprawl and vault sprawl by letting developers use AI to trigger the right actions, with far less cognitive load.