Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Exabeam Agent Behavior Analytics: First-of-Its-Kind Behavioral Detections for AI Agents

AI agents are moving into real workflows faster than most teams expected. According to PwC’s 2025 AI Agent Survey, 79% of companies are already adopting AI agents, and 88% of executives expect to increase AI-related budgets in the next year. These agents are now handling research, summarization, customer engagement, and operational tasks at a scale humans can’t match.

0-Click RCE in Claude Desktop: How AI Extensions Threaten Endpoint Security

The modern enterprise software ecosystem increasingly relies on desktop AI applications enhanced through extensible plugin or extension frameworks. These extensions are designed to improve productivity by enabling integrations with local files, browsers, APIs, developer tools, and internal systems. However, this same extensibility introduces a high-risk attack surface when extension permissions, sandboxing, and input validation are weakly enforced.

Using SSL Inspection and AI Guardrails to Protect Infrastructure

Using SSL Inspection and AI Guardrails to Protect Infrastructure How do you protect your AI infrastructure from threats without impacting user experience? In this video, we'll cover the methods organizations can use to inspect encrypted traffic, including what is sent to AI chatbots, and add guardrails to protect against security risks. We'll cover.

How do AI guardrails protect infrastructure from the unsafe and unpredictable territory of LLM risks

How do AI guardrails protect infrastructure from the unsafe and unpredictable territory of LLM risks? An AI firewall or guardrail device sits between your applications and large language models to keep the data sent and received from LLMs safe, compliant, and high-quality. Its design is to inspect natural-language traffic and protect your infrastructure against LMM vulnerabilities, including prompt injection, jailbreak attacks, data poisoning, system prompt leakage, and OWASP Top 10 vulnerabilities, using advanced, proprietary reasoning models.

A January Snapshot: Real-World AI Usage

AI is no longer a fringe productivity experiment inside organisations, it is embedded, habitual, and increasingly invisible. This snapshot from CultureAI’s January usage data highlights how AI is actually being used across everyday workflows, and where risk is forming as a result. Rather than focusing on hypothetical threats or model-level concerns, the findings below surface behavioural signals from real interactions: prompts, file uploads, and context accumulation.

International AI Safety Report 2026: What It Means for Autonomous AI Systems

The International AI Safety Report 2026 is one of the most comprehensive overviews to date of the risks posed by general-purpose AI systems. It’s compiled by over 100 independent experts from more than 30 countries, and shows that while AI systems are performing at levels that seemed like science fiction only a few years ago, the risks of misuse, malfunction, and systematic and cross-border harms are clear. It makes a compelling case for better evaluation, transparency, and guardrails.

The 2026 Forecast for AI-Driven Threats

2025 changed the shape of digital risk. In 2026, the impact accelerates. The fastest-growing threats no longer look like traditional attacks. They arrive through apparently legitimate automated access – AI agents, LLM crawlers, and delegated automation interacting directly with revenue-critical systems. They don’t trigger alarms. They quietly extract value, distort pricing logic, and reshape digital economics at scale.

I Built a Production-Ready App in 20 Minutes with Claude Opus 4.6

My boss dropped a bombshell at 4:00 PM: build a secure, production-ready app from scratch by tomorrow morning. Instead of panicking, I put Claude Opus 4.6 to the test. In this video, I walk you through the entire end-to-end process of using an AI agent to architect, code, and debug a full-stack application. We’ll look at "Plan Mode," how the AI handles environment errors (like Windows SQLite issues), and most importantly, how we verified the AI's code for security vulnerabilities using Snyk.

AI Security in 2026 Starts With Identity #cybersecurity #datasecurity #identitysecurity

As AI adoption grows, identity risk grows with it. Dirk Schrader, VP of Security Research at Netwrix, explains why governing human and machine identities is foundational to securing AI systems. How are you governing identity in your AI workflows today?