
GDPR Compliance for AI Agents: A Startup's Guide

AI agents are moving fast. They book meetings, draft emails, summarize calls, search internal knowledge bases, and increasingly act on behalf of users. As more teams adopt these systems, a familiar question surfaces almost immediately: how does GDPR apply to AI agents? Working with startups rolling out AI features across support, sales, HR, and engineering, we've learned that GDPR is not a blocker.

Old AI Security vs Evo: Watch Agentic Security Replace Weeks of Manual Work

From intelligent chatbots to autonomous agents, innovation has never moved faster thanks to GenAI. But with that velocity comes a massive new challenge: a class of complex, non-deterministic security risks that traditional cybersecurity methods are simply not equipped to handle. AI-native applications are already running in production. Across industries, teams are deploying copilots, RAG systems, autonomous agents, and AI-powered workflows faster than traditional security processes can keep up.

INETCO's Bijan Sanii on Conversations Live: 'Cybersecurity is an arms race. AI today, quantum tomorrow'

At the recent Conversations Live with Stuart McNish panel on cybersecurity — part of the thoughtful public affairs dialogue series produced in partnership with the Vancouver Sun — industry leaders gathered to unpack the real-world risks shaping organizational resilience and national security. The event, held on Dec. 10, 2025, brought together experts from across the cybersecurity landscape to go beyond headlines and explore strategies for responding to evolving threats.

How security leaders can safely and effectively implement agentic AI

2025 began with experts warning about the dangers of agentic AI use, but that didn't slow adoption. Our annual State of Trust Report shows that nearly 80% of organizations are either actively using or planning to use agentic AI. That acceleration is outpacing the governance required to keep these systems safe. A level of machine autonomy that would have been unthinkable just a few years ago is quickly becoming normalized.

Stop Feeding Logs to LLMs: A Multi-Agent Approach to Security Investigation

See how Torq harnesses AI in your SOC to detect, prioritize, and respond to threats faster. Noam Cohen is a serial entrepreneur who has been building data and AI companies since 2018. Noam's insights are informed by a unique combination of data, product, and AI expertise, with a background that includes winning the Israel Defense Prize for his work leveraging data to predict terror attacks.

Secure AI Agent Infrastructure with Zero-Code MCP

Learn how to secure AI and MCP infrastructure without writing authorization code, rewriting MCP servers, or limiting agent work with Teleport’s zero-code MCP integration. AI agents are becoming powerful participants in engineering workflows. But without meaningful authorization boundaries, they can quickly become an existential security risk. AI agents do not behave like traditional applications. Instead, they generate actions and chain together tools in unpredictable ways.

Proactive WAF Vulnerability Protection & Firewall for AI + Multiplayer Chess Demo in ChatGPT

In this episode of This Week in NET, we talk with Daniele Molteni, Director of Product Management for Cloudflare’s WAF, about how Cloudflare responded within hours to a newly disclosed React Server Components vulnerability — deploying global protection before the public advisory was even released.

Questions to ask before vetting an AI agent for your SOC

So you’re ready to “hire” an agent or two for security operations. While AI agents won’t replace your human analysts, they are quickly becoming indispensable team members. Choosing the right ones should resemble a typical hiring process: you need to determine if they possess the necessary skills to fill your team’s gaps, work effectively with others, and grow with your organization. Here are five questions worth asking before you bring an AI agent on board in your SOC.

Why "We Thought It Was On" Keeps Leading to Breaches

At UC Irvine's Digital Leadership Agenda 2026, moderated by Nicole Perlroth, Garrett Hamilton illustrates what those blind spots can look like: "We believed it was deployed." "It was turned on." "It should have stopped this." Then one exception, one policy gap, one control not applied at scale, and assumptions replace reality. The real problem isn't visibility. It's continuously validating intent against execution.

Misconfigurations Are Still Owning Security Teams

Garrett Hamilton sat down with Todd Graham, Managing Partner at Microsoft's venture fund, M12, to talk about why M12 invested in Reach and why our mission was a no-brainer for him. Nation-state attacks make the headlines, but most people are getting owned by misconfigured servers, networks, and controls hiding in plain sight. Turns out the problem isn't what teams don't own. It's what they do own that, in most cases, isn't even turned on.