Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Why Securing AI Code Generation is Critical for AppSec

The revolution is here, but it’s not what we expected. AI coding assistants have transformed software development, with developers shipping code faster than ever before. GitHub Copilot, Amazon CodeWhisperer, and Claude Code have become as essential to modern development as Git itself. The productivity gains are undeniable; what once took hours now takes minutes. But there’s a dangerous blind spot in this revolution: security.

Observability and Security for the AI Era

Datadog has always been driven by a broader vision of helping teams understand and operate complex systems. In this session, you’ll hear from Yanbing Li, Chief Product Officer, and Shri Subramanian, Group Product Manager, as they share the latest updates across the Datadog product suite and discuss how that vision continues to shape the platform’s evolution and support the next generation of AI-driven applications.

A Look At GitGuardian's ML-Powered Contextual EnrichmentAnd Incident Scoring

In this quick introductory video, Mathieu Bellon, Senior Product Manager at GitGuardian, sits down with Dwayne McDaniel, Developer Advocate, to cover some of the advancements GitGuardian has made by integrating machine learning directly into the secrets security platform. Mathieu describes how engineers and responders can save serious time as by automating contextual analysis, geving the humans in the loop with the best information to be able to take an informed action when it comes to secrets leaks. They also discuss the security implications and where teams can look if they want to opt out or bring their own agents.

7 Generative AI Security Risks and How to Defend Your Organization

Generative AI creates new attack surfaces that traditional security tools were not designed to address. The biggest generative AI security risks include prompt injection, data leakage, shadow AI, compliance exposure, model poisoning, insecure RAG pipelines, and broken access control. Each one requires a specific defense, not a generic firewall or DLP rule.

Best Enterprise DLP Tools for AI Data Risk (2026 Comparison)

Employees move sensitive data into AI tools every day. Someone pastes customer records into ChatGPT to draft an email. A developer feeds proprietary source code into a coding assistant to fix a bug. A project manager drops a confidential contract into Gemini to summarize it for a meeting. According to research from Cyberhaven Labs, 39.7% of the data employees share with AI tools is sensitive, and enterprise adoption of endpoint-based AI agents grew 276% in the past year alone.

Understanding shadow AI in your endpoint environment

Generative AI–and large language models in particular–reached mass consumer adoption beginning in late 2022 and early 2023, with ChatGPT reaching 100 million users faster than any consumer application in history. Since then, AI has advanced at a breakneck pace and now seems to be incorporated in every tool, app, and website–regardless of how useful it might actually be.

How Financial Services Teams Should Secure AI Agents in 2026

Your fraud detection agent scores 30,000 transactions per hour. Your KYC agent processes identity verifications against government watchlists. Your customer service chatbot resolves disputes and initiates balance transfers. Each agent runs on Kubernetes with inherited service account permissions that span payment APIs, customer databases, and compliance systems. Now imagine one of those agents is compromised through a prompt injection embedded in a customer support ticket.

How to Make AI Security Foundational to Your Data Security Stack

Most organizations treat AI security as a finishing touch: A policy written after an incident or a product category evaluated after the core stack is already in place. That sequencing is the problem. AI has fundamentally changed how sensitive data moves inside an organization, through prompts, agents, summarization tools, and third-party models that operate entirely outside traditional security perimeters.