Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

AI Data Privacy Trends and Future Outlook 2025

AI is now woven into everyday work. Customer teams rely on chat assistants, developers use copilots, and analysts ask models to sift through knowledge bases. The biggest shift in 2025 is not a single law or headline. It is the move from occasional audits to continuous, technical controls that run wherever data flows.

How agentic AI and non-human identities are transforming cybersecurity

Within the average enterprise, non-human identities (NHIs) now outnumber employees, contractors, and customers by anything between 10-to-1 and 92-to-1. Add to this the fragmentation of human identity management resulting from authorizing a single person’s access to multiple on-premises, cloud computing and hybrid environments, and enterprise identity and access management (IAM) becomes extremely challenging.

Revolutionizing DevSecOps with AI-Powered Application Security

The application security landscape is undergoing a fundamental transformation. While organizations race to deliver software faster than ever, traditional security approaches create bottlenecks that compromise both speed and protection. This isn’t a problem you can solve by throwing more disparate tools at the challenge. It requires a holistic, strategic shift to AI-powered application security.

Malicious MCP Server on npm postmark-mcp Harvests Emails

On September 25, 2025, the npm package postmark-mcp, an MCP (Model Context Protocol) server intended to let AI assistants send emails via Postmark, was reportedly modified to secretly exfiltrate email contents by adding a blind-copy (BCC) to an external domain. Current analysis suggests the behavior began around 1.0.16 and persisted in later versions.

Regulatory Gaps and Legacy Systems Are Aiding AI-Powered Cyberattacks on Governments

Public sector organizations face unprecedented cybersecurity challenges as artificial intelligence reshapes how adversaries launch attacks. Threat actors now use AI to execute large-scale, highly personalized phishing campaigns, automate the discovery of vulnerabilities, and evade detection faster than traditional defenses can respond.

Securing AI Part 3: AI Agents - Use Cases and Security

A10 security experts, Jamison Utter, Diptanshu Purwar, and Madhav Aggarwal explore the topic of securing AI agents, which they define as systems that perceive, decide, and act. They discuss: Defining AI Agents: Explaining that agents are not just chatbots, but are the "hands of AI" that can execute actions, call APIs, and automate complex workflows. The Challenge of Security: Discussing how security for AI agents goes beyond traditional model security and includes protecting against prompt injection, malicious instructions, and preventing unsafe actions or data leakage. The Importance of Context and Data.

AI Agent Security: Verifying Workflows with AI Firewalls & Guardrails

AI Agent Security: Verifying Workflows with AI Firewalls & Guardrails A10 security experts Jamison Utter, Madhav Aggarwal, and Diptanshu Purwar discuss the importance of context-aware security for AI agents. They emphasize that when automating workflows with AI, it's crucial to ensure that the context fed to the agents and their subsequent actions are verifiable and in line with existing company policies.

Attackers Use AI Development Tools to Craft Phony CAPTCHA Pages

Attackers are abusing AI-powered development platforms like Lovable, Netlify and Vercel to create and host captcha challenge websites as part of phishing campaigns, according to researchers at Trend Micro. “Since January, Trend Micro has observed a rise in fake captcha pages hosted on such platforms,” the researchers write.