
AI security: A comprehensive guide for evolving teams

The AI boom has introduced intelligent tools into most industries, not just tech-first organizations. But rising adoption also opens the door to new risks. Vanta’s AI governance survey found that 63% of organizations rate data privacy and protection as the top concern with AI, followed by security and adversarial threats at 50%. These numbers underscore how urgently organizations need to prioritize defenses against AI-specific attack vectors.

AI Code Review in 2025: Technologies, Challenges & Best Practices

AI code review applies artificial intelligence models and machine learning techniques to analyze and give feedback on source code, automating and improving the traditional code review process. It can scan for bugs, style violations, security vulnerabilities, and other issues, offering significant time savings to developers and teams.
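As a minimal illustration of the kind of mechanical check these tools automate before (or alongside) model-based analysis, consider a simple pattern-based pass over a diff. The patterns and function names below are assumptions for the sketch, not any vendor's actual API:

```python
import re

# Illustrative lint patterns; real AI code review tools combine rule-based
# checks like these with model-driven analysis of intent and context.
ISSUE_PATTERNS = [
    (re.compile(r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
     "hardcoded credential"),
    (re.compile(r"\beval\s*\("), "use of eval"),
]

def review_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, issue) findings for a source string."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, issue in ISSUE_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings
```

Checks like this catch the obvious cases cheaply; the value an AI layer adds is reasoning about issues no fixed pattern can express, such as subtle logic bugs or insecure data flows.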

CTI Roundup: SystemBC, ShinyHunters, AI-obfuscated Phishing

This week, Tanium’s Cyber Threat Intelligence (CTI) team investigates SystemBC, a large-scale proxy botnet that’s leveraging compromised virtual private server (VPS) infrastructure to support cybercriminal operations, including ransomware and credential theft. Next, the team looks at ShinyHunters—a financially motivated data extortion group that’s now targeting enterprise cloud applications.

How Falcon ASPM Secures GenAI Applications and Lessons from Dogfooding

The widespread availability of large language models (LLMs) has driven the rapid development of generative and agentic AI applications for business use cases. These systems can reason, plan, and act autonomously, creating security risks that traditional security tools weren’t built to handle. Their popularity has widened the attack surface, both for organizations using external LLMs and those building their own GenAI applications.

Beyond Agent-Washing: How Torq Delivers True Agentic Automation for Security

Eldad Livni is the Co-Founder and Chief Innovation Officer at Torq. Prior to founding Torq, Eldad co-founded and served as CPO of Luminate Security, a pioneer in Zero Trust/SASE. Following Luminate’s acquisition by Symantec, he went on to act as CPO of Symantec’s Zero Trust/Secure Access Cloud offering. The security industry has a new buzzword problem.

Enterprise AI Security Redefined: Protecto vs. Traditional DLPs

Protecto replaces the patchwork of DLPs and DSPMs with AI-native controls, so you can safely unlock enterprise data for AI. Prompts, models, and context power agentic AI. But context is also the most volatile and exposed layer, where 90% of enterprise AI risks originate. Intellectual property loss, unauthorized access, privacy violations, and compliance failures all start in the context. That’s why Protecto brings Zero Trust controls to data in AI.

Securing AI: The New Frontier of API Security

A10 Networks' security experts, Jamison Utter, Diptanshu Purwar, and Madhav Aggarwal, discuss the security challenges of AI, exploring the new world of API-enabled AI agents and the need for robust security controls. Learn how to prevent misuse within the enterprise as they examine data ingress/egress and API security in the context of large language models (LLMs).

Understanding the OWASP AI Maturity Assessment

Today, almost all organizations use AI in some way. But while it creates invaluable opportunities for innovation and efficiency, it also carries serious risks. Mitigating these risks and ensuring responsible AI adoption relies on mature AI models, guided by governance frameworks. The OWASP AI Maturity Assessment Model (AIMA) is one of the most practical. In this article, we’ll explore what it is, how it compares to other frameworks, and how organizations can use it to assess their AI maturity.