Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

Take control of public AI application security with Cloudflare's Firewall for AI

Imagine building an LLM-powered assistant trained on your developer documentation and some internal guides to quickly help customers, reduce support workload, and improve user experience. Sounds great, right? But what if sensitive data, such as employee details or internal discussions, is included in the data used to train the LLM?
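The risk described above starts before the model is ever deployed: sensitive strings can be screened out of the corpus before fine-tuning. A minimal sketch of such a pre-training scan, using two illustrative regex patterns (a real pipeline would rely on a dedicated PII-detection library or service, not hand-rolled regexes):

```python
import re

# Illustrative patterns only; production systems should use a proper
# PII-detection tool rather than these simplified regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return every match of each pattern found in a candidate training document."""
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = matches
    return hits

doc = "Ping alice@example.com about onboarding; employee record 123-45-6789."
flagged = find_pii(doc)
if flagged:
    print(f"Document flagged, exclude from training set: {flagged}")
```

Documents that come back non-empty would be excluded or redacted before they reach the fine-tuning job.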

Cloudflare for AI: supporting AI adoption at scale with a security-first approach

AI is transforming businesses — from automated agents performing background workflows, to improved search, to easier access and summarization of knowledge. While we are still early in what is likely going to be a substantial shift in how the world operates, two things are clear: the Internet, and how we interact with it, will change, and the boundaries of security and data privacy have never been more difficult to trace, making security an important topic in this shift.

Data Leaks and AI Agents: Why Your APIs Could Be Exposing Sensitive Information

Most organizations are using AI in some way today, whether they know it or not. Some are merely beginning to experiment with it, using tools like chatbots. Others have integrated agentic AI directly into their business processes and APIs. While both kinds of organizations are undoubtedly realizing remarkable productivity and efficiency gains, they may not know they are exposing themselves to significant security risk.

A litmus test for AI agents

What is an "AI agent"? Confusion abounds, but there is also some consensus: agents must of course be AI-driven systems, they should have some degree of autonomy, and they should be able to use tools in addition to understanding and reasoning. But then why isn't, say, ChatGPT an agent? By most definitions out there, it actually is. Yet most people (including OpenAI themselves) don't describe it that way.

Understanding and Securing Exposed Ollama Instances

Ollama is an emerging open-source framework designed to run large language models (LLMs) locally. While it provides a flexible and efficient way to serve AI models, improper configurations can introduce serious security risks. Many organizations unknowingly expose Ollama instances to the internet, leaving them vulnerable to unauthorized access, data exfiltration, and adversarial manipulation.
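One simple way to check whether an Ollama instance is reachable without authentication is to probe its model-listing endpoint, `/api/tags`, on the default port 11434. A minimal sketch (the host and timeout values are illustrative):

```python
# Probe an Ollama instance: if /api/tags answers without credentials,
# the API is reachable and unprotected. 11434 is Ollama's default port.
import json
import urllib.request

def ollama_exposed(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if the model-listing endpoint responds without auth."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
            # A successful, unauthenticated model listing means anyone who
            # can reach this port can enumerate (and query) the models.
            return resp.status == 200 and "models" in data
    except Exception:
        return False

if __name__ == "__main__":
    target = "127.0.0.1"
    if ollama_exposed(target):
        print(f"{target}: Ollama API reachable without authentication")
    else:
        print(f"{target}: no exposed Ollama API found")
```

Running this against your own hosts (never anyone else's) is a quick self-audit; the usual mitigations are binding Ollama to localhost only, or putting it behind a reverse proxy that enforces authentication.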

Panel Discussion - The Evolving Threat Landscape: Risks in the Age of AI Disruption | DevSecNext

As AI reshapes industries, it also introduces a wave of emerging security risks—some known, others yet to be discovered. In this DevSecNext panel discussion, experts from engineering, cloud business, venture capital, and security innovation dive deep into the intersection of AI disruption and the evolving threat landscape. This talk was recorded at DevSecNext, a community-driven event reimagining how we share security insights—short, to the point, and packed with actionable takeaways.

Inbar Raz - Living off Microsoft Copilot | DevSecNext

What happens when hackers weaponize Microsoft Copilot? In this eye-opening session, Inbar Raz takes a red-team-level deep dive into how attackers can abuse Copilot to exfiltrate data, bypass security controls, and even social engineer victims—automated by AI. This talk was recorded at DevSecNext.

How Attackers Use AI To Spread Malware On GitHub

GitHub Copilot has become the subject of critical security concerns, mainly because of jailbreak vulnerabilities that allow attackers to modify the tool's behavior. Two attack vectors – Affirmation Jailbreak and Proxy Hijack – lead to malicious code generation and unauthorized access to premium AI models. But that's not all.

2025 Cato CTRL Threat Report: Top 4 AI Predictions for the Year Ahead

Today, Cato Networks published the 2025 Cato CTRL Threat Report. It is the inaugural annual threat report from Cato CTRL, the Cato Networks threat intelligence team. The key theme for this year’s report is artificial intelligence (AI), which reflects the current cybersecurity landscape where AI usage is skyrocketing among vendors—and threat actors. Within the report, we examine the security risks associated with LLMs and the increased adoption of AI applications within organizations in 2024.