
Artificial Intelligence Security Posture Management (AISPM): An Explainer

As AI agents continue to revolutionize how business is done, securing these agents has become paramount. Organizations have rushed to adopt DLP processes and whitelist/blacklist policies to block malicious prompts, but DLP and firewalls have existed for a very long time and have proven limited in preventing users from copy/pasting sensitive information onto the internet.

Sensitive Data Leaks from AI Model Use | The 443 Podcast

How are you using ChatGPT at work? On this week's episode of The 443 Podcast, Corey Nachreiner and Marc Laliberte dig into a report on sensitive data leakage caused by AI model use. They also cover a recent report highlighting a drop in ransomware payments in 2024, as well as a recent attack targeting ASP.NET web servers.

The Dangers of Rushing into AI Adoption: Lessons from DeepSeek

As organizations race to adopt the latest advancements in artificial intelligence, DeepSeek serves as a cautionary tale about the potential dangers of rushing into the hype cycle without adequate consideration of security and ethical implications. DeepSeek, a Chinese AI startup, has been identified as having several significant security risks and vulnerabilities that could pose threats to both the company and its users.

CrowdStrike Leads Agentic AI Innovation in Cybersecurity with Charlotte AI Detection Triage

AI has become both a powerful ally and a formidable weapon in today’s cybersecurity landscape. While AI enables security teams to detect and neutralize threats with unmatched speed and precision, adversaries are equally quick to exploit its potential with increasingly sophisticated and automated attacks. This duality has created an arms race in which organizations must not only adopt AI but continually innovate to stay ahead.

What You Need to Know about the DeepSeek Data Breach

DeepSeek, founded by Liang Wenfeng, is an AI development firm located in Hangzhou, China. The company focuses on developing open-source Large Language Models (LLMs) and specializes in data analytics and machine learning. DeepSeek gained global recognition in January 2025 with the release of its R1 reasoning model, which rivaled OpenAI's o1 model in performance at a substantially lower cost.

How to Securely Embrace the AI Revolution in Software Development

Software development is one of the workflows most impacted by the artificial intelligence revolution. How will you handle AI-driven software development securely? Check out this video to see how our innovation can help you stop AI and software supply chain risks at the start.

Securing Code in the Era of Agentic AI

AI coding assistants like GitHub Copilot are transforming the way developers write software, boosting productivity and accelerating development cycles. But while these tools generate code more efficiently, they also introduce new risks just as efficiently, potentially embedding security vulnerabilities that could lead to severe breaches down the line. What is your plan for reducing risk from the vast amount of insecure code coming through agentic AI in software development?

Web-Based AI Agents: Unveiling the Emerging Insider Threat

The introduction of OpenAI's 'Operator' is a game changer for AI-driven automation. Although currently designed for consumers, it's only a matter of time before such web-based AI agents are widely adopted in the workplace. These agents aren't just chatbots; they replicate human interaction with web applications, executing commands and automating actions that once required manual input.

EP 1 - AI Gone Rogue: FuzzyAI and LLM Threats

In the inaugural episode of the Security Matters podcast, host David Puner dives into the world of AI security with CyberArk Labs' Principal Cyber Researcher, Eran Shimony. Discover how FuzzyAI is revolutionizing the protection of large language models (LLMs) by identifying vulnerabilities before attackers can exploit them. Learn about the challenges of securing generative AI and the innovative techniques used to stay ahead of threats. Tune in for an insightful discussion on the future of AI security and the importance of safeguarding LLMs.