Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

EP 1 - AI Gone Rogue: FuzzyAI and LLM Threats

In the inaugural episode of the Security Matters podcast, host David Puner dives into the world of AI security with CyberArk Labs' Principal Cyber Researcher, Eran Shimony. Discover how FuzzyAI is revolutionizing the protection of large language models (LLMs) by identifying vulnerabilities before attackers can exploit them. Learn about the challenges of securing generative AI and the innovative techniques used to stay ahead of threats. Tune in for an insightful discussion on the future of AI security and the importance of safeguarding LLMs.

How AI-powered Secure Email Gateways Fight Back vs. AI-armed Bad Actors

As bad actors use artificial intelligence to step up their phishing game, mounting an effective defense means using a secure email gateway that likewise employs AI to detect even the most cleverly crafted phishing emails and the fraudulent websites to which the emails attempt to direct recipients. The concern is not just with generative AI (GenAI) tools like ChatGPT, which has some (rather limited) guardrails to prevent nefarious use.

Protecting Sensitive Data in Snowflake through Protecto's External Tokenization

With the rapid expansion of cloud data storage and analytics, enterprises are increasingly leveraging platforms like Snowflake for their scalability and performance. However, this also introduces new challenges in data security, particularly for industries dealing with sensitive data such as finance, healthcare, and e-commerce.

Guarding open-source AI: Key takeaways from DeepSeek's security breach

In January 2025, within just a week of its global release, DeepSeek faced a wave of sophisticated cyberattacks. Organizations building open-source AI models and platforms are now rethinking their security strategies as they witness the unfolding consequences of DeepSeek’s vulnerabilities. The attack involved well-organized jailbreaking and DDoS assaults, according to security researchers, revealing just how quickly open platforms can be targeted.

The AI Shared Responsibility Model: Who's Job Is It Anyway?

In this episode of Into the Breach, James Purvis and Filip Verloy explore the AI Shared Responsibility Model, a framework introduced by Microsoft. They break down the roles and responsibilities of cloud providers, model providers, and customers in securing AI-powered environments. From understanding the unique challenges of generative AI tools like CoPilot to the importance of proactive data governance, this discussion offers practical insights into navigating AI security today and in the future.

A Phased Approach: Thoughts on EU AI Act Readiness

The European Union’s (EU) AI Act (the Act) represents landmark artificial intelligence (AI) regulation from the EU designed to promote trustworthy AI by focusing on the impacts on people through required mitigation of potential risks to health, safety and fundamental rights. The Act introduces a comprehensive and often complex framework for the development, deployment and use of AI systems, impacting a wide range of businesses across the globe.

Game Development Security Trends in 2025

Game development is more exciting than ever, but with new technology comes new security challenges. In 2025, protecting games isn't just about stopping cheaters - it's about safeguarding player data, preventing cyberattacks, and ensuring fair play in an industry that's constantly evolving.

How BullX Neo Uses AI to Improve Trading Accuracy

Success in cryptocurrency trading depends on speed and precision. The market moves quickly. Bullxneo offers a smart trading bot that uses artificial intelligence (AI). It changes how trading works in the market. BullX Neo helps traders trade accurately and protect their investments. It reduces risks and boosts profits, even those without trading experience. The new instrument works using unique systems. These systems are different from the trading bots available today. As explained below.