EP 4 - AI-Powered Fraud: Redefining the Identity Threat Landscape

Imagine receiving an urgent email from your bank that looks perfectly legitimate. It warns you of a suspicious transaction and prompts you to verify your identity. You hesitate but click, and suddenly, your credentials are compromised. This scenario, crafted by AI-powered fraud-as-a-service, is happening now.

Insider Risk with Nightfall DLP: Episode 2 - Managing Shadow AI

Earlier this year, security researchers found more than 1 million records, including user data and API keys, in an exposed DeepSeek database. This massive exposure event shows that data exfiltration risk and AI proliferation are inextricably linked: as AI tools grow in popularity and complexity, exfiltration risk rises in kind.

AI Agents and API Security: The Hidden Risks Lurking in Your Business Logic

Modern organizations are becoming increasingly reliant on agentic AI, and for good reason: AI agents can dramatically improve efficiency and automate mission-critical functions like customer support, sales, operations, and even security. However, this deep integration into business processes introduces risks that, without proper API security, can compromise sensitive data and decision-making.

Exploring AI for Vulnerability Investigation and Prioritisation

The sheer volume of cybersecurity vulnerabilities is overwhelming. In 2024, 39,998 CVEs were published, an average of roughly 109 per day. This constant stream of new threats makes it increasingly difficult for security teams to keep up. Large Language Models (LLMs) offer a possible solution: by helping automate vulnerability investigation and prioritisation, they allow teams to assess and respond to emerging risks more efficiently. Do you even have time to skim 109 CVEs a day?
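As a quick sanity check on that per-day figure (a minimal sketch, assuming only the 2024 total cited above and the fact that 2024 was a leap year):

```python
# Verify the average daily CVE rate implied by the 2024 total.
cve_total_2024 = 39_998   # total CVEs published in 2024, per the article
days_in_2024 = 366        # 2024 was a leap year

per_day = cve_total_2024 / days_in_2024
print(f"{per_day:.2f} CVEs per day")  # → 109.28 CVEs per day
```

Even rounding down to 109, a triage team spending just two minutes per CVE would need well over three hours a day on initial review alone.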