Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

See through document fraud with Document AI Enhanced Fraud Detection

On April 2, 1796, a full house packed the Drury Lane Theatre in London, eager to witness the first showing of a newly discovered Shakespeare play. The problem was that William Henry Ireland wrote the play, Vortigern, and the entire production was a hoax. Although there was some controversy before opening day, several experts reviewed the manuscript and supporting documents and confirmed that the play was a long-lost Shakespeare original.

Agent Skills are the New Packages of AI: It's Time to Manage Them Securely

Let’s talk about agent skills. As the AI agent ecosystem matures, we’re seeing a major shift in how users equip agents to run automated workflows. While robust protocols such as MCP exist to handle complex system integrations and authentication, skills have emerged as the go-to, low-friction way to shape an agent’s day-to-day behavior. Skills are extremely easy to adopt. In many cases, they are simply lightweight files that orchestrate scripts and commands.

How Degenerative AI Exposes Deepfakes

Detection tools now use so-called degenerative AI to analyse every frame of a video, looking for traces of the models and methods used to generate or edit it. Generative AI produces the fake; degenerative AI hunts for subtle artefacts in the pixels, giving investigators a way to flag manipulated content at scale. For more information about us, or if you have any questions you would like us to discuss, email podcast@razorthorn.com. We give our clients a personalised, integrated approach to information security, driven by our belief in quality and discretion.

How to Protect Sensitive Data from LLMs | AI Data Privacy Demo

AI tools like ChatGPT, Gemini and other LLMs are powerful — but what happens when sensitive data gets sent to them? In this video, we demonstrate how Protecto AI prevents sensitive information from reaching LLMs using Masking APIs and Unmasking APIs. You’ll see a real workflow where user prompts containing credit card details and personal data are automatically masked before being processed by an AI model like Gemini 2.5 Flash.
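The mask-before-send pattern described above can be sketched in a few lines. This is a minimal illustration of the general technique, not Protecto's actual Masking/Unmasking APIs: card-like numbers are swapped for placeholder tokens before the prompt leaves the application, and a token map restores the originals in the model's response.

```python
import re

# Matches 13-16 digit card-like numbers, optionally separated
# by spaces or hyphens. Real masking services cover many more
# entity types (names, emails, SSNs, etc.).
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def mask(prompt: str):
    """Replace card-like numbers with tokens; return masked text and a token map."""
    mapping = {}

    def _sub(match):
        token = f"<CARD_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return CARD_RE.sub(_sub, prompt), mapping

def unmask(text: str, mapping: dict) -> str:
    """Restore the original values in the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, tokens = mask("Charge 4111 1111 1111 1111 for the order.")
print(masked)  # the card number is replaced by <CARD_0>
```

The masked prompt is what actually reaches the LLM; `unmask` is applied only to the response on the trusted side, so the sensitive value never leaves the application boundary.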

AI in Cybersecurity Certification

Positive feedback can lead to unintended consequences. A dog learned that saving kids from the River Seine earned food and praise. So he started dragging them in to “save” them. AI models optimize for feedback in a similar way. Cato’s AI in Cybersecurity course shows how to manage the risks. It’s free and earns you CPE credits.

You Can Create a Convincing Deepfake in Under an Hour

A non-technical user can produce a credible deepfake in under an hour using off-the-shelf tools and footage from ordinary video meetings. Common habits such as recording calls for later review give attackers enough material to train models, so every routine sales or onboarding call becomes potential training data.

AppSec in the age of AI: An RSA Conference preview

Application security is at a breaking point as development teams move faster than ever, aided by AI-powered coding assistants. While these tools boost productivity, they also introduce subtle errors and insecure patterns at scale. The result: a growing backlog of vulnerabilities that outpaces traditional AppSec models. This webcast examines the risks and opportunities of AI in AppSec and who will be addressing it at RSA Conference. We’ll explore how defenders can use AI to level the playing field with automated scanning, intelligent prioritization, and secure-by-design practices.

How Artificial Intelligence (AI) Can Increase Threat Detection and Response

Security leaders are being squeezed from both sides. On one side, threat actors are scaling operations with AI automation, using it to craft more convincing social engineering attacks, accelerate reconnaissance, and improve lateral movement. On the other side, defenders are drowning in telemetry, suffering under staffing constraints, and facing the harsh reality that threat actors don’t keep business hours.

How Governments Use AI Safely | AI Governance Explained

How are governments using AI while protecting citizens’ data and privacy? In this episode of AI on the Edge, Ciara Maerowitz, Chief Privacy Officer for the City of Phoenix, explains how cities implement AI governance, manage bias, ensure transparency, and assess AI risks. Learn how responsible AI frameworks, policies, and risk management help governments safely adopt artificial intelligence.