
Meet Rai: AI That Runs More of the Security Work

MSPs are managing more customers, more environments, and more tools than ever before. At the same time, customer expectations keep rising: faster response times, clearer reporting, and consistent service across every client. All of that pressure lands on already-lean teams. That's the reality Rai was built for.

Claude Mythos Is Not the Problem. Your Security Basics Are

There is a lot of panic around Claude Mythos. Some people are saying it will hack every system, that the sky is falling, and that there is no stopping it. That fear is dangerous because it makes teams freeze. Claude Mythos is genuinely powerful. AI systems like this can find security issues in minutes that even experienced penetration testers might take weeks to identify and exploit. That part is real. But here is the important point: AI is still exploiting what is already there.

AI in security feels harder than it is

Anyone who's stood up a SIEM from scratch knows the feeling: weeks of infrastructure work, integration headaches, and a services team alongside for the whole process. That experience shaped how people think about adopting anything new in security ops. The instinct is to treat AI the same way: budget for it, plan for it, bring in specialists. This instinct is costing teams real time. Standing up traditional infrastructure takes weeks of effort; the same environment defined as infrastructure-as-code can be provisioned in seconds.

Designing AI workflows: principles for safety and control

Most teams adopting AI in their workflows understand that LLMs do not behave like traditional software. The same input does not always produce the same output, and even when it does, the model can be wrong, manipulated, or misled. Hallucinations happen even without adversarial input. Air Canada learned this in 2024 when a tribunal ordered the airline to honor a bereavement-fare refund policy its support chatbot had invented out of thin air.
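The Air Canada case points at the core control principle: deterministic code, not the model's text, should be the source of truth for any consequential action. A minimal sketch of that guardrail pattern, using a hypothetical policy table and `validate_refund_proposal` helper (names and values are illustrative, not from any cited system):

```python
# Hypothetical guardrail: the model proposes an action, but deterministic
# code checks it against authoritative policy before anything is executed.

POLICY = {"bereavement_refund_days": 0}  # illustrative: no retroactive refunds

def validate_refund_proposal(proposal: dict) -> bool:
    """Accept a model-proposed refund only if the real policy allows it."""
    if proposal.get("action") != "refund":
        return False
    # Check against the authoritative policy, never the LLM's own claim
    days = proposal.get("days_after_travel", 10**9)
    return days <= POLICY["bereavement_refund_days"]

# A hallucinated "retroactive bereavement refund" is rejected here:
assert validate_refund_proposal({"action": "refund", "days_after_travel": 30}) is False
```

The key design choice is that the model's output is treated as an untrusted proposal: whatever the chatbot invents, the deterministic check is what actually authorizes the action.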

Reviewing Malicious PRs at Scale with AI

As AI coding assistants accelerate software development, the volume of pull requests at Datadog has grown to nearly 10,000 per week, increasing the risk that malicious changes slip through due to review fatigue. To address this, Datadog built BewAIre, an LLM-powered code review system designed to identify malicious source code changes introduced by threat actors. By reducing approval fatigue for developers while increasing friction for attackers, BewAIre guides human reviewers to the areas where judgment matters most, without slowing developer velocity.
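BewAIre's internals aren't described here, but the idea of guiding reviewers to where judgment matters can be sketched with a toy triage pass. This illustrative `triage_diff` helper uses regex signals where the real system uses an LLM; all names and patterns below are assumptions:

```python
import re

# Illustrative risk signals for added lines in a diff; a production system
# would use an LLM and far richer context than these regexes.
RISK_PATTERNS = {
    "exec_call": re.compile(r"\b(eval|exec|subprocess)\b"),
    "network_egress": re.compile(r"\b(requests\.(get|post)|urlopen)\b"),
    "credential_touch": re.compile(r"(?i)(secret|token|password)\s*="),
}

def triage_diff(added_lines):
    """Return the risk signals found, so human reviewers can focus there."""
    hits = set()
    for line in added_lines:
        for name, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                hits.add(name)
    return sorted(hits)
```

Most PRs return an empty list and pass through without extra friction; the few that trip a signal get routed to a human, which is the fatigue-reduction trade-off the article describes.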

The Fastest-Growing AI Categories in the Enterprise Are Also the Riskiest

Security teams often focus governance efforts on the most popular AI tools. But the real risk question isn't which tools employees use most. It's which tools are growing fastest and what data those tools can reach. New data from Cyberhaven Labs shows that the AI categories posting the largest year-over-year growth numbers are the same categories with privileged access to source code, credentials, customer contracts, and internal architecture.

AI Is Moving Fast in Manufacturing

Artificial intelligence is rapidly becoming embedded across manufacturing environments, from engineering and design to supply chain optimisation and operations. What was once experimental is now being applied in day-to-day workflows, often driven by the need for speed, efficiency, and competitive advantage. Recent research shows that 73% of manufacturing organisations report rapid AI adoption, with 90% ranking AI as a top security priority for 2026. The direction of travel is clear.

How to Harden AI Agents in Cloud Environments: The 9 Capabilities Your Stack Must Provide

Most “hardening” advice for AI agents is a checklist of things to configure before the agent runs. CIS Kubernetes Benchmark gates. Pod Security Standards baselines. NetworkPolicy templates. None of it’s wrong — it’s just one of four phases, the one your stack already covers. The other three are Observe, Enforce, and Reconcile. They’re where AI agents actually get breached, and they’re where most stacks have nothing.
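The gap between pre-run configuration and the Enforce phase can be sketched with a hypothetical runtime egress allowlist, checked at call time rather than baked into a template before the agent starts. The host list and `guarded_fetch` helper are illustrative assumptions, not part of any benchmark or product:

```python
from urllib.parse import urlparse

# Illustrative "Enforce" control: an egress allowlist evaluated on every
# call the agent makes, not just configured once before it runs.
ALLOWED_HOSTS = {"api.internal.example", "models.example.com"}  # hypothetical

def guarded_fetch(url: str) -> str:
    """Block any outbound request to a host outside the allowlist."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"egress to {host} blocked by policy")
    return host  # placeholder: a real client would perform the request here
```

A CIS-gated cluster with no equivalent runtime check would let a compromised agent call anywhere its pod's NetworkPolicy happens to permit, which is the Observe/Enforce gap the article is pointing at.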

AI Agent Security Performance: Framework for Evaluating Latency, Throughput, and Observability Overhead

Every AI workload security PoC reaches the same conversation. Platform engineering pushes back: the AI team won’t accept extra latency on inference. The security engineer hunts for benchmarks and finds a contradiction. Langfuse publishes 15% overhead. AgentOps publishes 12%. The security vendor quotes 1–2.5%. None is lying. They measure different layers.
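The contradiction narrows once overhead is measured the same way at the same layer. A minimal sketch of a percentile-based comparison, with `time.sleep` as a stand-in for the inference call and for the instrumentation work in the request path (all durations illustrative):

```python
import time

def measure(fn, runs=50):
    """Return p50/p99 latency in milliseconds for a callable."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return {"p50": samples[len(samples) // 2], "p99": samples[int(len(samples) * 0.99)]}

def baseline():
    time.sleep(0.005)   # stand-in for a ~5 ms inference call

def traced():
    time.sleep(0.005)
    time.sleep(0.001)   # stand-in for tracing/export work on the request path

base, instr = measure(baseline), measure(traced)
overhead_pct = 100 * (instr["p50"] - base["p50"]) / base["p50"]
```

Whether that overhead lands at 1% or 15% depends on where the stand-in work sits: an in-path tracing layer shows up in every request's percentiles, while an out-of-band collector barely moves them, which is why the published numbers disagree without anyone lying.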