
VCF 9, Infrastructure, and the AI Revolution

Artificial intelligence is changing the IT landscape in radical, unprecedented ways. It’s rewriting the rules of code generation, automating complex customer-service interactions, and surfacing data insights that were impossible to extract until very recently. For IT managers and those responsible for keeping the lights on, however, AI represents a massive shift in infrastructure requirements.

AI Risk Management: Process, Frameworks, and 5 Mitigation Methods

AI risk management is the process of identifying, assessing, and mitigating risks associated with artificial intelligence systems to ensure they are developed and used responsibly. It involves using frameworks like the NIST AI Risk Management Framework to address technical, ethical, and social challenges, including data bias, privacy violations, and security vulnerabilities.
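To make the identify–assess–mitigate loop concrete, here is a minimal sketch of a risk register. It is my own illustration, not NIST AI RMF language: the risk names and the likelihood × impact scoring (1–5 each) are assumptions chosen for the example.

```python
# Illustrative sketch (assumption, not NIST guidance): a minimal AI risk
# register that scores each identified risk by likelihood x impact,
# so the assessment step directly ranks mitigation priorities.

RISKS = [
    # (risk, likelihood 1-5, impact 1-5) -- hypothetical values
    ("training data bias", 4, 4),
    ("privacy violation via model output", 3, 5),
    ("prompt injection of deployed agent", 4, 5),
]

def score(likelihood: int, impact: int) -> int:
    """Simple qualitative risk score: likelihood times impact."""
    return likelihood * impact

def ranked(risks):
    """Assessed risks ordered from highest to lowest score."""
    return sorted(
        ((name, score(l, i)) for name, l, i in risks),
        key=lambda item: item[1],
        reverse=True,
    )

# ranked(RISKS) puts "prompt injection of deployed agent" (score 20) first,
# telling the mitigation step where to spend effort.
```

Real frameworks add treatment plans, owners, and continuous monitoring on top of a ranking like this; the point is only that assessment output should drive mitigation order.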

Prompt Injection Attacks: Why AI Security Starts with IAM

AI agents are rewriting the rules of efficiency, but one hidden flaw can turn them against you. Prompt injection attacks let attackers hijack your AI, steal data, and bypass safeguards through everyday inputs. No code exploit is required, only clever manipulation of text. That is why Identity and Access Management (IAM) plays a massive role in AI security as a first line of defense.
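The IAM idea above can be sketched in a few lines: even if an injected prompt convinces the agent to request a dangerous tool, a deny-by-default permission check decides what actually executes. The identities, tool names, and permission map below are hypothetical, invented for illustration.

```python
# Illustrative sketch (assumption): least-privilege IAM checks on an AI
# agent's tool calls. Injected prompts can ask for anything; the IAM layer
# decides what runs.

# Hypothetical permission map: caller identity -> tools it may invoke.
PERMISSIONS = {
    "support-agent": {"search_kb", "create_ticket"},
    "admin": {"search_kb", "create_ticket", "export_customer_data"},
}

def authorize_tool_call(identity: str, tool: str) -> bool:
    """Deny by default: allow only tools explicitly granted to this identity."""
    return tool in PERMISSIONS.get(identity, set())

def run_tool(identity: str, tool: str, args: dict) -> str:
    # The model's output is untrusted input; the check here is what stands
    # between a prompt-injected instruction and your data.
    if not authorize_tool_call(identity, tool):
        raise PermissionError(f"{identity} is not allowed to call {tool}")
    return f"executed {tool} with {args}"
```

With this in place, a prompt-injected request to call `export_customer_data` under the `support-agent` identity fails at the IAM layer rather than reaching the tool.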

The MCP Trojan Horse: AI's Hidden Security Risk

The race to adopt AI agents has created a massive, unmonitored blind spot in the enterprise software supply chain. At the heart of this revolution is the Model Context Protocol (MCP) – an open connectivity standard designed to move large language models (LLMs) out of the passive “chat box” and give them direct, active access to your company’s internal systems.
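One way to close that supply-chain blind spot is to treat MCP servers like any other dependency: vet them, then pin each approved server to a content hash. The sketch below is my own illustration of that pattern, not part of the MCP specification; the server name and manifest are invented.

```python
# Illustrative sketch (assumption, not part of MCP itself): allowlisting
# vetted tool servers by pinning each one to the hash of the manifest that
# security reviewed, so an unvetted or silently-changed server is rejected.
import hashlib

def manifest_hash(manifest: bytes) -> str:
    """Content hash of a server's manifest, recorded at review time."""
    return hashlib.sha256(manifest).hexdigest()

# Hypothetical allowlist: server name -> hash pinned at approval.
REVIEWED_MANIFEST = b'{"name": "internal-crm", "tools": ["lookup_account"]}'
APPROVED_SERVERS = {"internal-crm": manifest_hash(REVIEWED_MANIFEST)}

def may_connect(name: str, manifest: bytes) -> bool:
    """Allow a connection only to an approved server whose manifest is
    byte-identical to what was reviewed. A server whose tool list changed
    after approval fails the check."""
    return APPROVED_SERVERS.get(name) == manifest_hash(manifest)
```

The design choice is the same one package managers made with lockfiles: approval applies to specific reviewed content, not to a name that can later serve something else.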

Agentic AI Risk Survey: Why CISOs Are Slowing Adoption

This week, we released our 2026 State of Agentic AI Risk Report, a global survey of 250 senior cybersecurity leaders examining how enterprises are approaching agentic AI as it moves closer to production. The findings point to a clear reality. While AI agents are advancing quickly, security leaders are deliberately slowing adoption. In fact, 98% of respondents say security and data concerns have already slowed deployments, added scrutiny, or reduced the scope of agentic AI initiatives.

Nation-State Threat Actors Incorporate AI to Streamline Attacks

Researchers at Google’s Threat Intelligence Group (GTIG) warn that nation-state threat actors have adopted Gemini and other AI tools as essential components of their operations. The threat actors are using these tools to conduct research and reconnaissance, target victims, and rapidly create phishing lures.

ARMO Behavioral AI Workload Security

AI is not just another workload category. It is the first category of workloads that decides what to do at runtime. And that changes everything about how security must work in the cloud. For years, cloud security evolved around deterministic systems. You deploy code. That code follows defined logic paths. If something unexpected happens, such as a new process, an unusual outbound connection, or privilege escalation, you investigate and respond.
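The deterministic model described above can be sketched in a few lines: record a baseline of expected behavior at deploy time, then flag any runtime event outside it for investigation. This is a generic illustration of that baselining pattern, not ARMO's implementation; the process and host names are invented.

```python
# Illustrative sketch (assumption): runtime baselining for a deterministic
# workload. Code follows defined logic paths, so expected behavior can be
# enumerated up front and deviations flagged.

# Hypothetical baseline recorded at deploy time.
BASELINE = {
    "processes": {"python3", "gunicorn"},
    "outbound_hosts": {"db.internal", "cache.internal"},
}

def matches_baseline(kind: str, value: str) -> bool:
    """True if a runtime event (process start, outbound connection, ...)
    falls inside the recorded baseline for its category."""
    return value in BASELINE.get(kind, set())

def detect_anomalies(events):
    """Collect events outside the baseline so someone can investigate."""
    return [(k, v) for k, v in events if not matches_baseline(k, v)]

# An unexpected process (e.g. a spawned shell utility) is surfaced, while
# baseline traffic to db.internal passes silently.
```

The article's point is that this approach breaks down for AI workloads: when the workload decides what to do at runtime, there is no fixed baseline to enumerate, so "unexpected" stops being a reliable signal on its own.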