
How AI Companies Can Use Data Lineage To Stop IP Theft - And Win When It Goes To Court

The 21st-century gold rush is the AI boom, and it is producing a wave of emerging AI companies. Being first to build and apply AI in novel ways can be the difference between success and failure. As a result, companies often find themselves trading off time-to-market against security.

Considerations for Microsoft Copilot Studio vs. Foundry in Financial Services

Financial services organizations are increasingly turning to AI agents to drive productivity, automate workflows, and gain a competitive edge. Within the Microsoft ecosystem, two agentic platforms, Copilot Studio and Foundry, are paving new paths for agent development and deployment. Despite their shared vision for enterprise AI, their differences have important implications for user groups, agent capabilities, and security priorities.

The Evolving Role of AI Governance: Turning Risk into Responsibility

This article is part of a monthly LevelBlue series that explores the evolving world of AI governance, trust, and responsibility. Each month, we look at how organizations can use artificial intelligence safely, thoughtfully, and with lasting impact. Artificial intelligence has moved from being an experiment to becoming an expectation. It now shapes how decisions are made, how customers are supported, and how innovation happens. As AI grows in influence, so does the need to manage it wisely.

The Evolution of Cybersecurity Automation and AI Adoption

Automation has become the foundation of modern cybersecurity operations. What was once a tool for efficiency is now critical. In parallel, artificial intelligence is no longer just a buzzword; it is reshaping how organizations detect, analyze, and respond to threats. The new Cybersecurity Automation and AI Adoption Report explores how global security leaders are approaching these technologies, what’s driving adoption, and where organizations still face challenges.

How AI-Driven Attacks Are Putting Gmail Security At Risk

Gmail has always been a common target for cybercriminals, and with the arrival of advanced AI tools, the threat level has increased significantly. Attackers no longer rely on generic phishing emails or crude scam tactics. They are using AI to craft convincing messages and imitate real support agents, making attacks look more genuine. This shift in attack patterns has made Gmail users more vulnerable because it is increasingly difficult to differentiate between real and fake messages.

How Enterprise CPG Companies Can Safely Adopt LLMs Without Compromising Data Privacy

A major publicly traded CPG company wanted to adopt LLMs to improve performance marketing, analytics, and customer experience. However, the IT team blocked AI usage and uploads to external AI tools, because interacting with public AI models could expose sensitive brand, consumer, and financial data. This isn't an isolated problem. It's a pattern across enterprises: business agility collides with security requirements.

Inside the Agent Stack: Securing Azure AI Foundry-Built Agents

This blog kicks off our new series, Inside the Agent Stack, where we take you behind the scenes of today’s most widely adopted AI agent platforms and show you what it really takes to secure them. Each installment will dissect a specific platform, expose realistic attack paths, and share proven strategies that help organizations keep their AI agents safe, reliable, and compliant.

Are We on the Path to AI Defenders vs. AI Attackers?

Swarms of AI bots are now being used to continuously test security perimeters. In this episode, Michael Baker, VP and Global CISO at DXC Technology, discusses the shift to AI-driven security operations. He recently met with startups working on agentic pentesting that finds vulnerabilities before attackers do. The advantage? You control these bots and get immediate feedback. The threat? Adversaries are building the exact same capabilities right now.