Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

The Dark Side of AI: How Cybercriminals Exploit Generative AI for Attacks

Artificial Intelligence (AI) has been a game-changer across industries, driving process efficiency and revolutionizing cybersecurity. On the flip side, its potential has been weaponized by threat actors. Google's Threat Intelligence Group (GTIG) recently released reports showing that state-sponsored hackers are actively exploiting Google's AI-powered Gemini assistant to strengthen their cyberattacks.

Warning: Organizations Need to Prep For AI-Powered Ransomware Attacks

The rise of agentic AI tools will transform the cybercrime landscape, according to a new report from Malwarebytes. Agentic AI—which is still under development—is a step above the generative AI tools that are currently available to the public, and will likely be widely released in 2025. While these tools will have many legitimate uses, they’ll also enable cybercriminals to scale their attacks.

Introducing the Ivanti ITSM & Protecto Partnership: Enabling Secure Data for AI Agents

Discover how Protecto secures data within Ivanti ITSM APIs to prevent data leaks, privacy violations, and compliance risks. In this video, we’ll show how Protecto acts as a data guardrail, ensuring that sensitive information like PII and PHI is identified, masked, and handled securely before it reaches AI agents. Participants: Amar Kanagaraj, Founder & CEO of Protecto; Kalyan Vishnubhotla, Director of Strategic Partnerships, Ivanti.

Strategies and Tradeoffs when Running AI Models on Lean Resources

This article explores the recommended infrastructure for AI workloads, strategies to optimize performance on less expensive servers, and trade-offs in terms of cost and results. We’ll also provide examples of AWS EC2 instance types and pricing to illustrate practical options.

The AI Hunger Games - The Rapid Adoption of DeepSeek: A Security Nightmare

The recent rapid adoption of the AI application “DeepSeek” has gained significant global attention. It became the top app on both the Apple App Store and Google Play Store within its first few days, with over 10 million downloads. While this explosive growth of DeepSeek R1 highlights the public’s fascination with AI-driven tools, the security community and policymakers have been less enthusiastic.

Feroot Security Research Reveals DeepSeek AI's Hidden Data Pipeline to China

ABC Good Morning America featured an exclusive report this morning highlighting Feroot’s discovery of concerning code within DeepSeek’s AI platform. Feroot, a leading cybersecurity firm, uncovered hidden capabilities enabling direct data transmission from DeepSeek to China Mobile servers.

AP News - Feroot Research Uncovers DeepSeek's Connection to Chinese State-Owned Telecom

Researchers at Feroot Security have identified computer code within the web-based version of DeepSeek’s AI chatbot that could potentially send user login information to China Mobile, a Chinese state-owned telecommunications company. This discovery raises significant privacy and national security concerns, particularly as China Mobile has been barred from operating in the United States due to its alleged ties with the Chinese government and military.

DeepSeek Just Shook Up AI. Here's How to Rethink Your Strategy.

The rapid rise of generative AI (genAI) applications is reshaping enterprise technology strategies, pushing security leaders to reevaluate risk, compliance, and data governance policies. The latest surge in DeepSeek usage is a wake-up call for CISOs, illustrating how quickly new genAI tools can infiltrate the enterprise. In only 48 hours, Netskope Threat Labs observed a staggering 1,052% increase in DeepSeek usage across our customer base.

The Hidden Biases in Your AI

"Bias" might sound simple, but in AI, it's anything but. Here's the reality: AI isn't free of prejudice; instead, it reflects it, sometimes in surprising and troubling ways. A quote from IBM's Francesca Rossi captures it well: "AI is a reflection of our humanity. When we don't address biases, we don't just create flawed machines; we amplify our own inequalities." This concept isn't just a philosophical idea; it's an observable and urgent issue.