Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

A Deep Peek at DeepSeek

DeepSeek’s rapid ascent in the AI space has made it impossible to ignore. Its sophisticated models and AI assistant have captured global attention. While headlines focus on DeepSeek’s capabilities, STRIKE research exposes critical security flaws, hidden data flows, and unanswered questions about who has access to the data and why.

Proficio utilizes Elastic Security for threat management and AI integration

Brad Taylor, CEO and co-founder of Proficio, discusses the dynamics of cybersecurity, the essentials of managed detection and response, and how Proficio leverages Elastic and AI to protect global organizations from emerging threats.

About Elastic: Elastic, the Search AI Company, enables everyone to find the answers they need in real time, using all their data, at scale. Elastic’s solutions for search, observability, and security are built on the Elastic Search AI Platform — the development platform used by thousands of companies, including more than 50% of the Fortune 500.

From our DevSecOps teams to yours: Discover Mo Copilot

Join Rowan Noronha, Kui Jia, and John Visneski as they explore how cutting-edge AI is revolutionizing DevOps and security workflows with Sumo Logic Mo Copilot, an innovative AI-powered assistant designed to simplify and accelerate DevSecOps operations. Learn how Copilot leverages natural language processing to address common challenges such as troubleshooting, threat response, and unified data integration, offering teams unprecedented efficiency and clarity.

Tokenization vs. Hashing: Which Is Better for Your Data Security?

Data security is a critical concern for organizations worldwide. Cyberattacks and data breaches have put sensitive information such as customer data, payment details, and user credentials at constant risk. Techniques like tokenization and hashing provide essential tools to safeguard this information effectively. Understanding the distinctions between these methods is crucial for selecting the right approach.
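The core distinction is easy to show in code. The minimal sketch below (illustrative only; the class and function names are invented for this example, and real tokenization systems use hardened vaults or HSMs) contrasts one-way hashing, which produces an irreversible fingerprint, with tokenization, which substitutes a random token that can be mapped back to the original value only through a protected lookup table:

```python
import hashlib
import secrets

def hash_value(value: str) -> str:
    """One-way hashing: same input, same digest; original unrecoverable."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

class TokenVault:
    """Toy tokenization vault: swaps values for random tokens and keeps
    the real data only in an in-memory lookup table."""

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random, carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only a party with vault access can reverse the mapping.
        return self._vault[token]

card = "4111-1111-1111-1111"
digest = hash_value(card)        # irreversible fingerprint of the card number
vault = TokenVault()
token = vault.tokenize(card)     # reversible, but only through the vault
assert vault.detokenize(token) == card
```

In practice this is why hashing suits credentials (you only ever need to compare, never recover) while tokenization suits payment data, where an authorized system must eventually retrieve the original value.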

LLMjacking targets DeepSeek

Since the Sysdig Threat Research Team (TRT) discovered LLMjacking in May 2024, we have continued to observe new variations of and applications for these attacks. Large language models (LLMs) are rapidly evolving and we are all still learning how best to use them, but in the same vein, attackers continue to evolve and expand their avenues for misuse.

Using Exposed Ollama APIs to Find DeepSeek Models

The explosion of AI has led to the creation of tools that make it more accessible, leading to more adoption and more numerous, less sophisticated users. As with cloud computing, that pattern of growth leads to misconfigurations and, ultimately, leaks. One vector for AI leakage is exposed Ollama APIs that allow access to running AI models. Those exposed APIs create potential information security problems for the models’ owners.
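The leakage pattern described above amounts to finding Ollama instances whose API is reachable without authentication and asking them what they are serving. As a hedged sketch: Ollama's `/api/tags` route lists the models available on an instance, and the snippet below parses a sample payload of the shape that route returns (the payload, helper name, and model names here are illustrative, not taken from the article):

```python
import json

def list_exposed_models(payload: str) -> list[str]:
    """Illustrative helper: extract model names from an Ollama
    /api/tags JSON response body."""
    data = json.loads(payload)
    return [m["name"] for m in data.get("models", [])]

# In the field, the scan amounts to fetching
# http://<host>:11434/api/tags from each candidate host and
# parsing the body as below.
sample = '{"models": [{"name": "deepseek-r1:7b"}, {"name": "llama3:8b"}]}'
print(list_exposed_models(sample))  # prints ['deepseek-r1:7b', 'llama3:8b']
```

Anyone who can reach the port gets the same answer, which is exactly why an exposed endpoint turns a private model deployment into public information.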

AI Security is API Security: What CISOs and CIOs Need to Know

Just when CIOs and CISOs thought they were getting a grip on API security, AI came along and shook things up. In the past few years, a huge number of organizations have adopted AI, realizing innumerable productivity, operational, and efficiency benefits. However, they’re also having to deal with unprecedented API security challenges. Wallarm’s Annual 2025 API ThreatStats Report reveals a staggering 1,025% year-on-year increase in AI-related API vulnerabilities.

Don't Fall Victim: DeepSeek-Themed Scams Are on the Rise

Scammers are taking advantage of the newfound popularity of the China-based AI app DeepSeek, according to researchers at ESET. DeepSeek released its generative AI tool last month, and it has since overtaken ChatGPT as the top free app in Apple’s App Store. Users are now spotting lookalike domains designed to deliver malware or steal information. Other scams offer users the chance to buy phony DeepSeek stock.

Autonomous Adversaries: Are Blue Teams Ready for Cyberattacks To Go Agentic?

2024 was a year of incredible progress for artificial intelligence. As large language models (LLMs) have evolved, they have become invaluable tools for enriching the capabilities of defenders – instantly providing the knowledge, procedures, opinions, visualizations, or code any given situation demands. However, these same models produce outputs that enable even low-sophistication attackers to raise their own skill levels.