Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How to build secure agent swarms that power autonomous systems in production

We worked with the Autonomy team to show how 1Password can secure agent swarms using a safer pattern: just-in-time, least-privilege access that avoids inheriting broad device, cloud, or infrastructure permissions and avoids hardcoding secrets into agents.
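The just-in-time pattern described above can be sketched as an agent that carries only an opaque secret reference and resolves it at the point of use. This is a minimal illustration, not 1Password's implementation: the env-var fallback and reference names are assumptions, and the real CLI call appears only as a comment.

```python
import os

def resolve_secret(reference: str) -> str:
    """Resolve a secret reference just-in-time, at the point of use.

    In the real pattern the agent would hold a 1Password secret
    reference and shell out to the CLI, e.g.:
        subprocess.run(["op", "read", "op://vault/item/credential"], ...)
    (vault/item names hypothetical). For this runnable sketch we treat
    the reference as the name of an environment variable injected by
    the runtime, so no secret is ever hardcoded into the agent.
    """
    value = os.environ.get(reference)
    if value is None:
        raise KeyError(f"no secret bound to reference {reference!r}")
    return value

def call_api(reference: str) -> dict:
    # The secret exists only inside the call that needs it and falls
    # out of scope afterwards; the agent itself stores only `reference`.
    token = resolve_secret(reference)
    return {"Authorization": f"Bearer {token}"}
```

Because the agent holds a reference rather than a credential, revoking or rotating the secret never requires redeploying the agent.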

What Are Moltbot and Moltbook? Why the Agentic AI Frenzy Is a Security Trap

AI agents aren’t taking over. But agentic AI without security is a real problem. Over the last few days, Moltbot and its social platform Moltbook have surged across headlines and social media. Some are calling it a glimpse of artificial general intelligence. Others say AI agents are organizing themselves. That’s not what’s happening. In this video, SecurityScorecard’s Jeremy Turner, VP of Threat Intelligence & Research, breaks down what Moltbot actually is, why this isn’t AGI, and where the real danger lives.

ChatGPT Oopsies Series of Information - The 443 Podcast - Episode 356

This week on the podcast, we cover a Politico report detailing a security lapse at CISA in the United States involving sensitive data and a public version of ChatGPT. Next, we dive into a couple of recently resolved vulnerabilities in the SolarWinds Web Help Desk application. Finally, we end with some closure on a story about two Coalfire penetration testers who were arrested several years ago for completing a penetration test in Iowa.

Claude Code writes and tests Cobalt Strike detection rules #cybersecurity #ai #securityoperations

Watch Claude Code generate production-ready Cobalt Strike detection rules in LimaCharlie. The agent defines detection requirements, creates rule logic for high-signal patterns, validates syntax, and deploys rules to the tenant. Named-pipe indicators and process-based signatures are tested against positive and negative controls to confirm accuracy. Security teams can operationalize threat-specific detections in minutes instead of hours.
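The validation step described above, testing named-pipe indicators against positive and negative controls, can be sketched in Python. The patterns below are widely published Cobalt Strike default pipe names, not the rules generated in the video, and real LimaCharlie D&R rules are written in the platform's own YAML syntax rather than as regexes like these.

```python
import re

# Widely published default Cobalt Strike named-pipe patterns; a
# production rule set would be tuned against local telemetry.
PIPE_PATTERNS = [
    re.compile(r"\\MSSE-\d+-server$", re.IGNORECASE),
    re.compile(r"\\postex_[0-9a-f]{4}$", re.IGNORECASE),
    re.compile(r"\\msagent_[0-9a-f]{2}$", re.IGNORECASE),
]

def matches_cobalt_strike_pipe(pipe_name: str) -> bool:
    """Return True if the pipe name matches a known high-signal pattern."""
    return any(p.search(pipe_name) for p in PIPE_PATTERNS)

# Positive and negative controls, mirroring the validation step:
# the rule must fire on the indicators and stay silent on benign pipes.
positive_controls = [r"\\.\pipe\MSSE-1337-server", r"\\.\pipe\postex_9a0b"]
negative_controls = [r"\\.\pipe\lsass", r"\\.\pipe\mojo.1234.5678"]
```

Running both control sets before deployment catches two failure modes at once: a pattern that misses real indicators and a pattern loose enough to flood the tenant with false positives.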

The AI Blind Spot Debt: The Hidden Cost Killing Your Innovation Strategy

In today’s AI rush, I’ve seen even the most disciplined organizations find it almost impossible to apply the hard-won lessons of DevOps and DevSecOps to AI adoption. These organizations often feel forced to choose between moving fast and staying in control. As a result, they develop a “wait and see” approach to AI usage and implementation, and that hesitation is creating a new, more dangerous form of technical debt. I call it the AI Blind Spot Debt.

Cyberhaven DSPM: Uniting DSPM & DLP to Secure Data in the AI Era

Enterprise security programs were built for a time when data lived in a small number of predictable locations. That model no longer holds. Today, data is constantly created, copied, transformed, and shared across cloud applications, endpoints, on-prem systems, and generative AI tools, often without clear visibility. Protecting data in the AI era requires three pillars, starting with holistic visibility across the full data lifecycle and a deep, contextual understanding of data.

When AI Agents Create Their Own Reddit: Moltbook Highlights Security Risks in the Agentic Action Layer

A new platform, Moltbook, has attracted significant attention within the AI community. It is notable not because humans are posting there, but because autonomous AI agents are. Moltbook is a social network designed for AI agents to post, comment, upvote, and even form communities. Humans can observe these interactions but cannot participate. This experiment reveals a striking reality: AI agents are coordinating, sharing code, and developing complex cultures without human visibility.

The Prescriptive Path to Operationalizing AI Security

In introducing the AI Security Fabric, we have outlined how security must evolve as software is built by humans, models, and autonomous agents working at machine speed. The Fabric defines the architectural shift required to build trust at AI speed, delivered through the Snyk AI Security Platform. We’re now focusing on the next question: how organizations put that vision into practice. Operationalizing AI security is not about enabling a single feature or deploying a tool.

Introducing the AI Security Fabric: Empowering Software Builders in the Era of AI

Today, we’re thrilled to introduce the AI Security Fabric, delivered through the Snyk AI Security Platform, and operationalized through a prescriptive path for AI security. As software creation shifts to humans, models, and autonomous agents working together at machine speed, security must evolve just as fundamentally. The AI Security Fabric defines the new paradigm, and the Prescriptive Path shows how the Snyk AI Security Platform gets you there.

January Release Rollup: Egnyte MCP Server, File Server Connector, and More

We’re excited to share new updates and enhancements for January. For more info on these updates, check out the list below and dive into the detailed articles. Please join the Egnyte Community to get the latest updates, chat with experts, share feedback, and learn from other users.