Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

You can't rely on open source for security - not even when AI is involved

Open source libraries, packages, and models power nearly every product team today. They accelerate development, democratize innovation, and let teams stand on the shoulders of giants. But there’s a dangerous assumption creeping into engineering orgs: that open source — or AI trained on open source — will keep your software safe. That assumption is wrong. Open source gives you speed and community, not guaranteed security.

How autonomous AI agents like OpenClaw are reshaping enterprise identity security

The viral surge of OpenClaw (formerly Clawdbot and Moltbot) has captured the tech world’s imagination, amassing over 160,000 GitHub stars and driving a hardware rush for Mac Minis to host these 24/7 assistants.

Bitsight: AI-powered intelligence that outsmarts cyber risk

Bitsight is the global leader in cyber risk intelligence, leveraging advanced AI to empower organizations with precise insights derived from the industry’s most extensive external cybersecurity dataset. With more than 3,500 customers and over 68,000 organizations active on its platform, Bitsight delivers real-time visibility into cyber risk and threat exposure, enabling teams to rapidly identify vulnerabilities, detect emerging threats, prioritize remediation, and mitigate risks across their extended attack surface.

LLM Application for Protegrity AI Developer Edition

Securing LLM Workflows with Protegrity AI Developer Edition: learn how to protect sensitive data and prevent malicious prompt injections in your AI applications. In this technical walkthrough, Dan Johnson, Software Engineer at Protegrity, demonstrates a dual-gate security architecture designed to safeguard Large Language Models. He shows how to implement a security gateway that sits between your users and your LLM, integrating semantic guardrails and classification APIs to ensure data privacy and system integrity.
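The dual-gate idea can be sketched in a few lines of plain Python. This is an illustrative stand-in, not Protegrity's implementation: the pattern lists, function names, and the regex-based screening are assumptions chosen to show the shape of the architecture. A production gateway would use semantic classifiers rather than keyword matching.

```python
# Illustrative dual-gate LLM security gateway (toy version).
# Gate 1 screens inbound prompts for injection attempts;
# Gate 2 redacts sensitive data from outbound responses.
import re

# Hypothetical patterns; real systems use semantic classification, not regexes.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal your system prompt",
]
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # US SSN-style pattern
]

def inbound_gate(prompt: str) -> bool:
    """Return True if the prompt passes the injection screen."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def outbound_gate(response: str) -> str:
    """Redact sensitive patterns before the response leaves the gateway."""
    for p in SENSITIVE_PATTERNS:
        response = re.sub(p, "[REDACTED]", response)
    return response

def gateway(prompt: str, llm) -> str:
    """Sit between the user and the LLM, applying both gates."""
    if not inbound_gate(prompt):
        return "Request blocked by security policy."
    return outbound_gate(llm(prompt))

# Demo with a stub in place of a real LLM call.
fake_llm = lambda p: "The customer's SSN is 123-45-6789."
print(gateway("Summarize the case file.", fake_llm))
```

The point of the two-gate design is that neither side is trusted: user input is screened before it can steer the model, and model output is screened before it can leak data.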

Jupyter Notebook for Protegrity AI Developer Edition

Want to test Protegrity’s data protection features without any local installation? In this tutorial, Dan Johnson shows you how to make your first protect and unprotect API calls directly in your browser using our interactive Jupyter Notebook (Binder). This is the fastest way to see Protegrity’s Python SDK in action—authenticating, applying protection policies, and maintaining data utility in real time.
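To make the protect/unprotect round trip concrete, here is a self-contained toy stand-in. It is not Protegrity's SDK: the `protect`/`unprotect` names, the HMAC tokenization, and the in-memory vault are illustrative assumptions meant only to show what a reversible, format-aware protection call looks like.

```python
# Toy protect/unprotect pair: reversible tokenization that preserves
# value length, so downstream systems keep working on protected data.
# Names and mechanics are illustrative, not Protegrity's actual SDK.
import hashlib
import hmac

_vault: dict = {}  # token -> original value (held server-side in a real system)

def protect(value: str, key: bytes = b"demo-key") -> str:
    """Replace a sensitive value with a deterministic same-length token."""
    digest = hmac.new(key, value.encode(), hashlib.sha256).hexdigest()
    token = digest[: len(value)]  # preserve length for data utility
    _vault[token] = value
    return token

def unprotect(token: str) -> str:
    """Reverse protection by looking the token up in the vault."""
    return _vault[token]

original = "alice@example.com"
token = protect(original)
print(token, unprotect(token))
```

The design choice worth noticing is that the token carries no recoverable information on its own; only the vault lookup (a privileged server-side operation in real deployments) can reverse it.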

Clawing For Scraps: Risks of OpenClaw AKA ClawdBot

The world of AI is still advancing rapidly, but so are the threats. Wherever you get your news, this AI personal assistant is everywhere lately, whether you know it as Clawdbot, Moltbot, or its current name. After some naming drama it is now called OpenClaw, and at the time of writing the repository has 166k stars on GitHub, along with an alarming number of forks, issues, and pull requests.

Security Considerations When Deploying AI in Legal Environments

Say a mid-sized law firm discovers that confidential case files, including privileged attorney-client communications, were exposed through an AI tool someone in the office started using without IT approval. The breach goes unnoticed for weeks. By the time they catch it, sensitive data has already been logged on external servers. This nightmare could happen to law firms that rush to adopt AI without proper security frameworks in place.