Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

EP 42 - Year in Review 2023: Unleashing AI, Securing Identities

In this year-end Trust Issues podcast episode, host David Puner takes listeners on a retrospective jaunt through some of the show’s 2023 highlights. The episode features insightful snippets from various cybersecurity experts and thought leaders, each discussing crucial aspects of the ever-evolving cyber landscape.

Conversational AI vs. generative AI: What's the difference?

In the intricate world of artificial intelligence, it's essential to distinguish between the different AI technologies at our disposal. Two domains that often cause confusion are conversational AI and generative AI. Though the terms are sometimes used interchangeably, the two are fundamentally different in their applications and underlying mechanisms. Let's dive into the realm of AI to clarify the distinctions between these two intriguing domains.

The Challenges for License Compliance and Copyright with AI

So you want to use AI-generated code in your software, or maybe your developers already are using it. Is it too risky? Large language model technology is progressing rapidly, and policymakers are struggling to keep pace. Anything resembling legal clarity may take years to arrive. Some organizations are deciding not to use AI for code generation at all, while others are using it cautiously, but everyone has questions.

Five Questions Security Teams Need to Ask to Use Generative AI Responsibly

Since announcing Charlotte AI, we’ve engaged with many customers to show how this transformational technology will unlock greater speed and value for security teams and expand their arsenal in the fight against modern adversaries. Customer reception has been overwhelmingly positive as organizations see how Charlotte AI will make their teams faster and more productive and help them learn new skills, which is critical to beating adversaries in the emerging generative AI arms race.

4 Ways Veracode Fix Is a Game Changer for DevSecOps

In the fast-paced world of software development, security too often takes a backseat to meeting strict deadlines and delivering new features. Discovering that software has accrued substantial security debt that will take months to fix can rip up the schedules of even the best development teams. In this context, an AI-powered tool that assists developers in remediating flaws becomes an invaluable asset.

Executive Order (EO) 14110: Safe, Secure & Trustworthy AI

More news about Artificial Intelligence (AI)? We know. It’s hard to avoid the chatter — and that’s for good reason. The rise of AI has many people excited for things to come. But many others are, quite understandably, concerned about the ethical implications of this powerful technology. Fortunately, the Biden Administration is working to address the concerns of the American people by governing the development and use of AI.

Microsoft Copilot Studio Vulnerabilities: Explained

Last week, Michael Bargury and the team at Zenity published a video summarizing six vulnerabilities found in Microsoft Copilot Studio. The video walks through, in sequence, the many ways business users can create their own risky AI Copilots, why they are risky, and how they can be easily exploited. While I highly recommend checking out the video, this blog sets out to explain why these vulnerabilities matter and what considerations should be taken to mitigate them.

LLMs, Quantum Computing, and the Top Challenges for CISOs in 2024

Amid the ongoing surge in cyber threats, CISOs are facing growing challenges in their responsibilities. During a recent CISO Panel Discussion on Application Security hosted by our CEO, Ashish Tandan, CISOs Kiran Belsekar from Aegon Life and Manoj Srivastava from Future Generali expressed concerns about managing security postures and shared actionable strategies to tackle evolving threats.