
EP 50 - Adversarial AI's Advance

In the 50th episode of the Trust Issues podcast, host David Puner interviews Justin Hutchens, an innovation principal at Trace3 and co-host of the Cyber Cognition podcast (along with CyberArk’s resident Technical Evangelist, White Hat Hacker and Transhuman Len Noe). They discuss the emergence and potential misuse of generative AI, especially natural language processing, for social engineering and adversarial hacking.

Speed vs Security: Striking the Right Balance in Software Development with AI

Software development teams face a constant dilemma: striking the right balance between speed and security. How is artificial intelligence (AI) affecting that balance? With AI increasingly embedded in the development process, it's essential to understand the risks involved and how to maintain a secure environment without compromising speed. Let's dive in.

Protecto - AI Regulations and Governance Monthly Update - March 2024

In a landmark development, the U.S. Department of Homeland Security (DHS) has unveiled its pioneering Artificial Intelligence Roadmap, marking a significant stride towards incorporating generative AI models into federal agencies' operations. Under the leadership of Secretary Alejandro N. Mayorkas and Chief Information Officer Eric Hysen, DHS aims to harness AI technologies to bolster national security while safeguarding individual privacy and civil liberties.

Understanding AI Package Hallucination: The Latest Dependency Security Threat

In this video, we explore AI package hallucination, a threat that arises when AI code-generation tools hallucinate open-source packages or libraries that don't exist. We look at why this happens, demo ChatGPT recommending multiple nonexistent packages, and explain why this is a prominent threat that malicious hackers could harness. It is the next evolution of typosquatting.
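
To make the risk concrete, here is a minimal sketch (not from the video) of a pre-install sanity check: it queries PyPI's public JSON API to confirm that an AI-suggested dependency actually exists before anyone runs pip install. The script and its package_exists helper are illustrative assumptions, and existence alone is no guarantee of safety, since an attacker may have already registered a commonly hallucinated name.

```python
import sys

import requests  # third-party HTTP client: pip install requests

# PyPI's JSON API returns 200 for published packages and 404 otherwise.
PYPI_JSON_API = "https://pypi.org/pypi/{name}/json"


def package_exists(name: str) -> bool:
    """Return True if `name` is a published package on PyPI."""
    resp = requests.get(PYPI_JSON_API.format(name=name), timeout=10)
    return resp.status_code == 200


if __name__ == "__main__":
    # Usage: python check_deps.py <package> [<package> ...]
    for name in sys.argv[1:]:
        if package_exists(name):
            print(f"{name}: found on PyPI (still vet it before installing)")
        else:
            print(f"{name}: NOT on PyPI -- possible hallucination")
```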

An investigation into code injection vulnerabilities caused by generative AI

Generative AI is an exciting technology that is now easily available through cloud APIs provided by companies such as Google and OpenAI. While it's a powerful tool, using generative AI within application code introduces additional security considerations that developers must take into account to keep their applications secure. In this article, we look at the potential security implications of large language models (LLMs), a text-producing form of generative AI.
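
As a hedged illustration of the injection pattern such an investigation looks for, the sketch below contrasts executing model output directly with treating it as untrusted data. The call_llm helper is a hypothetical stand-in for any cloud LLM API, not a real SDK call, and the function names are ours.

```python
import ast


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a cloud LLM API call; a real client
    would send `prompt` over HTTPS. Returns a canned reply for demo."""
    return "4"


def risky_eval(user_request: str) -> None:
    # UNSAFE: the model's reply is attacker-influenced text. A crafted
    # request can steer the generated code, so exec() becomes a code
    # injection sink inside your own process.
    code = call_llm(f"Write one line of Python that computes: {user_request}")
    exec(code)  # never execute untrusted model output


def safer_eval(user_request: str) -> float:
    # Safer pattern: treat model output as untrusted *data*, not code.
    # Parse it with a restricted evaluator that accepts only literals.
    reply = call_llm(f"Reply with only the numeric result of: {user_request}")
    return float(ast.literal_eval(reply.strip()))  # raises on non-literals


if __name__ == "__main__":
    print(safer_eval("two plus two"))  # -> 4.0 with the canned reply
```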

Casting a Cybersecurity Net to Secure Generative AI in Manufacturing

Generative AI has exploded in popularity across many industries. While this technology has many benefits, it also raises some unique cybersecurity concerns. Securing AI must be a top priority for organizations as they rush to implement these tools. The use of generative AI in manufacturing poses particular challenges. Over one-third of manufacturers plan to invest in this technology, making it the industry's fourth most common strategic business change.

How AI will impact cybersecurity: the beginning of fifth-gen SIEM

The power of artificial intelligence (AI) and machine learning (ML) is a double-edged sword, empowering cybercriminals and cybersecurity professionals alike. Generative AI in particular can automate tasks, extract information from vast amounts of data, and generate communications and media indistinguishable from the real thing, and all of these capabilities can be used to enhance cyberattacks and campaigns.

The NIST AI Risk Management Framework: Building Trust in AI

The NIST Artificial Intelligence Risk Management Framework (AI RMF) was developed by the National Institute of Standards and Technology (NIST) to guide organizations across all sectors in the use of artificial intelligence (AI) and its systems. As AI is implemented in nearly every sector, from healthcare to finance to national defense, it also brings new risks and concerns with it.