AI has become a hot topic thanks to recent headlines around ChatGPT, a large language model (LLM) with a simple interface. Since then, the AI field has been vibrant, with several major players racing to provide ever-bigger, better, and more versatile models. Microsoft, NVIDIA, Google, Meta, and open source projects have all published new models. In fact, a leaked Google document suggests these models will soon be ubiquitous and available to everyone.
AI-related security risk manifests in more than one way. It can result, for example, from using an AI-powered security solution built on a model that is either lacking in some way or was deliberately compromised by a malicious actor. It can also result from malicious actors using AI technology to facilitate the creation and exploitation of vulnerabilities.
I’m Peter Wrenn, but my friends call me Pete! I have the pleasure of moderating the Tines Technical Advisory Board (TAB), which meets quarterly. In it, some of Tines’s power users engage in conversations around product innovations, industry trends, and ways we can push the Tines vision forward: automation for the whole team. That’s the benefit to our customers and to Tines.
It’s human nature: when we do something we’re excited about, we want to share it. So it’s not surprising that cybercriminals and others in the hacker space love an audience. Darknet Diaries, a podcast that delves into the hows, whys, and implications of hacking incidents, data breaches, cybercrime, and more, has become one way for hackers to tell their stories, whether or not they get caught.
CyberWire wrote: "Researchers at SlashNext describe a generative AI cybercrime tool called 'WormGPT,' which is being advertised on underground forums as 'a blackhat alternative to GPT models, designed specifically for malicious activities.' The tool can generate output that legitimate AI models try to prevent, such as malware code or phishing templates."