As part of SecurityScorecard’s commitment to making the world a safer place, we are now the first and only security ratings platform to integrate with OpenAI’s GPT-4 system. With this natural language processing capability, cybersecurity leaders can find immediate answers and suggested mitigations for high-priority cyber risks.
Security teams face relentless cyberattacks and cannot engineer defenses fast enough. SOC teams contend with limited visibility, insufficient context, and an inability to identify the threats that matter. Analysts are burned out, switching from tool to tool as they try to make sense of what they are seeing.
We’ve had occasion to write about ChatGPT’s potential for malign use in social engineering, both in the generation of phishbait at scale and as a topical theme that can appear in lures. We continue to track concerns about the new technology as they surface in the literature.
Advancements in AI have led to the creation of generative AI systems like ChatGPT, which can generate human-like responses to text-based inputs. However, what users submit is entirely at their discretion and isn't automatically filtered for sensitive data. As a result, these systems can end up processing and reproducing sensitive content, such as medical records, financial information, or personal details.
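Because nothing filters a user's prompt automatically, some organizations add their own redaction step before text leaves their environment. The sketch below is a minimal, hypothetical example of that idea: a few illustrative regex patterns (emails, US SSNs, card-like digit runs) are replaced with labeled placeholders before a prompt would be sent to any generative AI service. The pattern set and labels are assumptions for illustration, not an exhaustive or production-grade DLP filter.

```python
import re

# Illustrative sensitive-data patterns; a real deployment would use a
# vetted DLP library or service rather than a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Patient John Doe, SSN 123-45-6789, email jdoe@example.com"
print(redact(prompt))
```

Running a prompt through a filter like this before submission limits how much regulated data can leak into a third-party model, at the cost of some loss of context in the response.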
The nature of an advanced artificial intelligence (AI) engine such as ChatGPT is that it can be used or misused, potentially empowering security teams and threat actors alike. I've previously covered examples of how ChatGPT and other AI engines like it can help a threat actor craft believable business-related phishing emails, malicious code, and more.
The Verge came out with an article that got my attention. As artificial intelligence continues to advance at an unprecedented pace, the potential for its misuse in the realm of information security grows in parallel. A recent experiment by data scientist Izzy Miller shows another angle: Miller cloned his best friends' group chat by downloading 500,000 messages spanning seven years and training an AI language model to replicate his friends' conversations.
Not long after impressing Microsoft 365 customers with the recent Microsoft 365 Copilot announcement, Microsoft has launched another AI-powered Copilot product, this time with a whole new set of possibilities: Microsoft Security Copilot.
We’re pleased to share that Salt has extended the capabilities of our powerful AI algorithms, further strengthening the threat detection and API discovery abilities of the Salt Security API Protection Platform. (Check out today’s announcement.) Here at Salt, we always look forward to the RSA Conference, but this year we are doubly excited to attend and showcase these new advanced capabilities! Salt invests significant resources into the continued innovation of our API security platform.
ChatGPT may not be used by all organizations and may even be banned in some. But that doesn't mean you aren't exposed to the security risks it introduces. This post looks at why ChatGPT should be part of your threat landscape.