Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

How Governments Can Mitigate AI-Powered Cyber Threats

Cybersecurity leaders across all levels of government are growing increasingly alarmed by the rise of cyberattacks fueled by Artificial Intelligence (AI). Cybercriminals are now incorporating machine learning and automation into their strategies, significantly boosting the scale, efficiency, and sophistication of their attacks. According to a recent survey of over 800 IT leaders, a staggering 95% believe that cyber threats have become more advanced than ever before.

'Tis the Season for Artificial Intelligence-Generated Fraud Messages

The FBI issued an advisory on December 3rd warning the public of how threat actors use generative AI to more quickly and efficiently create messaging to defraud their victims, echoing earlier warnings issued by Trustwave SpiderLabs. The FBI noted that publicly available tools assist criminals with content creation and can correct human errors that might otherwise serve as warning signs of fraud.

How to prompt LLMs to fine-tune an AI-generated fuzz test

In previous videos, you've seen that LLMs can generate fuzz tests. But what if the AI fails to produce a working test, or fails to cover specific workflows that aren't available as unit tests or usage examples in the codebase? You can prompt the AI to make changes. Here is how the "Interactive mode" works in CI Fuzz.
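As a rough illustration of the kind of harness an LLM might produce when prompted, here is a minimal random-input fuzz loop in Python. This is a generic sketch, not CI Fuzz's actual output or API; parse_record is a hypothetical function standing in for the code under test.

```python
import random
import string

def parse_record(raw: str) -> dict:
    """Hypothetical function under test: parses 'key=value' into a dict."""
    key, _, value = raw.partition("=")
    if not key:
        raise ValueError("missing key")
    return {key: value}

def fuzz_parse_record(iterations: int = 1000, seed: int = 0) -> int:
    """Feed random strings to the parser and check its invariants.

    Returns the number of inputs the parser rejected cleanly (via
    ValueError). Any other exception, or a non-dict result, fails the run.
    """
    rng = random.Random(seed)  # fixed seed keeps the fuzz run reproducible
    rejected = 0
    for _ in range(iterations):
        raw = "".join(
            rng.choice(string.printable)
            for _ in range(rng.randint(0, 20))
        )
        try:
            result = parse_record(raw)
            assert isinstance(result, dict)  # invariant: success yields a dict
        except ValueError:
            rejected += 1  # expected, documented failure mode
    return rejected

if __name__ == "__main__":
    print(f"cleanly rejected inputs: {fuzz_parse_record()}")
```

In interactive workflows like the one shown in the video, a follow-up prompt would typically refine this harness, for example to target a specific code path the random inputs never reach.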

One Identity's approach to AI in cybersecurity

In this video, Chinski addresses the challenges posed by malicious AI, such as deepfakes and advanced phishing attacks, emphasizing the importance of threat detection and response. On the flip side, Chinski showcases how One Identity uses predictive AI and machine learning in solutions like Identity Manager and Safeguard to enhance security through behavioral analytics and governance.

FBI Warns of Cybercriminals Using Generative AI to Launch Phishing Attacks

The US Federal Bureau of Investigation (FBI) warns that threat actors are increasingly using generative AI to increase the persuasiveness of social engineering attacks. Criminals are using these tools to generate convincing text, images, and voice audio to impersonate individuals and companies. “Generative AI reduces the time and effort criminals must expend to deceive their targets,” the FBI says.

Trustwave Named a Major Player in IDC MarketScape: Worldwide Cloud Security Services in the AI Era 2024-2025 Vendor Assessment

IDC has positioned Trustwave as a Major Player in the just-released IDC MarketScape Worldwide Cloud Security Services in the AI Era 2024–2025 Vendor Assessment (IDC, November 2024) for its comprehensive set of offensive and defensive cloud security services. IDC said, "Enterprises with varying levels of security maturity that require a customized hybrid approach and depth of offensive and defensive security capabilities should consider Trustwave."

How to Strike a Balance Between Automation and Human Touch in AI Recruitment

As AI continues to redefine recruitment, the question arises: can we automate without losing the human touch? The integration of AI into recruitment processes, from sourcing and screening to interviewing and prequalifying candidates, has increased efficiency.