
AI in Action: Transforming Cyber Defense Strategies with Agentic MDR

While AI and machine learning workflows have long offered deep insight into complex prediction and computation problems, recent advances in generative AI add strong summarization and content-generation capabilities across a broad range of use cases. Search results become more comprehensive, more accurate, and better tailored to end-user needs. The remaining opportunity is getting the end-to-end job done with accuracy, speed, and, most importantly, agility.

How to Use Microsoft Copilot for Security: Complete eGuide to Generative AI for Cybersecurity

In the constantly evolving world of cybersecurity, defense teams need all the resources they can get to keep up. Fortunately, the massive advances in generative AI present SOC teams with a powerful set of tools to optimize security practices and match even fully automated adversaries using natural language input. Microsoft Security Copilot is among the most advanced examples of these tools.

Securing AI: How Mend.io & OWASP Are Making AI Safer for Enterprises

Mend.io, formerly known as WhiteSource, has over a decade of experience helping global organizations build world-class AppSec programs that reduce risk and accelerate development, using tools built into the technologies that software and security teams already love. Our automated technology protects organizations from supply chain and malicious package attacks, from vulnerabilities in open-source and custom code, and from open-source license risks.

EP 4 - AI-Powered Fraud: Redefining the Identity Threat Landscape

Imagine receiving an urgent email from your bank that looks perfectly legitimate. It warns you of a suspicious transaction and prompts you to verify your identity. You hesitate but click, and suddenly, your credentials are compromised. This scenario, crafted by AI-powered fraud-as-a-service, is happening now.

Managing shadow AI: best practices for enterprise security

The rush to work faster with artificial intelligence (AI) can lead employees to accidentally put sensitive data at risk. Take this scenario: someone on the procurement team has a tight deadline, so they upload a confidential contract into an AI tool to review a few redlines. It’s unclear whether the AI system stores the contract’s data, how long that data will be retained, and whether it will resurface in a future prompt to someone else.

The EU AI Act: Key deadlines, risk levels, and steps to prepare

The EU AI Act is one of the world’s first comprehensive regulations aimed at AI-based systems. While voluntary standards such as ISO/IEC 42001 already existed, the Act introduces mandatory requirements that in-scope organizations must meet to avoid considerable fines and operational disruption. If you develop, use, or distribute AI systems, you may have to meet the obligations this regulation prescribes. Our EU AI Act summary will help you do so by covering key deadlines, risk levels, and steps to prepare.

5 Steps to Securing AI Workloads

In the past year alone, the number of artificial intelligence (AI) packages running in workloads grew by almost 500%. Which is to say: AI is everywhere, and it’s settling in for the long haul. Naturally, as helpful as they are, these AI workloads come with security challenges, including data exposure, adversarial attacks, and model manipulation. So as AI adoption accelerates, security leaders must build an AI workload security program to protect their organizations while enabling innovation.