Security | Threat Detection | Cyberattacks | DevSecOps | Compliance

A New World in Generative AI with Purple Llama - This Week in AI

Meta has announced the launch of Purple Llama, an umbrella project promoting open trust and safety in generative AI. The project features tools and evaluations designed to enable developers to deploy generative AI models and experiences responsibly in line with best practices outlined in Meta’s Responsible Use Guide.

SearchGPT, Llama 3.1 & GPT-4o Mini - Monthly AI News By Protecto

OpenAI has launched a prototype called SearchGPT, a new AI-driven search tool that integrates advanced AI capabilities with real-time web information. This temporary prototype, currently available to a select group of users and publishers, aims to enhance how people find information online by providing fast, accurate answers with precise citations. The ultimate goal is to gather feedback and refine these features before integrating them into the broader ChatGPT platform.

The EU AI Act: Ensuring Cybersecurity and Trustworthiness in High-Risk AI Systems

Artificial Intelligence (AI) has come a long way since John McCarthy first coined the term in 1955. Today, as AI technologies become deeply embedded in our daily lives, the potential they hold is immense – but so are the risks to safety, privacy, and fundamental human rights. Recognizing these concerns, the European Union (EU) took a proactive step in 2021 by proposing a regulatory framework aimed at governing AI.

Keeping humans in the loop of AI-enhanced workflow automation: 4 best practices

In today's rapidly advancing technology landscape, the role of people in workflow automation and orchestration is more critical than ever. At Tines, we firmly believe that human oversight should be an integral part of important workflows, ensuring that all decisions are grounded in context and experience. AI in Tines is secure and private by design. This means the platform doesn’t train on, log, inspect, or store any data that goes into or comes out of language models.
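The idea of keeping a human in the loop can be sketched as an approval gate: an AI-proposed step only executes after a reviewer signs off. This is a minimal, hypothetical illustration (the names `ProposedAction`, `run_with_human_in_loop`, and the risk-based policy are ours, not part of any Tines API):

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An AI-suggested workflow step awaiting human review."""
    description: str
    risk: str  # e.g. "low" or "high"

def run_with_human_in_loop(action: ProposedAction, approver) -> str:
    """Execute an automated step only after a human (or human-set
    policy) approves it; otherwise hold it for manual review."""
    if approver(action):
        return f"executed: {action.description}"
    return f"held for review: {action.description}"

# One possible reviewer policy: auto-approve low-risk steps, escalate the rest.
def reviewer(action: ProposedAction) -> bool:
    return action.risk == "low"

print(run_with_human_in_loop(ProposedAction("rotate API key", "low"), reviewer))
print(run_with_human_in_loop(ProposedAction("delete user data", "high"), reviewer))
```

The key design point is that the automated path and the escalation path are both explicit, so context and experience enter the workflow at a defined checkpoint rather than after the fact.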

Building for the Future: DevSecOps in the Era of AI/ML Model Development

Melissa McKay, JFrog Developer Advocate, and Sunil Bemarkar, AWS Sr. Partner Solutions Architect, discuss practical ways to mature your MLOps approach, including bringing model use and development into your existing secure software supply chain and development processes. Watch to learn more and get a demo of the JFrog and Amazon SageMaker integration.

Zero to 80% Faster - How to Leverage AI to Accelerate Security Reviews

Stop wasting your team's time answering security questionnaires. It's time to supercharge the way you complete security reviews by leveraging AI to unlock unprecedented speed and accuracy. We'll explore proven strategies for fast-tracking the way your team completes security questionnaires using advanced AI tools and automation. You'll learn best practices like maintaining a centralized knowledge base and leveraging a public-facing trust portal to get ahead of questions.
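The centralized-knowledge-base approach can be illustrated with a toy retrieval step: match an incoming questionnaire question against previously approved answers. This sketch uses simple word overlap as a stand-in for the embedding-based search a real tool would use; the function name and sample data are hypothetical:

```python
def best_answer(question: str, knowledge_base: dict[str, str]) -> str:
    """Return the stored answer whose question shares the most words
    with the incoming question (a crude proxy for semantic search)."""
    q_words = set(question.lower().split())
    _, answer = max(
        knowledge_base.items(),
        key=lambda kv: len(q_words & set(kv[0].lower().split())),
    )
    return answer

# A tiny, hypothetical knowledge base of previously approved answers.
kb = {
    "do you encrypt data at rest": "Yes, AES-256 for all stored data.",
    "do you support single sign-on": "Yes, via SAML 2.0 and OIDC.",
}
print(best_answer("Is customer data encrypted at rest?", kb))
```

In practice the retrieved answer would still be reviewed by a human before submission; the speedup comes from drafting, not from removing oversight.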

How we created the first conversational AI cloud security analyst

In the rapidly evolving landscape of cybersecurity, the need for a robust and intelligent assistant capable of analyzing, summarizing, and reacting to events is paramount. This is why we designed Sysdig Sage™, our large language model (LLM)-based cloud security analyst, to be an expert in cloud detection and response (CDR). Sysdig Sage excels at summarizing complex events and providing clear explanations, which is crucial for identifying and promptly reacting to potential threats.

Top 7 Practices to Prevent Data Leakage through ChatGPT

Generative AI (GenAI) tools like ChatGPT have already become indispensable across organizations worldwide. CEOs are particularly enthusiastic about GenAI’s ability to let employees “do more with less”. According to the McKinsey Global Survey on the State of AI in 2024, 65% of organizations already use GenAI tools extensively, and Gartner forecasts that this number will reach 80% by 2026.