How ChatGPT Can Lead to Malicious Code Spread

Research reveals that attackers can exploit ChatGPT's tendency to hallucinate non-existent package names: by registering those names on public repositories and filling them with malicious code, attackers turn the AI's suggestions into an infection vector. With many developers relying on ChatGPT for code solutions, the chance of installing a harmful package increases. Vigilance in verifying package legitimacy before installation is crucial to mitigating this kind of supply chain attack.
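One practical mitigation is to confirm that a suggested package is actually registered before installing it. The sketch below is illustrative only, assuming PyPI's public JSON API (`https://pypi.org/pypi/<name>/json`, which returns 404 for unregistered names); existence alone does not prove a package is safe, since attackers may have already registered a hallucinated name, so treat this as a first filter, not a verdict.

```python
import urllib.error
import urllib.request

# Public PyPI JSON API endpoint; returns HTTP 404 for unregistered projects.
PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str, fetch=None) -> bool:
    """Return True if `name` is a registered PyPI project.

    `fetch` maps a URL to an HTTP status code; it is injectable so the
    logic can be exercised without network access. By default it issues
    a real request against PyPI.
    """
    if fetch is None:
        def fetch(url):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.status
            except urllib.error.HTTPError as err:
                return err.code
    return fetch(PYPI_URL.format(name=name)) == 200

def vet_packages(names, fetch=None):
    """Partition suggested package names into (registered, unregistered)."""
    registered, unregistered = [], []
    for name in names:
        (registered if package_exists(name, fetch) else unregistered).append(name)
    return registered, unregistered
```

A name that comes back unregistered is a strong signal the model hallucinated it; a registered name still deserves scrutiny of its age, maintainers, and download history.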

Assessing Language Model Deployment with Risk Cards

When establishing documentation, reporting, or auditing standards, we need clear terminology. Treating language model (LM) behaviors as hazards, an expansive literature documents a wide array of potential harms to various human groups. However, the risk of harm depends on the context or application in which the LM is deployed and on its intended audience. If […]

Key Updates in OWASP Top 10 for LLM Applications 2025

Large Language Models (LLMs) face significant security challenges, and many applications handle security controls inadequately. The OWASP Top 10 for LLM Applications offers guidance on addressing risks such as prompt injection and sensitive information disclosure. Implementing robust security measures and continuous monitoring can help organizations protect sensitive data while still using LLM technologies effectively.
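To make the prompt injection risk concrete, the sketch below flags user input containing phrases commonly seen in injection attempts. This is a hypothetical heuristic for illustration, not an OWASP-prescribed control: the pattern list is invented for this example, and real defenses combine input validation with privilege separation and output monitoring, since pattern matching alone is easily bypassed.

```python
import re

# Illustrative-only phrases often associated with prompt injection attempts.
# Any real deployment would need far more than a static pattern list.
SUSPECT_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the )?(system|previous) prompt",
    r"you are now",
    r"reveal (your|the) (system )?prompt",
]

def flag_prompt_injection(user_input: str) -> list:
    """Return the suspicious patterns matched in `user_input` (lowercased)."""
    lowered = user_input.lower()
    return [p for p in SUSPECT_PATTERNS if re.search(p, lowered)]
```

A non-empty result is a signal to log, rate-limit, or route the request for review rather than pass it straight to the model; an empty result does not mean the input is safe.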