GenAI Security Research

Introducing Vision To The Fine-Tuning API
Developers Can Now Fine-Tune GPT-4o With Images And Text To Improve Vision Capabilities
What an Incredible Evening at the AI x Security Summit!
On October 10th, 2024, I spent an incredible evening at Antler Singapore.
S-tron China - S-Talent Talk
On September 20-21, 2024, I spent an unforgettable two days at S-tron China, held at the West Bund Art Center in Shanghai.

Is your child’s AI-powered robot safe?

The Widespread Adoption of AI-Powered Learning Devices and Their Potential

LLM Attacks in Web GenAI Applications

What is an LLM (large language model)? Large Language Models …

Out-of-Band Data Leakage Attack based on Indirect Prompt Injection

Data Exfiltration via Hyperlink Auto-Retrieval: Many chat applications automatically inspect …

How to prevent LLM Data Leakage Attacks

What’s LLM Data Leakage? Data leakage in generative AI …

How to prevent LLM Model Theft Attacks

Why does this happen? Large Language Models (LLMs) process and generate …

Deepfakes: How the Technology Works & How to Prevent Fraud

What is a Deepfake? Impersonation is a problem for marketplaces …

Subscribe to the TrustAI Newsletter

Get our latest GenAI/LLM security research.

Join AISecX - the AI Security Discord Community

Join AISecX and work towards a secure AI era. We're building a safer future together; be part of it!