Description 

Join us for a presentation of collaborative research findings from Vector Institute's AI Engineering team and industry partners.

This research addresses a critical misconception: that synthetic data is inherently free from privacy risk. Although it is designed to protect individual privacy, AI-generated synthetic data can remain vulnerable to re-identification attacks, creating a false sense of security.

Through extensive evaluation of membership inference attacks (MIAs) under varying configurations and attacker profiles, our research teams identified the key factors that influence privacy leakage in diffusion-based tabular data generators. The findings help establish practical thresholds for acceptable privacy risk in real-world, policy-driven deployments. The researchers will present the methodologies, experimental results, and practical frameworks developed over the course of this six-month collaboration.
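To give a feel for the threat model ahead of the talk, the sketch below implements a generic distance-to-closest-record (DCR) membership inference baseline, one common MIA variant, against a toy synthetic table. This is an illustrative assumption on our part, not the teams' actual methodology: the data, the noise scale, and the `dcr_scores` helper are hypothetical stand-ins, and the presentation covers the real attack configurations and attacker profiles.

```python
"""Minimal membership inference baseline against synthetic tabular data.

Illustrative sketch only: a generic distance-to-closest-record (DCR)
attack, not the specific methodology presented in this talk.
All data here is randomly generated for demonstration.
"""
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-ins: rows the generator was trained on (members),
# held-out rows from the same population (non-members), and the
# synthetic table released by the generator. The synthetic rows are
# deliberately built as noisy copies of members so that leakage exists.
members = rng.normal(size=(500, 8))
non_members = rng.normal(size=(500, 8))
synthetic = members[rng.choice(500, 2000)] + rng.normal(scale=0.3, size=(2000, 8))

# Scale features so no single column dominates the distance metric.
scaler = StandardScaler().fit(synthetic)
synth_scaled = scaler.transform(synthetic)

def dcr_scores(records: np.ndarray) -> np.ndarray:
    """Negative distance to the closest synthetic record: a record that
    sits unusually close to some synthetic row is flagged as a likely member."""
    recs = scaler.transform(records)
    # Pairwise Euclidean distances, then min over the synthetic table.
    dists = np.linalg.norm(recs[:, None, :] - synth_scaled[None, :, :], axis=2)
    return -dists.min(axis=1)

scores = np.concatenate([dcr_scores(members), dcr_scores(non_members)])
labels = np.concatenate([np.ones(500), np.zeros(500)])

# AUC near 0.5 suggests little membership leakage; values well above 0.5
# indicate membership is inferable from the released synthetic data alone.
print(f"Membership inference AUC: {roc_auc_score(labels, scores):.3f}")
```

An attack like this needs only the released synthetic table, which is why "synthetic, therefore private" is the misconception the research examines; the AUC-style leakage measurement is one way such practical risk thresholds can be framed.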