Stanley Wu

FRESH OFF THE PRESS

  • June 2025: Paper on diffusion model poisoning via VLM adversarial examples accepted to CCS '25

I am a 2nd year Ph.D. student at the University of Chicago SAND Lab, co-advised by Ben Zhao and Heather Zheng. My work is supported by the National Science Foundation's Graduate Research Fellowship (GRFP), which I received in 2024.

My primary academic interests lie in adversarial machine learning, with a particular focus on security issues in generative AI. Recently, I have been studying the safety limitations of generative models and developing methods to protect human creatives against intrusive training.

I received my bachelor's in computer science from Northeastern University in 2023, during which I was very fortunate to work with Alina Oprea and Jonathan Ullman. Before UChicago, I spent an excellent year working as a data scientist at Klaviyo.

Email: stanleywu+w AT cs DOT uchicago DOT edu

Papers

2025
On the Feasibility of Poisoning Text-to-Image AI Models via Adversarial Mislabeling
Stanley Wu, Ronik Bhaskar, Anna Yoo Jeong Ha, Shawn Shan, Haitao Zheng, Ben Y. Zhao
ACM Conference on Computer and Communications Security (CCS)
2024
Disrupting Style Mimicry Attacks on Video Imagery
Josephine Passananti*, Stanley Wu*, Shawn Shan, Haitao Zheng, Ben Y. Zhao
Preprint
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao
IEEE Symposium on Security and Privacy (Oakland)
TMI! Finetuned Models Leak Private Information from their Pretraining Data
John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman
Privacy Enhancing Technologies Symposium (PETS)
2023
How to Combine Membership-Inference Attacks on Multiple Updated Models
Matthew Jagielski*, Stanley Wu*, Alina Oprea, Jonathan Ullman, Roxana Geambasu
Privacy Enhancing Technologies Symposium (PETS)