A Study of Face Obfuscation in ImageNet

Kaiyu Yang, Jacqueline Yau, Li Fei-Fei, Jia Deng, Olga Russakovsky

Image obfuscation (blurring, mosaicing, etc.) is widely used for privacy protection. However, computer vision research often overlooks privacy by assuming access to original unobfuscated images. In this paper, we explore image obfuscation in the ImageNet challenge. Most categories in the ImageNet challenge are not people categories; nevertheless, many images contain incidental people whose privacy is a concern. We first annotate faces in the dataset. Then we investigate how face blurring -- a typical obfuscation technique -- impacts classification accuracy. We benchmark various deep neural networks on face-blurred images and observe a disparate impact on different categories. Still, the overall accuracy only drops slightly ($\leq 0.68\%$), demonstrating that we can train privacy-aware visual classifiers with minimal impact on accuracy. Further, we experiment with transfer learning to 4 downstream tasks: object recognition, scene recognition, face attribute classification, and object detection. Results show that features learned on face-blurred images are equally transferable. Data and code are available at https://github.com/princetonvisualai/imagenet-face-obfuscation.
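To illustrate the kind of obfuscation the abstract refers to, below is a minimal sketch of blurring a rectangular face region in an image. This is not the authors' pipeline (their code is at the linked repository); it is a self-contained box-blur over an annotated bounding box, using only NumPy. The function name `blur_region` and the `(x0, y0, x1, y1)` box convention are assumptions for this example.

```python
import numpy as np

def blur_region(img, box, ksize=7):
    """Obfuscate a rectangular region of an image with a simple box blur.

    img   : H x W x C uint8 array
    box   : (x0, y0, x1, y1) region to blur, e.g. an annotated face box
    ksize : side length of the averaging kernel (odd)
    """
    x0, y0, x1, y1 = box
    out = img.astype(np.float32).copy()
    pad = ksize // 2
    # Pad the full image so the kernel stays defined at region borders.
    padded = np.pad(out, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # Accumulate shifted copies of the region, then divide: a box blur.
    acc = np.zeros_like(out[y0:y1, x0:x1])
    for dy in range(ksize):
        for dx in range(ksize):
            acc += padded[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
    out[y0:y1, x0:x1] = acc / (ksize * ksize)
    return out.astype(np.uint8)
```

Pixels outside the box are left untouched, which mirrors the paper's setup: only faces are obfuscated while the rest of the image (and hence the category-relevant content) is preserved.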
