Doctor of Philosophy (PhD)
Human Centered Computing
Photos, which contain rich visual information, can be a source of privacy issues. Privacy issues associated with photos include identification of people, inference attacks, location disclosure, and sensitive-information leakage. However, photo privacy is often hard to achieve because the content of a photo is both what makes it valuable to viewers and what causes privacy concerns.
Photo sharing often occurs via Social Network Sites (SNSs). Photo privacy is difficult to achieve on SNSs for two main reasons: first, SNSs seldom notify users of sensitive content in their photos that might cause privacy leakage; second, the recipient-control tools available on SNSs are not effective.
The only solution that existing SNSs (e.g., Facebook, Flickr) provide is control over who receives a photo. This solution allows users to withhold the entire photo from certain viewers while sharing it with others. The idea is that if viewers cannot see a photo, then privacy risk is minimized. However, withholding or self-censoring photos is not always the solution people want. In some cases, people want to share photos, or parts of photos, even when they have privacy concerns about them.
To provide better online photo privacy protection options for users, we leverage a behavioral theory of privacy that identifies and focuses on two key elements that influence privacy -- information content and information recipient. This theory provides a vocabulary for discussing key aspects of privacy and helps us organize our research to focus on the two key parameters through a series of studies.
In my thesis, I describe five studies I have conducted. First, I focus on the content parameter to identify what portions of an image are considered sensitive and therefore are candidates to be obscured to increase privacy. I provide a taxonomy of content sensitivity that can help designers of photo-privacy mechanisms understand what categories of content users consider sensitive. Then, focusing on the recipient parameter, I describe how elements of the taxonomy are associated with users' sharing preferences for different categories of recipients (e.g., colleagues vs. family members).
Second, focusing on controlling photo content disclosure, I invented privacy-enhancing obfuscations and evaluated their effectiveness against human recognition and studied how they affect the viewing experience.
Third, after discovering that avatar and inpainting are two promising obfuscation methods, I studied whether they remain robust when de-identifying both familiar and unfamiliar people, since viewers are likely to know the people in SNS photos. Additionally, I quantified the prevalence of self-reported photo self-censorship and discovered that privacy-preserving obfuscations might be useful for combating it.
Building on the knowledge gained from the studies above, I propose a privacy-enhanced photo-sharing interface that helps users identify potentially sensitive content and provides obfuscation options. To evaluate the interface, I compared the proposed obfuscation approach with two other approaches: a control condition that mimics the current Facebook photo-sharing interface, and an interface that provides a privacy warning about potentially sensitive content. The results show that our proposed system performs better than the other two in terms of reducing perceived privacy risks, increasing willingness to share, and enhancing usability. Overall, our research will benefit privacy researchers, online social network designers, policymakers, computer vision researchers, and anyone who has shared or wants to share photos online.
Li, Yifang, "Investigating Obfuscation as a Tool to Enhance Photo Privacy on Social Networks Sites" (2020). All Dissertations. 2694.