Exploiting leakage in privacy-protecting systems

Date

2016-12

Abstract

Conventional systems store data unencrypted, which makes the data easy to access and manipulate but puts it at greater risk if the system is compromised by a malicious attacker. More advanced systems encrypt their data, but this introduces other problems: ordinary encryption typically destroys the ability to run computations over the data, negating many of the reasons for storing it in the first place. More recently, some systems have tried to strike a compromise between security and functionality by using encryption that partially protects the data while still allowing certain operations to be performed. Examples include general-purpose frameworks such as Mylar for Web applications, as well as domain- and application-specific systems such as P3 for photo storage. This dissertation examines the privacy risks that arise when these systems are used with realistic datasets and real-world usage scenarios.

The first system we explore is Mylar, an extension to the popular Meteor framework. Meteor is a JavaScript-based framework for concurrently developing the client and server components of Web applications. Mylar allows users to share and search over data while protecting it against a compromised or malicious server. We expand Mylar's vague definitions of passive and active adversaries into three concrete threat models and show that Mylar is insecure against all three. Mylar's metadata leaks sensitive information to an adversary with one-time access to its encrypted database. Mylar provides no protection against adversaries that can monitor user access patterns, allowing them to watch for data-dependent behavior corresponding to sensitive information (a toy sketch following this abstract illustrates the underlying frequency-analysis risk). Finally, Mylar fails to protect against active attackers who, by the nature of the system, have been given the ability to modify the database and run searches over the encrypted data.

We next look at a set of systems designed to protect sensitive images by selectively obfuscating them. We examine P3, which splits an image into two parts: a secret image that contains most of the identifying information and a public image that can be distributed with less risk of leaking information. We also investigate mosaicing (often called pixelation) and blurring, two commonly used image obfuscation techniques; simplified sketches of all three transforms follow this abstract. Visual inspection of the obfuscated images makes it obvious that all three systems leak information, but it is not clear how to exploit this leakage, or whether doing so is even possible. The authors of P3 evaluated it against a number of techniques that mimic human image recognition. We bypass the need for human recognition by using modern machine learning techniques: with neural networks, we classify the content of obfuscated images automatically, without human assistance and without having to define image features by hand.

Finally, we conclude by proposing a number of guidelines for creating modern privacy-preserving systems. We look at problems that arise when designing a scheme on paper as well as issues that come up when implementing the system. These guidelines were derived from the mistakes of BoPET ("building on property-revealing encryption") and image-obfuscation researchers and developers, and we present them in the hope that they will be used to ensure the effectiveness of future privacy systems.
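
To make the access-pattern concern concrete, the following toy Python sketch (not Mylar's actual construction) shows how deterministic search tokens let a server that never sees plaintext still profile user queries; the key and keywords here are hypothetical.

    import hashlib
    from collections import Counter

    def search_token(key: bytes, keyword: str) -> str:
        # A deterministic keyed hash standing in for a searchable-encryption
        # token: the same keyword always produces the same token.
        return hashlib.sha256(key + keyword.encode()).hexdigest()

    key = b"client-secret-key"                  # hypothetical client-side key
    queries = ["flu", "hiv", "flu", "flu"]      # plaintext never sent to the server
    observed = [search_token(key, w) for w in queries]

    # The server sees only opaque tokens, yet it can count how often each one
    # occurs and match that frequency profile against public word-frequency
    # data -- exactly the kind of data-dependent behavior described above.
    print(Counter(observed))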
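
Next, a simplified sketch of the splitting idea behind P3, assuming grayscale input and an arbitrary threshold: within each 8x8 block, DCT coefficients whose magnitude exceeds the threshold are routed to the secret part and the rest to the public part, so the two parts sum back to the original image. This is a stand-in for the idea, not a faithful reimplementation; P3 itself operates directly on JPEG coefficient data.

    import numpy as np
    from scipy.fft import dctn, idctn

    def p3_style_split(gray: np.ndarray, threshold: float = 20.0):
        """Split a grayscale image (dimensions multiples of 8) into a
        low-detail public part and a high-detail secret part."""
        public = np.zeros(gray.shape)
        secret = np.zeros(gray.shape)
        for i in range(0, gray.shape[0], 8):
            for j in range(0, gray.shape[1], 8):
                coeffs = dctn(gray[i:i+8, j:j+8].astype(float), norm="ortho")
                big = np.abs(coeffs) > threshold  # carries the identifying detail
                secret[i:i+8, j:j+8] = idctn(np.where(big, coeffs, 0.0), norm="ortho")
                public[i:i+8, j:j+8] = idctn(np.where(big, 0.0, coeffs), norm="ortho")
        return public, secret

    img = np.random.randint(0, 256, size=(64, 64)).astype(float)  # stand-in image
    pub, sec = p3_style_split(img)
    assert np.allclose(pub + sec, img)  # the split is lossless when recombined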
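
Mosaicing and blurring are simple enough to sketch directly. Below is a minimal Pillow-based version of each; the block size, blur radius, and file name are illustrative assumptions rather than parameters from the dissertation.

    from PIL import Image, ImageFilter

    def mosaic(img: Image.Image, block: int = 16) -> Image.Image:
        # Pixelate by averaging down to a coarse grid, then scaling back up
        # with nearest-neighbor so each block becomes one flat square.
        w, h = img.size
        small = img.resize((max(1, w // block), max(1, h // block)), Image.BILINEAR)
        return small.resize((w, h), Image.NEAREST)

    def blur(img: Image.Image, radius: float = 8.0) -> Image.Image:
        # Standard Gaussian blur with the given radius.
        return img.filter(ImageFilter.GaussianBlur(radius))

    face = Image.open("face.png")  # hypothetical input file
    mosaic(face, block=16).save("face_mosaic.png")
    blur(face, radius=8.0).save("face_blur.png")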
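
Finally, a minimal sketch of the attack strategy itself: treat recovery of obfuscated content as ordinary supervised image classification, so that no human recognition or hand-crafted features are needed. The small PyTorch network and input size below are illustrative assumptions, not the models evaluated in the dissertation.

    import torch
    import torch.nn as nn

    class ObfuscatedImageClassifier(nn.Module):
        # A small convolutional network that maps an obfuscated image to
        # class scores (e.g., which digit, word, or face it depicts).
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: a batch of mosaiced/blurred/public images, shape (N, 3, 32, 32).
            return self.classifier(self.features(x))

    model = ObfuscatedImageClassifier(num_classes=10)
    batch = torch.randn(4, 3, 32, 32)  # stand-in for obfuscated training images
    print(model(batch).argmax(dim=1))  # predicted labels, one per image

Trained with a standard cross-entropy loss on (obfuscated image, label) pairs, such a classifier needs no notion of what the original image "looked like" to a human.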
