Permissions Explorer


Mobile device permission frameworks are coarse-grained, and the data they hand over to app developers is opaque to the average user. With the rise of on-device machine learning and advances in server-side model capabilities, apps can make numerous inferences with little to no visibility to the device owner.

We believe that users have the right to know what it means to give an app access to their data and what mobile apps are able to infer about them with modern AI technology.


Permissions Explorer App!

We have built an application that demonstrates what can be inferred from common mobile permission types, with the goal of measurably increasing users' understanding of the personal data they share with apps.

This app requests popular permissions from the user, encrypts the resulting data, and sends it to our server.

On our server, we run large language models locally (so that user data is never shared with companies like Meta, Google, or OpenAI!). These models make inferences about the user, which we send back so the user can evaluate their accuracy!

While companies like Meta and Google normally use such inferences to learn about their users, we won't! We will not access any inference data, and we will delete all permissions and inference data as soon as it has been sent back to the user.
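The server-side flow described above (decrypt, infer locally, return, delete) can be sketched as follows. This is a minimal illustration with hypothetical names; the real app's transport, encryption scheme, and model calls are not shown, and the "model" here is a trivial stand-in:

```python
import json

def decrypt(payload: bytes) -> str:
    # Stand-in for real decryption; the actual app would use
    # authenticated encryption on the client and server.
    return payload.decode("utf-8")

def run_local_model(data: dict) -> dict:
    # Stand-in for a locally hosted LLM; here, a trivial rule-based guess.
    guesses = {}
    if "location" in data:
        guesses["home_region"] = "near " + data["location"]
    return guesses

def handle_submission(payload: bytes) -> dict:
    data = json.loads(decrypt(payload))   # permissions data, held in memory only
    inferences = run_local_model(data)    # inference happens on our server
    response = {"inferences": inferences}
    del data                              # delete permissions data immediately
    return response                       # only the inferences leave the server

payload = json.dumps({"location": "Cambridge, MA"}).encode()
print(handle_submission(payload))
# {'inferences': {'home_region': 'near Cambridge, MA'}}
```

The key design point is that the raw permissions data never outlives the request: only the inferences are returned, and nothing is persisted server-side.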

To explore what can be learned using our application, and how our application influences participants' beliefs about permissions, we are running a user study through Prolific.

We have participants download the application, which is managed by Harvard's Applied Social Media Lab, and accept a series of permission pop-ups (like those they have seen requested by applications such as TikTok or Instagram). The app then shows a series of inferences that our models are able to make about the participant using this information, ranging from their age and gender to their location and medical history.

The researchers will not see or store participants' permissions data, and will not see or store these inferences without participants' consent. The researchers will ask participants to report back on what types of inferences were made, whether they were correct, and how comfortable they were with them.


Contact Us!

If you have questions, please reach out to Sarah and Zoe at: permsexplorer@cyber.harvard.edu

About Us

We are a team of researchers affiliated with the Applied Social Media Lab at Harvard University. We are passionate about ensuring social media applications serve the public good. We are happy to chat about this study, or any other questions you may have about social media platform privacy!

Sarah Radway

PhD Candidate

Zoe Robert

Principal Engineer

Matthew Soto

Research Assistant

James Mickens

Professor

Sebastian Diaz

BKC Tech Director

Meg Marco

ASML Senior Director