Favourite Research Papers: PrivCom Intern Roundtable on the FTC's PrivacyCon 2020 (Part 2)
Note: Special thank you to the Bermuda Department of Workforce Development for sponsoring these interns' placement with PrivCom.
On 21 July 2020, the United States Federal Trade Commission (FTC) hosted a virtual PrivacyCon, in which researchers presented the latest reports and analysis of privacy issues. In line with current events, this year's PrivacyCon had an extra focus on health information. You can visit the FTC's site to see more on the event, including presentation materials.
PrivCom was in attendance, and we asked our 2020 interns what their thoughts were on the event:
Let's go into more depth: would you summarize one of the research papers you found interesting?
Chelsea Basden, Health Privacy Analyst: "Quinn Grundy, University of Toronto, Data Sharing Practices of Medicines-Related Apps and the Mobile Ecosystem: Traffic, Content, and Network Analysis; and Commercialization of User Data by Developers of Medicines-Related Apps: A Content Analysis"
Health apps are becoming more common as society grows more technologically advanced, but they can pose a real threat to the privacy of our personal data. A research team examined twenty-four commonly used health apps and found that these apps often use personal data to create tailored ads for their users.
The data collected through these apps can be purely technical, such as device type, but one-third of the twenty-four apps were shown to share users' IP addresses and personal email addresses. Third and fourth parties can receive identifiable data elements such as your name, the medications you are taking, and even your time zone, which could help pinpoint your location.
The research also showed that when people made privacy complaints to these third and fourth parties, they were simply referred back to the app developers. In recent years, consumer data has become big business, and sharing this data can have real consequences, not for the companies using it, but for the people it is taken from. Individuals can face targeted ads or even biased algorithms aimed at certain groups, which could affect their healthcare. Developers who control this data, as well as the third and fourth parties who use it for their own purposes, must demonstrate accountability in order to ensure the privacy of that data.
Jaime Furtado, Media & Communications Analyst: "John Torous, Harvard Medical School, and Sarah Lagan, Beth Israel Deaconess Medical Center, Actionable App Evaluation: Objective Standards to Guide Assessment and Implementation of Digital Health Interventions"
Health apps can be a very useful tool for patients and clinicians to augment care, but many go unchecked by regulators in places like the United States because of how they classify themselves. Many of these apps are unsafe, selling or exposing health data to third parties, and the usual way of searching for health apps is ineffective at suggesting safe, appropriate apps to the public.
First, many health apps don’t even have basic privacy policies. Most health apps on the app store don’t claim to be HIPAA [Health Insurance Portability and Accountability Act] compliant. Only half of the apps shared data securely, and 80% shared data with third parties. Apps in the mental health space that host supposedly private sessions can be exploited by advertisers. These apps can get away with much of this because they are listed under “Health and Fitness” rather than “Medical Devices,” two categories of apps with vastly different regulations.
In the app store environment, developers often make exaggerated, unsubstantiated claims about their apps in order to drive as many downloads as possible. Unfortunately, the same goes for health apps, which prey on patients who are unsure of real treatments for their problems. Of 59 apps claiming to diagnose mental illness, only 1 included a citation to published literature.
Health apps are also not obligated to share up-to-date, reputable information. This means that millions of people are seeing inaccurate health information such as out-of-date suicide helpline phone numbers. Much of the misinformation isn’t purposeful; instead, it is because of a lack of updates.
Most people find apps by searching a keyword related to what they want and looking at the app store's top results and user ratings. Sadly, these factors are unrelated to an app's clinical usability. The suggested apps are often completely unrelated to health issues; while that may be harmless for relatively healthy users, those who are struggling and see or even download them can be negatively affected. For example, one of the top results for a search on depression is a wallpaper app full of phone backgrounds with depressing thoughts, which, for obvious reasons, is not the best thing for a vulnerable user to see. This is not intentional, of course: search rankings are based heavily on downloads and ratings, and lighthearted apps like wallpaper apps are far more common and popular in the app space.
To organize these health apps into more appropriate lists, over 600 evaluation schemes have been created. While partially effective, many of these schemes share the same problems. First, lists with static rankings don't show how rankings change when apps are updated, or when they aren't, so once-accurate information can quickly become inaccurate. Second, not all users have the same experience with an app: a list made by one person or group reflects only how its creators found the app, not how all users will. Finally, schemes often rate apps against criteria that aren't shown to viewers. A user may value an app's privacy over its ease of use, but if more of the scheme's questions concern ease of use, one app will be ranked above another without the user ever knowing why.
Because of these issues with existing evaluation schemes, the American Psychiatric Association (APA) App Evaluation Model was created. The intention is to place the responsibility for assessing app quality and efficacy with the app users themselves, letting them choose what they need the app to have. The criteria cover accessibility, privacy, clinical foundation, features, and the conditions supported, and can be used to filter out unwanted apps based on what the user finds valuable.
There are over 100 questions for every app, centered on the app’s origin, functionality, methods of input (survey, diary, etc.) and output (notifications, data summaries, etc.), privacy policies, engagement style, features, data sharing, and clinical foundation (i.e., is it backed by evidence?). These questions are aligned with the APA pyramid but designed so the user can pick what matters most to them.
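To illustrate the idea of user-driven filtering described above, here is a minimal sketch in Python. The app records, criteria names, and values are invented for illustration; the real APA model uses over 100 questions per app, not the handful shown here.

```python
# Hypothetical sketch of criteria-based filtering in the spirit of the
# APA App Evaluation Model. App names and criteria fields are invented.

apps = [
    {"name": "MoodLog",    "has_privacy_policy": True,  "evidence_based": True},
    {"name": "CalmSpace",  "has_privacy_policy": False, "evidence_based": True},
    {"name": "SleepTrack", "has_privacy_policy": True,  "evidence_based": False},
]

def filter_apps(apps, required):
    """Keep only apps that satisfy every criterion the user marked as required."""
    return [app for app in apps if all(app.get(c) for c in required)]

# A user who prioritizes privacy and clinical foundation:
matches = filter_apps(apps, ["has_privacy_policy", "evidence_based"])
print([app["name"] for app in matches])  # -> ['MoodLog']
```

The key design point is that the user, not a fixed ranking, chooses which criteria count; a different user could pass a different `required` list and get a different shortlist from the same data.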
Giving users information about the apps they use is a worthwhile objective. If people are shown an app's properties, they can quickly select what is best for them while prioritizing what they want. Displaying every app's properties will also push applications to improve if they don't meet consumers' needs. Users will no longer need to take a leap of faith, and exaggerated, unsubstantiated claims will stop working. It is fitting that health apps are the starting point for this new style of organization, since their users are often more vulnerable than the average user. Health apps may be the start, but they are certain not to be the end.
Kahlil Smythe, Privacy Technology Analyst: "Pardis Emami-Naeini, Carnegie Mellon University, Ask the Experts: What Should Be on an IoT Privacy and Security Label?"
As connected devices become a major part of individuals’ lives, the Internet of Things (IoT) can collect people’s personal data and other unauthorized information to sell or share. At the same time, as the IoT matures, the number of smart device companies that take privacy seriously is growing.
For instance, some smart TVs sell people's data to third parties without informing them; consumers are not told who their data is shared with or sold to. Manufacturers may also put customers at risk by not informing them about their devices' many sensors. In one case, a manufacturer emailed recordings of an unsuspecting user’s voice to its employees.
In order to protect consumers from potential data breaches, researchers suggested a security and privacy label similar to a food nutrition label. The hope is that this label will effectively present information about the company’s privacy practices and policies to the consumer. In the process of designing the label, the designers took a holistic view by inviting a diverse sample of experts to examine the label. The experts and designers identified forty-seven key factors that should be covered.
The most important details are printed on an understandable label physically attached to the device. Each label includes a QR code linking to a fuller description online, which provides more information about the smart device’s privacy practices and policies. The label attached to the device is the "primary" layer of notification, in concise form; the description linked from the QR code is the "secondary" layer, with more detail for people who want to learn more about the device's practices.
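The two-layer structure can be sketched in code. The following is a hypothetical illustration only: the field names, values, and URL are invented, and the real label proposal covers forty-seven factors rather than the few shown here.

```python
# Hypothetical sketch of a two-layer IoT privacy label: a concise "primary"
# layer printed on the device, and a detailed "secondary" layer reached via
# a QR code. All field names and the URL are invented for illustration.

def build_label(device, data_collected, shared_with, detail_url):
    # Primary layer: the short, printed summary with a QR link to more detail.
    primary = {
        "device": device,
        "data collected": ", ".join(data_collected),
        "shared with third parties": "yes" if shared_with else "no",
        "more info (QR link)": detail_url,
    }
    # Secondary layer: the fuller online description behind the QR code.
    secondary = {
        "device": device,
        "data collected": data_collected,
        "shared with": shared_with,
        # A real label would also cover retention, sensors, updates, etc.
    }
    return primary, secondary

primary, secondary = build_label(
    "Smart TV",
    ["viewing history", "voice recordings"],
    ["advertisers"],
    "https://example.com/labels/smart-tv",
)
print(primary["shared with third parties"])  # -> yes
```

The point of the split is that a buyer standing in a shop sees only the primary layer, while the secondary layer holds everything a more curious consumer might want.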
To learn more about these issues, see:
To reach out to the Office of the Privacy Commissioner, please visit our Contact Us page.