After an update of the Facebook app on my iPhone in early October 2015, I noticed that a new feature of the app tracked what I had copied into my “clipboard”, analyzed whether the copied text fit the standard URL format (e.g. http://enhancinglife.uchicago.edu/), and suggested that I share the URL with other users. I was surprised because there hadn’t been any prior request for permission to access the clipboard, and the personal data stored in my clipboard included not just banal information like the website address for my favorite pancake recipe, but also sensitive data like the login credentials for my online banking account. Nevertheless, I wasn’t bothered by the new feature: I assumed that the app surely wouldn’t do anything harmful with the content found on my iPhone, and I continued using it without changing any settings.
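For readers curious about the mechanics, here is a minimal sketch in Swift of the kind of check the feature apparently performed: read the system clipboard and test whether its contents parse as a web address. The function name and the sharing prompt are illustrative assumptions of mine, not Facebook’s actual code; the notable point is that on iOS at the time, reading the clipboard required no permission prompt at all.

```swift
import UIKit

// Illustrative sketch (not Facebook's implementation): read the system
// clipboard and return its contents if they look like a web URL.
// On iOS 9, UIPasteboard.general was readable by any foreground app
// without triggering any permission request.
func clipboardURLIfAny() -> URL? {
    guard let text = UIPasteboard.general.string else { return nil }
    // Accept only well-formed http/https addresses.
    guard let url = URL(string: text),
          let scheme = url.scheme?.lowercased(),
          scheme == "http" || scheme == "https"
    else { return nil }
    return url
}

// Hypothetical usage: suggest sharing only when the clipboard holds a link.
if let url = clipboardURLIfAny() {
    print("Share this link? \(url.absoluteString)")
}
```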
A few weeks later, when I was studying proposals for the new European rules on data protection (the General Data Protection Regulation [GDPR], passed by the European Parliament on April 14, 2016, and applying from May 2018), I realized that these rules probably would have protected me from my own careless behavior. Even though Facebook claimed that they were “not able to see what the link is” and that they did not save any of the users’ data after it had been removed from the clipboard, their automatic access to and analysis of text copied into my clipboard certainly fell within the definition of “processing” of personal data under the new rules.
As a consequence, the new feature would actually have required my consent before Facebook could access my iPhone’s clipboard. Since valid consent under these rules calls for an “unambiguous indication” of my will (i.e. a “statement” or “a clear affirmative action”), simply continuing to use the app without changing any settings did not amount to consent. Hence, the processing of my data would have lacked a legal basis, and Facebook would have violated the rules.
From data protection to data paternalism
Data protection rules such as the GDPR generally restrict the processing of data (including its collection, storage, and use) by state authorities and private entities. Data protection is not an end in itself but serves the higher purpose of protecting and enhancing certain aspects of people’s personal lives, in particular their right to privacy. However, where someone has given his or her explicit or implied consent to the disclosure and processing of data, data protection rules that nonetheless restrict this processing must be regarded as a protection imposed on that person against potential threats to his or her privacy. This is what I call (governmental) data paternalism.
Data paternalism pursues a legitimate objective—protecting people’s privacy—but it can also constitute a threat to personal autonomy. Under Article 7 of the Charter of Fundamental Rights of the European Union, for example, everyone has the right to freely communicate with other persons and disclose their own data, even if these activities come with certain risks to privacy. In principle, it is up to me (and not the government) to decide which information I would like to share, and for what purpose. For example, as long as I am aware that my Facebook app is accessing my clipboard, I should be able to permit Facebook to process my data, even if this involves potential risks.
Justifying data paternalism
Certainly, data protection rules asking for a clear, objective indication of my (explicit or implicit) consent can be seen as reinforcing my personal autonomy. However, the more formalities data regulation imposes, such as requiring explicit consent or an unambiguous statement, the more likely it is that my actual will is not legally recognized. When I continued using my Facebook app, I did in fact consent to the processing of my clipboard data, but the GDPR rules would have rendered my consent legally ineffective because I did not give an unambiguous indication of my will.
I argue that data protection rules calling for explicit consent or unambiguous statements are justified only with respect to data processing that involves particular risks to private life. For instance, the new European regulations require the explicit consent of a person who would be adversely affected by a decision based solely on automated data processing. This would protect, for example, against a booking website that automatically raises the price for a hotel room based on an analysis of my previous internet behavior. In such cases, data paternalism is absolutely justified: if my internet usage data is collected and analyzed without my explicit consent, the booking website provider will be liable for all damages resulting from this unlawful processing of my personal data, including the inflated price.
Other provisions of the new European regulations stipulate that consent to data processing shall not be considered freely given if the provision of a service is made conditional on consent to the processing of personal data that is not necessary for providing that service. An example of such unnecessary processing could be the collection and analysis of user data (my browsing history, or items I recently “liked” or shared on Facebook) by social media providers for advertising purposes, because this processing is not necessary for providing the social media service itself.
With this necessity requirement, however, the GDPR goes beyond justified data paternalism. The requirement is surely adequate where a person depends on a particular service (as with life insurance or loan contracts) and hence cannot freely exercise his or her will with respect to the data processing. Yet in many other situations it appears to be a means of preventing the commercialization of personal data rather than a safeguard of personal autonomy. In my opinion, the GDPR’s data paternalism here violates everyone’s fundamental right to decide autonomously on matters of his or her private life: if individuals want their data to be commercialized, so be it.
The challenge for the architects of data paternalism regulations, going forward, will be to weigh privacy protection against respect for personal autonomy. Data paternalism can be a viable concept for enhancing digital life as long as it keeps these two aims in balance. Key factors in this balancing act will certainly be: How sensitive are the personal data to be processed in the particular case? How great are the risks the data processing poses to the private life of the people concerned? After all, my favorite pancake recipe is not as confidential as my online banking password.