Today, rapid advances in technology have allowed us to be connected in ways that would have seemed unimaginable even a decade ago. But does this connectivity come at a cost? Nick Couldry, Professor of Media, Communications, and Social Theory at the London School of Economics and Political Science, is exploring the two-way bargain that is emerging from new digital business models: as we become more deeply connected through the internet and social media, and rely more heavily on digital tools in fields like health care, we are increasingly open to surveillance. In his research for The Enhancing Life Project, Couldry is examining discourses around data, surveillance, and privacy, with the goal of identifying long-term risks and resolutions for this increasingly pressing problem.
What was the spark for the research you’re pursuing for the Enhancing Life Project? Are these new questions, or an extension of past research, or both?
I’m a sociologist of media, and my focus is on media not in a conventional sense—as texts or programs or things that are produced—but on media as social relations. In particular, I understand media as a way of organizing social relations, with a focus on the big institutions we call media, which hold a special kind of power: symbolic power over the stories we tell about our lives and our societies. More recently, I’ve focused that interest on the power media institutions exercise through the online infrastructure. In particular, I’ve become interested in the data aspects and in what is necessary for that data to be gathered: in other words, the practice of surveillance.
When I joined The Enhancing Life Project, I had the idea to formulate this interest in terms of the "price" of connection. I want to bring into focus a direct conflict at the heart of the sorts of freedoms we take for granted. Connecting to people all around the planet at all times with immense speed is an enormous expansion of human life and an extension of human freedom. But the infrastructure we’re told is necessary to make this possible is based on business models that involve permanent tracking of what we do online, often with only partial consent. What are the long-term consequences of that contradiction? How will we live with that contradiction?
How will you explore this contradiction? What sources and methods will you be using?
I thought the most useful angle was to take at face value the discourses of major organizations about data and to analyze them using narrative analysis—to see what they say, what they don’t say, the metaphors they use, the potential problems with those metaphors. We’ve analyzed a lot of World Economic Forum documents, OECD reports, reports by consultants to the US Government and the UN—basically, high-level synoptic reports about data and what’s at stake in the big data economy. And we’ve found a very interesting pattern. A key move made early on by the authors of these reports, such as the World Economic Forum, is to say that data is like a natural resource, like oil. It’s just there. And because it’s a natural resource, these reports argue, there’s an obligation to use it, and to use it well and responsibly. But of course data isn’t something that’s just there. It was given by us. Sometimes our consent was asked for, but sometimes not.
That’s a key thing I want to criticize—this idea of data as a natural resource—because I think it’s an evasion of the moral issue of how the data was initially collected. My research team will also think through a positive notion of autonomy, drawing on philosophical sources and commentators like Hegel. And we’ll look at case studies where these general discourses around data are applied: the health sector and the education industry.
What’s the most surprising or interesting challenge you’ve encountered so far in your Enhancing Life Project research?
In a sense it’s a very simple project—we’re just analyzing documents. But in another sense it’s a difficult project, because you have to get to the point where you can see the weaknesses in the argument that we should all just open ourselves up to surveillance in order to benefit from the sharing of data. There are people who argue that, to understand their bodies, they need to track them with data, send the information off to institutions that can analyze it, and trust that those institutions won’t abuse it. That could be seen as giving you power over your body. But what are the real power implications of doing that all the time? If we no longer have any private space of our own, free from surveillance and monitoring, is that really such a good deal?
Not everyone will agree, but I think something might be damaged when something or someone is looking into your space of reflection, the space we have always assumed was private, where you reflect with yourself and strive to be a better person—really, to enhance your life. For me, it’s a big problem when we imagine that space with cameras permanently installed in it. It’s not an easy problem to get into view, because we haven’t had to define that space and defend it before. But our new ability to collect big data raises important new contradictions for us to consider. If freedom is essential to an enhanced life, but the precondition of economic and maybe human development generally is a process that increasingly undermines our autonomy as subjects, then we have a significant moral problem to resolve.