On a Sunday afternoon last April, long before COVID-19, my nine-year-old set up a fish tank, my 13-year-old took me to the movies and The New York Times told me to panic.
“It’s time to panic about privacy,” wrote columnist Farhad Manjoo. “Each time you buy some new device or service that trades in private information — your DNA, your location, your online activity — you are gambling on an uncertain and unprotected future. Here is the stark truth: we in the West are building a surveillance state no less totalitarian than the one the Chinese government is rigging up.”
The newspaper then published a series of investigative reports. One revealed how US law enforcement is using Google’s Sensorvault to find criminal suspects (and witnesses) by drawing on location data, often without users knowing. Another showed how China is using facial recognition software and public cameras to monitor and control the minority Uighur population, even when they leave their home province.
What happened? Until recently, privacy lurked in the shadows. Suddenly it’s stumbled into the light to become a defining issue of our time.
In 2013, whistleblower Edward Snowden popped up in a Hong Kong hotel room to drop bombshells about government surveillance. “We are building the biggest weapon for oppression in the history of mankind,” he said.
Perhaps that was the moment when privacy forced its way into the public consciousness. Or perhaps it was the following year, when nude photos of Jennifer Lawrence, Selena Gomez and dozens more were leaked to 4chan, and then to the wider web. Several celebrities responded with heartfelt pleas that the content not be shared. It was shared anyway. Or perhaps it was in 2015, when hackers raised intriguing ethical questions by “doxing” — that is, exposing — the identities of men using the Ashley Madison adultery website. Some of the men committed suicide; others initiated a lawsuit, which ended in a multi-million dollar payout by Ashley Madison’s parent company. Or perhaps it was in 2017, when a Canadian court ordered a company that makes vibrators to pay out $4 million for tracking users’ sexual activity.
To cap it all, we then learnt that democracy had taken a hit. In March 2018, it emerged that the data of 87 million Facebook users had been harvested in an attempt to influence elections in the US, the UK and many, many more countries. This was the Cambridge Analytica scandal, in which a Facebook design flaw was exploited by a seemingly harmless psychological quiz app to manipulate the voting process. In 2019, the Federal Trade Commission (FTC) responded to this (and other transgressions) by imposing a US$5 billion penalty. “The $5 billion penalty against Facebook is … almost 20 times greater than the largest privacy or data security penalty ever imposed worldwide,” the FTC said. “It is one of the largest penalties ever assessed by the U.S. government for any violation.”
Admittedly, the FTC’s decision wasn’t unanimous. Two of the five commissioners wanted the penalty to be bigger.
Cambridge Analytica showed how invasions of privacy can compromise not just individuals, but society. When privacy is threatened, democracy can falter. All of which suggests that privacy is collective and “networked”, rather than purely individualistic. My privacy matters for your benefit, and vice versa. Privacy only properly makes sense when we think of it as relational.
This was a conclusion I came to in a very roundabout manner. For nearly 20 years, I worked at the Sydney Morning Herald, where one of my regular topics was film. Another regular topic was fatherhood, which meant that I was also writing about my wife and children. I didn’t realise it at the time, but here was a neat illustration of the relational nature of privacy. As I wrote about my life, the content was about my family, some of it personal.
In early 2013, having left the newspaper, I embarked on a PhD on the ethics of new media, and privacy became my focus. And the more I studied, the more I became intrigued by a single sentence dating from 1785. It comes from Immanuel Kant’s Groundwork of the Metaphysics of Morals.
The Groundwork is slim, but its impact is big. At its heart is the notion of a “categorical imperative”: a supreme moral principle. Kant expressed this categorical imperative in several forms, but the most enduring is the “formula of humanity”, which tells us that we must never treat another, including ourselves, “merely as a means”, but only ever as autonomous agents who are free to chart their own course. The formula of humanity prohibits exploitation and mandates egalitarianism. It tells us to treat all persons as imperfect rational beings of absolute worth. It commands us all to act with respect. Today, it is a cornerstone of human rights law. The task of my research became to apply this single sentence to internet privacy.
As my research took shape, three main questions emerged. The first asks: what is the problem? In the examples cited above, I’ve barely skimmed the surface. Philosophers Jeremy Bentham and Michel Foucault described the Panopticon, in which surveillance leads to conformity and obedience. In Panopticon 3.0, we step into the net, where we are all surveillers, and all surveilled. Potentially, everyone watches everyone, even into the future.
The second main question is in two parts, and concerns the meaning and value of privacy. It’s the second part that often feels particularly challenging. Why does privacy matter? Those with nothing to hide have nothing to fear, right? While we’ve all heard doomsayers invoke the totalitarian nightmare depicted in George Orwell’s 1984 – where privacy and freedom are crushed beneath Big Brother’s boot heel – others have been resolutely optimistic.
In 1982, science fiction novelist Isaac Asimov portrayed the utopian possibilities of a world of total openness and connection. In Foundation’s Edge, Asimov described the planet Gaia, where humans, with the help of robots, have developed a collective consciousness that binds all living objects, and even some inanimate objects. Here, there is no privacy, and the result is a peaceful, blissful paradise where each person lives as part of a networked super-organism. “It seems to me,” says one character in Foundation’s Edge, “that the advance of civilization is nothing but an exercise in the limiting of privacy.”
Gaia is an imaginary world. In our world, I suggest, privacy matters a great deal. Everyone has something to hide, and something to fear. Without privacy, our humanity is diminished. Without privacy, we cannot be free to think, act and express ourselves fully. Without privacy, we cannot befriend or love, given that my closest relationships are founded on trust, forged in part by keeping one another’s confidences and secrets. And without privacy, society and democracy cannot flourish.
And yes, I am aware of the irony that someone who once wrote a blog and a book about becoming a father is now arguing for the value of reticence.
Finally, the third main question addressed in my research asks how we might best protect privacy. Here, building on Kant’s categorical imperative, I argue for law that protects a privacy that is not just individualistic, but relational. Such law ought to take Europe’s General Data Protection Regulation, or GDPR, as its template. Further, such law ought to take its cue from consumer law by outlawing misleading and deceptive conduct and mandating fairness and transparency. And such law needs extra-legal supports, in the form of social norms, market forces and, above all, coding. For instance, there is a key role for ‘privacy by design’, which recognises that privacy needs to be coded into new services and platforms at the outset.
There is one caveat, however. Privacy matters, but must always be balanced against other rights, interests and freedoms. Big data has tremendous potential to enhance the lives of individuals and society: to ease traffic congestion; to solve crimes; to advance medical research. Right now, personal data can help us combat COVID-19. The question is, how do we strike an appropriate balance?
In 2019, the internet turned 50, the web turned 30 and Facebook turned 15. Unfortunately, confusion reigns. “The online world has become so murky that no one has a complete view,” wrote tech innovator Jaron Lanier in 2018. “The strange new truth is that almost no one has privacy and yet no one knows what’s going on.”
Net Privacy is an attempt to figure out what’s going on, and then to propose practical solutions by applying a clear ethical framework. Granted, there are other approaches we could apply, but my suspicion is that other legitimate ethical frameworks might yield a similar analysis.
That’s merely my suspicion. What I can say with more certainty is that on an April afternoon last year my family and I installed a fish tank and saw a movie. And later that night, at a casual dinner with friends, my wife and I shared political opinions and risqué jokes. These opinions and jokes were private. Or were they? After all, the New York Times was telling us to panic.
Here on our island continent, most of us are coast dwellers. At a young age, many of us are taught the basics of the ocean: how rips and currents flow; when conditions are dangerous; how to avoid sharks and stingers. And the first rule is: don’t panic. If you panic, you’re more likely to drown. Right now, we’re drowning in data abuses and privacy violations. Still, panic isn’t the best response. What we need to do is help each other make it back to shore, and to a particular sort of freedom.
Sacha Molitorisz is a postdoctoral research fellow at the Centre for Media Transition at the University of Technology Sydney. This is an edited extract from Net Privacy: How we can be free in an age of surveillance, out now through New South Books.