With Facebook Inc expected to go public on May 18, the social-networking behemoth’s hyped offering is drawing a combination of heightened investor interest, worries about valuation connected to advertiser expectations, and that old chestnut … privacy.
Experts say the pressure of going public will push Facebook to counteract its “lumpy and slowing” sales growth by aggressively collecting more user data, in a bid to boost revenue by reassuring advertisers (and, by extension, investors) that they’re getting enough bang for their buck.
But are users paying attention to concerns voiced by privacy advocates? After all, Facebook is a much more complicated story than one of simply eroding personal-information security and privacy consciousness.
It’s equally about encouraging the unreflective acceptance of, and the guilty pleasures of complicity in, voyeurism.
Facebook fiercely protects the privacy of its users in their role as voyeurs, and this is a central and essential feature: remove it, and users’ willingness to expose themselves could seriously collapse.
It also protects the privacy of anonymous readers: you cannot see who has read your page, although this would be trivial to implement, and potentially extremely interesting. Attempts to add such a feature have apparently been strongly discouraged or prevented.
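The claim that this would be trivial to implement can be made concrete with a minimal sketch (hypothetical names throughout; this reflects nothing of Facebook’s actual systems): any site that already authenticates its viewers need only log each page view against the profile’s owner.

```python
# Illustrative sketch only -- hypothetical class and method names,
# not Facebook's real code or API.
from collections import defaultdict

class ProfileViewLog:
    """Records which authenticated viewer opened which profile page."""

    def __init__(self):
        # profile owner -> ordered list of viewers
        self._views = defaultdict(list)

    def record_view(self, owner, viewer):
        """Log that `viewer` opened `owner`'s page."""
        self._views[owner].append(viewer)

    def who_viewed(self, owner):
        """What a 'see who read your page' feature would expose to the owner."""
        return list(self._views[owner])

log = ProfileViewLog()
log.record_view("alice", "bob")
log.record_view("alice", "carol")
print(log.who_viewed("alice"))  # -> ['bob', 'carol']
```

The point of the sketch is how little it takes: the site already knows who is viewing; withholding that list from the page’s owner is a deliberate design choice, not a technical limitation.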
So Facebook exposure is not a symmetrical, two-way model in which everyone is open and friendly and privacy is universally set aside. (It is more like the “anonymous reader” web from the dawn of Internet 1.0.)
Not only that, its enthusiasm for exposure is fundamentally selective: only the voyeur’s habits and privacy are hidden and protected (from the subject whose page they view, though perhaps not from the advertisers who, via Facebook as intermediary, can presumably track everything people view; but that is another topic: a second level of voyeurism hidden by a deeper level of non-visibility).
Young users say it would inhibit both their posting and their reading if the subjects could tell who was viewing their page:
“I would not check out a possibly cute boy’s page if he could see it was me and a gang of girls suddenly checking him out”
“I would not post some stuff if I could see it was a bunch of geeky boys perving at me”.
These two statements are telling: anonymity protects not just the viewer’s comfort but the poster’s willingness to post at all.
So, ignorance is bliss: “it may well be happening (and there may even be many others whom I would be unhappy to see viewing my page), but if I can’t see it, I can happily ignore it.”
It is this ignorance that marks one of the critical differences between the “digital world” and a normal public space.
In a real-world public space, you can go out and see, and be seen as someone looking; on Facebook you can only be seen. It is like a sort of masked ball in which, in your role as poster, you wear no mask, but as viewer you are anonymous (unless you choose to post). Equally, in a real domestic environment, as opposed to a public one, with true “friends”, your friends are open enough to be visible in their presence and in their checking you out, whereas Facebook uses the term “friends” for people you would be uncomfortable knowing are watching you.
The Facebook set-up could even be a sort of voluntary Panopticon: you and all the other (voluntarily self-admitted!) inmates are exposed to potentially constant but unseen anonymous surveillance. The difference is that you cannot even see the blank windows of the one-way glass in the central guard tower; all you can see is the pictures on the wall of other people’s selective fantasy versions of their lives, constant fun. How non-scary!
The real Panopticon was notorious for supposedly changing inmate behaviour (repressing acts they did not want to be seen doing) through the consciousness of this constant possible surveillance, even when there was no actual watcher.
But on Facebook, where there is deliberately no up-front consciousness of “who might be watching”, does this chilling, self-repressing effect simply not occur, or is it more subtle?
This model encourages self-denial: training yourself not to care or think about something that may actually be happening, and that would embarrass you, but that the software design helpfully keeps out of sight. Just don’t think about it. If you thought about it, you’d stop.
It confirms the original model of Facebook as essentially a way to let gangs safely spy on others, carefully protecting the voyeurs’ privacy and security against the only people who would care: the subjects of their gaze.
This also raises questions about the risks of the Facebook model for training habits of thought: the necessary suspension of critical thinking about what might be happening to your information. It relies on you actively not caring, and encourages this through the voyeurs’ invisibility, which is convenient and no secret.
It’s not just playing on the naughty-teen model of “see what you can get away with” and “act as if there are no consequences” for unwise self-exposure. It’s also: “enjoy our safe, private voyeur machine; pretend it’s about ‘friends’ when it relies on hiding non-friend access”.
No wonder it’s so successful! Criticising it is like taking the lolly jar away … and most users would prefer to keep accessing their sugar hit in exchange for trading off their own personal security. After all, what they don’t know won’t hurt them, right?