The ongoing debate about Europe’s so-called ‘right to be forgotten’ ruling on search engines has shone a light onto a key pressure point between technology and society. Simply put, the ability of digital technology to remember clashes with the human societal need to forgive and forget.
Our technology now enables total recall of the digital tracks and traces we leave as we go about our lives. It’s trivially cheap to store vast amounts of data. And for tech giants like Google, there’s no business imperative to ever forget an iota of our information. Data is the business. It’s where the value lies now and where more will be created in future as new devices come on stream generating ever more data to be mined, analyzed, joined up and the resulting intel on us sold off, or used to fashion new, more effective products.
Sure you might want a tech company to create some cool new service that delights you. But would you be willing to let them implant a chip in your brain so they can read your mind during their R&D process to really push your buttons? Where do you draw the line between you as a person, and the services you want to use?
There’s a problem with total recall. It doesn’t allow us as a society to forget. And that means, paradoxically, we lose something. Perfect memory engenders individual paralysis — because any legacy of personal failure is not allowed to fade into the background. And individuals are not, therefore, encouraged to evolve and move on.
Total recall shuts us down. It encourages conformity and a lack of risk taking. If trying to do something results in a failure that follows you around forever then the risk of trying is magnified — so maybe you don’t bother trying in the first place. It’s anti-creative, anti-experimental, even anti-entrepreneur. To cite a Steve Jobs-ism, it’s anti-foolish.
Name the human society where total recall is considered the norm. It’s far more human to forget. Forgetting allows for new beginnings. As a creative medium, a little forgetting goes a long way. Too much recall, meanwhile, smacks of dystopia, or prison, or the dragnet digital surveillance programs set up in secret by our own governments. It’s hardly an accident that corporate power and state machinery are aligning along the same digital fault lines here.
There was an attempt under the Stasi in East Germany to exhaustively document individual citizens. To create systemic, mechanized total recall — and that was using records kept on paper. Digital technology enables so much more.
These are the issues currently being wrestled with in Europe. Regulators seeking to safeguard individual privacy rights are clashing with tech giants like Google who monetize their algorithm-powered total recall.
Google continues to spin the European ruling as ‘knowledge censorship’ — and has so far turned the process of de-indexing private individuals’ links into a theatrical farce by co-opting the media, whose business models generally align with its own here, to be its outraged mouthpiece. But that spin just obscures the genuine nuance of this debate. (Gigaom’s David Meyer wrote a great piece this week dissecting some of the political complexities involved in the rights and wrongs of online censorship.)
Consider startups like Snapchat or Secret or Whisper. A new breed of technology startup is thinking different vs the old web guard — and winning users exactly because they recognize that humans need multiple social layers to interact; that humans find anonymity liberating; that it’s normal to have different personas in different social contexts; that it’s ok to screw up now and again; and that no one in their right mind wants some silly photo they snapped on a whim to follow them around forever and ever online.
It’s no accident startups that get all that are getting traction right now. Newer tech companies understand the pressure points being created by technology and are using the same technology — which is, after all, just a tool — to relieve those pressure points. And winning fans in the process.
Even Google recognizes some of these truths. Just last week it gave the green light to pseudonymous behaviour on its social network Google+, rolling back from its original requirement that people be tied to their real names.
Facebook too, another data-harvesting giant, has been forced to concede attention to less rigidly identity-centric rivals like Snapchat — and has ended up building ephemeral side products (not to mention buying a private messaging service with fantastic user engagement for a massive $19 billion) that give people a space to interact privately, and share content without fear of future consequences.
What’s posted on Secret is not the same content that gets posted on Facebook — but you can bet your life the two services share some of the same users. The point is, information and communications are always layered and stratified. To pretend otherwise — i.e. to argue that there’s some single sum-of-all-knowledge platform where everyone’s information is supposed to persist ad infinitum — is the skewed view here.
The bottom line is for creativity to flow society needs to feel fluid. And that requires freedom to try and space to fail.
And while technology threatens that freedom — at a big platform level — by its ability to capture, store and make trivially retrievable everything we do, conflating the individual with whatever of their actions the platform has on record, it can help too: via smaller services that reintroduce some of the myriad layers, channels and outlets we need to function and thrive.
Still, that’s not enough, given the huge power and reach of the big platforms — platforms that have captured and continue to capture so much of our data. In Facebook’s case that extends to harvesting data on people who have never even signed up to use its services — meaning there’s no opting out of your personal data being stored on its servers, unless you live off the digital grid entirely (good luck!).
Or — in the case of a dominant public search platform like Google, which organizes individuals’ information to create a proprietary hierarchy of retrieval — there is no practical opt-out for individuals, and no individual input into what Google’s algorithms determine is immediately attached to a search for your name.
Europe’s right to be forgotten ruling offers a small outlet for private individuals to be heard when it comes to their own info-hierarchy.
It’s a small step — and not without its problems. The implementation currently has glaring holes that allow for easy workarounds (for instance based on territoriality) and loopholes that allow the entire process to be subverted (most obviously as media outlets publish stories about de-indexed links, thereby turning old, irrelevant information into something current and newsworthy again).
What’s needed, more generally, are more creative approaches to the storage of information about private individuals. Academics Julia Powles and Luciano Floridi recently wrote elegantly about this concept in The Guardian, calling for a process of information sedimentation — aka “solutions, adapted to the infosphere, that enable us, individually and as a society, to remember without recalling; to live with, but also beyond, experience; to acknowledge without constraining”.
This is not about deleting knowledge or censoring/sanitizing behaviour; it’s about being appropriately sympathetic to the ephemeral character of (human) memory — which, being flexible rather than rigid, allows individuals and societies to move on.
Intel’s David Hoffman postulated last year the possibility of creating an industry self regulatory body to arbitrate on when web users should have a right to have certain personal online information obscured.
A so-called Obscurity Center might be a better solution than expecting search engines to be judge and jury on when to grant an individual request for obscurity, as is now the situation in Europe (and part of the reason we are seeing so many problems with the de-indexing ruling). Meanwhile, relying on overburdened national regulators to step in and arbitrate in a timely fashion seems just as forlorn a hope. Technology always outstrips the establishment.
Perhaps the personal information of private individuals that’s stored and made searchable on big dominant platforms like search engines and social networks should be required to have an expiry date, or made intentionally and exponentially more difficult to locate as time goes on.
Twitter does this quite well, with an interface that does not permanently delete information but layers it chronologically, making it more arduous to dig down into the sediment of an individual’s tweets. It’s not that you can’t go looking for the things someone was tweeting five years ago, but it’s not a trivial task to summon their past into the present — and the amount of effort required to unearth that past voice becomes a reflective reminder of its context as a past info-artifact, divorced from their current identity.
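That kind of chronological layering can be sketched, purely as an illustration rather than any platform’s actual ranking code, as an exponential time-decay applied to retrieval scores: the older an item, the exponentially smaller its score, so surfacing it demands ever more deliberate digging. The one-year half-life here is an assumed tuning knob, not a real product parameter.

```python
from datetime import datetime, timezone

# Hypothetical illustration: weight stored items by relevance times an
# exponential time-decay, so older posts sink into the "sediment".
HALF_LIFE_DAYS = 365  # assumed tuning parameter: score halves every year

def decayed_score(relevance: float, posted: datetime, now: datetime) -> float:
    """Relevance decays by half for every HALF_LIFE_DAYS of age."""
    age_days = (now - posted).total_seconds() / 86400
    return relevance * 0.5 ** (age_days / HALF_LIFE_DAYS)

now = datetime(2014, 7, 1, tzinfo=timezone.utc)
posts = [
    ("fresh post",     1.0, datetime(2014, 6, 1, tzinfo=timezone.utc)),
    ("last year",      1.0, datetime(2013, 7, 1, tzinfo=timezone.utc)),
    ("five years ago", 1.0, datetime(2009, 7, 1, tzinfo=timezone.utc)),
]
for label, relevance, posted in posts:
    print(label, round(decayed_score(relevance, posted, now), 3))
```

With a one-year half-life, a post from five years ago scores roughly 3 percent of a fresh one: still findable, just buried deep in the sediment rather than deleted.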
No one said this complex problem had an easy fix. But to argue the issue itself is black and white — an open and shut case of ‘knowledge vs censorship’ — is to belie the complexities of human identity and social interaction. We are not simple creatures. We are full of contradictions and capriciousness. And our tools should therefore not seek to pin us down, or paint us as black or white — but support and reflect our multifaceted nature.
And, while the big platforms are pouring time, money and resources into resisting this reality, there are ample opportunities for startups to innovate — to examine these socio-tech pressure points and come up with creative solutions that can help individuals and the Internet grow together.
So startups, here’s one more thing for your to-do list: remember the importance of forgetting.
[Image by final gather via Flickr]