Forever young? A “right to be forgotten” in the Internet Age

I’ve often said that I feel sorry for the generation just five years younger than me (I was born in 1989), most of whom have had publicly accessible online profiles since pre-puberty, for they’ll never have the opportunity to forget themselves. There will always be a record of their youth.

I’ve said that to friends, referring to the fact that most of us, when we ‘grow up’, look back upon our younger selves with a degree of shame or shock, and ask questions like: ‘what was I thinking?!’, ‘did I really talk like that?!’, and ‘how can that be my hair?!’

Fortunately for those who grew up before the age of social media, those faded photos we’d rather not remember are usually stored away in a box in an attic somewhere. If not forgotten, they’re rarely recalled. Personally, it’s not bad haircuts but regrettable transcripts of my 13-year-old self’s MSN Messenger and Bebo conversations that I dread ever being dragged back into the light of day.

For pre-internet generations, the past can be kept where one left it, moved on from. One’s old self can be forgotten and a new self reinvented.

Not so in the internet age – and what progress that represents!

In today’s world, “we are increasingly defined by our Google profile” and “many people are unhappy with the results…” says Anna Fraser, Vic Uni student and author of “Should there be a right to be forgotten (the right to make search engines forget about you) in New Zealand? An analysis of Google v Spain“.

Search engines don’t (yet, anyway) discriminate in generating results.

They disregard – with cold algorithmic indifference – the possibility that someone might rather others not have access to discreditable information about them.

Unless you live in the EU that is, where individuals have recently been granted a so-called “right to be forgotten”.

The Google Spain Decision

It’s Privacy Week (as I write) in New Zealand, an apt time to consider what a right to be forgotten – more accurately, a right to have links to unwanted personal information hidden from a search engine’s result listings – actually means. It’s also a good time to ask whether New Zealand’s privacy laws aren’t already sufficient to prevent harm without stifling the promise of a free internet.

In 2014 the Court of Justice of the European Union (CJEU), in its landmark Google Spain decision, recognised a so-called “right to be forgotten”, allowing individuals in the EU to make prejudicial but true information about them less accessible to anyone searching their name online. Less a right to actually be forgotten, it’s more a right to hide undesirable personal information.

The Privacy Commissioner John Edwards has blogged on the idea of introducing a similar “right to be forgotten” to New Zealand’s privacy laws, asking New Zealanders where they think the balance should lie between rights to personal privacy and the right to freedom of expression and information. Remarking that Google Spain had been met with “criticism, astonishment, suspicion, relief and applause”, he says the implications of the right to be forgotten are “not as clear cut as you might think”.

Google Spain – a case about a history of bad debt

The whole thing started like this…

In 1998 a Spanish newspaper published two articles about one Mr González – a mere 35 words each – recording that Mr González’s home was to be auctioned to pay off his debts. The articles were lawfully published online. More than a decade later, in the events leading to the CJEU decision, Mr González realised that any internet user who typed his name into Google was likely to receive links to the articles. The past was over, he argued, and it was prejudicial to his present and future that information about his debt history continue to be recalled and remembered. So he demanded that Google break the links, and that the original articles be removed by their publisher. Only partly successful, Mr González won the right to have the links removed from Google’s search results. The original articles remain online – they’re just much more difficult to find.

It’s an attempt to take us back, “to a time when people had to go to the library to research past debts rather than instantly downloading them.”

When to dig up dirt, to muckrake, meant more than paying a computer hacker.

While not a resounding victory for Mr González, the implications of Google Spain are massive, and managing the change of law has proved a mammoth task, which has so far been conveniently and disturbingly outsourced to Google itself.

“Don’t Be Evil” – Google’s power to decide

The day following Google Spain, more than 10,000 people in the EU requested that links to information relating to them be taken down. Among the first to seek to enforce the newly recognised right were a convicted child pornographer and a doctor who’d received a poor review of his medical practice.

Here’s how Google – whose famous motto is “don’t be evil” – currently handles take-down requests.

An online web form allows individuals to request the removal of unwanted search result links. As the process is controlled by Google, a private company, there is (ironically?) little information publicly available about how decisions that must balance fundamental rights to privacy and information are made.

What is known is that such decisions are made by Google’s own internal “senior panel”, which discusses and votes on whether links should be removed, taking into account “the characteristics of the individual, the publisher of the information, and the nature of the information available via the link”. One thinks of 1984 protagonist Winston Smith, at his job in the Ministry of Truth, destroying past newspapers and re-writing new ones.

If Google deems that removal is justified, it will “delist” links insofar as they are displayed against a search of the data subject’s name, and then only from the specific domain (geographical locality) in which the individual lives (although Google is in the process of extending the right to be forgotten so that links are removed from other countries’ domains too).

As with other privacy law, an individual’s public profile will also be relevant to the assessment of privacy rights – the more famous you are, the less you can expect a right to privacy.

An aside: a recent Italian court decided that an individual is also entitled to “edit” the snippet / blurb of information that appears below links listed in Google’s search results if it is deemed misleading. Automated algorithms currently produce that content without human intervention, but soon Google might be required to take on an editing capacity too.

Google’s responsibility has been described as “quasi-judicial” – it has the power to make the sorts of decisions that are usually made by Governments and Courts.

Even Google’s own chairman Eric Schmidt doesn’t like the position his company has been put in, publicly decrying: “we didn’t ask for it”. Perhaps unsurprisingly – considering the principles of free information behind his revolutionary online encyclopaedia – Wikipedia founder Jimmy Wales is also among the most vocal opponents of the “right to be forgotten” and has lobbied the European Parliament to overturn the CJEU’s Google Spain decision, deriding it as a “right to censor some information that you don’t like”.

Privacy concerns or cosmetic censorship?

Since launching its “take-down” web form service in July 2014 Google has evaluated more than 1 million links and removed 41 per cent of those from its search results. Time has shown that Google will almost certainly remove links to articles naming the victims of crime, and it’s likely to remove links revealing a person’s involvement with minor crime or quashed convictions.

This guidance may be of practical benefit, maybe helping to inform NZ legislators and courts, but it hardly represents the inspired jurisprudence of law and policy-makers.

More likely Google’s decisions will be based on self-interest, market preservation, and retention of profitability for its shareholders. Google is a business.

It appears willing to remove links to articles containing minor yet insensitive personal details, such as residential addresses and opinions. An article detailing a competition entered by an individual when they were younger has been effectively hidden. Same for an article about minor crimes committed ten years previously by a school teacher.

More widely accepted is Google’s apparent reluctance to remove links when they lead to information about an individual’s professional capacity and activities. Google has chosen not to remove links to information about a couple arrested for business fraud, a professional’s arrest for financial crimes, a doctor’s botched procedure and an individual’s dismissal for sexual crimes committed at work.

As a Dutch court put it when faced with interpreting the new law: the “right to be forgotten” is not a right to remove articles which “may be unpleasant, but [are] not unlawful” from the eyes of the public. Although, that’s arguably exactly the right the appellant won in Google Spain.

It is a natural consequence of high profile crime, for example, that criminals receive public notoriety. Offenders must be prepared to live with the consequences of public knowledge of their decisions and actions, which can be far longer lasting, possibly permanent, in the internet age – but are arguably no less deserved. Do the crime, do the time, essentially.

But what about when people want the world to “forget” their less serious indiscretions, to make it more difficult for undesirable but true facts from their past to be known in the present?

With the internet, we are “embarking on a great experiment of never forgetting”, after all.

The “ubiquitous hoarding” of personal information, whether for national security or targeted marketing or another purpose, is not going away.

But I’ve got nothing to hide …

“I’ve got nothing to worry about, because I’ve got nothing to hide” has become such a dog answer in the age of internet-enabled mass surveillance. And it will probably bite its owners in the ass someday.

Everyone has something to hide, especially when they know others are looking.

Google’s search engine has the power to create a digital profile of an individual using the aggregation of disparate data about them. Taken individually, this data is mostly meaningless – a record of our thousands of discrete actions and appearances/references online. Taken together these disparate data points can paint an intimate picture of a person’s life, public and private.

And do we really want our doctors to have the ability to hide evidence of malpractice, lawyers erasing records of misconduct, sex offenders deleting references to past crimes, rehabilitated drug addicts erasing evidence of their addiction, former prostitutes erasing the record of their work history?

Where do we draw the line?

What other kinds of information might people want to conceal?

Is a “right to be forgotten” even that new?

The Privacy Commissioner blogs that the so-called “new” right has its origins in pre-internet jurisprudence and principles, and adds his distaste for the phrasing “right to be forgotten”.

“It is inaccurate, imprecise, and impossible”.

Really it’s a right to “practical obscurity” – a concept recognised more than 30 years ago by the US courts – which appreciates that while information may have been “publicly available” (such as on record at a courthouse or local council office) and open for inspection, “the passage of time and geographical obstacles to overcome for most people to gain access to the information guaranteed a degree of privacy in relation to that material”.

Even New Zealand’s High Court in Tucker – a leading case in our privacy law jurisprudence – recognised that privacy could potentially “grow back” over previously public information. Tucker was about a man with a bad ticker, who – ingeniously for the ‘80s – attempted to “crowdfund” the heart transplant that would save his life. He might have been successful, had media not published details of Tucker’s past convictions for sexual offending against children, derailing support for his operation. The case was never fully heard, and it remains academic whether New Zealand’s courts would have recognised that Tucker had an enforceable right to have those details of his offensive past “forgotten” or suppressed.

A right to be forgotten in New Zealand?

The archival permanence of internet record keeping makes it difficult for others to ‘forget’ information that would previously, pre-Google, have faded from memory.

The “core issue arises when information generated by Google is seen as prejudicial” by an individual who wants it removed, Ms Fraser writes. “How should their right to privacy be balanced with the rights of freedom of expression and access to information…?”

She says “a right to be forgotten should aim to reflect social mores about what a person is entitled to put behind them and what remains society’s business ad infinitum”.

I would add that entitlement is nothing without ability, and, really, if the internet has proven any old aphorism true it’s this: you can run but you can’t hide! Entitlement to remove links is one thing, ability to be forgotten another. Reading this, you now know about Mr G’s bad debts – the very information he wanted hidden from Google’s search listings, which led to the “right to be forgotten”.

Ms Fraser says the soon-to-enter-force (next year) Harmful Digital Communications Act (HDCA), designed to “deter, prevent, and mitigate harm caused to individuals by digital communications; and provide victims … with a quick and efficient means of redress”, should be sufficient to protect New Zealanders’ legitimate rights to online privacy.

The HDCA and existing law can handle it (maybe)

The HDCA will establish an independent agency, whose experts could conduct the types of ethical analyses Google currently undertakes in the EU, she says. Where “sensitive personal facts” published online can be proved to be harmful to individuals, links to those facts will be taken down. A right to appeal through the judicial hierarchy would exist as usual.

Wellington barrister and privacy law lecturer Steven Price agrees.

The HDCA has potential to address these issues, and a “right to be forgotten” is “silly”, he says.

“There is no such right and there can’t be one.

“There can be a power to remove material from certain places if it is found to be interfering with legal rights. Pretty much all countries have that in some form.”

New Zealand already has relatively well-developed privacy laws that give individuals some power over how accessible their private information is to the public.

But under the developing European law, the rules for removal are “vague”, and their “application is, in practical terms, up to Google”.

“In NZ we already have the ability to have material removed for various privacy reasons via a variety of mechanisms – the courts can grant injunctions preventing the publication or continued publication of private facts where publication is highly offensive and there’s no countervailing public interest.

“The Press Council, Online Media Standards Authority and Broadcasting Standards Authority can rule on complaints and the first two can order removal of material in some cases.

“The Harmful Digital Communications Act will add to this when its civil regime comes into force, in particular because it contains the power to make takedown orders.”

However, crucially, Mr Price says the Act is not designed to expand the reach of the law so much as to make the remedies offered by courts more accessible. It’s about a better way to enforce existing privacy law and principles.

“It’s not entirely clear how it will be interpreted, and some are worried that it may get out of hand.

“But it contains a range of measures aiming to limit the takedown powers, and expressly says that any takedown order must be consistent with the NZ Bill of Rights Act, which protects freedom of expression except where a particular ruling is reasonable and demonstrably justified in a free and democratic society.”

Forgive but don’t forget?

This is where things get interesting, for both Ms Fraser and the Privacy Commissioner.

Ms Fraser quotes an Oxford academic who supports a right to be forgotten: “We need to forget the details in order to see the forest and not the trees – if you have digital memories, you can only see the trees”.

Other academics lament: “aphorisms such as ‘time heals all wounds’ and ‘forgive and forget’ have been unreasonably relegated by the internet to the ashbin of history”.

Though, reference to an “ashbin” probably reveals the century, nay millennium, in which such uninspired thinking is founded.

Such perspectives consider that a “digital right to be forgotten should seek to mimic the function of real memory”, Ms Fraser says. And “…following this idea, all information that a reasonable person would forget in real life should have the capability of being digitally ‘forgotten’ by Google”.

Contrasting this conservatism is another set of perspectives that balance the rights to privacy and expression differently. It goes like this: “there is inherent value in the right to know and therefore nobody should be entitled” to the right to decide what gets forgotten or hidden from the internet’s infallible memory.

If generations before us were fortunate to avoid leaving any digital trace, those that follow “may leave nothing but ‘sanitised authorised biographies’”, with individuals defined by “haphazard and piecemeal collections of our finest and foulest moments”.

Sinking deeper into the implications of censoring history, arguments against a right to be forgotten have advanced the idea that future generations won’t require that their indiscretions be forgotten, but merely forgiven.

Because “humans are weak and everyone misbehaves there should be public acceptance of these imperfections”, it’s been said. The knowledge that nothing will be forgotten “should ‘increase understanding that human weakness is universal’ extending empathy and offering opportunities to those who have transgressed”.

To err is human, to forgive divine, Alexander Pope reckoned.

To forget is human too, but to forgive and not forget will be the future of humanity in the post-internet age.

Besides, freedom of information is a pillar of society that keeps us moving forward in the right direction. A right to be forgotten might lead to “the society that was forgotten”, opponents warn.

Censorship, particularly in the hands of Google, is likely to err on the side of caution – faced with some 2,500 removal requests each day, Google is more likely to remove than to refuse removal – which represents a (to this author, unreasonable and dangerous) chilling effect on free speech.

The Big Question

What information should an individual be entitled to remove from the digital record of history, in a free and democratic society that values the freedom of expression – especially when that information is accurate?

“A free internet has effectively “harnessed the world’s interests, creativity, and intelligence to produce a colossal archive of everything,” those against the EU law say.

“People should have the right to know true information.”

If construed too widely, a “right to be forgotten” becomes a right to rewrite the past. And doesn’t that undermine many of the benefits and promises of the internet age? Wouldn’t it destroy the archive of everything?

Isn’t it a bit 1984?

Isn’t it like putting the leader of the Brave New World in which we live – Google, a company that says “Don’t Be Evil” – in charge of spin-cycling history?

If knowledge is power, isn’t it dangerous to give anyone, especially a private company, the ability to decide what information will and won’t be archived online?

How would you balance the rights to privacy and the rights to expression and free access to information?

Where would you draw the line?
