The ‘right to be forgotten’ has been labelled censorship and called disastrous for freedom of expression. In this paper, we explain that effecting the ‘right to be forgotten’ with regard to search results is indeed ‘censorship’ at the level of information retrieval. We claim, however, that this is the least heavy-handed yet most effective means of minimising censorship overall, while enabling people to evolve beyond their past opinions. We argue that applying the ‘right to be forgotten’ to search results is not merely a matter of ‘censoring’ search engines; seen from a broader perspective, we, as a society, will inevitably have to deal with developments in information technologies and choose between three types of ‘censorship’: (1) censorship of original sources, that is, at the level of information storage; (2) censorship at the level of the initial encoding of that information; or (3) censorship at the level of information retrieval. These three levels at which ‘censorship’ can take place are the three basic elements of the memory process, whether biological, technological, or hybrid through the use of mnemonic technologies. Applying censorship as a means of ‘forgetting’ in the collective hybrid memory of the Web enables us to counter, at least partially, the functioning of the Web as a ‘Panopticon over Time’.
Philosophy & Technology – Springer Journals
Published: Oct 15, 2016