European intellectual property and data protection in the digital-algorithmic economy: a role reversal(?)

Abstract

The author
Giulia Schneider holds a law degree from Sant’Anna School of Advanced Studies in Pisa (Italy). She is currently a PhD candidate in Legal Studies at Bocconi University in Milan (Italy). Her research interests focus on the regulation of digital markets.

This article
This article investigates the evolution of the interaction between European IP and data protection laws in the algorithmic economy. It seeks to show the unsuitability of traditional legal theories with respect to the newly emerging paradigms. As a consequence of the reforms introduced over the past few years, the European IP and data protection frameworks have undergone a shift in the systemic functions they respectively carry out. By virtue of the extension of artistic IP tools to industrial information assets and the introduction of sui generis IP tools protecting industrial information as such, the European IP system has taken on a new control and secretization function regarding competitively sensitive information. In the meantime, the General Data Protection Regulation (GDPR) has introduced new transparency requirements, encouraging the production and release of information regarding the logic of, and the risks stemming from, (mostly automated) algorithmic data processing. This paper ultimately demonstrates that a role reversal between IP and data protection has occurred: the new data protection framework seems indeed to have taken up the new function of mitigating the ‘secretizing’ pressure of the current European IP system.

1. Introduction

The relationship between industrial intellectual property (IP) and data protection laws has increasingly attracted the attention of legal scholarship enquiring into the countenance of businesses’ and individuals’ rights in the digital-algorithmic space.1 These two branches of the law have been connected in different ways: in terms of structural parallelism,2 supportive exchange3 and, ultimately, conflict.4 At a general level, such variously oriented literature reflects how, in the current technical environment, the boundaries between industrial IP and data protection are becoming increasingly blurred.5

This article draws upon this literature and contributes to the present debate by tracing the evolution of the relationship between European data protection and IP through the transition from an industrial to an algorithmic-information era. It argues that in the digital-algorithmic market the interaction between the two regulatory branches has become more thought-provoking than ever due to a ‘gene mutation’ that has come to affect both the IP and the data protection paradigm as a consequence of the latest reforms: as a result of these structural changes, European IP and data protection laws appear to fulfil, respectively, new and opposed systemic functions, which in turn are giving rise to unexpected overlaps. Both laws have indeed come to regulate the production and availability of digital industrial information,6 allocating in different ways the rights of control over, and access to, algorithms and the data generated by them.

The analysis is organized as follows. (i) First, reference will be made to classical justification theories of both IP and data protection law. These theories provide the analytical framework against which current European IP and data protection regimes will be tested.
(ii) Against the backdrop of such a framework, the analysis of newly enacted IP and data protection provisions will show the inadequacy of traditional justification theories. (iii) Ultimately, a new theoretical framework will be outlined, highlighting the functions carried out by each system in the digital-algorithmic economy. Using this framework, newly emerging conflicts between the two branches of European law will be identified.

2. The invention-based economy: the origins of IP and data protection laws

The origins of industrial IP and data protection laws can be traced back to an economic environment in which industrial products were at the centre of competition and where personal data was not a trading commodity.7 Accordingly, both regulatory frameworks were originally supported by precise theoretical justifications. The objects of protection—inventions on the one hand and personal data on the other—were different, and so were the functions respectively assigned to each regulatory system.

Classical industrial IP theory is rooted in utilitarian and labour principles.8 Although differently tailored, these theories ultimately stress the need to incentivize inventors and economically reward them in return for the innovation they bring about9: the inventor is entitled to an exclusive right for having transferred his invention to society. Seen from a different perspective, the inventor will gain an exclusive right over his invention if he discloses it to society.10 This means that at the very core of the invention-based IP system lies a socially oriented transparency pursuit, well reflected in the structural disclosure requirement of patent protection.11 As has been said, ‘the essence of the patent system is transparency and disclosure’.12 The disclosure requirement is directly aimed at enriching the public domain and enhancing the progress of technology.13 It is thus intimately connected with the long-term innovation goals promoted by the public availability of new knowledge.14 This is well reflected in the most recent discussions carried out at both theoretical15 and practical levels, where the disclosure requirement has been placed at the centre of reform proposals aimed at strengthening patents’ sharing benefits.16

Things are quite different when it comes to data protection law. As a strand of the literature has stressed, ‘privacy rights are indifferent to the production of information’.17 Indeed, at its roots, data protection does not deal with incentivizing the production of information, but rather with freezing the circulation of (personal) information.18 It primarily seeks to control the distribution of information in accordance with the will of data subjects.19 Accordingly, from a theoretical perspective, data protection has been conceptualized over time as control, limited access and contextual integrity.20 The primary means to achieve these fundamental objectives is consent. Through consent, the data subject is empowered to oversee third parties’ use of their personal data in accordance with the principles of purpose limitation and proportionality.
As the law and economics literature stresses—and criticizes—data protection law minimizes information consumption, with significant secretization externalities21 and associated anticompetitive effects.22 Given these premises, it appears that in the industrial era IP and data protection law stood at opposite ends of a spectrum.23 Indeed, the industrial IP system, as shaped by the Paris Convention for the Protection of Industrial Property24 and the European Patent Convention,25 is product-oriented and thus patent-centric, with a primary transparency function encouraging the sharing of newly produced knowledge. Data protection law, on the other hand, as conceived under the EC Data Protection Directive,26 refers to a personal right functional to the protection of individuals’ autonomy and self-determination. It hinges on the notion of individual consent as a means of controlling and restricting access to personal data, and thus ultimately curbs the domain of publicly available knowledge.

3. The digital-algorithmic economy: some contextual premises

With the advent of the so-called digital economy, the original autonomy between the two above-illustrated paradigms has been significantly challenged. For a proper assessment of the systemic pitfalls triggered by the changed technological environment, some contextualization is needed. In the digital marketplace, data have become a core economic asset.27 As Google’s chief scientist, Peter Norvig, has put it: ‘we don’t have better algorithms than anyone else, we just have more data’.28 As true as this might be, a vast strand of scholarship has underlined that the commercial and competitive value of personal data cannot be considered separately from the technical infrastructure that enables data-driven businesses to collect and aggregate that data from various information sources, infer predictions from it and ultimately make judgements about the actual or future behaviour of data subjects.29

Algorithms are the main drivers of information governance in digital markets, where most data processing is mastered by machines and very little is left to human intervention.30 On the basis of the collected data, algorithms create models which subsequently govern businesses’ decision-making.31 As data scientists stress, the term ‘algorithm’ generally describes a highly sophisticated and diversified technical reality.32 It is, however, possible to spot a few common features which render the phenomenon of algorithmic data processing an extremely challenging regulatory terrain. First, algorithms are affected by intrinsic technical obscurity, owing to the difficulty of identifying or interpreting their functioning criteria.33 Secondly, algorithms are mostly self-learning, in the sense that they are capable of extracting new data from the models they form,34 thus triggering potentially infinite cycles of further data processing.35 These structural and functional features of algorithmic data processing techniques have increased their economic value, but also their invasiveness into data subjects’ personal sphere.36 The rising economic value of computational infrastructures and of machine- (that is, business-) generated information has boosted companies’ urge to shield these new intangible assets from competitors’ free-riding threats through effective means of legal protection.
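The self-learning feedback dynamic described above, in which models built on collected data generate new, inferred data that re-enters further rounds of processing, can be pictured with a minimal and purely illustrative Python sketch. The toy ‘model’ and all names below are hypothetical assumptions made for exposition only and imply nothing about any specific system.

```python
# Illustrative sketch only: a toy feedback loop in which a model trained on
# collected data produces inferred data points that are appended to the dataset
# and reused in further rounds of processing. All names are hypothetical.
from statistics import mean

collected_data = [2.0, 3.0, 5.0, 8.0]  # observations initially gathered from data subjects

def train_model(data):
    """A deliberately trivial 'model': it simply learns the average of the data."""
    avg = mean(data)
    return lambda: avg  # the 'model' generates (infers) a new data point

for processing_round in range(3):
    model = train_model(collected_data)    # processing step: build a model from the data
    inferred_point = model()               # the model produces new, inferred data
    collected_data.append(inferred_point)  # inferred data re-enters the dataset,
                                           # triggering a further processing cycle
    print(f"round {processing_round}: dataset size {len(collected_data)}")
```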
On the other hand, the maximization of companies’ production and consumption of data has quantitatively and qualitatively weakened the protection of users’ personal data and the traditional objective of personal ‘data minimization’.37 As a result, in the face of ubiquitous algorithmic data processing, the adequacy of both traditional IP and data protection laws has been questioned, with significant legislative outcomes. Indeed, as the following sections will show, over the past few years both branches of European law have undergone a substantial revision, directly or indirectly responding to the newly emerged need to render both industrial property rights and the right to data protection more technically sound.

4. The industrial IP framework in the digital-algorithmic economy: from patents to secrets

The evolution of the industrial IP framework triggered by the economic advancement of algorithmic data processing practices can be traced to two main reform directions: (i) the extension of an IP tool such as copyright to industrial information assets, for example software; and (ii) the introduction of sui generis IP tools protecting industrial information as such, as is the case of the database right. These two reform trajectories have different objectives, since the first mainly addresses businesses’ need to protect their IT infrastructure, whereas the second responds to the distinct demand to guard the information that is processed and generated through this processing infrastructure.

The first reform trajectory is of an interpretative nature: it originates from the extensive interpretation of existing IP tools. In this regard, the opportunity to protect algorithms (ie the software in which algorithms are implemented) through patents has been the object of a heated debate, resulting at the European level in a proposal for a directive on the patentability of computer-implemented inventions, which, however, has never been adopted.38 Thus, regulatory answers have been sought in the existing framework: the practical difficulty of extending patent protection to software39 has suggested the extension of copyright to software40 and, even before the software, to the source code in which the algorithm is expressed. Copyright has also been employed for the protection of aggregated data processed by algorithms, where the selection and arrangement of the data meets the originality threshold.41

At the same time, the growing importance of data in competitive interactions has triggered a new wave of regulatory interventions, which have envisaged the creation of previously unknown tools for the protection of sensitive industrial information. In 1996 the Directive on the legal protection of databases42 established an exclusive sui generis right over databases resulting from a ‘substantial investment’.43 Twenty years later, the European IP framework was further reformed through the introduction of the Trade Secrets Directive,44 providing very broad conditions for protection encompassing nearly every business’s confidential information45 and, despite formal declarations,46 a proprietary-styled protection over this information.47 Lately, discussions have also been going on over the need to introduce a new exclusive right specifically related to digital data.48 The case for a new ‘industrial data right’ has not been made yet.
It has, however, been opposed in a Position Statement of the Max Planck Institute for Innovation and Competition, which has stressed the absence of any justification or necessity to create exclusive rights in digital data.49

Both lines of development have brought about significant systemic changes to the traditional industrial IP system, in terms of both the design of the protection sought by businesses and, in turn, the functions carried out overall by the above-outlined protection tools. The rising importance within the IP system of (sui generis) rights over information goods has determined a retrenchment of the patent system’s primacy50 and, in turn, a shift of the regulatory framework’s centre of gravity from traditional invention-based protection to information exclusivities. This has been acknowledged at both an empirical and a theoretical level. Recent quantitative studies have shown that European businesses increasingly prefer trade secrets over patent protection.51 Even before this empirical evidence emerged, IP scholars had been critically assessing the (over)protectionist effects of ‘hybrid IP regimes’52 eroding the public domain53 and hindering competition mechanisms.54 As a result, both the extensive employment of protection tools alternative to patents and the legislative reforms described above have determined a substantial turnaround in the functions carried out by the newly shaped IP system. Indeed, the IP tools employed in the digital-algorithmic market aim primarily at shielding businesses’ competitive advantage deriving from investments in the production of information. Accordingly, these tools control and limit access to such information—as happens with the database right and copyright—or secretize this same information, as is the case of trade secrets. The emerging IP paradigm thus appears to be significantly different from the original one: it is information-centred and secrecy-oriented, with the primary function of strengthening businesses’ control over information goods, weakening third-party access and ultimately secretizing industrial information.

5. The data protection framework in the digital-algorithmic economy: from consent to transparency

In the digital-algorithmic marketplace, data is economically exploited through algorithmic processing techniques that aggregate collected data for the generation of new data.55 The data refined through analytics thus provides the benchmark for businesses’ decision-making schemes.56 As widely recognized in the literature,57 most business-generated data can be qualified as personal information. Indeed, the reversibility of anonymization and pseudonymization techniques,58 enabled by growing computational aggregation capacities, makes nearly every piece of data spread online ‘identifiable’.59 Hence, this business-generated data does not fall only under the above-recalled IP regimes but is also regulated under data protection law.
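The reversibility of anonymization and pseudonymization recalled above can be illustrated with a hedged, toy linkage example: a pseudonymized dataset is joined with an auxiliary public register on shared quasi-identifiers, and the supposedly de-identified records become identifiable again. All records and field names are invented for the purpose of illustration; real re-identification attacks are considerably more sophisticated.

```python
# Toy illustration of a linkage ("re-identification") attack: pseudonymized
# records are matched against an auxiliary dataset sharing quasi-identifiers
# (postcode, birth year), so that the 'anonymous' attributes are linked back
# to named individuals. All data and field names are invented.

pseudonymized_records = [
    {"pseudonym": "u_41", "postcode": "20136", "birth_year": 1989, "diagnosis": "asthma"},
    {"pseudonym": "u_77", "postcode": "56127", "birth_year": 1975, "diagnosis": "diabetes"},
]

public_register = [
    {"name": "Alice Rossi", "postcode": "20136", "birth_year": 1989},
    {"name": "Bruno Bianchi", "postcode": "56127", "birth_year": 1975},
]

def reidentify(pseudo_rows, aux_rows):
    """Match rows on shared quasi-identifiers to recover identities."""
    matches = []
    for p in pseudo_rows:
        for a in aux_rows:
            if (p["postcode"], p["birth_year"]) == (a["postcode"], a["birth_year"]):
                matches.append({"name": a["name"], "diagnosis": p["diagnosis"]})
    return matches

print(reidentify(pseudonymized_records, public_register))
# the supposedly de-identified diagnoses are now attached to named individuals
```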
Against the backdrop of the changed technological environment, traditional data protection categories are showing their ineffectiveness: self-generating algorithmic data processing is challenging the distinction between sensitive and non-sensitive data60; in the whirl of the resulting data flows, the identity of data processors and data controllers is easily lost.61 The practical difficulty of spotting the data being processed and the subjects carrying out the processing invalidates consent as a means of defining the purpose and the context of the data processing and, with it, also the principles of purpose limitation and proportionality.62

In reaction to the inability of the previously existing data protection regime to address the concerns raised by machine-driven data processing, the GDPR has tried to provide more effective regulatory responses to ‘rapid technological developments and globalisation’.63 The GDPR has empowered data subjects with new prevention and reaction tools, primarily aimed at enabling them to govern the risks stemming from algorithm-driven decision-making processes.64 More specifically, the provisions contained in the Regulation introduce new mechanisms aimed at enhancing the accountability of automated data processing and thus of the subsequent decision-making patterns.65

Accordingly, Articles 13–15 GDPR reaffirm the right of data subjects to have access to the data collected about them, requiring controllers to notify data subjects of the processing carried out. However, they also introduce the right to receive ‘meaningful information’, consisting of the data subject’s right to be informed about ‘the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject’.66 Under Article 35 GDPR, controllers are required to carry out a ‘data protection impact assessment’, this being an ‘assessment of the impact of the envisaged processing operations on the protection of personal data’. Its function is precisely that of an ex ante appraisal of the possible threats to data subjects’ rights, through a systematic evaluation of ‘the risks of varying likelihood and severity for the rights and freedoms of natural persons’67 carried out under the responsibility of the controller. Moreover, the controller has the duty to implement appropriate technical and organizational measures to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation.68 Similarly, Article 22 GDPR, referring to the specific case of ‘automated individual decision-making, including profiling’, requires the data controller to ‘implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision’. In addition to this, Article 30 GDPR imposes the creation of an ‘internal record of processing’ that is to ‘be made available to the supervisory authority on request’, and Article 40 GDPR encourages designated bodies to ‘prepare codes of conduct (…) for the purpose of specifying the application of this Regulation, such as with regard to: (a) fair and transparent processing (…)’.
Ultimately, Article 42 GDPR authorizes ‘the establishment of data protection certification mechanisms and data protection seals and marks’ that shall ‘be available via a process that is transparent’. In light of the cited provisions, it appears that the European data protection framework is increasingly relying on the principle of transparency.69 This principle ultimately requires the disclosure of ‘intelligible’70 and ‘meaningful’71 information regarding the existence, the nature and the consequences of algorithmic data processing. The above-mentioned transparency requirements are currently much debated, especially with regard to their scope72: some scholars stress the technical ineffectiveness of such requirements,73 while others question their novelty with respect to the previous data protection regime.74 It is indeed argued that the Data Protection Directive already entailed a specific right to access processed personal data75 and was thus already permeated by the transparency principle.76 This is certainly true. However, the transparency provisions contained in the GDPR are more numerous and qualitatively more pervasive than in the past.

Under the previous data protection regime, the transparency principle, as objectified in data subjects’ rights to be notified about the processing and to access their data, was strictly related to control rationales. Indeed, it served the purpose of securing data subjects’ fully ‘informed’ consent. Hence, the transparency principle was intimately connected with the principles of purpose limitation and proportionality. By contrast, the transparency principle as (re)affirmed in the GDPR seems to have a quite different significance. As one strand of the literature observes, transparency in the GDPR system is a means of technical accountability,77 designed not to feed data subjects’ consent but rather to increase their awareness of the risks stemming from machine-driven data processing. Unlike the Data Protection Directive, the GDPR starts from the assumption that algorithmic data processing is a risky practice, which poses substantial threats to data subjects’ fundamental rights.78 As a result, transparency in the GDPR is strictly connected to the other truly new principle of data protection law, data sanitization—that is, the removal from automatically processed datasets of special (ie sensitive) categories of data79 in order to prevent discriminatory outcomes ex ante.80

Against this backdrop, the GDPR appears to have significantly changed the design and, in turn, the functions of data protection law. Under the new data protection regime, the regulatory focus has shifted overall from the moment of collection to the subsequent phase of processing and thus, more precisely, from consent to businesses’ conduct. The new provisions of the GDPR have indeed extended data controllers’ obligations, leading to what can be defined as a ‘proceduralization’ of data protection law, increasingly centred on the active agent of mass-scale processing rather than on the subjects who bear the legal and economic consequences of such processing.81 The functional implications of these structural changes can be immediately perceived. The increasing importance of the transparency principle suggests that the control and limited-access objectives are being overridden by an unfamiliar disclosure task.
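What the text calls ‘data sanitization’, that is, the ex ante removal of special (sensitive) categories of data from automatically processed datasets, might be sketched in a deliberately simplified form as a pre-processing filter. The field names and the list of special categories below are illustrative assumptions, loosely modelled on Article 9 GDPR, and do not reproduce any actual compliance tooling.

```python
# Hedged, illustrative sketch of an ex ante "sanitization" filter: special
# (sensitive) categories of data are stripped from a record before it is fed
# into an automated decision-making pipeline. Field names are hypothetical.

SPECIAL_CATEGORY_FIELDS = {  # loosely inspired by Article 9 GDPR; not exhaustive
    "health_data", "ethnic_origin", "political_opinions",
    "religious_beliefs", "trade_union_membership", "sexual_orientation",
}

def sanitize(record: dict) -> dict:
    """Return a copy of the record with special-category fields removed."""
    return {key: value for key, value in record.items() if key not in SPECIAL_CATEGORY_FIELDS}

raw_record = {
    "age": 34,
    "postcode": "20136",
    "health_data": "chronic condition",  # special category: must not reach the model
    "credit_history_score": 0.72,
}

model_input = sanitize(raw_record)
print(model_input)  # {'age': 34, 'postcode': '20136', 'credit_history_score': 0.72}
```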
In the current technological environment, the protection of personal data loses its individual scope82 and becomes crucial for the protection of other fundamental rights.83 As a result, data protection law takes on the function of producing publicly accessible information regarding the logic of widespread data processing and its potential effects on these same fundamental rights.84 The emerging data protection paradigm is thus significantly different from the original one: it is risk-oriented and business-centred, with the primary function of expanding, rather than controlling, access to information.

6. Conclusion: a role reversal and new conflicts in the digital-algorithmic economy

As the analysis above has demonstrated, in the digital-algorithmic economy the currently dominant IP paradigm appears to have distanced itself from the traditional patent-centric approach. As a result, the public interest-oriented transparency function previously lying at the heart of the IP system seems to have been progressively marginalized and overtaken by a new control and secretization task regarding competitively sensitive information. By contrast, under the GDPR, data protection law appears to have curtailed the traditional objective of controlling and limiting access to released personal data and to have significantly expanded businesses’ transparency burdens with regard to the generation of information about their data processing and thus decision-making activities. These transparency requirements lay the foundations for new accountability and self-correction measures that are currently under the scrutiny of data protection specialists.85

Hence, it seems that in the digital-algorithmic economy IP and data protection law have undergone a role reversal. The practical implications of such a functional shift are open to enquiry: more specifically, it is unclear whether the new transparency provisions of the GDPR will be capable of mitigating the trend towards the secretization of algorithms enabled by the current IP system or whether such a trend will, to the contrary, overwhelm the GDPR’s transparency requirements,86 by encouraging a restrictive interpretation of them and thus exposing their unfeasibility not only at a technical but also at a legal level.

Footnotes

1 For a general assessment see D. Liebenau, ‘What IP Can Learn from Informational Privacy, and Vice Versa’ (2016) 30(1) Harvard Journal of Law & Technology 285, 286–7. 2 J. Zittrain, ‘What the Publisher Can Teach the Patient: IP and Privacy in an Era of Trusted Privication’ (2000) 52 Stanford Law Review 1201, 1205. 3 On the support that IP can give to privacy enforcement, J.E. Cohen, ‘Copyright and the Jurisprudence of Self-Help’ (1998) 13 Berkeley Technology Law Review 1089, 1093–4. See also the debate regarding the establishment of property rights over personal data. P.M. Schwartz, ‘Property, Privacy, and Personal Data’ (2004) 117 Harvard Law Review 2055, 2066–9. 4 For the enforcement of IP, users’ privacy might be compromised. See J.E. Cohen, ‘A Right to Read Anonymously: A Closer Look at “Copyright Management” in Cyberspace’ (1996) 28 Connecticut Law Review 981, 992–3. 5 V. Mayer-Schonberger, ‘Beyond Privacy, Beyond Rights: Toward a “Systems” Theory of Information Governance’ (2010) 98 California Law Review 1853, 1857. 6 It must be clarified that this article will use the terms ‘information’, ‘knowledge’ and ‘data’ without differentiation.
Indeed, this article focuses on the functional outcomes of European IP and data protection regulations as applied to industrial intangible assets. This being the object of analysis, the acknowledgment of the substantive differences between ‘data’ and ‘information’ would fall outside the limited target of this contribution. Nonetheless, it is important to recall the basic assumption according to which ‘data’, or better said, ‘data points’ are the raw material upon which analytical processes are carried out and subsequently employed in business practice as ‘information’ or ‘knowledge’. Thus, the notion of data is to be referred to a static dimension, whereas the notion of ‘information’ and ‘knowledge’ refers to a dynamic phase in which aggregated data is contextualized and actively employed in business practice. For a deeper insight on the complexity of information theory see the famous conceptualization of R.L. Ackoff, ‘From Data to Wisdom’ (1989) 15 Journal of Applied System Analysis, 3–9. More recently, M. Boisot and A. Canals, ‘Data, Information and Knowledge: Have We Got it Right?’ (2004) UOC Internet Interdisciplinary Institute <http://www.uoc.edu/in3/dt/20388/index.html>. 7 For a general assessment on the issue see J. Cohen, ‘The Regulatory State in the Information Age’ (2016) 17(2) Theoretical Enquiries in Law 2, 3. 8 R.P. Merges, Justifying IP (Harvard University Press, Cambridge, MA, 2011), pp. 52–6. 9 F. Machlup and E. Penrose, ‘The Patent Controversy in the Nineteenth Century’ (1950) 10(1) The Journal of Economic History 1, 5. More recently recalling this point, M. Lemley, ‘Property, IP and Free Riding’ (2005) 83 Texas Law Review 1031. 10 Cf Article 83 and 100 (b) European Patent Convention. 11 The original public law dimension of the patent system is well stressed by O. Liivak, ‘Private Law and the Future of Patents’ (2017) 30 Harvard Journal of Law and Technology 33, 41-42. On the issue also M. Lemley, ‘Taking the Regulatory Nature of IP Rights Seriously’ (2014) 92 Texas Law Review 107, 110, defining patents as ‘government interventions in the marketplace designed to achieve social policy ends’. 12 WIPO, ‘WIPO Technical Study on Patent Disclosure Requirements Related to Genetic Resources and Traditional Knowledge’, Study No 3-2004 <www.wipo.int/export/sites/www/freepublications/en/tk/786/wipo_pub_786.pdf>. 13 J. Barton et al, ‘Integrating IP Rights and Development – Report of the Commission on IP Rights-Commission on IP Rights’, September 2002, London <http://www.iprcommission.org/papers/pdfs/final_report/ciprfullfinal.pdf>. For the relevant literature see R. Feldman, ‘Transparency’ (2014) 19 Virginia Journal of Law and Technology 27, 30–5. 14 C. Correa, Trade Related Aspects of IP Rights: A Commentary on the TRIPS Agreement (Oxford, Oxford University Press, 2007) 94. 15 Stressing the importance of the disclosure requirement as substantiated in the ‘written description’ requirement, M. Risch, ‘A Brief Defense of the Written Description Requirement’ (2010) 119 Yale Law Journal Online 127. 16 Especially with regards to the disclosure of traditional knowledge, See J. Gibson, IP, International Trade and Protection of Traditional Knowledge (London, Earthscan, 2005), pp. 23–6. WIPO, ‘Declaration of the Source of Genetic Resources and Traditional Knowledge in Patent Applications: Proposals by Switzerland – Document submitted by Switzerland’, 6 June 2007, 4. 17 D. Liebenau (n. 1) 290. 18 M.A. Lemley, ‘Private Property’ (2000) 52 Stanford Law Review 1545, 1549–52. 19 J. Zittrain (n. 
2) 1214–15. A.F. Westin, ‘Privacy and Freedom’ (1969) 22 Administrative Law Review 101, 103, pioneering the privacy as control theory. 20 R. Gavison, ‘Privacy and the Limits of Law’ (1980) 89 Yale Law Journal 421, 423, conceptualizing privacy as limited access theory; H. Nissenbaum, ‘Privacy as Contextual Integrity’ (2004) 79 Washington Law Review 119, 125. 21 R.A. Posner, ‘Privacy, Secrecy, and Reputation’ (1979) 28 Buffalo Law Review 1, 10–12. 22 R. Calo, ‘Privacy and Markets: A Love Story’ (2015) 91 Notre Dame Law Review 649, 651. On the issue also T. Zarsky, ‘The Privacy-Innovation Conundrum’ (2015) 19 Lewis & Clark Law Review 116, 121–8. 23 D. Liebenau (n. 1) 286. 24 The Paris Convention for the Protection of Industrial Property of 20 March 1883 refers to industrial property in the ‘widest sense’. See WIPO, ‘Paris Convention for the Protection of Industrial Property’ <http://www.wipo.int/treaties/en/ip/paris/>. 25 European Patent Convention of 5 October 1973. EPO, ‘The European Patent Convention’ <https://www.epo.org/law-practice/legal-texts/html/epc/2016/e/ma1.html>. 26 Directive 95/46/EC of the European Parliament and the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281, 23 November 1995, 31–50. 27 On the issue see Autorité de la Concurrence and Bundeskartellamt, ‘Competition Law and Data’, Joint Position Paper, released on 10 May 2016 <http://www.autoritedelaconcurrence.fr/doc/reportcompetitionlawanddatafinal.pdf> 11–22; J. Drexl, ‘Designing Competitive Markets for Industrial Data – between Propertisation and Access’ (2016), Max Planck Institute for Innovation and Competition Research Paper No. 16-13 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2862975>, 8–11. 28 The quote is reported by S. Cleland, ‘Google’s “Infringenovation” Secrets’, 3 October 2011, <https://www.forbes.com/sites/scottcleland/2011/10/03/googles-infringenovation-secrets/#309d8ef330a6>. More recently, Andrew Ng affirmed that ‘data is the defensible barrier, not algorithms’. See S. Lohr, ‘Data Could Be the Next Hot Button for Regulators’, New York Times, 8 January 2017, <https://www.nytimes.com/2017/01/08/technology/data-regulators-google-facebook-monopoly.html>. 29 Stressing this point, G. Comandè, ‘Regulating Algorithms’ Regulation? First Ethico-Legal Principles, Problems and Opportunities of Algorithms’, in T. Cerquitelli, D. Quercia and F. Pasquale (eds), Transparent Data Mining for Small and Big Data (Cham, Springer, 2017), pp. 169, 175–7. Indeed, without the algorithmic infrastructure and, more specifically, the network effects and economies of scale it enables, data could not be massively generated and thus used. The issue of the competitive significance of algorithms falls outside the scope of this contribution. Suffice it to say here that it is a relatively young and much-debated field of research. For a comment, see G. Surblyte, ‘Data-Driven Economy and Artificial Intelligence: Emerging Competition Law Issues?’ (2017) Wirtschaft und Wettbewerb 120. 30 L. Moerel and C. Prins, Privacy for the Homo Digitalis: Proposal for a New Regulatory Framework for Data Protection in the Light of Big Data and the Internet of Things (Kluwer Juridisch, 2016), pp. 29–30. 31 T.Z. Zarsky, ‘“Mine Your Own Business!”: Making the Case for the Implications of the Data Mining of Personal Information in the Forum of Public Opinion’ (2003) 5 Yale Journal of Law and Technology 2, 22. 32 M.T. Bodie, M.A. Cherry, M.
McCormik and J. Tang, ‘The Law and Policy of People Analytics’, Saint Louis U. Legal Studies Research Paper No. 2016-6 <https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2769980>, 5–10. 33 J. Cohen, Configuring the Networked Self: Law, Code and the Play of Everyday Practice (Yale University Press, New Haven, CT, 2012), pp. 122–4. 34 A. Gal, ‘It’s a Feature, Not a Bug: On Learning Algorithms and What They Teach Us’, OECD Roundtable on ‘Algorithms and Collusion’ 23 June 2017, <https://one.oecd.org/document/DAF/COMP/WD(2017)50/en/pdf>. 35 T.Z. Zarsky (n. 31) 26–30. 36 G Comandè (n. 29) passim. 37 O. Tene and J. Polonetsky, ‘Big Data for All: Privacy and User Control in the Age of Analytics’, (2013) 11 Northwestern Journal of Technology and IP 239, 241. 38 Cf. Proposal of a directive related to the patentability of computer implemented inventions, released on 20 February 2002, COM/2002/0092 final – COD 2002/0047. As it has been underlined, mere algorithms cannot be protected by means of patents, given the fact that algorithms are the underlying tasks of a computer program that are not patentable for their being non-technical in nature: a program’s functionality cannot indeed be patentable because patent protection cannot cover general ideas and business models, that according to the ‘fundamental conception of IP rights should remain free’. See J. Drexl et al, ‘Data Ownership and Access to Data, Position Statement of the Max Planck Institute for Innovation and Competition of the 16th August 2016 on the Current European Debate’, Max Planck Institute for Innovation and Competition Research Paper No. 16-10 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2833165>, 5–6, stressing that patent ‘protection (of algorithms) would pose a risk of two negative effects: first, protection of abstract subject-matter would cause needless – and, in the case of algorithms, unreasonable – restraints on competition that, according to current knowledge, would not be economically justified. In particular the resulting monopolisation of ideas would hinder technical progress and industrial development (judgment in SAS Institute Inc., C-406/10, EU:C:2012:259, para 40). Second, it is barely foreseeable what markets and sectors would be affected. This makes finding suitable approaches to a regulation seem unrealistic’. 39 Software can only be patented if it satisfies the requirement of novelty, inventive step and technical effectiveness. These requirements are very difficult to meet. Cf. M.A. Lemley, ‘Software Patents and the Return of Functional Claiming’ (2013) Wisconsin Law Review 905, 928. 40 P. Torremans, IP Law (Oxford University Press, Oxford, 2016), p. 124, arguing, thus, that in the current technological environment the distinction between artistic and industrial IP has fallen. 41 It should be however recalled that it is very difficult for a database to accomplish the originality threshold required under European law. On the issue see D.J. Gervais, ‘The Internationalisation of IP: New Challenges from the Very Old and the Very New’ (2002) 12 Fordham IP, Media & Entertainment Law Journal 929, 935. 42 Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal Protection of Databases. For the literature see T. Aplin, ‘The EU Database Directive: Taking Stock’ in F. Macmillan, New Directions in Copyright Law (Edward Elgar, 2006) 99–126. 43 Cf. art. 7, 4 par. Database Directive. Recently, see I. 
Gupta, Footprints of Feist in European Database Directive: a Legal Analysis of IP Making in Europe (Springer, Singapore, 2017), pp. 11–37. 44 Directive EU 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure. For the literature see J. Lapousterle, C. Geiger, N. Olszak, L. Desaunettes, ‘What protection for trade secrets in the European Union? CEIPI’s Observations on the proposal for a Directive on the Protection of Undisclosed Know-How and Business Information’ 2016 (38) European Intellectual Property Review 255. 45 D. Sousa-Silva, ‘What Exactly is a Trade Secret under the Proposed Directive?’ (2014) 9 Journal of IP Law & Practice 11, 15. 46 Recital 10 of the same proposed Directive affirming that its ‘provisions (…) should not create any exclusive right on the know-how or information protected as trade secrets’. 47 T. Aplin, ‘Right to Property and Trade Secrets’, in C. Geiger, Research Handbook on Human Rights and IP (Edward Elgar, Cheltenham, 2015), pp. 421, 426. 48 A. Wiebe, ‘Protection of Industrial Data: a New Property Right for the Digital Economy?’ (2017) 12(1) Journal of IP Law and Practice 62, 67–70. 49 Drexl et al (n.38) 2. 50 Acknowledging and assessing the current crisis of the patent system, see M. Lemley (n.11) 113, who observes that businesses increasingly view patents as tax and a cost for innovation not as a benefit for innovation. See also O. Liivak, ‘Establishing an Island of Patent Sanity’ (2013) 78(4) Brooklyn Law Review 1335, 1336–44, and more recently Id, ‘Private Law and the Future of Patents’ (2017) 30 Harvard Journal of Law and Technology 33, 41–5, where the author evaluates possible ways out from the crisis through reference to private law schemes. 51 F. Goy and C. Wang, ‘Does Knowledge Tradability Make Secrecy more Attractive than Patents? An Analysis of IPR Strategies and Licensing’ (2013) Murdoch University Research Paper <https://www.murdoch.edu.au/School-of-Business-and-Governance/_document/Australian-Conference-of-Economists/Does-knowledge-tradeability-make-secrecy-more-attractive-than-patent.pdf>, 1–7. European Commission, ‘Study on Trade Secrets and Confidential Business Information in the Internal Market’, published in April 2013 <http://ec.europa.eu/internal_market/iprenforcement/docs/trade-secrets/130711_final-study_en.pdf>. Finally, EUIPO, ‘Protecting Innovation through Trade Secrets and Patents: Determinants for European Union Firms’, July 2017 <https://euipo.europa.eu/tunnel-web/secure/webdav/guest/document_library/observatory/documents/reports/Trade%20Secrets%20Report_en.pdf> 3: ‘The use of trade secrets for protecting innovations is higher than the use of patents by most types of companies, in most economic sectors and in all Member States’. 52 This expression is by K.E. Maskus and J. Reichman, ‘The Globalisation of Private Knowledge Goods and the Privatization of Global Public Goods’ (2004) 7 Journal of International Economic Law 279, 297. 53 H. Ullrich, ‘Expansionist IP Protection and Reductionist Competition Rules: a TRIPS Perspective’ (2004) 7 Journal of International Economic Law 402, 410–15. 54 P.A. David, ‘The Digital Technology Boomerang: New IP Rights Threaten Global “Open Science”’, Stanford Institute for Economic Research, Discussion Paper, October 2000 <http://www-siepr.stanford.edu/workp/swp00016.pdf>, 7. See also G.B. 
Ramello, ‘IP, Social Justice and Economic Efficiency: Insights from Law and Economics’, in A. Flanagan and M.L. Montagnani, IP Law: Economic and Social Justice Perspectives (Edward Elgar, Cheltenham, 2010), pp. 1, 5. 55 Broadly on the issue, European Data Protection Supervisor, ‘Privacy and Competitiveness in the Age of Big Data, the Interplay Between Data Protection, Competition Law and Consumer Protection in the Digital Economy’, Press release 26 March 2014 <https://edps.europa.eu/sites/edp/files/publication/14-03-26_competitition_law_big_data_en.pdf>. 56 This has been widely commented on in the literature, especially by Zarsky (n.31) 18–55. 57 W.G. Voss, ‘European Data Privacy Law Developments’ (2014/2015) 70(1) Business Lawyer 253, 255. 58 Tene and Polonetsky (n.37) 246–8. Cf. also Article 29 Working Party, ‘Opinion 4/2014 on Surveillance of Electronic Communications for Intelligence and National Security Purposes’, 10 April 2014, available at <http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp215_en.pdf>, 8. 59 See the definition of ‘personal data’ given by Article 4 Regulation EU 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation or GDPR), OJ L 119, 4 May 2016, 1, where it is stated that ‘personal data means any information relating to an identified or identifiable natural person or “data subject”’. In Recital 30 it is underlined how online identifiers, tools and protocols, including internet protocol (IP) addresses, cookie identifiers and other identifiers such as radio frequency identification tags (RFID), can identify individuals, in particular if combined with unique identifiers. It must be recalled that the broad definition of ‘personal data’ given by European data protection law, which includes in the category not only identified but also identifiable data, significantly distinguishes the European notion of personal data from the American one. For a deep assessment of the issue, P.M. Schwartz and D. Solove, ‘Reconciling Personal Information in the United States and European Union’ (2014) 102(4) California Law Review 878, 881–5 and 891–3. On the issue of identifiability, see P. Schwartz and D. Solove, ‘The PII Problem: Privacy and a New Concept of Personally Identifiable Information’ (2011) 86 New York University Law Review 1814; O. Tene, ‘The Complexities of Defining Personal Data: Anonymisation’ (2011) Data Protection Law Policy 6, 8. 60 Indeed, analytics can also extract sensitive data from the combination of a set of data. Stressing the point, A. Spina, ‘Risk Regulation of Big Data: Has the Time Arrived for a Paradigm Shift in EU Data Protection Law?, Case Notes to Case C-293/12 and C-594/12 Digital Rights Ireland and Seitlinger and others’ (2014) 5(2) European Journal of Risk Regulation 248, 252, noticing that ‘big data blurs the distinction between sensitive and non-sensitive data set, provided that an algorithm is able to establish a refined inference between patterns’ and suggesting building the ‘boundaries between sensitive or non-sensitive personal data (…) upon the potential use of those who analyse the data, rather than on the nature of information’. 61 On the issue see P. De Hert and V.
Papakonstantinou, ‘The New General Data Protection Regulation: Still a Sound System for the Protection of Individuals?’ (2016) 32 Computer Law & Security Review 179, 183–5, noticing that the assumption that ‘a controller is always identifiable and unaccountable and that it is up to him to decide whether to attribute the data processing to a data processor or to other parties, does not stand anymore in contemporary processing environments’. Hence, also the liability regime according to which the data controller bears all the liability burden whereas the data processor carries less or no liability is questionable. Similarly, C.J. Bennett and R.M. Bayley, ‘Privacy Protection in the Era of Big Data: Regulation Challenges and Social Assessments’, in B. Van Der Sloot, D. Broeders and E. Schrijvers (eds), Exploring the Boundaries of Big Data (Amsterdam University Press, Amsterdam, 2016), p. 207. 62 IAPP Privacy Perspectives, ‘On the Death of Purpose Limitation’ <https://iapp.org/news/a/on-the-death-of-purpose-limitation/>; see also L. Moerel and C. Prins, ‘Further Processing of Data Based on the Legitimate Interest: the End of Purpose Limitation?’, March 2016 <http://textlab.io/doc/1541457/further-processing-of-data-based-on-the-legitimate-intere>. 63 European Commission, ‘How Will the EU’s Reform Adapt Data Protection Rules to New Technological Developments?’, January 2016, <http://ec.europa.eu/justice/data-protection/document/factsheets_2016/factsheet_dp_reform_technological_developments_2016_en.pdf>. 64 R. Gellert, ‘Understanding Data Protection as Risk Regulation’ (2015) Journal of Internet Law 6, 10. 65 C. Kuner, ‘The European Commission’s Proposed Data Protection Regulation: A Copernican Revolution in European Data Protection Law’, issued 6 February 2012 in Bloomberg BNA Privacy and Security Law Report 2012 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2162781>, 1. 66 Article 13(2)(f) General Data Protection Regulation. 67 Article 24 General Data Protection Regulation. 68 Ibid, emphasis added. 69 Cf. various recitals, especially nos 39, 58, 60, 71 and 78. For the literature see B. Goodman and S. Flaxman, ‘European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”’, presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York <https://arxiv.org/abs/1606.08813>, 3. 70 Cf. Recital 60 and Article 12 General Data Protection Regulation. 71 Article 13(2)(f) General Data Protection Regulation. 72 B. Paal, ‘DS-GVO Art. 13 Informationspflicht bei Erhebung von personenbezogenen Daten bei der betroffenen Person’, in B. Paal and D. Pauly (eds), Datenschutz-Grundverordnung (Beck-Online, 2017), passim. 73 J. Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society <http://bds.sagepub.com/content/3/1/2053951715622512/>. 74 D. Korff, ‘New Challenges to Data Protection Study – Working Paper No. 2: Data Protection Laws in the EU: The Difficulties in Meeting the Challenges Posed by Global Social and Technical Developments – European Commission DG Justice, Freedom and Security’, 12 July 2010 <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1638949>, 5. 75 Cf. Articles 10, 11 and 12 EC Data Protection Directive. On the issue see S. Wachter, B. Mittelstadt and L. Floridi, ‘Why a Right to Explanation Does not Exist under the General Data Protection Regulation’, 24 January 2017 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2903469>, 19, 21.
76 Wachter, Mittelstadt and Floridi (n. 75), 22. 77 B. Goodman, ‘A Step Towards Accountable Algorithms?: Algorithmic Discrimination and the European General Data Protection’, 29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, <http://www.mlandthelaw.org/papers/goodman1.pdf>, 7–9. 78 Spina (n.60), 248, 251. 79 Goodman (n.77), 3–4. 80 R. Gellert, ‘Understanding Data Protection as Risk Regulation’ (2015) Journal of Internet Law 6, 8–9. 81 B. Van Der Sloot, ‘The Individual in the Big Data Era: Moving Towards an Agent-Based Privacy Paradigm’, in Van Der Sloot, Broeders and Schrijvers (eds) (n.61), p. 177. 82 Spina (n.60), 256, stating how ‘the market failures associated with forms of Big Data encourage us to look at privacy risks and at privacy as not only an individual right but a collective interest’. 83 This had already been foreshadowed by G. Comandè, ‘Tortious Privacy 3.0: a Quest for Research’, in J. Potgieter, J. Knobel and R.M. Jansen, Essays in Honour of / Huldigingsbundel vir Johann Neethling (LexisNexis, London, 2015), pp. 121, 124. 84 For a distinction between an ‘empty’ right to data protection and a ‘full’ right to privacy, see R. Gellert and S. Gutwirth, ‘The Legal Construction of Privacy and Data Protection’ (2013) 29 Computer Law & Security Review 522, 524. 85 See in particular Goodman (n.77), 4–6, theorizing the implementation of ‘algorithms audits’ that are ‘third party inspections of algorithmic decision-making modelled on audit studies from social sciences’. The link between algorithmic transparency and accountability is currently the object of a heated debate also in the United States: D.R. Desai and J.A. Kroll, ‘Trust but Verify: A Guide to Algorithms and the Law’ (2017) Harvard Journal of Law & Technology, forthcoming; M. Perel and N. Elkin-Koren, ‘Black-Box Tinkering: Beyond Transparency in Algorithmic Enforcement’ (2017) 69 Florida Law Review 181; J.A. Kroll et al, ‘Accountable Algorithms’ (2017) University of Pennsylvania Law Review 633. 86 This is what Recital 63 General Data Protection Regulation seems to suggest, by affirming that the right to access personal data ‘should not adversely affect the rights or freedoms of others, including trade secrets or IP and in particular the copyright protecting the software’. © The Author(s) 2017. Published by Oxford University Press. All rights reserved.

European intellectual property and data protection in the digital-algorithmic economy: a role reversal(?)

Loading next page...
 
/lp/ou_press/european-intellectual-property-and-data-protection-in-the-digital-q321XEXqcZ
Publisher
Oxford University Press
Copyright
© The Author(s) 2017. Published by Oxford University Press. All rights reserved.
ISSN
1747-1532
eISSN
1747-1540
D.O.I.
10.1093/jiplp/jpx213
Publisher site
See Article on Publisher Site

Abstract

Abstract The author Giulia Schneider holds a law degree from Sant’Anna School of Advanced Studies in Pisa (Italy). She is currently a PhD candidate in Legal Studies at Bocconi University in Milan (Italy). Her research interests focus on the regulation of digital markets. This article This article investigates the evolution of the interaction between European IP and data protection laws in the algorithmic economy. It seeks to show the unsuitability of traditional legal theories in respect to the newly emerging paradigms. As a consequence of the reforms introduced over the past few years, the European IP and data protection frameworks have undergone a shift in terms of the systemic functions respectively carried out. By virtue of the extension of artistic IP tools to industrial information assets and the introduction of sui generis IP tools protecting industrial information as such, the European IP system has taken on a new control and secretization function regarding competitively sensitive information. In the meantime, the General Data Protection Regulation (GDPR) has introduced new transparency requirements, encouraging the production and release of information regarding the logic and the risks stemming from (mostly automated) algorithmic data processing. This paper ultimately demonstrates that a role reversal between IP and Data Protection has occurred: the new Data Protection framework seems indeed to have taken up the new of function of mitigating the ‘secretizing’ pressure of the current European IP System. 1. Introduction The relationship between industrial intellectual property (IP) and data protection laws has increasingly attracted the attention of legal scholarship enquiring the countenance of businesses’ and individuals’ rights in the digital-algorithmic space.1 These two branches of the law have been differently put in connection in terms of structural parallelism,2 supportive exchange3 and ultimately conflict.4 At a general level, such variously oriented literature reflects how, in the current technical environment, the boundaries between industrial IP and data protection are becoming increasingly blurred.5 This article draws upon this literature and contributes to the present debate by tracing the evolution of the relationship between European data protection and IP with the transition from an industrial to an algorithmic-information era. It argues that in the digital-algorithmic market the interaction between the two regulatory branches has become more thought-provoking than ever due to a ‘gene mutation’ that has come to affect both the IP and the data protection paradigm as a consequence of the latest reforms: as a result of the occurred structural changes, European IP and data protection laws appear to fulfil, respectively, new opposed systemic functions, which in turn are giving rise to unexpected overlaps. Both laws have indeed come to regulate the production and availability of digital industrial information,6 differently allocating rights over control and access to algorithms and the data generated by them. The analysis is organized as follows. (i) First reference will be made to classical justification theories of both IP and data protection law. These theories provide the analytical framework against which current European IP and data protection regimes will be tested. (ii) Against the backdrop of such framework, the analysis of newly enacted IP and data protection provisions will show the inadequacy of traditional justification theories. 
(iii) Ultimately, a new theoretical framework will be outlined, highlighting the functions carried out by each system in the digital-algorithmic economy. Using this framework newly emerging conflicts between the two branches of European law will be identified. 2. The invention-based economy: the origins of IP and data protection laws The origins of industrial IP and data protection laws can be traced back to an economic environment in which industrial products were at the centre of competition courses and where personal data was not a trading commodity.7 Accordingly, both regulatory frameworks were originally supported by precise theoretical justifications. The objects of protection—on the one hand inventions and on the other side personal data—were different and were thus also the functions respectively assigned to each regulatory system. Classical industrial IP theory is rooted in utilitarian and labour principles.8 Although differently tailored, these theories ultimately stress the need to incentivize inventors and economically reward them in return for the innovation brought about9: the inventor is entitled to an exclusive right for having transferred his invention to society. From a different perspective, the inventor will gain an exclusive right over his invention if he discloses it to society.10 This means that at the very core of the invention-based IP system there is a socially oriented transparency pursuit that is well reflected in the structural disclosure requirement of patent protection.11 As has been said, ‘the essence of the patent system is transparency and disclosure’.12 The disclosure requirement is directly aimed at enriching the public domain and enhancing the progress of technology.13 It is thus intimately connected with long-term innovation goals promoted by the public availability of new knowledge.14 This is well-reflected in the most recent discussions carried out at both theoretical15 and practical levels, where the disclosure requirement has been put at the centre of reform proposals aimed at tightening patents’ sharing benefits.16 Things are quite different when it comes to data protection law. As a strand of the literature has stressed, ‘privacy rights are indifferent to the production of information’.17 Indeed, at its roots, data protection does not deal with incentivizing the production of information, but rather with freezing the circulation of (personal) information.18 It primarily seeks to control the distribution of information in accordance with the will of data subjects.19 Accordingly, from a theoretical perspective, data protection has been conceptualized over time as control, limited access and contextual integrity.20 The primary means to achieve these fundamental objectives is consent. Through consent, the data subject is empowered to oversee third parties’ use of its personal data in accordance with the principles of purpose limitation and proportionality. 
As the law and economics literature stresses (and criticizes), data protection law minimizes information consumption, with significant secretization externalities21 and associated anticompetitive effects.22 Given these premises, it appears that in an industrial era IP and data protection law sit at opposite ends of a spectrum.23 Indeed, the industrial IP system, as shaped by the Paris Convention for the Protection of Industrial Property24 and the European Patent Convention,25 is product-oriented and thus patent-centric, with a primary transparency function encouraging the sharing of newly produced knowledge. Data protection law, on the other hand, as conceived under the EC Data Protection Directive,26 refers to a personal right functional to the protection of individuals' autonomy and self-determination. It hinges on the notion of individual consent as a means of controlling and restricting access to personal data and thus ultimately curbs the domain of publicly available knowledge.

3. The digital-algorithmic economy: some contextual premises

With the advent of the so-called digital economy, the original autonomy between the two above-illustrated paradigms has been significantly challenged. For a proper assessment of the systemic pitfalls triggered by the changed technological environment, some contextualization is needed. In the digital marketplace, data have become a core economic asset.27 As Google chief scientist Peter Norvig has put it: 'we don't have better algorithms than anyone else, we just have more data'.28 As true as this might be, a vast strand of scholarship has underlined that the commercial and competitive value of personal data cannot be considered separately from the technical infrastructure that enables data-driven businesses to collect and aggregate data from various information sources, draw predictive inferences from it and ultimately make judgements about the actual or future behaviour of data subjects.29 Algorithms are the main drivers of information governance in digital markets, where most data processing is mastered by machines and very little is left to human intervention.30 On the basis of the collected data, algorithms create models which subsequently govern businesses' decision-making.31 As data scientists stress, the term algorithm describes a highly sophisticated and diversified technical reality.32 It is, however, possible to identify a few common features which render the phenomenon of algorithmic data processing an extremely challenging regulatory terrain. First, algorithms are affected by intrinsic technical obscurity, owing to the difficulty of identifying or interpreting their functioning criteria.33 Secondly, algorithms are mostly self-learning, in the sense that they are capable of extracting new data from the models they form,34 thus triggering potentially endless cycles of further data processing.35 These structural and functional features of algorithmic data processing techniques have increased their economic value but also their invasiveness into data subjects' personal sphere.36 The rising economic value of computational infrastructures and of machine- (that is, business-) generated information has heightened companies' urge to shield these new intangible assets from competitors' free-riding threats through effective means of legal protection.
On the other hand, the maximization of companies' production and consumption of data has quantitatively and qualitatively weakened the protection of users' personal data and the traditional objective of 'data minimization'.37 As a result, in the face of ubiquitous algorithmic data processing, the adequacy of both traditional IP and data protection laws has been questioned, with significant legislative outcomes. Indeed, as the following sections will show, over the past few years both branches of European law have undergone a substantial revision, directly or indirectly responding to the newly emerged need to render both industrial property rights and the right to data protection more technically sound.

4. The industrial IP framework in the digital-algorithmic economy: from patents to secrets

The evolution of the industrial IP framework triggered by the economic advancement of algorithmic data processing practices can be traced to two main reform directions: (i) the extension of an IP tool like copyright to industrial information assets, for example software; (ii) the introduction of sui generis IP tools protecting industrial information as such, as is the case with the database right. These two reform trajectories have different objectives: the first mainly addresses businesses' need to protect their IT infrastructure, whereas the second responds to the distinct demand to guard the information that is processed and generated through this processing infrastructure. The first reform trajectory is interpretative in nature: it originates from the extensive interpretation of existing IP tools. In this regard, the possibility of protecting algorithms (ie the software in which algorithms are implemented) through patents has been the object of a heated debate, resulting at European level in a proposed directive on the patentability of computer-implemented inventions, which, however, was never adopted.38 Thus, regulatory answers have been sought in the existing framework: the practical difficulty of extending patent protection to software39 has suggested the extension of copyright to software40 and, even before that, to the source code in which the algorithm is expressed. Copyright has also been employed for the protection of aggregated data processed by algorithms, where its selection and arrangement meet the originality threshold.41 At the same time, the growing importance of data in competitive interactions has triggered a new wave of regulatory interventions, which have envisaged the creation of previously unknown tools for the protection of sensitive industrial information. In 1996 the Directive on the legal protection of databases42 established an exclusive sui generis right over databases resulting from a 'substantial investment'.43 Twenty years later, the European IP framework was further reformed through the introduction of the Trade Secrets Directive,44 providing very broad conditions for protection encompassing nearly every business's confidential information45 and, despite formal declarations,46 a proprietary-styled protection over this information.47 More recently, discussions have also turned to the need to introduce a new exclusive right specifically related to digital data.48 The case for a new 'industrial data right' has not yet been made.
It has, however, been opposed in a Statement of the Max Planck Institute for Innovation and Competition, which has stressed the absence of any justification or necessity to create exclusive rights in digital data.49 Both lines of development have brought about significant systemic changes to the traditional industrial IP system, in terms of both the design of the protection sought by businesses and, in turn, the functions carried out overall by the above-outlined protection tools. The rising importance within the IP system of (sui generis) rights over information goods has determined a retrenchment of the patent system's primacy50 and, in turn, a shift of the regulatory framework's barycentre from traditional invention-based protection to information exclusivities. This has been acknowledged at both an empirical and a theoretical level. Some recent quantitative studies have shown that European businesses increasingly prefer trade secrets over patent protection.51 Even before this empirical evidence emerged, IP scholars had been critically assessing the (over)protectionist effects of 'hybrid IP regimes'52 eroding the public domain53 and hindering competition mechanisms.54 As a result, both the extensive use of protection tools alternative to patents and the legislative reforms that have taken place have determined a substantial turnaround in the functions carried out overall by the newly shaped IP system. Indeed, the IP tools employed in the digital-algorithmic market aim primarily at shielding businesses' competitive advantage deriving from investments in the production of information. Accordingly, these tools control and limit access to such information, as happens with the database right and copyright, or secretize it, as is the case with trade secrets. The emerging IP paradigm thus appears markedly different from the original one: it is information-centred and secrecy-oriented, with the primary function of strengthening businesses' control over information goods, weakening third-party access and ultimately secretizing industrial information.

5. The data protection framework in the digital-algorithmic economy: from consent to transparency

In the digital-algorithmic marketplace, data is economically exploited through algorithmic processing techniques that aggregate collected data for the generation of new data.55 The data refined through analytics thus provides the benchmark for businesses' decision-making schemes.56 As widely recognized in the literature,57 most business-generated data can be qualified as personal information. Indeed, the reversibility of anonymization and pseudonymization techniques,58 enabled by growing computational aggregation capacities, makes nearly all data released online 'identifiable'.59 Hence, this business-generated data falls not only under the above-recalled IP regimes but is also regulated under data protection law.
Against the backdrop of the changed technological environment, traditional data protection categories are showing their ineffectiveness: self-generating algorithmic data processing is challenging the distinction between sensitive and non-sensitive data60; in the whirl of the resulting data flows, the identity of data processors and data controllers is easily lost.61 The practical difficulty of spotting the data being processed and the subjects carrying out the processing invalidates consent as a means of defining the purpose and the context of the data processing and, with it, also the principles of purpose limitation and proportionality.62 In reaction to the inability of the previously existing data protection regime to address the concerns raised by machine-driven data processing, the GDPR has tried to provide more effective regulatory responses to 'rapid technological developments and globalisation'.63 The GDPR has empowered data subjects with new prevention and reaction tools, primarily aimed at enabling them to govern the risks stemming from algorithm-driven decision-making processes.64 More specifically, the provisions contained in the Regulation introduce new mechanisms aimed at enhancing the accountability of automated data processing and thus of the subsequent decision-making patterns.65 Accordingly, Articles 13–15 GDPR reaffirm the right of data subjects to have access to the data collected about them, requiring data controllers to notify data subjects of the processing carried out. However, they also introduce the right to receive 'meaningful information', namely the data subject's right to be informed of the 'existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject'.66 Under Article 35 GDPR, controllers are required to carry out a 'Data Protection Impact Assessment', this being an 'assessment of the impact of the envisaged processing operations on the protection of personal data'. Its function is precisely that of an ex ante appraisal of the possible threats to data subjects' rights, through a systematic evaluation of 'the risks of varying likelihood and severity for the rights and freedoms of natural persons'67 carried out under the responsibility of the controller. Moreover, the controller has the duty to implement appropriate technical and organizational measures 'to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation'.68 Similarly, Article 22 GDPR, referring to the specific case of 'automated individual decision-making, including profiling', requires the data controller to 'implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision'. In addition, Article 30 GDPR imposes the creation of an internal 'record of processing activities' that is to 'be made available to the supervisory authority on request', and Article 40 GDPR encourages designated bodies to 'prepare codes of conduct (…) for the purpose of specifying the application of this Regulation, such as with regard to: (a) fair and transparent processing (…)'.
Ultimately, Article 42 GDPR provides for 'the establishment of data protection certification mechanisms and of data protection seals and marks', which shall 'be available via a process that is transparent'. In light of the cited provisions, it appears that the European data protection framework is increasingly relying on the principle of transparency.69 This principle ultimately requires the disclosure of 'intelligible'70 and 'meaningful'71 information regarding the existence, the nature and the consequences of algorithmic data processing. The above-mentioned transparency requirements are currently much debated, especially with regard to their scope72: some scholars stress the technical ineffectiveness of such requirements,73 while others question their novelty in respect of the previous data protection regime.74 It is indeed argued that the Data Protection Directive already contained a specific right of access to processed personal data75 and was thus already permeated by the transparency principle.76 This is certainly true. However, the transparency provisions contained in the GDPR are more numerous and qualitatively more pervasive than in the past. Under the previous data protection regime, the transparency principle, as objectified in data subjects' rights to be notified about the processing and to access their data, was strictly related to control rationales. Indeed, it served the purpose of securing data subjects' fully 'informed' consent. Hence, the transparency principle was intimately connected with the principles of purpose limitation and proportionality. By contrast, the transparency principle as (re)affirmed in the GDPR seems to have a quite different significance. As one strand of the literature observes, transparency in the GDPR system is a means of technical accountability,77 designed not to feed data subjects' consent but rather to increase their awareness of the risks stemming from machine-driven data processing. Unlike the Data Protection Directive, the GDPR starts from the assumption that algorithmic data processing is a risky practice, which poses substantial threats to data subjects' fundamental rights.78 As a result, transparency in the GDPR is strictly connected to the other genuinely new principle of data protection law, data sanitization, that is, the removal of special (ie sensitive) categories of data79 from automatically processed datasets in order to prevent discriminatory outcomes ex ante.80 Against this backdrop, the GDPR appears to have significantly changed the design and, in turn, the functions of data protection law. Under the new data protection regime, the regulatory focus has shifted overall from the moment of collection to the subsequent phase of processing and thus, more precisely, from consent to businesses' conduct. The new provisions of the GDPR have indeed extended data controllers' obligations, leading to what can be defined as a 'proceduralization' of data protection law, increasingly centred on the active agent of mass-scale processing rather than on the subjects who bear the legal and economic consequences of such processing.81 The functional implications of these structural changes can be immediately perceived. The increasing importance of the transparency principle suggests that the control and limited-access objectives are being overridden by an unfamiliar disclosure task.
In the current technological environment, the protection of personal data loses its purely individual scope82 and becomes crucial for the protection of other fundamental rights.83 As a result, data protection law takes on the function of producing publicly accessible information regarding the logic of widespread data processing and its potential effects on these same fundamental rights.84 The emerging data protection paradigm is thus markedly different from the original one: it is risk-oriented and business-centred, with the primary function of expanding, rather than controlling, access to information.

6. Conclusion: a role reversal and new conflicts in the digital-algorithmic economy

As the analysis above has demonstrated, in the digital-algorithmic economy the currently dominant IP paradigm appears to have distanced itself from the traditional patent-centric approach. As a result, the public interest-oriented transparency function previously lying at the heart of the IP system seems to have been progressively marginalized and overtaken by a new control and secretization task regarding competitively sensitive information. By contrast, under the GDPR, data protection law appears to have curtailed the traditional objective of controlling and limiting access to released personal data and to have significantly expanded businesses' transparency burdens with regard to the generation of information about their data processing and thus decision-making activities. These transparency requirements lay the foundations for new accountability and self-correction measures that are currently under the scrutiny of data protection specialists.85 Hence, it seems that in the digital-algorithmic economy IP and data protection law have undergone a role reversal. The practical implications of such a functional shift remain open to enquiry: more specifically, it is unclear whether the new transparency provisions of the GDPR will be capable of mitigating the secretization of algorithms enabled by the current IP system, or whether that trend will instead overwhelm the GDPR's transparency requirements,86 encouraging a restrictive interpretation of them and thus exposing their unfeasibility not only at a technical but also at a legal level.

Footnotes

1 For a general assessment see D. Liebenau, 'What IP Can Learn from Informational Privacy, and Vice Versa' (2016) 30(1) Harvard Journal of Law & Technology 285, 286–7. 2 J. Zittrain, 'What the Publisher Can Teach the Patient: IP and Privacy in an Era of Trusted Privication' (2000) 52 Stanford Law Review 1201, 1205. 3 On the support that IP can give to privacy enforcement, J.E. Cohen, 'Copyright and the Jurisprudence of Self-Help' (1998) 13 Berkeley Technology Law Journal 1089, 1093–4. See also the debate regarding the establishment of property rights over personal data: P.M. Schwartz, 'Property, Privacy, and Personal Data' (2004) 117 Harvard Law Review 2055, 2066–9. 4 In the course of IP enforcement, users' privacy might be compromised. See J.E. Cohen, 'A Right to Read Anonymously: A Closer Look at "Copyright Management" in Cyberspace' (1996) 28 Connecticut Law Review 981, 992–3. 5 V. Mayer-Schönberger, 'Beyond Privacy, Beyond Rights: Toward a "Systems" Theory of Information Governance' (2010) 98 California Law Review 1853, 1857. 6 It must be clarified that this article will use the terms 'information', 'knowledge' and 'data' without differentiation.
Indeed, this article focuses on the functional outcomes of European IP and data protection regulations as applied to industrial intangible assets. This being the object of analysis, an acknowledgment of the substantive differences between 'data' and 'information' would fall outside the limited target of this contribution. Nonetheless, it is important to recall the basic assumption according to which 'data', or better, 'data points', are the raw material upon which analytical processes are carried out and subsequently employed in business practice as 'information' or 'knowledge'. Thus, the notion of data refers to a static dimension, whereas the notions of 'information' and 'knowledge' refer to a dynamic phase in which aggregated data is contextualized and actively employed in business practice. For deeper insight into the complexity of information theory see the famous conceptualization of R.L. Ackoff, 'From Data to Wisdom' (1989) 15 Journal of Applied Systems Analysis 3–9. More recently, M. Boisot and A. Canals, 'Data, Information and Knowledge: Have We Got it Right?' (2004) UOC Internet Interdisciplinary Institute <http://www.uoc.edu/in3/dt/20388/index.html>. 7 For a general assessment of the issue see J. Cohen, 'The Regulatory State in the Information Age' (2016) 17(2) Theoretical Inquiries in Law 2, 3. 8 R.P. Merges, Justifying IP (Harvard University Press, Cambridge, MA, 2011), pp. 52–6. 9 F. Machlup and E. Penrose, 'The Patent Controversy in the Nineteenth Century' (1950) 10(1) The Journal of Economic History 1, 5. More recently recalling this point, M. Lemley, 'Property, IP and Free Riding' (2005) 83 Texas Law Review 1031. 10 Cf Articles 83 and 100(b) European Patent Convention. 11 The original public law dimension of the patent system is well stressed by O. Liivak, 'Private Law and the Future of Patents' (2017) 30 Harvard Journal of Law and Technology 33, 41–2. On the issue also M. Lemley, 'Taking the Regulatory Nature of IP Rights Seriously' (2014) 92 Texas Law Review 107, 110, defining patents as 'government interventions in the marketplace designed to achieve social policy ends'. 12 WIPO, 'WIPO Technical Study on Patent Disclosure Requirements Related to Genetic Resources and Traditional Knowledge', Study No 3-2004 <www.wipo.int/export/sites/www/freepublications/en/tk/786/wipo_pub_786.pdf>. 13 J. Barton et al, 'Integrating IP Rights and Development – Report of the Commission on IP Rights', September 2002, London <http://www.iprcommission.org/papers/pdfs/final_report/ciprfullfinal.pdf>. For the relevant literature see R. Feldman, 'Transparency' (2014) 19 Virginia Journal of Law and Technology 27, 30–5. 14 C. Correa, Trade Related Aspects of IP Rights: A Commentary on the TRIPS Agreement (Oxford, Oxford University Press, 2007) 94. 15 Stressing the importance of the disclosure requirement as substantiated in the 'written description' requirement, M. Risch, 'A Brief Defense of the Written Description Requirement' (2010) 119 Yale Law Journal Online 127. 16 Especially with regard to the disclosure of traditional knowledge, see J. Gibson, IP, International Trade and Protection of Traditional Knowledge (London, Earthscan, 2005), pp. 23–6. WIPO, 'Declaration of the Source of Genetic Resources and Traditional Knowledge in Patent Applications: Proposals by Switzerland – Document submitted by Switzerland', 6 June 2007, 4. 17 D. Liebenau (n. 1) 290. 18 M.A. Lemley, 'Private Property' (2000) 52 Stanford Law Review 1545, 1549–52. 19 J. Zittrain (n.
2) 1214–15. A.F. Westin, 'Privacy and Freedom' (1969) 22 Administrative Law Review 101, 103, pioneering the privacy-as-control theory. 20 R. Gavison, 'Privacy and the Limits of Law' (1980) 89 Yale Law Journal 421, 423, conceptualizing privacy as limited access; H. Nissenbaum, 'Privacy as Contextual Integrity' (2004) 79 Washington Law Review 119, 125. 21 R.A. Posner, 'Privacy, Secrecy, and Reputation' (1979) 28 Buffalo Law Review 1, 10–12. 22 R. Calo, 'Privacy and Markets: A Love Story' (2015) 91 Notre Dame Law Review 649, 651. On the issue also T. Zarsky, 'The Privacy-Innovation Conundrum' (2015) 19 Lewis & Clark Law Review 116, 121–8. 23 D. Liebenau (n. 1) 286. 24 The Paris Convention for the Protection of Industrial Property of 20 March 1883 refers to industrial property in the 'widest sense'. See WIPO, 'Paris Convention for the Protection of Industrial Property' <http://www.wipo.int/treaties/en/ip/paris/>. 25 European Patent Convention of 5 October 1973. EPO, 'The European Patent Convention' <https://www.epo.org/law-practice/legal-texts/html/epc/2016/e/ma1.html>. 26 Directive 95/46/EC of the European Parliament and the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281, 23 November 1995, 31–50. 27 On the issue see Autorité de la Concurrence and Bundeskartellamt, 'Competition Law and Data', Joint Position Paper, released on 10 May 2016 <http://www.autoritedelaconcurrence.fr/doc/reportcompetitionlawanddatafinal.pdf> 11–22; J. Drexl, 'Designing Competitive Markets for Industrial Data – between Propertisation and Access' (2016), Max Planck Institute for Innovation and Competition Research Paper No. 16-13 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2862975>, 8–11. 28 The quote is reported by S. Cleland, 'Google's "Infringenovation" Secrets', 3 October 2011, <https://www.forbes.com/sites/scottcleland/2011/10/03/googles-infringenovation-secrets/#309d8ef330a6>. More recently, Andrew Ng affirmed that 'data is the defensible barrier, not algorithms'. See S. Lohr, 'Data Could Be the Next Hot Button for Regulators', New York Times, 8 January 2017, <https://www.nytimes.com/2017/01/08/technology/data-regulators-google-facebook-monopoly.html>. 29 Stressing this point, G. Comandè, 'Regulating Algorithms' Regulation? First Ethico-Legal Principles, Problems and Opportunities of Algorithms', in T. Cerquitelli, D. Quercia and F. Pasquale (eds), Transparent Data Mining for Small and Big Data (Cham, Springer, 2017), pp. 169, 175–7. Indeed, without the algorithmic infrastructure, and more specifically the network effects and economies of scale it enables, data could not be massively generated and thus used. The issue of the competitive significance of algorithms falls outside the scope of this contribution. Suffice it to say here that it is a relatively young and much-debated field of research. For a comment see G. Surblyte, 'Data-Driven Economy and Artificial Intelligence: Emerging Competition Law Issues?' (2017) Wirtschaft und Wettbewerb 120. 30 L. Moerel and C. Prins, Privacy for the Homo Digitalis: Proposal for a New Regulatory Framework for Data Protection in the Light of Big Data and the Internet of Things (Kluwer Juridisch, 2016), pp. 29–30. 31 T.Z. Zarsky, '"Mine Your Own Business!": Making the Case for the Implications of the Data Mining of Personal Information in the Forum of Public Opinion' (2003) 5 Yale Journal of Law and Technology 2, 22. 32 M.T. Bodie, M.A. Cherry, M.
McCormick and J. Tang, 'The Law and Policy of People Analytics', Saint Louis U. Legal Studies Research Paper No. 2016-6 <https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2769980>, 5–10. 33 J. Cohen, Configuring the Networked Self: Law, Code and the Play of Everyday Practice (Yale University Press, New Haven, CT, 2012), pp. 122–4. 34 A. Gal, 'It's a Feature, Not a Bug: On Learning Algorithms and What They Teach Us', OECD Roundtable on 'Algorithms and Collusion', 23 June 2017, <https://one.oecd.org/document/DAF/COMP/WD(2017)50/en/pdf>. 35 T.Z. Zarsky (n. 31) 26–30. 36 G. Comandè (n. 29) passim. 37 O. Tene and J. Polonetsky, 'Big Data for All: Privacy and User Control in the Age of Analytics' (2013) 11 Northwestern Journal of Technology and IP 239, 241. 38 Cf. Proposal for a directive on the patentability of computer-implemented inventions, released on 20 February 2002, COM/2002/0092 final – COD 2002/0047. As has been underlined, mere algorithms cannot be protected by means of patents, given that algorithms are the underlying tasks of a computer program, which are not patentable because they are non-technical in nature: a program's functionality cannot be patented because patent protection cannot cover general ideas and business models, which according to the 'fundamental conception of IP rights should remain free'. See J. Drexl et al, 'Data Ownership and Access to Data, Position Statement of the Max Planck Institute for Innovation and Competition of the 16th August 2016 on the Current European Debate', Max Planck Institute for Innovation and Competition Research Paper No. 16-10 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2833165>, 5–6, stressing that patent 'protection (of algorithms) would pose a risk of two negative effects: first, protection of abstract subject-matter would cause needless – and, in the case of algorithms, unreasonable – restraints on competition that, according to current knowledge, would not be economically justified. In particular the resulting monopolisation of ideas would hinder technical progress and industrial development (judgment in SAS Institute Inc., C-406/10, EU:C:2012:259, para 40). Second, it is barely foreseeable what markets and sectors would be affected. This makes finding suitable approaches to a regulation seem unrealistic'. 39 Software can only be patented if it satisfies the requirements of novelty, inventive step and technical effectiveness. These requirements are very difficult to meet. Cf. M.A. Lemley, 'Software Patents and the Return of Functional Claiming' (2013) Wisconsin Law Review 905, 928. 40 P. Torremans, IP Law (Oxford University Press, Oxford, 2016), p. 124, arguing that in the current technological environment the distinction between artistic and industrial IP has fallen. 41 It should, however, be recalled that it is very difficult for a database to meet the originality threshold required under European law. On the issue see D.J. Gervais, 'The Internationalisation of IP: New Challenges from the Very Old and the Very New' (2002) 12 Fordham IP, Media & Entertainment Law Journal 929, 935. 42 Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases. For the literature see T. Aplin, 'The EU Database Directive: Taking Stock' in F. Macmillan, New Directions in Copyright Law (Edward Elgar, 2006) 99–126. 43 Cf. Article 7(4) Database Directive. Recently, see I.
Gupta, Footprints of Feist in European Database Directive: a Legal Analysis of IP Making in Europe (Springer, Singapore, 2017), pp. 11–37. 44 Directive EU 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure. For the literature see J. Lapousterle, C. Geiger, N. Olszak, L. Desaunettes, ‘What protection for trade secrets in the European Union? CEIPI’s Observations on the proposal for a Directive on the Protection of Undisclosed Know-How and Business Information’ 2016 (38) European Intellectual Property Review 255. 45 D. Sousa-Silva, ‘What Exactly is a Trade Secret under the Proposed Directive?’ (2014) 9 Journal of IP Law & Practice 11, 15. 46 Recital 10 of the same proposed Directive affirming that its ‘provisions (…) should not create any exclusive right on the know-how or information protected as trade secrets’. 47 T. Aplin, ‘Right to Property and Trade Secrets’, in C. Geiger, Research Handbook on Human Rights and IP (Edward Elgar, Cheltenham, 2015), pp. 421, 426. 48 A. Wiebe, ‘Protection of Industrial Data: a New Property Right for the Digital Economy?’ (2017) 12(1) Journal of IP Law and Practice 62, 67–70. 49 Drexl et al (n.38) 2. 50 Acknowledging and assessing the current crisis of the patent system, see M. Lemley (n.11) 113, who observes that businesses increasingly view patents as tax and a cost for innovation not as a benefit for innovation. See also O. Liivak, ‘Establishing an Island of Patent Sanity’ (2013) 78(4) Brooklyn Law Review 1335, 1336–44, and more recently Id, ‘Private Law and the Future of Patents’ (2017) 30 Harvard Journal of Law and Technology 33, 41–5, where the author evaluates possible ways out from the crisis through reference to private law schemes. 51 F. Goy and C. Wang, ‘Does Knowledge Tradability Make Secrecy more Attractive than Patents? An Analysis of IPR Strategies and Licensing’ (2013) Murdoch University Research Paper <https://www.murdoch.edu.au/School-of-Business-and-Governance/_document/Australian-Conference-of-Economists/Does-knowledge-tradeability-make-secrecy-more-attractive-than-patent.pdf>, 1–7. European Commission, ‘Study on Trade Secrets and Confidential Business Information in the Internal Market’, published in April 2013 <http://ec.europa.eu/internal_market/iprenforcement/docs/trade-secrets/130711_final-study_en.pdf>. Finally, EUIPO, ‘Protecting Innovation through Trade Secrets and Patents: Determinants for European Union Firms’, July 2017 <https://euipo.europa.eu/tunnel-web/secure/webdav/guest/document_library/observatory/documents/reports/Trade%20Secrets%20Report_en.pdf> 3: ‘The use of trade secrets for protecting innovations is higher than the use of patents by most types of companies, in most economic sectors and in all Member States’. 52 This expression is by K.E. Maskus and J. Reichman, ‘The Globalisation of Private Knowledge Goods and the Privatization of Global Public Goods’ (2004) 7 Journal of International Economic Law 279, 297. 53 H. Ullrich, ‘Expansionist IP Protection and Reductionist Competition Rules: a TRIPS Perspective’ (2004) 7 Journal of International Economic Law 402, 410–15. 54 P.A. David, ‘The Digital Technology Boomerang: New IP Rights Threaten Global “Open Science”’, Stanford Institute for Economic Research, Discussion Paper, October 2000 <http://www-siepr.stanford.edu/workp/swp00016.pdf>, 7. See also G.B. 
Ramello, 'IP, Social Justice and Economic Efficiency: Insights from Law and Economics', in A. Flanagan and M.L. Montagnani, IP Law: Economic and Social Justice Perspectives (Edward Elgar, Cheltenham, 2010), pp. 1, 5. 55 Broadly on the issue, European Data Protection Supervisor, 'Privacy and Competitiveness in the Age of Big Data, the Interplay Between Data Protection, Competition Law and Consumer Protection in the Digital Economy', Press release 26 March 2014 <https://edps.europa.eu/sites/edp/files/publication/14-03-26_competitition_law_big_data_en.pdf>. 56 This has been widely commented upon in the literature, especially by Zarsky (n.31) 18–55. 57 W.G. Voss, 'European Data Privacy Law Developments' (2014/2015) 70(1) Business Lawyer 253, 255. 58 Tene and Polonetsky (n.37) 246–8. Cf. also Article 29 Working Party, 'Opinion 4/2014 on Surveillance of Electronic Communications for Intelligence and National Security Purposes', 10 April 2014, available at <http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp215_en.pdf>, 8. 59 See the definition of 'personal data' given by Article 4 Regulation EU 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation or GDPR), OJ L 4 May 2016, 119/1, where it is stated that 'personal data means any information relating to an identified or identifiable natural person or "data subject"'. In Recital 30 it is underlined how online identifiers, tools and protocols, including internet protocol (IP) addresses, cookie identifiers and other identifiers such as radio frequency identification (RFID) tags, can identify individuals, in particular if combined with unique identifiers. It must be recalled that the broad definition of 'personal data' given by European data protection law, including in the category not only identified but also identifiable data, markedly distinguishes the European notion of personal data from the American one. For a deeper assessment of the issue, P.M. Schwartz and D. Solove, 'Reconciling Personal Information in the United States and European Union' (2014) 102(4) California Law Review 878, 881–5 and 891–3. On the issue of identifiability, see P. Schwartz and D. Solove, 'The PII Problem: Privacy and a New Concept of Personally Identifiable Information' (2011) 86 New York University Law Review 1814; O. Tene, 'The Complexities of Defining Personal Data: Anonymisation' (2011) Data Protection Law Policy 6, 8. 60 Indeed, analytics can also extract sensitive data from the combination of a set of data. Stressing the point, A. Spina, 'Risk Regulation of Big Data: Has the Time Arrived for a Paradigm Shift in EU Data Protection Law?, Case Notes to Case C-293/12 and C-594/12 Digital Rights Ireland and Seitlinger and others' (2014) 5(2) European Journal of Risk Regulation 248, 252, noticing that 'big data blurs the distinction between sensitive and non-sensitive data set, provided that an algorithm is able to establish a refined inference between patterns' and suggesting building the 'boundaries between sensitive or non-sensitive personal data (…) upon the potential use of those who analyse the data, rather than on the nature of information'. 61 On the issue see P. De Hert and V.
Papakonstantinou, 'The New General Data Protection Regulation: Still a Sound System for the Protection of Individuals?' (2016) 32 Computer Law & Security Review 179, 183–5, noticing that the assumption that 'a controller is always identifiable and unaccountable and that it is up to him to decide whether to attribute the data processing to a data processor or to other parties, does not stand anymore in contemporary processing environments'. Hence, the liability regime according to which the data controller bears the full liability burden whereas the data processor carries less or no liability is also questionable. Similarly, C.J. Bennett and R.M. Bayley, 'Privacy Protection in the Era of Big Data: Regulation Challenges and Social Assessments', in B. Van Der Sloot, D. Broeders and E. Schrijvers (eds), Exploring the Boundaries of Big Data (Amsterdam University Press, Amsterdam, 2016), p. 207. 62 IAPP Privacy Perspectives, 'On the Death of Purpose Limitation' <https://iapp.org/news/a/on-the-death-of-purpose-limitation/>; see also L. Moerel and C. Prins, 'Further Processing of Data Based on the Legitimate Interest: the End of Purpose Limitation?', March 2016 <http://textlab.io/doc/1541457/further-processing-of-data-based-on-the-legitimate-intere>. 63 European Commission, 'How Will the EU's Reform Adapt Data Protection Rules to New Technological Developments?', January 2016, <http://ec.europa.eu/justice/data-protection/document/factsheets_2016/factsheet_dp_reform_technological_developments_2016_en.pdf>. 64 R. Gellert, 'Understanding Data Protection as Risk Regulation' (2015) Journal of Internet Law 6, 10. 65 C. Kuner, 'The European Commission's Proposed Data Protection Regulation: A Copernican Revolution in European Data Protection Law', issued 6 February 2012 in Bloomberg BNA Privacy and Security Law Report 2012 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2162781>, 1. 66 Article 13(2)(f) General Data Protection Regulation. 67 Article 24 General Data Protection Regulation. 68 Ibid, emphasis added. 69 Cf. various recitals, especially nos 39, 58, 60, 71 and 78. For the literature see B. Goodman and S. Flaxman, 'European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"', presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York <https://arxiv.org/abs/1606.08813>, 3. 70 Cf. Recital 60 and Article 12 General Data Protection Regulation. 71 Article 13(2)(f) General Data Protection Regulation. 72 P. Paal, 'DS-GVO Article 13 Informationspflicht bei Erhebung von Personenbezogenen Daten bei der Betroffenen Person', in P. Paal and D. Pauly (eds), Datenschutz-Grundverordnung (Beck-Online, 2017), passim. 73 J. Burrell, 'How the Machine "Thinks": Understanding Opacity in Machine Learning Algorithms' (2016) 3(1) Big Data & Society <http://bds.sagepub.com/content/3/1/2053951715622512/>. 74 D. Korff, 'New Challenges to Data Protection Study – Working Paper No. 2: Data Protection Laws in the EU: The Difficulties in Meeting the Challenges Posed by Global Social and Technical Developments – European Commission DG Justice, Freedom and Security', 12 July 2010 <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1638949>, 5. 75 Cf. Articles 10–12 EC Data Protection Directive. On the issue see S. Wachter, B. Mittelstadt and L. Floridi, 'Why a Right to Explanation Does not Exist under the General Data Protection Regulation', 24 January 2017 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2903469>, 19, 21.
76 Wachter, Mittelstadt and Floridi (n. 75), 22. 77 B. Goodman, 'A Step Towards Accountable Algorithms?: Algorithmic Discrimination and the European General Data Protection', 29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, <http://www.mlandthelaw.org/papers/goodman1.pdf>, 7–9. 78 Spina (n.60), 248, 251. 79 Goodman (n. 77), 3–4. 80 Gellert (n. 64), 8–9. 81 B. Van Der Sloot, 'The Individual in the Big Data Era: Moving Towards an Agent-Based Privacy Paradigm', in Van Der Sloot, Broeders and Schrijvers (eds) (n.61), p. 177. 82 Spina (n.60), 256, stating how 'the market failures associated with forms of Big Data encourage us to look at privacy risks and at privacy as not only an individual right but a collective interest'. 83 This had already been foreshadowed by G. Comandè, 'Tortious Privacy 3.0: a Quest for Research', in J. Potgieter, J. Knobel and R.M. Jansen, Essays in Honour of / Huldigingsbundel vir Johann Neethling (LexisNexis, London, 2015), pp. 121, 124. 84 For a distinction between an 'empty' right to data protection and a 'full' right to privacy, see R. Gellert and S. Gutwirth, 'The Legal Construction of Privacy and Data Protection' (2013) 29 Computer Law & Security Review 522, 524. 85 See in particular Goodman (n. 77), 4–6, theorizing the implementation of 'algorithms audits' that are 'third party inspections of algorithmic decision-making modelled on audit studies from social sciences'. The link between algorithmic transparency and accountability is currently the object of a heated debate in the United States as well: D.R. Desai and J.A. Kroll, 'Trust but Verify: A Guide to Algorithms and the Law' (2017) Harvard Journal of Law & Technology, forthcoming; M. Perel and N. Elkin-Koren, 'Black-Box Tinkering: Beyond Transparency in Algorithmic Enforcement' (2017) 69 Florida Law Review 181; J.A. Kroll et al, 'Accountable Algorithms' (2017) 165 University of Pennsylvania Law Review 633. 86 This is what Recital 63 General Data Protection Regulation seems to suggest, by affirming that the right to access personal data 'should not adversely affect the rights or freedoms of others, including trade secrets or IP and in particular the copyright protecting the software'.

© The Author(s) 2017. Published by Oxford University Press. All rights reserved.

Journal of Intellectual Property Law & Practice (Oxford University Press). Published: 1 March 2018.
