A Introduction
Commercial use of personal and other data facilitates digital trade and generates economic growth at unprecedented levels. A dramatic shift in the composition of the top twenty companies by market capitalisation speaks vividly to this point. While, in 2009, 35 per cent of those companies were from the oil and gas sector, in 2018 – just nine years later – 56 per cent of those companies were from the technology and consumer services sectors.Footnote 1 Meanwhile, the share of oil and gas companies, a pillar among traditional industries, declined to just 7 per cent. The share of digitally deliverable services in global services exports more than doubled in the last thirteen years: it increased from USD 1.2 trillion in 2005 to USD 2.9 trillion in 2018.Footnote 2
Data also constitutes a crucial resource for the development, continuous refinement and application of artificial intelligence (AI). The availability of data and its free flow across borders are often viewed as pre-requisites for the development and flourishing of AI technology.Footnote 3 However, in the context of AI, it is not the data itself, but the knowledge and insights obtained with the help of AI algorithms from that data (in other words, the ‘fruits’ of the data) that constitute the main added value. Learning, or ‘digital intelligence’, in the words of UNCTAD, is crucial for the market of big data. One of the upshots of this is that without the necessary infrastructure and technologies, data concerning individual persons or even aggregated data cannot by itself generate value. It is the ‘learning’, and not raw data itself, that constitutes a valuable economic resource and can be used in targeted online advertising, the operation of electronic commerce platforms, the digitisation of traditional goods into rentable services and the renting out of cloud services.Footnote 4 For example, personalisation, which is an important component in the production, marketing and distribution of online services, uses AI systems to transform individuals’ online behaviour, preferences, likes, moods and opinions (all of which constitute personal data, at least in the European Union) into commercially valuable insights.Footnote 5 Focusing solely on data in the context of regulatory conversations on AI – both in domestic and international trade contexts – may be misguided.
AI development is at the top of domestic and international policy agendas in many countries around the world. In just the last couple of years, more than thirty countries and several international and regional stakeholders, including the European Union (EU), the G20 and the Nordic-Baltic Region, have adopted AI policy documentsFootnote 6 revealing their ambitions to compete for dominance in AI. Digital trade provisions, including rules governing cross-border data flows, access to proprietary algorithms, technology transfers and access to open government data, have taken centre stage in bilateral, regional and international trade negotiations.Footnote 7
Different levels of advancement in digital technologies in general, and in AI specifically, as well as the concentration of data in the hands of a few countries, make international negotiations on digital trade challenging. To illustrate the point, according to the 2019 UNCTAD Digital Economy Report, China and the United States account for 90 per cent of the market capitalisation value of the world’s seventy largest digital platform companies and ‘are set to reap the largest economic gains from AI’.Footnote 8 In contrast, the EU accounts for only 3.6 per cent of this market capitalisation.Footnote 9 The report further demonstrates that China, the United States and Japan together account for 78 per cent of all AI patent filings in the world.Footnote 10 Data – one of the key components of data analytics – is highly concentrated in Asia Pacific and the United States: 70 per cent of all traffic between 2017 and 2022 is expected to be attributed to these two regions.Footnote 11 The United States is the market leader in global e-commerce, representing 87 per cent of B2B e-commerce, while China leads in B2C e-commerce, followed by the United States.Footnote 12 As a result, the economic value derived from data is captured by the countries in which the companies that control the storage and processing of data reside.Footnote 13
The high concentration of control over AI technologies, digital platforms and data in specific parts of the world raises concerns about ‘digital sovereignty’, relating to control over, access to and rights in data, as well as the appropriation of the value generated by the monetisation of that data.Footnote 14 This issue is not limited to the dynamics of negotiations between developed and developing countries. For example, the new European Commission’s Digital Strategy is strongly anchored in the principles of digital sovereignty and of shaping technology in a way that respects European values.Footnote 15 The public policy interests implicated by international data governance and data flows, indispensable for the global governance of AI, stretch far beyond issues of economic growth and development. They also involve a broader set of national and regional priorities, such as national security, fundamental rights protection (such as the rights to privacy and to the protection of personal data) and cultural values, to name just a few. Differences in the relative weight accorded to each such priority, when contrasted with the economic and political gains from cross-border data flows, have resulted in a diversity of domestic rules governing cross-border flows of information, especially where personal data is concerned, and a diversity of approaches to governing the use of AI in both private and public law contexts.
Against this backdrop, this chapter’s aim is twofold. First, it provides an overview of the state of the art in international trade agreements and negotiations on issues related to AI, in particular, the governance of cross-border data flows. In doing so it juxtaposes the EU and the US approaches and demonstrates that the key public policy interests behind the dynamics of digital trade negotiations on the EU’s side are privacy and data protection. Second, building on the divergent EU and US approaches to governing cross-border data flows, and the EU policy priorities in this respect in international trade negotiations, this chapter argues that the set of EU public policy objectives weighted against the benefits of digital trade in international trade negotiations, especially with a view to AI, should be broader than just privacy and data protection. It also argues that an individual rights approach has limitations in governing data flows in the context of AI and should be expanded to factor in a clearer understanding of who wins and who loses from unrestricted cross-border data flows in an age of data-driven services and services production.
The chapter proceeds as follows. The next section maps out the recent developments on digital trade on the international trade law landscape. The third section discusses, from an EU perspective, the limits of data protection in regulating AI domestically and as a catch-all public policy interest counterbalancing international trade commitments on cross-border data flows. The fourth section contains a brief conclusion.
B Cross-Border Digital Trade and Artificial Intelligence
The immense potential of data to generate economic value has given rise to a so-called ‘digital trade discourse’, which, on the one hand, views the freedom of cross-border data flows as one of the pre-requisites of international digital trade and AI-driven innovation and, on the other hand, predicts that restrictions on data flows will hamper economic growth and undermine innovation.Footnote 16 This discourse is advanced not only by the United States, which has a strong competitive advantage in digital technologies, and the big tech companies, which invest millions of dollars in lobbying activities on digital trade, but also by the EU.Footnote 17
Policy debates in international trade negotiations on digital trade, relevant in the AI context, revolve around the liberalisation of cross-border data flows in order to enable accumulation of large data sets to train AI systems and restrictions on those data flows in the public interest. The following subsections provide an overview of recent developments in this area.
Countries have not yet achieved a multilateral consensus on the design and scope of digital trade provisions, which have so far only appeared in bilateral and regional trade agreements and have somewhat overshadowed the multilateral efforts of the WTO in this area.Footnote 18 Although proposals on electronic commerce in the WTO increasingly focus on barriers to digital trade and ‘digital protectionism’,Footnote 19 the WTO has not yet made any tangible progress on this issue.Footnote 20 The discussions continue, however. In early 2019, seventy-six WTO members, including Canada, China, the EU, and the United States, started a new round of negotiations on electronic commerce at the WTO in order to create rules governing e-commerce and cross-border data flows.Footnote 21 It remains to be seen how these negotiations will play out. Despite a seemingly firm consensus on the use of the terms ‘digital trade’ and ‘digital protectionism’ – the axes around which the discourses governing international negotiations revolve – the value structures underlying these discourses diverge,Footnote 22 as the US and the EU examples below will illustrate. The next section on international trade law governance of cross-border data flows then explicates how trade provisions on cross-border data flows, advanced by the US and the EU, mirror this divergence.
In the spirit of its ‘digital agenda’, the United States has been a pioneer in including provisions on free cross-border data flows in international trade agreements.Footnote 23 The United States has successfully advanced broad and binding horizontal obligations enabling unrestricted data flows in the digital trade (or electronic commerce) chapters of its recent trade agreements. The Comprehensive and Progressive Agreement on Trans-Pacific Partnership (CPTPP) (where the US led digital trade discussions before its withdrawal from the TPP agreementFootnote 24), the United States–Mexico–Canada Agreement (USMCA) and the Digital Trade Agreement with Japan are examples of trade agreements that contain a binding provision requiring each party to allow (or not to restrict) the cross-border transfer of information by electronic means, including personal information, when this activity is for the conduct of the business of a covered person.Footnote 25 The US proposal for the ongoing e-commerce talks at the WTO replicates these ‘gold standard’ provisions on digital trade.Footnote 26 All of the earlier mentioned free trade agreements (FTAs) also contain an exception which allows the parties to adopt or maintain measures inconsistent with this obligation to achieve a legitimate public policy objective, provided that the measure (i) is not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination or a disguised restriction on trade; and (ii) does not impose restrictions on transfers of information greater than are required (‘necessary’ in the USMCA and US–Japan Digital Trade Agreement) to achieve the objective.Footnote 27
The exception closely resembles the general exception under Article XIV(c)(ii) of the General Agreement on Trade in Services (GATS),Footnote 28 a threshold which has been particularly hard to meet in the past.Footnote 29 Similar to the general exception clause, the FTA text requires that a measure prima facie inconsistent with the data flow obligation be subject to a two-level assessment. First, it should pass the so-called ‘necessity test’, under which the necessity of the contested measure is assessed by trade adjudicators against an objective standard of ‘necessity’. Second, its application should not amount to arbitrary or unjustifiable discrimination or a disguised restriction on trade (pursuant to the chapeau of the general exception provision). Under WTO case law, the ‘necessity test’ requires that a WTO law–inconsistent measure be the least trade restrictive of all reasonably available alternatives, raised by the claimant in a dispute, that achieve the same level of protection of the public interest at stake.Footnote 30 In short, just like the GATS general exception, the FTA exception sets a high threshold for justifying a domestic measure inconsistent with relevant trade disciplines. An important difference between the earlier quoted FTA exception and the GATS general exception, however, is that the former does not specify the public policy objectives that may be invoked to justify a restriction on free cross-border data flows. In this sense, the exception is more ‘future-proof’, as it can rest on any public policy interest that may be implicated by the cross-border data flow obligation in the future, such as cybersecurity or even technological sovereignty (neither of which is mentioned in the Article XIV GATS exception), provided of course that the measure passes the two-level assessment of the exception.
In addition, the digital trade (electronic commerce) chapters of the earlier mentioned agreements contain an article on the protection of personal information (the term used to refer to personal data in the United States), which contains a mixture of binding and aspirational provisions on the protection of privacy by the parties to the agreements.Footnote 31
The EU largely shares the ‘digital trade’ discourse on the benefits of cross-border data flows for global economic growth with the United States and, in principle, supports the idea of regulating cross-border data flows in international trade agreements.Footnote 32 Largely but not completely, because there is one important point on which the EU approach diverges very significantly from that of the United States: namely, with regard to the protection of the rights to privacy and personal data. It is for this reason that the EU has until recently been cautious in including provisions on cross-border data flows in its trade agreements.Footnote 33 Understanding the EU’s domestic framework on the protection of personal data and, in particular, its approach to transfers of personal data outside the European Economic Area (EEA), is essential for explaining its trade policy in the domain of cross-border data flows. Therefore, before delving into the EU’s proposed provisions on the latter topic, let us first briefly discuss the EU’s domestic regime for transfers of personal data outside the EEA.
The rights to privacy and the protection of personal data are protected as binding fundamental rights in the EU.Footnote 34 From an EU data protection law perspective, personal data is distinct from other types of information because of its inextricable link to the data source: individuals. One of the pillars of this protection, as the CJEU has ruled,Footnote 35 is the restriction on transfers of personal data outside the EEA, which ensures that the level of protection guaranteed in the EU by the General Data Protection Regulation (GDPR)Footnote 36 is not undermined or circumvented as personal data crosses EEA borders.Footnote 37 As a consequence of the broad definition of ‘personal data’, EU restrictions on transfers of personal data apply to a broad range of data that can be essential for the development, fine-tuning and application of AI systems. Furthermore, the restrictions also apply to mixed data sets, in which personal and non-personal data are ‘inextricably linked’ – which, as mentioned earlier, fall under the scope of the GDPR.Footnote 38 The restrictions do not apply to non-personal data, including non-personal data in mixed data sets, provided that it can be separated from the personal data. At the same time, the distinction between personal and non-personal data is not set in stone. If, due to technological developments, anonymised data can be reidentified, it becomes ‘personal’ and the GDPR restrictions again apply.Footnote 39 Some scholars argue that these restrictions limit the cross-border aggregation of data and thus stifle the development of AI.Footnote 40
The GDPR’s restrictions on transfers of personal data apply when personal data is transferred to, or accessed from, outside the EEA, including when this is done to train AI systems, and during the fine-tuning or cross-border application to individuals located in the EEA of already existing AI systems located outside the EEA.Footnote 41 This is because feeding an EEA individual’s data to the non-EEA AI system will most likely constitute a transfer of personal data.
Turning to the intersection of the GDPR with international trade law, EU FTAs have, until recently, not included a binding provision on cross-border data flows. The 2019 Economic Partnership Agreement with Japan (Japan–EU EPA), where such a provision was initially proposed by Japan, merely includes a review clause allowing the parties to revisit the issue three years after the agreement’s entry into force.Footnote 42 The EU and Japan have agreed instead to use a mutual adequacy decision, following the route for cross-border transfers of personal data laid down in the GDPR.Footnote 43 This was due to the inability of EU institutions to reach a common position on the breadth of the data flows provision and the exceptions from it for the protection of privacy and personal data, following strong push back from academics and civil society against the attempt to include such provisions in the – currently stalled – plurilateral Trade in Services Agreement (TiSA) and the Transatlantic Trade and Investment Partnership (TTIP) between the EU and the US.Footnote 44
In 2018, the European Commission reached a political agreement on the EU position on cross-border data flows. This position was expressed in the model clauses, which, in particular, include a model provision on cross-border data flows (Article A) and an exception for the protection of privacy and personal data (Article B).Footnote 45 The EU has included these model clauses in its proposals for digital trade chapters in the currently negotiated trade agreements with Australia, Indonesia, New Zealand and Tunisia,Footnote 46 as well as in the EU proposal for the WTO rules on electronic commerce,Footnote 47 and they are intended to co-exist with the general exception for privacy and data protection modelled after Article XIV(c)(ii) GATS included in the same agreements.Footnote 48 The 2021 EU–UK Trade and Cooperation Agreement (TCA), however, contains provisions that differ from those in the above-mentioned model clauses and, arguably, afford less regulatory autonomy to protect privacy and personal data.Footnote 49 It is unclear whether the TCA provisions are merely outliers or represent the new model approach of the EU. Given that the above-mentioned model clauses have not been amended following the TCA and still represent the EU position in multiple ongoing trade negotiations, including those at the WTO, this chapter assumes that they still represent the EU mainstream approach, and the discussion below therefore focuses solely on these clauses.
Model Article A provides for an exhaustive list of prohibited restrictions on cross-border data flows. Model Article B on the protection of personal data and privacy states that the protection of personal data and privacy is a fundamental right and includes an exception from the provision on cross-border data flows. The model clauses, on their face, safeguard the EU’s broad regulatory autonomy, much more so than the general exception for privacy and data protection in existing trade agreements. This is made manifest in five different ways. First, as compared to the US model provision on cross-border data flows, the prohibition of restrictions on cross-border data flows in Article A is formulated more narrowly, in that it specifically names the types of restrictions that are outlawed by this provision. Second, the provisions of Article B(1) assert that the normative rationale for the protection of personal data and privacy is the protection of fundamental rights. This rationale – as opposed to economic reasons for protecting privacy and personal data – signals a higher level of protection and, therefore, arguably requires a broader autonomy to regulate vis-à-vis international trade commitments.Footnote 50 This provision is likely to be interpreted as a part of the digital trade exception for privacy and data protection in Article B(2) of the proposal. Third, the proposed exception for privacy and the protection of personal data establishes a significantly more lenient threshold – ‘it deems appropriate’ – than the ‘necessity test’ of the general exception under the GATS. 
Drawing a parallel with the threshold in the GATS national security exception – ‘it considers necessary’Footnote 51 – one can argue that the proposed exception affords an almost unlimited autonomy to adopt measures inconsistent with Article A for the protection of privacy and personal data.Footnote 52 Fourth, the exception in Article B(2) explicitly recognises the adoption and application of rules for cross-border transfers of personal data – the gist of the EU’s framework for transfers of personal data – as one of the measures that a party may deem appropriate to protect personal data and privacy, in spite of its international trade commitments. Fifth and finally, the provision of Article B(2) protects the safeguards afforded by a party for personal data and privacy from being affected by any other provision of the trade agreement.
At the same time, despite these apparent strengths of the EU proposal with regard to privacy and data protection, Article B suffers from at least four clear weaknesses. First, declaring that the protection of privacy and personal data are fundamental rights is EU-centric and does not leave the EU’s trading partners any autonomy to choose a different level of protection of these public policy interests that they might see fit for their own legal and cultural traditions. Given that, as things stand now at least, the fundamental rights protection of privacy and personal data is essentially a European phenomenon, EU trading partners may be reluctant to commit to this level of protection in a trade agreement. Second, the exception for privacy and data protection in Article B(2) of the EU’s proposal is designed for digital trade chapters and fails to clarify its relationship with the general exception for data protection, which remains intact – at least in the available draft trade agreements in which the EU has included the proposed model clauses.Footnote 53 Third, modelling an exception for privacy and data protection after the national security exception essentially creates an almost unconditional escape valve from virtually any trade commitment, as long as there is at least a remote nexus to the protection of privacy and personal data. Although this may seem justified at first glance, given that privacy and data protection are fundamental rights in the EU, it creates a precedent for using this wide margin for a variety of public policy interests (other than national security), which may undermine the global rules-based trading system. Fourth, and most relevant in the context of this chapter’s discussion, the public policy interests that can justify violation of Article A under Article B(2) are limited to the protection of privacy and personal data.
Although this underscores the relative importance of the rights to data protection and privacy as opposed to the goal of digital trade liberalisation on the values scale, limiting the exception to these particular rights may have negative effects. Given that the general exception clause subjects important public policy interests, such as public morals, safety and human, animal or plant life, to a stricter threshold than that of model Article B(2), the regulatory autonomy to protect personal data and privacy ends up being much broader than the autonomy to protect other rights that are also recognised under the EU Charter of Fundamental Rights.Footnote 54 This elevates privacy and the protection of personal data above other rights that are equally protectedFootnote 55 and may even create an incentive to – artificially – frame other public policy interests, especially those not mentioned in the GATS general exception, as the protection of privacy and personal data. In the context of AI, this could steer domestic AI regulation in the EU deeper into the realm of data protection as opposed to creating a separate regulatory framework – an issue currently discussed in the EU institutions.Footnote 56 Industrial policy,Footnote 57 cybersecurityFootnote 58 and digital sovereigntyFootnote 59 are all cited as public policy interests that may require restricting digital trade in general or data flows in particular. The first is especially relevant for developing countries, for which free data flows essentially mean ‘one-way flows’, as these countries are constrained by the limited availability of digital technologies and of the skills necessary to produce digital intelligence from data.Footnote 60 This issue, as already mentioned, has gained prominence in the European Commission’s 2020 digital strategy. In its European Strategy for Data, the European Commission stated:
The functioning of the European data space will depend on the capacity of the EU to invest in next-generation technologies and infrastructures as well as in digital competences like data literacy. This in turn will increase Europe’s technological sovereignty in key enabling technologies and infrastructures for the data economy. The infrastructures should support the creation of European data pools enabling Big Data analytics and machine learning, in a manner compliant with data protection legislation and competition law, allowing the emergence of data-driven ecosystems.Footnote 61
Cybersecurity interests, in turn, may require restrictions on data flows, data localisation or restrictions on imports of certain information technology products.Footnote 62 These interests are relevant for both developing and developed countries. The blurring boundary between the public and private spheres in the surveillance context – where governments increasingly rely on private actors for access to data for surveillance purposes – explains why cross-border data flows may raise sovereignty concerns as well.Footnote 63
To sum up, although the regulation of cross-border data flows, especially in the context of AI, implicates a variety of public policy interests, EU trade policy on this topic has focused solely on one of them – namely privacy and the protection of personal data. This arguably has something to do with the dynamics among the EU institutions. However, it may not be sustainable either within the EU or in a multilateral context, such as the electronic commerce negotiations at the WTO. According to UNCTAD, the early meetings of the group on data flows at the WTO have so far mainly reflected the views of proponents of the free flow of data.Footnote 64 However, for these negotiations to result in concrete WTO legal norms, members will have to reach a consensus on how to balance the economic gains of free data flows with multiple competing interests, which include not only the protection of privacy and personal data – the main point of contention for the EU – but also other fundamental rights, as well as the industrial policy, cybersecurity and economic development interests of other countries involved in the negotiations.Footnote 65
In contrast to the position taken by both the United States and the EU that data flows should be free (unless a restriction can be justified under an exception), the position on the protection of source code – or of the algorithms expressed in that source code, which incorporate the learning derived from processing data – is the exact opposite. As explained in the introduction, learning, or digital intelligence, is where the real economic value of personal and other data lies. Thus, while data and data flows are viewed as ‘free’, the value obtained from data is up for grabs by whomever possesses the infrastructure and resources necessary to process that data. At this juncture, these entities are concentrated in the United States and China. Two recent US-led FTAs, namely the USMCA and the US–Japan Digital Trade Agreement (DTA), contain specific provisions on the protection of source code and algorithms.Footnote 66 The EU’s proposal for the WTO negotiations on e-commerce also contains a prohibition on access to and forced transfer of the source code of software owned by a natural or juridical person of other members.Footnote 67 Similar provisions are included in the EU proposals for digital trade chapters of currently negotiated FTAs, such as those with Mexico,Footnote 68 AustraliaFootnote 69 and New Zealand.Footnote 70
C The Limits of Personal Data Protection in Trade Law Policy on Cross-Border Data Flows in the AI Context
The earlier discussion demonstrates that the only public policy interests fully accounted for in the exception from the proposed provision on the free cross-border flow of data in draft EU trade agreements are privacy and the protection of personal data. In the context of AI, this mirrors the currently prevailing approach in the EU of regulating AI through the governance structure of the GDPR. This section focuses on two limitations of this approach. First, the approach rests on a distinction between personal and non-personal data, because only data that qualifies as personal falls under the EU data protection framework. This distinction is increasingly hard to draw, especially in the context of AI. Second, EU privacy and personal data protection is rooted in an individual rights framework that does not account for the value produced from data or for the impact of applying the learning derived from AI to larger societal groups or populations.
I The Thin Borderline between Personal and Non-personal Data in the AI Context
EU law maintains a rigid distinction between personal and non-personal data,Footnote 71 in the sense that there are two different legal frameworks for personal and non-personal data. While cross-border transfers of personal data are subject to a ‘border control’Footnote 72 regime, as discussed earlier, transfers of non-personal data outside the EEA are unrestricted. This distinction is increasingly unworkable in practice as it is becoming ever more difficult to draw a line between personal and non-personal (or anonymous) data, especially in the AI context.Footnote 73
Schwartz and Solove succinctly summarise four main problems with the distinction. First, ‘built-in identifiability’ in cyberspace makes anonymity online a ‘myth’, as essentially all online data can be linked to some identifier.Footnote 74 Second, non-personal information can be transformed into personal data over time.Footnote 75 Third, the distinction between personal and non-personal data is dynamic, as the line between the two depends on technological developments. Fourth and finally, the borderline between personal and non-personal data is not firm but contextual, as many types of data are neither identifiable nor non-identifiable in the abstract.Footnote 76
The EU regulation on a framework for the free flow of non-personal data illustrates a number of these points. It specifically mentions that examples of non-personal data include ‘aggregate and anonymised datasets used for big data analytics, data on precision farming that can help to monitor and optimise the use of pesticides and water, or data on maintenance needs for industrial machines’.Footnote 77 The regulation also notes, however, that ‘[i]f technological developments make it possible to turn anonymised data into personal data, such data are to be treated as personal data, and [the GDPR] is to apply accordingly’.Footnote 78 As can be seen, although the very existence of this regulation is grounded in the possibility of separating the notions of personal and non-personal data, the regulation itself suggests that the distinction is not clear-cut and requires constant reassessment.
Another limitation of a data protection approach to restrictions on cross-border data flows in the AI context is that its scope is limited to data that qualifies as personal data. However, it is not the data fed into an AI system itself, but the knowledge derived from that data through learning that integrates the value of big data into different organisational processes. The training of AI systems transforms personal data into an aggregate representation of that data, which may no longer qualify as personal data. Interestingly, some scholars have argued in this context that AI models vulnerable to inversion attacks can still be considered personal data.Footnote 79 Moreover, it is not only personal, but also non-personal – machine-generated – data that is extremely useful and valuable in the AI context. As the European Commission rightly noted in its 2020 White Paper on AI:
AI is one of the most important applications of the data economy. Today most data are related to consumers and are stored and processed on central cloud-based infrastructure. By contrast a large share of tomorrow’s far more abundant data will come from industry, business and the public sector, and will be stored on a variety of systems, notably on computing devices working at the edge of the network.Footnote 80
Although cross-border flows of non-personal data and the learning produced from them may not have implications for the individual rights to privacy and the protection of personal data, they may present risks for other policy objectives, such as cybersecurity or digital sovereignty. The argument in this chapter is not that cross-border flows of non-personal data should be restricted, although the possibility of such restrictions already features in the European Commission’s proposal for a Data Governance Act.Footnote 81 Nor is it that a strong exception for domestic privacy and data protection rules is inappropriate. Rather, the argument underscores the importance of assessing the implications of cross-border data flows in the context of AI against a broader set of public policy interests that matter for the EU and its trading partners in the long term. For example, Gürses and van Hoboken doubt that, in the context of digital services produced in an agile way, where users also act as producers of such services, privacy law – traditionally centred on regulating information flows – is able to tackle the implications of such agile production for individuals.Footnote 82 They argue that these problems should not all be framed as questions of information flows and data protection, but instead be addressed by other or complementary regulatory tools, such as consumer protection, software regulation or the treatment of certain services as new types of utility providers.Footnote 83
II Individual Rights Framework Does Not Factor in the Value of Knowledge Derived from Data
In the digital trade discourse, where unrestricted cross-border data flows are viewed as a source of tremendous – aggregated – value gains, not every participating country ‘wins’ from those flows. Yet the question of who wins and who loses from unrestricted data flows is typically not raised in this discourse. As mentioned earlier, only countries that possess the infrastructure and skills necessary to refine data and extract value from the large corpora of data generated in the course of providing online services will really benefit from the free flow of data. Countries that lack these resources merely supply a primary good, which is worth far less than the learning that can be derived from it – just as countries that export raw materials rarely capture as much value as the countries where those materials are transformed. Because the real value in AI lies in the processing of data, focusing on data instead of the learning derived from it misses the point.
This brings us to the second limitation of placing the data protection framework at the centre of the cross-border provision of AI, especially as that framework is designed in the EU, where personal data is primarily viewed as the subject matter of a fundamental right rather than an economic asset. This is manifested, for example, in the regulatory choice not to recognise personal data as consideration for online services (in other words, as a form of currency) in the 2019 Digital Content Directive.Footnote 84 In its opinion on the draft of this directive, the European Data Protection Supervisor (EDPS) underscored that ‘personal data cannot be considered as a mere commodity’.Footnote 85 Although the fact that personal data cannot be considered a ‘mere’ commodity does not mean that it cannot have economic value, viewing the protection of personal data as a fundamental right may be one of the reasons why the EU is restrained in putting a price tag on personal data in trade negotiations on cross-border data flows.
UNCTAD stresses that the harnessing, by platforms based in only a few countries, of data generated by individuals, businesses and organisations of other countries raises concerns about ‘digital sovereignty’, in view of the control, access and rights with respect to the data and the appropriation of the value generated from monetising it.Footnote 86 UNCTAD explains that the economic value derived from data is captured by the developed countries in which the companies controlling the storage and processing of data reside.Footnote 87 It follows that ‘[t]he only way for developing countries to exercise effective economic “ownership” of and control over the data generated in their territories may be to restrict cross-border flows of important personal and community data’.Footnote 88 Although this particular report makes its argument in the context of the imbalance between developed and developing countries, given the high concentration of digital technologies in a very few developed countries, it could also be relevant to relations between those few and other developed countries. It should be emphasised that restricting outgoing flows of personal data does not mean that the countries imposing such restrictions will have the means to process and generate value from that data within their borders. Such restrictions may be about sovereignty, but they are not necessarily about endogenous economic development unless they are accompanied by measures to ensure that development.
In a similar vein, Couldry and Mejias speak of ‘data colonialism’, by which they mean that big data processing practices make human relations and social life overall ‘an “open” resource for extraction’.Footnote 89 They compare big data to the appropriation or extraction of resourcesFootnote 90 – another parallel between data and oil. Global data flows, they argue, ‘are as expansive as historic colonialism’s appropriation of land, resources, and bodies, although the epicentre has somewhat shifted’.Footnote 91 In their view, the transformation of human actors and social relations, formalised as data, into value leads to a fundamental power imbalance (colonial power and colonised subjects).Footnote 92 Similarly, Zuboff has famously labelled the business of accumulating and monetising data ‘surveillance capitalism’, which leads to the accumulation not only of capital, but also of individual rights.Footnote 93
There is some movement in the governance of data reflecting those concerns. A 2019 Opinion of the German Ethics Commission shows a tendency towards expanding the scope of individual rights in data beyond the non-economic rights to privacy and personal data protection. According to the commission, under certain circumstances individuals should be granted data-specific rights, which include a right to an economic share in profits derived with the help of their data.Footnote 94 The potential design of a legal framework for distributing the economic gains from the use of data is addressed in a growing body of scholarly and policy research. This research explores frameworks or organisations acting as intermediaries between individuals and entities wishing to use (and profit from) their data, such as data trusts or forms of collective data ownership (such as data funds).Footnote 95 Data trusts are viewed as an attractive tool to facilitate access to large sets of aggregated data for the purposes of developing and applying AI, to generate trust around the use of data by various stakeholders, and to serve as mechanisms for paying back to individuals a fair share of the benefits from the use of their data.Footnote 96 There is, however, little clarity regarding the structure that data trusts should take and the method for sharing the value derived from the commercial use of personal data.Footnote 97 The German Ministry of Economic Affairs and the Dutch Government are investigating the possibilities of setting up data trusts in their respective countries.Footnote 98 Research on data funds views personal data as a public resource, drawing a parallel with natural resources that form part of a country’s wealth.
From this perspective, data collected within a certain jurisdiction should ‘belong’ to that jurisdiction.Footnote 99 Data funds are viewed as a form of collective data ownership, allowing individuals to exercise control over which data is collected about them and how it is used, as well as to receive payment for commercial access to the data in the fund.Footnote 100
These economic rights are unlikely to become part of the EU data protection framework precisely because of their economic nature. At the same time, they could interfere with international trade disciplines that aim to facilitate unrestricted cross-border data flows. This is why they should form part, alongside the fundamental rights to the protection of privacy and personal data, of a nuanced rebalancing of the EU’s trade policy on this issue.
D Conclusion
The analysis in this chapter of recent developments in the governance of cross-border data flows in international trade law showed that the main public policy interests discussed in the context of EU trade policy on this issue are the protection of the fundamental rights to privacy and personal data. This chapter argued that other policy objectives, such as cybersecurity and digital sovereignty – which have recently become anchors of the EU’s internal AI policy – should also be considered. The chapter has also shown that the individual rights–centred data protection framework has limits in governing AI in both domestic and international trade policy.