
1 - AI in the Financial Sector

Policy Challenges and Regulatory Needs

from Part I - Automated Banks

Published online by Cambridge University Press: 16 November 2023

Zofia Bednarz, University of Sydney
Monika Zalnieriute, University of New South Wales, Sydney

Summary

The potential of AI solutions to enhance effective decision-making, reduce costs, personalise offers and products, and improve risk management has not gone unnoticed by the financial industry. On the contrary, the characteristics of AI systems seem to fit the features of financial services remarkably well and to address their most distinctive and challenging needs. The financial industry thus provides a receptive and conducive environment for the growing application of AI solutions to a variety of tasks, activities, and decision-making processes. The aim of this chapter is to examine the current state of the legal regime applicable in the European Union to the use of AI systems in the financial sector and to reflect on the need to formulate principles and rules that ensure responsible automation of decision-making and that serve as a guide for the wide and extensive implementation of AI solutions in banking activity.

Type: Chapter
Information: Money, Power, and AI: Automated Banks and Automated States, pp. 9–28
Publisher: Cambridge University Press
Print publication year: 2023
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 <https://creativecommons.org/licenses/by-nc-nd/4.0/>.

1.1 Setting the Scene: AI in the Financial Sector

The progressive, but irrepressible, automation of activities, tasks, and decision-making processes through the systematic, pervasive application of AI techniques and systems is ushering in a new era in the digital transformation of contemporary society and the modern economy.Footnote 1 The financial sector, traditionally receptive and permeableFootnote 2 to technological advances, is not oblivious to this process of intense and extensive incorporation of AI, for multiple purposes and in variegated forms.Footnote 3 The advantages and opportunities that AI solutions offer in terms of efficiency, personalisation potential, risk management, and cost reduction have not gone unnoticed by the financial sector. On the contrary, the characteristics of AI systems seem to fit the features of financial services remarkably well and to address their most distinctive and challenging needs. The financial industry thus provides a receptive and conducive environment for the growing application of AI solutions.

Despite the spotlight on AI, it should not be disregarded that AI solutions are usually applied, implemented, and incorporated into financial activity in synergetic combination with other transformative and emerging technologies. These technologies – big data, the Internet of Things (IoT), cloud computing, distributed ledger technology (DLT), quantum computing, platform models, virtual reality, and augmented realityFootnote 4 – are synchronously present in the market, with similar levels of technical maturity,Footnote 5 commercial viability, and practical applicability. In fact, the multiplying effects triggered by such a combination of sophisticated technological ecosystems largely explain the perceived disruptive nature of AI and its actual impact.

With very diverse uses and applications, AI has penetrated financial markets across the board in an increasingly visible way.Footnote 6 Its alliance with analytical and predictive processing of big data by financial institutionsFootnote 7 is perhaps the most telling dimension of a profound transformation of the industry, business strategies, risks, and operations.Footnote 8

The perception of AI's usefulnessFootnote 9 and, above all, of the timeliness and desirability of its increasingly pressing incorporation has been encouraged by markedly different competitive conditions, resulting precisely from the impact of technology on market architecture and from the exceptional circumstances arising from the pandemic.Footnote 10 Indeed, this process of intense digital migration has altered the structure and conditions of competition in the market, opening new niches for the emergence of innovative fintech firmsFootnote 11 and for the sweeping entry of Big Tech into the financial services sector. The essential function of financial markets as mechanisms for the efficient allocation of savings to investment can take many different forms. Technological innovation has endowed the sector with new architecturesFootnote 12 on a continuum that shifts from platform modelsFootnote 13 based on a centralised structure to decentralised or distributed modelsFootnote 14 – to varying degrees – that DLTFootnote 15 makes it possible to articulate.Footnote 16

Changes in market architecture and opportunities for the provision of new services and intermediation in the distribution of new financial assets and products have driven the emergence of new market players – crowdfunding platform operators, aggregators, comparators, robo-advisers, algorithm providers, social trading platform operators, and multilateral trading system operators – encouraged by low barriers to entry, promising business opportunities, cost reduction, and economies of scale.

In this new landscape, complex relationships of cooperationFootnote 17 and competitionFootnote 18 are established between entrants and incumbents.Footnote 19 The presence of new players in the market – offering complementary or instrumental services, creating new environments and channels of communication and intermediation, and adding value to traditional services and products – challenges the traditional scope of regulation and the classical limits of supervision.Footnote 20

On the other hand, the mobility restrictions – closures, confinements, and limitations on travel – aimed at containing the spread of the Covid-19 pandemic from the first quarter of 2020, although temporary, turned digital banking from an opportunity into a survival necessity and even, in practice, an obligation for the proper provision of services and customer care. In a fully digital context for all customer interactions and operations, the use of AI for optimisation, personalisation, or recommendation is key. The processing of increasing amounts of data requires automated means. At this forced and exceptional juncture, many digitalisation initiatives were prioritised to meet the needs of the changed circumstances. A bank that has completed its digital migration is in a very favourable and receptive position for AI solutions.

This trend, as a response to market demands, is met with increasing regulatory attention seeking to unleash the possibilities and contain the risks of AI. The European Union (EU) provides a perfect illustration. Efforts to define a harmonised regulatory framework for the market introduction, operation, and use of AI systems under certain prohibitions, requirements, and obligations crystallised in the proposed Regulation known as the AI Act.Footnote 21 From a sectoral perspective, the European Banking Authority (EBA) had already advocated the need to incorporate a set of fundamental principles to ensure the responsible use and implementation of safe and reliable AI in the banking sector.Footnote 22 Indeed, promoting safe, reliable, and high-quality AI in Europe has become one of the backbones of the EU’s digital strategy as defined in the strategic package adopted on 19 February 2020. The White Paper on AIFootnote 23 and the European Commission Report on Safety and Liability Implications of AI, the Internet of Things and RoboticsFootnote 24 define the coordinates for Europe’s digital future.Footnote 25 The Ethics Guidelines for Trustworthy AI prepared by the independent High-Level Expert Group on AI in the European Union,Footnote 26 which the EBA takes as a reference, marked the first step towards the consolidation of a body of principles and rules for AI – explainability, traceability, avoidance of bias, fairness, data quality, security, and data protection. But the legal regime for the development, implementation, marketing, or use of AI systems requires incorporating other rules found in European legislation, in particular the recently adopted Regulations on digital services and digital markets – the Digital Services Act (DSA)Footnote 27 and the Digital Markets Act (DMA)Footnote 28 – and some of the forthcoming instruments related to AI liability. Even so, the result is not a coherent and comprehensive body of rules relating to the use of AI systems in the banking sector. It is necessary to compose a heterogeneous and plural set of rules that derive from sectoral regulations, result from the inference of general principles, apply standards from international harmonisation instruments, or project the rules on obligations, contracts, or liability through more or less successful schemes based on functional equivalence and technological neutrality.Footnote 29

The aim of this chapter is to follow this path, which starts with the observation of a growing and visible use of AI in the financial sector, moves into the regulatory and normative debate, and concludes with a reflection on the principles that should guide the design, development, and implementation of AI systems in decision-making (ADM) in the sector. To this end, the chapter is structured as follows. First, it explores the concept of an AI system, considering definitions proposed in the EU, especially in the AI Act, and the interaction of this term with related terms such as ADM (Section 1.2.1). The various applications of AI in the financial sector in general, and in the banking sector in particular, are then explored (Section 1.2.2). This provides the conceptual basis for analysing the regulatory framework applicable to AI systems, including existing and emerging standards (Section 1.3). The chapter concludes with a proposal of the main principles that should guide the design, implementation, and use of AI systems in the financial sector (Section 1.4).

1.2 Concept and Taxonomy: AI System and ADM

The digital transformation is generating an intimate and intense intertwining of various technologies with socioeconomic reality. This implies recalibrating not only principles and rules, but also terminology and concepts. The legal response must be articulated through appropriate, legally relevant definitions and concepts that adequately grasp the distinctive features of technological solutions without falling into mere technical description, which would make the law irremediably and forever obsolete in the face of technological progress. The law should rather opt for a functional categorisation, one that captures functions without prejudging the technological solution or the business model.

1.2.1 AI Systems: Concept and Definition

In European legislation, whether in force or pending adoption, references to automation appear scattered and with disparate terminology. In both the General Data Protection Regulation (GDPR)Footnote 30 and the Digital Services Act, references can be found to automated individual decisions, algorithmic decisions, algorithmic content recommendation or moderation systems, algorithmic prioritisation, or the use of automatic or automated means for various purposes. But there is no definition of, or explicit reference to, ‘AI’ in these texts. It is the future, and still evolving, AI Act that expressly defines ‘AI systems’ for the purposes of the regulation, in order to delimit its material scope of application.

The initial definition of AI system for the purposes of the proposed instrument in the European Commission’s proposal was as follows: artificial intelligence system (AI system) means ‘software that is developed using one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions that influence the environments with which it interacts’ (Art. 3.1 AI Act).

With this definition, AI systems are characterised by two components. The first is their qualification as learning systems, which separates them from more traditional computational systems: the fact that they employ or are developed using ‘AI’ techniques, which the AI Act would define in an Annex I, subject to further extension or modification, and which currently includes machine learning approaches (supervised, unsupervised, and reinforcement learning, employing a wide variety of methods, including deep learning); logic- and knowledge-based approaches (especially knowledge representation, inductive (logic) programming, knowledge bases, inference and deduction engines, expert systems, and (symbolic) reasoning); and statistical approaches, Bayesian estimation, and search and optimisation methods. The second is their influence on the environment with which they interact, generating outputs such as predictions, recommendations, content, or actual decisions. Behind this definition lies the assumption that it is precisely the ‘learning’ capabilities of these systems that largely determine their disruptive effectsFootnote 31 (opacity, vulnerability, complexity, data dependence, autonomy) and hence the need to reconsider the adequacy of traditional rules. This is, in fact, the reasoning that leads to rethinking the rules of liability and assessing their adequacy in the face of the distinctive features of AI, as proposed in the Report on Liability for Artificial Intelligence and Other Emerging Technologies,Footnote 32 published on 21 November 2019 and issued by the Expert GroupFootnote 33 on New Technologies and Liability advising the European Commission.Footnote 34

Along the same lines, the Commission adopted two related proposals in 2022: a proposal for a directive on adapting non-contractual civil liability rules to artificial intelligenceFootnote 35 and a proposal for a directive of the European Parliament and of the Council on liability for defective products.Footnote 36

However, the wording of this definition of AI systems in the Commission’s proposal has been subject to significant reconsideration and may still evolve before reaching its final form. The compromise text submitted at the end of November 2021 by the Slovenian Presidency of the Council (Council of the European Union, Presidency compromise text, 29 November 2021, 2021/0106(COD), hereafter simply Joint Undertaking) proposed some changes to this definition.Footnote 37 The text, in its preamble, explains that the changes make explicit reference to the fact that an AI system should be able to determine how to achieve a given set of pre-defined human objectives through learning, reasoning, or modelling, in order to distinguish AI systems more clearly and unambiguously from more traditional software systems, which should not fall within the scope of the proposed Regulation. With this proposal too, the definition of an AI system is stylised and structurally reflects three basic building blocks: inputs, processes, and outputs.

Yet a subsequent version of the compromise textFootnote 38 of the AI Act offers another definition that refines the previous drafting and provides sufficiently clear criteria for distinguishing AI from simpler software systems. Thus, an AI system means a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations, or decisions influencing the environments with which the AI system interacts. Some key elements of the initial definition are preserved or recovered in this version, which finally narrows down the description of ‘learning systems’ to systems developed through machine learning approaches and logic- and knowledge-based approaches.

The definition is still evolving. In the latest compromise textFootnote 39 the new definition of AI system is ‘a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments’.
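Across these successive drafts, the definitional anatomy remains constant: inputs, an inference process directed at objectives, and outputs that influence an environment. The following minimal Python sketch merely mirrors that three-block structure; the class and field names are hypothetical and drawn from none of the legislative texts.

    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class AISystemSketch:
        """Illustrative mirror of the definitional anatomy: inputs are
        processed by an inference step directed at objectives, yielding
        outputs (predictions, recommendations, or decisions) that
        influence an environment."""
        objectives: Sequence[str]          # explicit or implicit objectives
        infer: Callable[[dict], dict]      # learning- or knowledge-based inference

        def run(self, inputs: dict) -> dict:
            # Outputs such as predictions, recommendations, or decisions.
            return self.infer(inputs)

    # A trivial 'system' recommending a product tier from declared income.
    demo = AISystemSketch(
        objectives=["personalise product offer"],
        infer=lambda x: {"recommendation": "premium" if x["income"] > 50_000 else "basic"},
    )
    print(demo.run({"income": 62_000}))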

Also, the European Parliament Resolution on liability for the operation of artificial intelligence systemsFootnote 40 referred expressly to AI systems and formulated its own definition (Art. 3.a).Footnote 41 This Resolution contains a set of recommendations for a Regulation of the European Parliament and of the Council on civil liability for damage caused by the operation of AI systems. The proposal has not been adopted. Instead, the Commission proposed the abovementioned tandem of draft Directives, which follow a substantially different approach, aimed at revising the defective product liability rules so as to accommodate AI-driven products and at alleviating the burden of proof in fault-based liability scenarios for damage caused by AI systems.

The rest of the regulatory texts do not explicitly refer to AI, although they contain rules on algorithms, algorithmic systems of various types, and automation or automated decision-making. Thus, as mentioned above, the GDPR, the DSA, the DMA, and, among others, the P2B RegulationFootnote 42 refer to algorithmic rating, algorithmic decision-making, algorithmic recommendation systems, algorithmic content moderation, algorithmic structures, automated profiling, or a variety of activities and actions performed by automated means. They include rules related to algorithms, such as disclosure, risk assessment, accountability and transparency audits, on-site inspections, obtaining consent, and so on. As the definition of AI systems proposed by the AI Act reveals, recommendations, decisions, predictions, or other digital content of any kind, as well as actions resulting from the system in or in relation to the environment, are natural and frequent outputs of AI systems. Consequently, regulatory provisions that in some way regulate algorithmic processes and decision-making by automated means, in a variety of scenarios and for a variety of purposes, are also relevant for the construction of the regulatory framework for AI in the European Union.

Provided that an AI system falls within the scope of application of the proposed AI Act, it may be subject to the AI Act as well as to other rules, depending on its intended purpose or the specific action performed. As an illustration, if the system is used to produce recommendations by a very large banking platform, the DSA (Art. 27) – applicable to any online platform – applies; if the system is used for profiling, the GDPR (Art. 22) would be relevant.

In conclusion, understanding the complementarity between the various legal texts that directly or indirectly address the use of AI systems for a variety of purposes and from a range of legal perspectives is fundamental to composing the current and future regulatory framework for AI, as discussed below.

1.2.2 Current and Potential Uses of AI in the Financial Sector

With varying degrees of intensity, AI systems are used transversally in the banking sector along the entire front-line, mid-office, and back-office value chain. For customer service and interaction, AI systems offer extraordinary possibilities for personalisation, recommendation and profiling, account management, trading and financial advice (robo-advisers), continuous service via chatbots and virtual assistants, and sophisticated Know Your Customer (KYC) solutions.Footnote 43 In the internal management of operations, AI solutions are applied in the automation of corporate, administrative, and transactional processes, in the optimisation of various activities, and in compliance management. For risk management, AI solutions are projected to improve fraud prevention mechanisms, early warning and cybersecurity systems, as well as being incorporated into predictive models for recruitment and promotion. Another interesting useFootnote 44 of advanced analytical models with machine learning is the calculation and determination of regulatory capital. Significant cost savings are estimatedFootnote 45 if these models are used to calculate risk-weighted assets.
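To make the credit-scoring use case concrete, the following Python sketch trains a toy default-prediction model of the kind referred to in the literature cited above. It is a minimal illustration on synthetic data: the features, thresholds, and figures are invented and carry no empirical weight.

    # Illustrative only: a toy credit-default classifier on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1_000
    income = rng.normal(40_000, 12_000, n)      # hypothetical annual income
    debt_ratio = rng.uniform(0, 1, n)           # hypothetical debt-to-income ratio
    # Synthetic ground truth: higher debt ratio and lower income raise default risk.
    p_default = 1 / (1 + np.exp(-(3 * debt_ratio - income / 20_000)))
    defaulted = rng.random(n) < p_default

    X = np.column_stack([income / 10_000, debt_ratio])
    X_train, X_test, y_train, y_test = train_test_split(X, defaulted, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    print("held-out accuracy:", round(model.score(X_test, y_test), 3))
    # A bank might convert predicted default probabilities into approve/refuse decisions:
    approve = model.predict_proba(X_test)[:, 1] < 0.5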

Acknowledging this transversal and multipurpose use makes it possible to anticipate some considerations of interest and relevance for legal analysis. Automation has an impact on decision-making processes, actions, and operations of a diverse nature, and this will be decisive in determining at least three elements.

First, the applicable regulatory regime – for example, whether it is used to automate compliance with reporting rules, to prevent fraud, to personalise customer offers, or to handle complaints via a chatbot. Second, the possible liability scenarios – for example, whether algorithmic biases and data obtained from social media for the credit scoring and creditworthiness assessment system could lead to systematic discriminatory actions. Third, the transactional context in which it is used – for example, in consumer relations with retail customers, in relations with the supervisor, or in internal relations with employees or partners.

The benefits deriving from the use of automation and AI, and the expected gains from their systematic and extensive application, are numerous.Footnote 46 Algorithm-driven systems provide speed, simplicity, and efficiency in solving a multitude of problems. Automation drastically reduces transaction costs, enabling services that would otherwise be unprofitable, unaffordable, or unviable to be provided on reasonable and competitive terms. Cost reduction explains, for example, the burgeoning sector of robo-advisersFootnote 47 that have expanded the market beyond traditional financial advisers, with appreciable benefits for consumers by diversifying supply, increasing competition, and improving financial inclusion.Footnote 48 Such expansion has made financial advice available on market terms to small and low-income investors.

ADM systems can therefore perform automated tasks and make mass decisions efficiently (high-frequency algorithmic trading, search engines, facial recognition, personal assistants, machine translation, predictive algorithms, and recommender systems). The use of automated means is critical for the large-scale provision of critical services in our society that would otherwise be impossible or highly inefficient (search, sorting, filtering, rating, and ranking).

However, the expansive and growing use of algorithms in our society can also be a source of new risks: it can lead to unintended outcomes and consequences, or raise legal concerns and social challenges of many different kinds. ADM may be biased or discriminatoryFootnote 49 as a result of prejudiced preconditions – based on stereotypes or aimed at exploiting user vulnerabilities – inadequate algorithm design, or an insufficient or inaccurate training and learning data set.Footnote 50 Automation makes bias massive, amplified, and distorted, and allows it to gain virality easily. In a densely connected society such as ours, virality acts as an amplifier of the harmful effects of any action. Negative impacts spread rapidly, the magnitude of the damage increases, and the reversibility of the effects becomes less likely and increasingly impractical. The incorporation of decision and learning techniques into increasingly sophisticated AI systems adds to the growing unpredictability of future responses. This leads to greater unpredictability and mounting complexity that is not always consistent with traditional rules and formulas for the attribution of legal effects and the allocation of risk and liability (infra 1.3.2.2 and 1.3.2.3).
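One way to make the bias concern operational is to audit the approval rates an ADM system produces across protected groups. The short Python sketch below computes a simple demographic-parity gap; the data and group labels are entirely hypothetical, and such a gap is a signal for further scrutiny rather than proof of unlawful discrimination.

    # Illustrative fairness check: approval-rate gap between two groups.
    import numpy as np

    approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=bool)  # toy ADM outputs
    group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

    rate_a = approved[group == "A"].mean()
    rate_b = approved[group == "B"].mean()
    print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, gap={abs(rate_a - rate_b):.2f}")
    # A persistent gap is the kind of signal that the audit and data-governance
    # duties discussed below are designed to surface and address.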

1.3 An Initial Review of the Policy and Regulatory Framework in the European Union

The use of AI systems for decision-making and the automation of tasks and activities in the financial sector does not have a comprehensive and specific legal framework, either across the board or in its various sectoral applications.

The legal and regulatory framework needs to be assembled by interlocking legal provisions from various instruments and completed by inferring certain principles from rules applicable to equivalent non-automated decisions. The application of the principle of functional equivalence (between automated and non-automated decisions with equivalent functions), guided by technological neutrality, makes it possible to extend or extrapolate existing rules to the use of AI systems. However, as argued in the final part of this chapter, this effort to accommodate existing rules to the use of different technologies, under a medium-neutral, non-discrimination approach, presents difficulties due to the distinctive characteristics of AI systems, thus compromising legal certainty and consistency. It is therefore suggested that a set of principles be formulated and a critical review of regulation be conducted to ensure that the European Union has a framework that provides certainty and encourages the responsible use of AI systems in the financial sector.

1.3.1 The Expected Application of the Future AI Act to the Uses of AI in the Financial Sector

The (future) AI Act is based on a risk-based classification of AI uses, applications, and practices, to which a specific legal regime is attached: prohibition, requirements for high-risk systems, and transparency and other obligations for certain low-risk systems. AI systems are classified not on the basis of the technology employed but according to their (intended, actual, or reasonably expected) specific uses or applications. This means that there is no explicit sectoral selection, but certain practices can be identified with typical sectoral uses, such as creditworthiness assessment and automated credit rating determination.
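The risk-based logic can be visualised as a mapping from use cases to legal regimes. The Python sketch below is a deliberately simplified, illustrative reading of that classification – not a statement of the final legal text – and the use-case labels are hypothetical shorthand.

    # Simplified, illustrative reading of the AI Act's risk-based classification.
    RISK_TIERS = {
        "subliminal manipulation causing harm": "prohibited practice (Art. 5)",
        "creditworthiness assessment / credit scoring": "high risk (Annex III, 5.b)",
        "recruitment and promotion decisions": "high risk (Annex III, 4)",
        "customer-service chatbot": "transparency obligations",
        "internal document search": "minimal risk",
    }

    def tier(use_case: str) -> str:
        # Classification follows the (intended) use, not the underlying technology.
        return RISK_TIERS.get(use_case, "assess against the system's intended purpose")

    print(tier("creditworthiness assessment / credit scoring"))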

1.3.1.1 Prohibited Practices under the AI Act and Their Relevance to Financial Activity

The prohibited practices under Article 5 of the AI Act do not, at first sight, naturally embrace the expected uses of AI in the financial sector; but, to the extent that they are defined on the basis of certain effects, they cannot be fully ruled out and should be taken into consideration as red lines. Examples would be a personalised-marketing AI system that uses subliminal techniques to substantially alter behaviour in a way that may cause physical or psychological harm (Art. 5.a), or a loan offering and marketing system that exploits the vulnerabilities of a group of people based on age, physical or mental disability, or a specific social or economic situation (Art. 5.b).

As initially drafted, although slightly nuanced in subsequent versions (in the Joint Undertaking), the scenarios involving the use of biometric identification systems, or the assessment or classification of the trustworthiness of natural persons according to their social behaviour or personality, are less likely to cover AI applications in the financial sector. The reason is that the prohibition is linked to their use by (or on behalf of) public authorities, or in publicly accessible spaces for law enforcement purposes (although there are still scenarios in which they could apply, such as, precisely, banks, mentioned in Recital 9 as ‘publicly accessible spaces’). These requirements, questioned for being excessively restrictive, would leave outside the scope of the prohibition the use, in a private space of an institution, of biometric recognition systems or even assessment (social scoring) systems that could be implemented to accompany a creditworthiness assessment or to profile the eligibility of applicants for banking products. In the latest compromise text (14 June 2023), these restrictive criteria have therefore been deleted: the prohibition now extends (Article 5.1.d) to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces.

While the potential impact of the AI Act’s prohibitions of certain practices on the financial sector appears limited, the likelihood of financial-sector systems being classified as high risk is certainly much higher.

1.3.1.2 High-Risk Systems in the AI Act

Annex III of the AI Act provides a list of AI systems, related to eight areas (pursuant to the most recent version of the compromise text) and defined by their use, purpose, or aim, which readily capture frequent applications of AI in the financial sector: systems for the remote biometric identification of natural persons (1.a Annex III); systems for the recruitment or selection of natural persons, or for making decisions on promotion or termination of employment relationships, the assignment of tasks, and the monitoring and evaluation of performance (4.a and b Annex III); and, directly and obviously, systems for assessing the creditworthiness of natural persons or establishing their credit rating – with the exception of AI systems used for the purpose of detecting financial fraud (5.b Annex III).

Confirming the application of the AI Act to certain uses of AI systems proposed by a financial institution will mean that the institution is subject to more intensive requirements if the system is qualified as high risk (Art. 8 AI Act). These are essentially audit, risk assessment and management, data governance (training, validation, and testing), technical documentation, event logging, cybersecurity, and transparency and reporting obligations, to which financial institutions are by no means oblivious or unfamiliar. They respond to a regulatory strategy of supervision and risk management that is well known in regulated sectors such as the financial sector. In fact, the need to avoid duplications, contradictions, or overlaps with sectoral regulations has been taken into account in the AI Act, in particular in relation to the financial sector, which is already subject to risk management, assessment, and supervision obligations similar to those envisaged in the future Regulation (see Recital 80 and Articles 17.3, 18.2, 20.2, 29.4, 61.4, 62.3). In this regard, the AI Act articulates some solutions to ensure consistency with the obligations of credit institutions under Directive 2013/36/EUFootnote 51 when they employ, operate, or place on the market AI systems in the exercise of their activity.
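The event-logging and traceability duties just mentioned can be pictured as an append-only audit trail recording every automated decision. The Python sketch below is a minimal illustration; the record fields and the hash-based tamper-evidence are hypothetical design choices, not requirements spelled out in the AI Act.

    # Illustrative audit-trail record for an automated credit decision.
    import datetime
    import hashlib
    import json

    def log_decision(applicant_id: str, model_version: str, inputs: dict, output: str) -> str:
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "applicant_id": applicant_id,
            "model_version": model_version,   # supports traceability across model updates
            "inputs": inputs,
            "output": output,
        }
        line = json.dumps(record, sort_keys=True)
        # A digest (or hash chaining/signing) can make the trail tamper-evident.
        digest = hashlib.sha256(line.encode()).hexdigest()
        return f"{line}  sha256={digest}"

    print(log_decision("c-102", "scoring-v3.1", {"income": 62_000, "debt_ratio": 0.4}, "approved"))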

1.3.2 Principles and Rules for the Use of AI Systems in Decision-Making

However, the eventual application of the AI Act does not exhaust the regulatory framework of reference for the use of AI systems in the financial sector, nor, in fact, does it resolve a good number of questions that the implementation and subsequent operation of such systems in the course of their activity will generate. To this end, and for this reason, it is essential to explore other regulatory instruments and to discover legal avenues to answer a number of important questions. First, to what extent can automated systems be used with full functional equivalence for any activity, decision, or process without prior legal authorisation? Second, to what extent are decisions taken or assisted by AI systems attributed to the financial institution operating the system? Third, who bears the risks and liability for damage caused by the AI systems used?

1.3.2.1 On the Principle of Non-discrimination for the Use of AI in Decision-Making

Neither the AI Act nor, in principle, any other regulation expressly enables the use of AI systems to support decision-making or to automate specific tasks, processes, or activities.Footnote 52 Occasionally and incidentally, a reference to automation is found in some texts, even simply in the recitals – Regulation (EU) 2020/1503Footnote 53 refers to automatic investment in Recital 20 – without further specification or development in the legal provisions. In other cases, this possibility is confirmed because reference is made to ‘with or without human intervention’ or ‘by automatic means’, as in the DSA – Art. 3(s) on recommender systems, Art. 16(6) on means of notification and action, Art. 17(3) on the statement of reasons. And in other cases, an express limitation on the full automation of a decision-making process is provided for, such as complaints handling on a platform – DSA, Art. 20(6).

Within this regulatory context, the question of the admissibility, validity, and enforceability of the use of AI systems must be approached on the basis of two backbone principles: the principle of functional equivalence, and the principle of non-discrimination and technological neutrality. These principles lead to a positive and enabling initial answer that allows the use of AI systems to make, or assist in making, decisions and to automate tasks, processes, and activities in a general way, without the need for prior express legislative recognition. There is no reason to deny this functional equivalence or to discriminate generally against the use of AI systems under analogous conditions. Subject-specific limits or sector-specific regulatory requirements might in practice restrict certain applications in the financial sector, but the basic rule is the feasibility of using AI in any activity and for any decision-making.

Naturally, the implementation of an AI system will require ensuring that the automated process complies with the rules applicable to the same process, situation, or transaction if it were not automated. AI systems have to be designed, implemented, and operated in such a way that they comply with the rules that would apply to the legal nature of the decision or activity and, therefore, also to its regulatory treatment in the financial field. If the marketing of certain financial products is automated through a digital banking application, it should be ensured that the legal requirements for pre-contractual information are met. If an automated robo-adviser system is implemented, the requirements for financial advice must be met, if it is indeed categorised as such.
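In engineering terms, such compliance by design can be expressed as a gate that blocks an automated step until the requirements of the equivalent non-automated process are satisfied. The Python sketch below illustrates the idea for pre-contractual disclosures; the list of required disclosures is hypothetical, not a statement of the applicable rules.

    # Illustrative "compliance gate": an automated offer is presented only once
    # the (hypothetical) pre-contractual information requirements are satisfied.
    REQUIRED_DISCLOSURES = {"APR", "total_cost", "withdrawal_right"}

    def can_present_offer(disclosures_shown: set[str]) -> bool:
        return REQUIRED_DISCLOSURES <= disclosures_shown

    print(can_present_offer({"APR", "total_cost"}))                      # False: incomplete
    print(can_present_offer({"APR", "total_cost", "withdrawal_right"}))  # True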

Despite the apparent simplicity of this principle of non-discriminatory recognition of AI, its effect is intense and powerful. It constitutes, in practice, a natural enabler for the multiple and intensive integration of AI into any area of financial activity. As long as compliance with the rules and requirements applicable to the equivalent non-automated action or process can be ensured, AI can be employed to make, or assist in making, any decision.

1.3.2.2 On the Attribution of Legal Effects

The particular complexity of the chain of design, development, implementation, and operation of AI systems, with a set of actors involved, very often without prior agreement or coordination among them, raises a legal question of indisputable business relevance: to whom the legal effects – and thus the risks – of a decision or an action resulting from an automated process are attributed.

Although this issue can be interpreted as a single attribution problem from a business perspective, from a legal point of view, it is useful to distinguish between two different, albeit related, issues.

First is the question of to whom the decision – any decision with contractual relevance (offer, acceptance, modification, renegotiation, termination) – or the resulting action – a commercial practice, compliance with a supervisory request – is to be attributed. That is, if a bank implements an application that incorporates a credit scoring system leading to the automated granting or refusal (without human intervention in each decision) of consumer credit applications, the assessment of creditworthiness and the decision to accept or refuse the credit application are attributable to the bank. Thus, if the credit is granted, the bank is the counterparty to the resulting credit contract; whereas, if it is unjustifiably denied, discriminating against certain groups, the bank would be the offender, violating, for example, the right not to be discriminated against. Similar reasoning would apply to the use of an AI system in an employee recruitment or promotion programme, or to a fraud detection and prevention system.

This attribution of legal effects is based on the formulation of the concept of ‘operator’. This concept, proposed by the Report on Liability for Artificial Intelligence and Other Emerging Technologies and subsequently taken over by the European Parliament Resolution of 2020,Footnote 54 is based on two factors: control and benefit. Thus, the operator will be the centre of imputation of the legal effects insofar as it controls (or should be able to control) the risks of operating an AI system that it decides to integrate into its activity and, therefore, benefits from its operation.

This attribution of legal effects to the operator also has another important consequence. The operator cannot hide behind the automated or increasingly autonomous nature of the AI system used in order not to assume the consequences of the action or decision taken, nor can the bank attribute such effects to other actors involved in the life cycle of the AI system. Thus, for example, the effects cannot be attributed to the developer of the system, the distributor, or the provider of the data per se and vis-à-vis the bank customer. This is without prejudice to the possibility for the operator (the bank) to bring subsequent actions or seek remedies against these actors. However, it is the operator who assumes the legal effects – statutory or contractual – vis-à-vis the affected person (the customer).

Second, a question arises as to who should bear the risks and liability for damage caused by the operation of AI systems, as expounded below.

1.3.2.3 On Liability for Damage Caused by the Operation of AI Systems

The operation of an AI system can cause a wide range of damage. In certain sectors, substantial property damage and personal injury can be anticipated (autonomous vehicles, drones, home automation, care robots). Its applications in financial activities are linked to systemic risks, threats to economic stability and financial integrity, or cyclical responses and market shocks. But its malfunctioning can also simply cause massive data loss, disrupt access to services and products, generate misleading messages to customers about the status of their accounts, recommend investments unsuited to the customer’s risk profile, or result in non-compliance with certain obligations vis-à-vis supervisory authorities. Its use in rankings, recruitment services, content filtering, or virtual assistants for complaint handling opens the door to a far-reaching debate on its impact on fundamental rights and freedoms – freedom of expression, the right not to be discriminated against, the right to honour, and personality rights – but also on the competitive structure of the market and the fairness of commercial practices. Hence, the approach adopted by the proposed AI Act in Europe is based on the identification of certain AI practices, uses, or applications which, due to their particular risk or criticality, are prohibited, qualified as high risk and therefore subject to certain obligations and requirements, or subject to harmonised rules regulating their introduction on the market, their putting into service, and their use.Footnote 55

However, in the face of such potentially negative effects, the fundamental question is whether, beyond the adoption of specific rules for AI systems aimed at controlling their use and mitigating their negative effects, traditional legal liability regimes are adequately equipped to manage the risks and effectively resolve the conflicts arising from such situations in complex technological environments.

In this respect, the European Union faces important legislative policy choices. The first is to assess whether a thorough reform of the product liability regimeFootnote 56 is necessary to accommodate AI systems.Footnote 57 The questions are manifold: are AI systems products? Is a decision of an AI system that causes damage necessarily the result of a defect? And do the provisions of the Directive work adequately in the face of an AI system that has been updated since it was put on the market? The second is to consider whether it is appropriate to establish a harmonised liability regime specific to AI, as suggested in the abovementioned Parliament Resolution,Footnote 58 and, if so, whether it should be an operator’s liability and whether the distinction between strict liability for high-risk systems and fault-based liability for the rest is appropriate. The Commission’s 2022 ProposalFootnote 59 departs from the route initiated by the Parliament in 2020: it proposes a Directive instead of a Regulation, and it puts forward a minimum, complementary, and targeted harmonisation of national rules on fault-based, non-contractual civil liability, with rules on specific aspects of fault-based liability at Union level.

1.4 Concluding Remarks: Principles for the Responsible Use of AI in Decision-Making

The principle of non-discrimination against the use of AI systems in any activity and for any decision-making enables intense and extensive automation in the banking (financial) sector through the implementation of AI solutions. Within this favourable and automation-friendly framework, compliance with the regulatory requirements demanded by the nature of the sectoral activity (law-compliant AI systems) must nevertheless be ensured, and some specific limitations must be added for uses that, by reason of their purpose (e.g., credit scoring, recruitment and promotion, biometric recognition), the future AI Act could prohibit or subject to certain obligations. To the extent that these AI systems are also employed to provide recommendations, personalise offers, produce rankings, or moderate content, additional rules (DSA, DMA, GDPR) could apply if they are used by financial institutions that have transformed their business model into an online platform.

Even so, there is neither a compact and coherent set of principles capable of guiding automation strategies nor a comprehensive body of rules that would provide full legal certainty for the implementation of AI systems in the banking sector. The highly distinctive characteristics of AI do not always make the application of existing rules under a technology-neutral and functional-equivalence approach fully satisfactory, nor are the existing rules always feasible or workable in the AI context. There are therefore calls in the European Union to complement the legal framework with other specific principles so as to crystallise a body of rules suitable for AI. The EBA has also advocated this strategy at the sectoral level.

The formulation of ethical principles is certainly a starting point, but the integration of AI systems in the course of an economic activity, throughout the transactional cycle and for business management, requires a clear framework of duties and obligations. This is the endeavour that policymakers in the European Union and internationally must now face.Footnote 60 It is necessary to specify how AI systems should be designed, implemented, and commissioned to satisfy the principles of traceability, explainability, transparency, human oversight, auditability, non-discrimination, reasoned explanation of decisions, and access to a review mechanism for significant decisions. It will be key to understand how the provisions of the future AI Act interact with contract law and liability rules,Footnote 61 to what extent the classification of an AI system as high risk under the AI Act could imply the application of a strict liability regime (as previously proposed under the Parliament’s resolution scheme, even if this approach has not been followed by the Commission’s recent proposals for Directives), what effects the failure to articulate a human-intervention mechanism under Art. 22 GDPR would have on the validity and effectiveness of an automated decision based on profiling, and what implications a bank operator’s failure to comply with the requirements of the AI Act would have on the validity and enforceability of the contract or on the eventual categorisation of certain bank practices as unfair commercial practices.

It is essential for financial firms, referred to in this book as Automated Banks, to be provided with clear and coherent rules for the use and implementation of AI systems in decision-making. The law must be developed in combination with, and accompanied by, detailed (technical) standards, best practices, and protocols, progressively and increasingly harmonised in the financial sector.

Footnotes

1 This marks the beginning of a second generation of digital transformation. The terminology ‘first and second generation’ to refer to the successive waves of emerging technologies is used and explained by the author in other previous publications. T Rodríguez de las Heras Ballell, Challenges of Fintech to Financial Regulatory Strategies (Madrid: Marcial Pons, 2019), in particular, pp. 61 et seq.

2 Financial markets have been incorporating state-of-the-art digital communication channels and technological applications for more than two decades – International Finance Corporation (IFC), Digital Financial Services: Challenges and Opportunities for Emerging Market Banks (Report, 2017) p. 1. Regulation has been gradually accommodating these transformations: J Dermine, ‘Digital Banking and Market Disruption: A Sense of déjà vu?’ (2016) 20 Financial Stability Review, Bank of France 17.

3 The study resulting from the survey conducted by the Institute of International Finance revealed that traditional commercial banks are adopting technological solutions (artificial intelligence, machine learning, and deep learning techniques) as a strategy to gain efficiency and compete effectively with new fintech entrants: Institute of International Finance, Machine Learning in Credit Risk (Report, May 2018). PwC’s 2021 Digital Banking Consumer Survey confirms this same attitude of traditional banks to rethink their sales, marketing, and customer interaction practices, models, and strategies: PwC, Digital Banking Consumer Survey (Report, 2021) <www.pwc.com/us/en/industries/banking-capital-markets/library/digital-banking-consumer-survey.html>. In this overhaul and modernisation strategy, the incorporation of digital technologies – in particular, the use of AI and machine learning models to deliver highly accurate personalised services – is a crucial piece.

4 Capgemini, World Fintech Report 2018 (Report, 2018) highlights the possibilities offered by emerging technologies for the delivery of customer-facing financial services – artificial intelligence, data analytics, robotics, DLT, biometrics, platforms, IoT, augmented reality, chatbots, and virtual assistants – pp. 20 et seq. Capgemini, World Fintech Report 2021 (Report, 2021) confirms how the synergistic combination of these transformative technologies has opened up four routes for innovation in the financial sector: establishing ecosystems, integrating physical and digital processes, reorienting transactional flows, and reimagining core functions.

5 World Economic Forum, Forging New Pathways: The next evolution of innovation in Financial Services (Report, 2020) 14 <www.weforum.org/reports/forging-new-pathways-the-next-evolution-of-innovation-in-financial-services>.

6 According to the European Banking Authority (EBA), 64 per cent of European banks have already implemented AI-based solutions in services and processes, primarily with the aim of reducing costs, increasing productivity, and facilitating new ways of competing. EBA, Risk assessment of the European Banking System (Report, December 2020) 75.

9 European Securities and Markets Authority (ESMA), European Banking Authority (EBA), European Insurance and Occupational Pensions Authority (EIOPA), Joint Committee Discussion Paper on automation in financial advice, (Discussion Paper JC 2015 080, 4 December 2015) <https://esas-joint-committee.europa.eu/Publications/Discussion%20Paper/20151204_JC_2015_080_discussion_paper_on_Automation_in_Financial_Advice.pdf>. PwC, Global Fintech Survey 2016, Beyond Automated Advice. How FinTech Is Shaping Asset & Wealth Management (Report, 2016) 8, <www.pwc.com/gx/en/financial-services/pdf/fin-tech-asset-and-wealth-management.pdf>.

10 Capgemini, World Fintech Report 2021 (Report, 2021) <https://fintechworldreport.com/>: ‘The consequences of the pandemic have made the traditional retail banking environment even more demanding’.

11 The Financial Stability Board (FSB) defines fintech as ‘technology-enabled innovation in financial services that could result in new business models, applications, processes or products, with an associated material effect on the provision of financial services’: FSB, Financial Stability Implications from Fintech (Report, June 2017) 7 <www.fsb.org/wpcontent/uploads/R270617.pdf>.

12 TF Dapp, ‘Fintech Reloaded-Traditional Banks as Digital Ecosystems’ (2015) Deutsche Bank Research 5.

13 T Rodríguez de las Heras Ballell, ‘The Legal Anatomy of Electronic Platforms: A Prior Study to Assess the Need of a Law of Platforms in the EU’ (2017) 1 The Italian Law Journal 3, 149–76.

14 IH-Y Chiu, ‘Fintech and Disruptive Business Models in Financial Products, Intermediation and Markets – Policy Implications for Financial Regulators’ (2016) 21 Journal of Technology Law and Policy 55.

15 A Wright and P De Filippi, ‘Decentralized Blockchain Technology and the Rise of Lex Cryptographia’ (2015) <https://ssrn.com/abstract=2580664>.

16 R Lewis et al, ‘Blockchain and Financial Market Innovation’ (2017) Federal Reserve Bank of Chicago, Economic Perspectives 7.

17 According to the KPMG-Funcas report, Comparison of Banking vs. Fintech Offerings (Report, 2018) <https://assets.kpmg/content/dam/kpmg/es/pdf/2018/06/comparativa-oferta-%20banca-fintech.pdf>, 48 per cent of domestic fintech firms are complementary to banks, 32 per cent are collaborative, and 20 per cent are competitors. It is estimated that 26 per cent of financial institutions have partnered with Big Tech or technology giants, and a similar percentage plan to do so within the next twelve months: KPMG-Funcas, La banca ante las BigTech (Report, December 2019), presented in the framework of the Observatorio de la Digitalización Financiera (ODF).

18 World Economic Forum, Beyond Fintech: A Pragmatic Assessment of Disruptive Potential in Financial Services (Report, 2017) <www.weforum.org/reports/beyond-Fintech-a-pragmatic-assessment-of-disruptive-potential-in-financial-services>.

19 G Biglaiser, E Calvano, and J Crémer, ‘Incumbency Advantage and Its Value’ (2019) 28 Journal of Economics & Management Strategy 1, 41–48.

20 Spanish Fintech and Insurtech Association (AEFI), White Paper on Fintech Regulation in Spain (White Paper, 2017) <https://asociacionfintech.es/wp-content/uploads/2018/06/AEFI_LibroBlanco_02_10_2017.pdf>. Basel Committee on Banking Supervision, Sound Practices. Implications of Fintech Developments for Banks and Bank Supervisors (Report, 2018).

21 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules in the field of artificial intelligence (Artificial Intelligence Act) and amending certain legislative acts of the Union, {SEC(2021) 167 final} – {SWD(2021) 84 final} – {SWD(2021) 85 final}, Brussels, 21.4.2021, COM(2021) 206 final, 2021/0106(COD). References to draft provisions in this chapter are to the compromise text adopted on 3 November 2022, submitted to Coreper on 11 November 2022 for a discussion scheduled on 18 November 2022, with the amendments subsequently adopted by the European Parliament on 14 June 2023.

22 EBA, Report on Big Data and Advanced Analytics (Report EBA/REP/2020/01, 2020), 33–42.

23 White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, COM(2020) 65 final, Brussels, 19 February 2020.

24 Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee, Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics (Report COM(2020) 64, 19 February 2020).

25 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Shaping Europe’s Digital Future, COM(2020) 67 final, Brussels, 19 February 2020.

26 ‘Building Trust in Human-Centric AI’, European Commission (Web Page) <https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html>.

27 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance), OJ L 277, 1–102.

28 Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (Text with EEA relevance), OJ L 265, 1–66.

29 Principles enshrined in international harmonisation instruments adopted by the United Nations: notably and essentially, the 1996 Model Law on Electronic Commerce, the 2001 Model Law on Electronic Signatures, the 2005 Convention on the Use of Electronic Communications in International Contracts, and the 2017 Model Law on Electronic Transferable Records <www.uncitral.un.org>.

30 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1.

31 T Rodríguez de las Heras Ballell, ‘Legal Challenges of Artificial Intelligence: Modelling the Disruptive Features of Emerging Technologies and Assessing Their Possible Legal Impact’ (2019) 1 Uniform Law Review 113.

32 European Commission, Report of the Expert Group in Its New Technologies Formation, Report on Liability for Artificial Intelligence and Other Emerging Technologies (Report, November 2019) <https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608>.

33 European Commission, Expert Group on Liability and New Technologies, in its two formations, the New Technologies Formation and the Product Liability Formation <https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupDetail&groupID=3592->.

34 The author is a member of the Expert Group on Liability and New Technologies (New Technologies Formation), which assists the European Commission in developing principles and guidelines for the adaptation of European and national regulatory frameworks for liability in the face of the challenges of emerging digital technologies (Artificial Intelligence, Internet of Things, Big Data, Blockchain, and DLT). The Expert Group issued its Report on Liability for Artificial Intelligence and Other Emerging Technologies which was published on 21 November 2019. The views expressed by the author in this paper are personal and do not necessarily reflect either the opinion of the Expert Group or the position of the European Commission.

35 Proposal COM/2022/496 of 28 September 2022 for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive).

36 Proposal COM/2022/495 of 28 September 2022 for a Directive of the European Parliament and of the Council on liability for defective products.

37 Artificial intelligence system (AI system) means a system that

(i) receives machine and/or human-based data and inputs,

(ii) infers how to achieve a given set of human-defined objectives using learning, reasoning, or modelling implemented with the techniques and approaches listed in Annex I, and

(iii) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments it interacts with.

38 The genesis of the fourth and final compromise text is described as follows:

On 5 July 2022, the Czech Presidency held a policy debate in WP TELECOM on the basis of a policy options paper, the outcomes of which were used to prepare the second compromise text. Based on the reactions of the delegations to this compromise, the Czech Presidency prepared the third compromise text, which was presented and discussed in WP TELECOM on 22 and 29 September 2022. After these discussions, the delegations were asked to send in their written comments on the points they felt most strongly about. Based on those comments, as well as using the input obtained during bilateral contacts with the Member States, the Czech Presidency drafted the fourth compromise proposal, which was discussed in the WP TELECOM meeting on 25 October 2022. Based on these discussions, and taking into account final written remarks from the Member States, the Czech Presidency has now prepared the final version of the compromise text.

39 Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).

40 Report with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), 5 October 2020 <www.europarl.europa.eu/doceo/document/A-9-2020-0178_ES.pdf>.

41 (a) ‘Artificial intelligence system’ means any software-based or hardware-embedded system that exhibits behaviour simulating intelligence, inter alia, by collecting and processing data, analysing and interpreting its environment and taking action, with a degree of autonomy, to achieve specific objectives.

42 Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services (Text with EEA relevance) [2019] OJ L 186/57.

43 See also discussion in Chapters 2–4 in this book.

44 Although their effective use is still limited, the significant advantages they offer herald promising adoption rates. EBA, Report on Big Data and Advanced Analytics (Report EBA/REP/2020/01, 2020) 20, figure 2.1.

45 A Alonso and JM Carbó, ‘Understanding the Performance of Machine Learning Models to Predict Credit Default: A Novel Approach for Supervisory Evaluation’ (Working Paper No 2105, Banco de España, March 2021) <www.bde.es/f/webbde/SES/Secciones/Publicaciones/PublicacionesSeriadas/DocumentosTrabajo/21/Files/dt2105e.pdf>.

46 Deloitte, Artificial Intelligence: Innovation Report (Report, 2018).

47 O Kaya, ‘Robo-Advice: A True Innovation in Asset Management’ (Research Paper, Deutsche Bank Research, EU Monitor Global Financial Markets, 10 August 2017) 9.

48 T Bucher-Koenen, ‘Financial Literacy, Cognitive Abilities, and Long-Term Decision Making: Five Essays on Individual Behavior’ (Doctoral Dissertation in Economics, University of Mannheim, 2010).

49 A Chander, ‘The Racist Algorithm?’ (2017) 115 Michigan Law Review 1023.

50 S Barocas and A Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671.

51 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (Text with EEA relevance) [2013] OJ L 176/338.

52 See also arguments raised by Bednarz and Przhedetsky in Chapter 4 in this book as to the legal rules that incentivise the use of ADM and AI tools by financial entities.

53 Regulation (EU) 2020/1503 of the European Parliament and of the Council of 7 October 2020 on European crowdfunding service providers for business, and amending Regulation (EU) 2017/1129 and Directive (EU) 2019/1937 (Text with EEA relevance) [2020] OJ L 347/1.

54 Report with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), 5 October 2020 (n 40).

55 According to Article 1 of the proposal, the Regulation of the European Parliament and of the Council laying down harmonised rules in the field of artificial intelligence (Artificial Intelligence Act) lays down:

  (a) harmonised rules for the placing on the market, putting into service and use of artificial intelligence systems (“AI systems”) in the Union;

  (b) prohibitions of certain artificial intelligence practices;

  (c) specific requirements for high-risk AI systems and obligations for operators of such systems;

  (d) harmonised transparency rules for certain AI systems; and

  (e) rules on market monitoring, market surveillance, governance and enforcement.

56 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] OJ L 210/29.

57 Proposal for a Directive on liability for defective products COM(2022) 495. BA Koch et al, ‘Response of the European Law Institute to the Public Consultation on Civil Liability – Adapting Liability Rules to the Digital Age and Artificial Intelligence’ (2022) 13 Journal of European Tort Law 1, 25–63 <https://doi.org/10.1515/jetl-2022-0002>.

58 Proposal for a Directive on liability for defective products COM(2022) 495.

60 The European Law Institute’s projects on Smart Contracts and Blockchain and on Algorithmic Contracts, and its Innovation Paper on Guiding Principles for Automated Decision-Making in Europe, seek to contribute to this pre-legislative debate in the Union (‘ELI Projects and Other Activities’, European Law Institute (Web Page) <www.europeanlawinstitute.eu/projects-publications/>). At the international level, work has also started in the same direction, such as the new UNCITRAL Working Group IV project on automation and the use of AI in international trade (‘Working Group IV: Electronic Commerce’, United Nations Commission on International Trade Law (Web Page) <https://uncitral.un.org/es/working_groups/4/electronic_commerce>).

61 C Codagnone, G Liva, and T Rodríguez de las Heras Ballell, Identification and Assessment of Existing and Draft EU Legislation in the Digital Field (Study, 2022) <www.europarl.europa.eu/thinktank/de/document/IPOL_STU(2022)703345>. Study requested by the AIDA special committee, European Parliament.
