4.1 Introduction
Automated Banks – the financial entities using ADM and AI – feed off the culture of secrecy that is pervasive and entrenched in automated processes across sectors from ‘Big Tech’ to finance to government agencies, allowing them to avoid scrutiny, accountability, and liability.Footnote 1 As Pasquale points out, ‘finance industries profit by keeping us in the dark’.Footnote 2
An integral part of the financial industry’s business model is the use of risk scoring to profile consumers of financial services, for example in the form of credit scoring, which is a notoriously opaque process.Footnote 3 The use of non-transparent, almost ‘invisible’ surveillance processes and the harvesting of people’s data is not new: financial firms have always been concerned with collecting, aggregating, and combining data for the purposes of predicting the value of their customers through risk scoring.Footnote 4 AutomationFootnote 5 introduces a new level of opacity in the financial industry, for example through the creation of AI models for which explanations are not provided – either deliberately, or due to technical explainability challenges.Footnote 6
In this chapter we argue that the rise of AI and ADM tools contributes to opacity within the financial services sector, including through the intentional use of the legal system as a ‘shield’ to prevent scrutiny and blur accountability for harms suffered by consumers of financial services. A wealth of literature critiques the status quo, showing that consumers are disadvantaged by information asymmetries,Footnote 7 complicated consent agreements,Footnote 8 information overload,Footnote 9 and other tactics that leave consumers clueless as to whether, when, and how they have been subject to automated systems. If consumers seek to access a product or service, it is often a requirement that they be analysed and assessed using an automated tool, for example, one that determines a credit score.Footnote 10 The potential harms are interlinked and range from financial exclusion to digital manipulation to the targeting of vulnerable consumers and privacy invasions.Footnote 11 In our analysis we are mostly concerned with discrimination as an example of such harm,Footnote 12 as it provides a useful illustration of problems enabled by opacity: significant difficulty in determining whether unfair discrimination has occurred at all, in understanding the reasons for the decision affecting the person or group, and in accessing redress.
The rules we examine will differ among jurisdictions, and our aim is not to provide a comprehensive comparative analysis of all laws that provide potential protections against scrutiny and increase the opacity of ADM-related processes of Automated Banks. We are interested in exploring certain overarching tendencies, using examples from various legal systems, and showing how financial firms may take advantage of the complex legal and regulatory frameworks applicable to their operations in relation to the use of AI and ADM tools.
As the use of AI and ADM continues to grow in financial services markets, consumers face the additional challenge of knowing about, and considering, how their ever-expanding digital footprint may be used by financial institutions. The more data exists about a person, the better their credit score tends to be (within certain limits, of course, such as whether they pay off debts on time).Footnote 13 The exact same mechanism may underpin ‘open banking’ schemes: consumers who do not have sufficient data – often vulnerable people, such as domestic violence victims, new immigrants, or Indigenous people – cannot share their data with financial entities, and may be excluded from accessing some products or offered higher prices, even if their actual risk is low.Footnote 14
In Australia, consumers have claimed that they have been denied loans due to their use of takeaway food services and digital media subscriptions.Footnote 15 Credit rating agencies such as Experian explicitly state that they access data sources that reflect consumers’ use of new financial products, including ‘Buy Now Pay Later’ schemes.Footnote 16 As more advanced data collection, analysis, and manipulation technologies continue to be developed, there is potential for new categories of data to emerge. Already, companies can draw surprising inferences from big data. For example, studies have shown that seemingly trivial Facebook data can, with reasonable accuracy, predict a range of attributes that have not been disclosed by users: in one study, liking the ‘Hello Kitty’ page correlated strongly with a user having ‘[d]emocratic political views and to be of African-American origin, predominantly Christian, and slightly below average age’.Footnote 17
Unless deliberate efforts are made, both in the selection of data sets and in the design and auditing of ADM tools, inferences and proxy data will continue to produce correlations that may result in discriminatory treatment.Footnote 18
This chapter proceeds as follows. We begin Section 4.2 with a discussion of rules that allow corporate secrecy around AI models and their data sources to exist, focusing on three examples of such rules. We discuss the opacity of credit scoring processes and the limited explanations that consumers can expect in relation to a financial decision made about them (Section 4.2.1), trade secrecy laws (Section 4.2.2), and data protection rules which do not protect de-identified or anonymised information (Section 4.2.3). In Section 4.3 we analyse frameworks that incentivise the use of ADM tools by the financial industry, thus providing another ‘protective layer’ for Automated Banks, this time discussing two examples: financial product governance regimes (Section 4.3.1) and ‘open banking’ rules (Section 4.3.2). The focus of Section 4.4 is on potential solutions. We argue it is not possible for corporate secrecy and consumer rights to coexist, and provide an overview of potential regulatory interventions, focusing on preventing Automated Banks from using harmful AI systems (Section 4.4.1), helping consumers understand when ADM is used (Section 4.4.2), and facilitating regulator monitoring and enforcement (Section 4.4.3). The chapter concludes with Section 4.5.
4.2 Rules That Allow Corporate Secrecy to Exist
4.2.1 Opacity of Credit Scoring and the (Lack of) Explanation of Financial Decisions
Despite their widespread use in the financial industry, credit scores are difficult for consumers to understand or interpret. A person’s credit risk has traditionally been calculated based on the ‘three C’s’: collateral, capacity, and character.Footnote 19 Due to the rise of AI and ADM tools in the financial industry, the ‘three C’s’ are increasingly being supplemented and replaced by diverse categories of data.Footnote 20 An interesting example can be found in FICO scores, which arguably represent the first large-scale process in which automated computer models replaced human decision-making.Footnote 21 FICO, one of the best-known credit scoring companies,Footnote 22 explains that their scores are calculated according to five categories: ‘payment history (35%), amounts owed (30%), length of credit history (15%), new credit (10%), and credit mix (10%)’.Footnote 23 These percentage weightings are determined by the company to give consumers an understanding of how different pieces of information are weighted in the calculation of a score, and the ratios identified within FICO scores will not necessarily reflect the weightings used by other scoring companies. Further, while FICO provides a degree of transparency, the way in which a category such as ‘payment history’ is calculated remains opaque: consumers are not privy to what is considered ‘good’ or ‘bad’ behaviour, as represented by data points in their transaction records.Footnote 24
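The arithmetic of such a weighted composite is simple; the opacity lies elsewhere. The sketch below is a hypothetical illustration only: FICO publishes its category weights, but how each category sub-score and the final scaling are derived is not public, so those parts of the code are invented. The point is that the published weightings alone tell a consumer very little, because everything of substance happens inside the opaque sub-scores.

```python
# Published FICO category weights (see the quoted percentages above).
WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_credit_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

def composite_score(sub_scores, lo=300, hi=850):
    """Combine category sub-scores (each on an assumed 0-1 scale) into a
    single number on a familiar 300-850 range. The opacity the text
    describes lives in how each sub-score is computed, not in this sum."""
    weighted = sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)
    return lo + weighted * (hi - lo)

# A hypothetical consumer: strong payment history, heavily utilised credit.
score = composite_score({
    "payment_history": 0.95,
    "amounts_owed": 0.40,
    "length_of_credit_history": 0.60,
    "new_credit": 0.80,
    "credit_mix": 0.70,
})
print(round(score))  # 681
```

Two consumers comparing scores of, say, 681 and 702 learn nothing about which behaviours, as encoded in the hidden sub-scores, produced the difference.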
Globally, many credit scoring systems (both public and private) produce three-digit numbers within a specified range to determine a consumer’s creditworthiness. For example, privately operated Equifax and Trans Union Empirica score consumers in Canada between 300 and 900,Footnote 25 whereas credit bureaus in Brazil score consumers between 1 and 1,000.Footnote 26 In an Australian context, scores range between 0 and 1,000, or 1,200, depending on the credit reporting agency.Footnote 27 By contrast, other jurisdictions use letter-based ratings, such as Singapore’s HH to AA scale which corresponds with a score range of 1,000–2,000,Footnote 28 or blacklists, such as Sweden’s payment default records.Footnote 29
Credit scoring, it turns out, is surprisingly accurate in predicting financial breakdowns or future loan delinquency,Footnote 30 but the way different data points are combined by models is not something even the model designer can understand using just intuition.Footnote 31 Automated scoring processes become even more complex as credit scoring companies increasingly rely on alternative data sources to assess consumers’ creditworthiness, including ‘predictions about a consumer’s friends, neighbors, and people with similar interests, income levels, and backgrounds’.Footnote 32 And a person’s credit score is just one of the elements that lenders – Automated Banks – feed into their models to determine a consumer’s risk score. It has been reported that college grades and the time of day an individual applies for a loan have been used to determine a person’s access to credit.Footnote 33 These types of data constitute ‘extrinsic data’ sources, which consumers are unknowingly sharing.Footnote 34
The use of alternative data sources is promoted as a way of expanding consumers’ access to credit in instances where there is a lack of quality data (such as previous loan repayment history) to support the underwriting of consumers’ loans.Footnote 35 Applicants are often faced with a ‘Catch-22 dilemma: to qualify for a loan, one must have a credit history, but to have a credit history one must have had loans’.Footnote 36 This shows how ADM tools offer more than just new means to analyse greater-than-ever quantities of data: they also offer a convenient excuse for Automated Banks to effectively use more data.
Of course, increasing reliance on automated risk scoring is not the origin of unlawful discrimination in financial contexts. However, it is certainly not eliminating discriminatory practices either: greater availability of more granular data, even when facially neutral, reinforces existing inequalities.Footnote 37 Automated Banks have also been shown to use alternative data to target more vulnerable consumers, whom they were not able to reach or identify when using only traditional data on existing customers.Footnote 38 The qualitative change that AI tools promise to bring is to ‘make the data talk’: all data is credit data, if we have the right automated tools to analyse it.Footnote 39
Collection, aggregation, and use of such high volumes of data, including ‘extrinsic data’, also make it more difficult, if not impossible, for consumers to challenge financial decisions affecting them. While laws relating to consumer lending (or consumer financial products in general) in most jurisdictions provide that some form of explanation of a financial decision needs to be made available to consumers,Footnote 40 these rules will rarely be useful in the context of ADM and AI tools used in processes such as risk scoring.
This is because AI tools operate on big data. Too many features of a person are potentially taken into account for any feedback to be meaningful. The fact that risk scores and lending decisions are personalised makes it even more complicated for consumers to compare their offer with anyone else’s. This can be illustrated by the Apple credit card case,Footnote 41 which has shown the complexity of the investigation necessary for people to be able to access potential redress: when applying for personalised financial products, consumers cannot immediately know what features are being taken into account by financial firms assessing their risk, and subsequent investigation by regulators or courts may be required.Footnote 42 The lack of a right to a meaningful explanation of credit scores, and of lending decisions based on those scores, leaves consumers facing Automated Banks and the automated credit scoring system quite literally powerless.Footnote 43
4.2.2 Trade Secrets and ADM Tools in Credit Scoring
The opacity of credit scoring, or risk scoring more generally, and other automated assessment of clients that Automated Banks engage in, is enabled by ADM tools which ‘are highly valuable, closely guarded intellectual property’.Footnote 44 Complementing the limited duty to provide explanation of financial decisions to consumers, trade secrets laws allow for even more effective shielding of the ADM tools from scrutiny, including regulators’ and researchers’ scrutiny.
While trade secrets rules differ between jurisdictions, the origin and general principles that underpin these rules are common across legal systems: trade secrets evolved as a mechanism to protect diverse pieces of commercial information, such as formulas, devices, or patterns, from competitors.Footnote 45 These rules fill the gap where classic intellectual property law, such as copyright and patent law, fails – and it notably fails in relation to AI systems, since algorithms are specifically excluded from its protection.Footnote 46 Recent legal developments, for example the European Union Trade Secrets Directive,Footnote 47 or the US Supreme Court case of Alice Corp. v CLS Bank,Footnote 48 mean that to protect their proprietary technologies, companies are now turning to trade secrets.Footnote 49 In practice, this greatly reduces the transparency of the ADM tools used: if these cannot be protected through patent rights, they need to be kept secret.Footnote 50
The application of trade secrets rules leads to a situation in which financial entities, for example lenders or insurers, that apply third-party automated tools to assess the creditworthiness of their prospective clients might not be able to access the models and data those tools use. Using third-party tools is a common practice, and the proprietary nature of the tools, and of the data used to develop and train the models, means that financial entities using them may be forced to rely on the supplier’s specifications in relation to fairness, as they may not be able to access the code themselves.Footnote 51
Secrecy of ADM tools of course has implications for end users, who will be prevented from challenging credit models, and is also a barrier to enforcement and research.Footnote 52 Trade secret protections apply not only to risk scoring models, but often also extend to data sets and inferences generated from information provided by individuals.Footnote 53 Commercial entities openly admit they ‘invest significant amounts of time, money and resources’ to draw inferences about individuals ‘using […] proprietary data analysis tools’, a process ‘only made possible because of the [companies’] technical capabilities and value add’.Footnote 54 This, they argue, makes the data sets containing inferred information a company’s intellectual property.Footnote 55
The application of trade secrets rules to credit scoring in a way that affects the transparency of the financial system is not exactly new: ‘[t]he trade secrecy surrounding credit scoring risk models, and the misuse of the models coupled with the lack of governmental control concerning their use, contributed to a financial industry wide recession (2007–2008)’.Footnote 56
In addition to trade secrets laws, a sui generis protection of the source code of algorithms is being introduced into international trade law through free trade agreements,Footnote 57 which prevent governments from mandating access to the source code. The members of the World Trade Organization (WTO) are currently negotiating a new e-commerce trade agreement, which may potentially include a prohibition on government-mandated access to software source code.Footnote 58 WTO members including Canada, the EU, Japan, South Korea, Singapore, Ukraine, and the United States support such a prohibition,Footnote 59 which in practice would mean a limited ability for states to adopt laws requiring independent audits of AI and ADM systems.Footnote 60 It has been argued that adoption of the WTO trade agreement could thwart the adoption of the EU’s AI Act,Footnote 61 demonstrating how free trade agreements can impose another layer of rules enhancing the opacity of AI and ADM tools.
4.2.3 ‘Depersonalising’ Information to Avoid Data and Privacy Protection Laws: Anonymisation, De-identification, and Inferences
Automated Banks’ opacity is enabled by the express exclusion of ‘anonymised’ or ‘de-identified’ data from the scope of data and privacy protection laws such as the GDPR.Footnote 62 In its Recital 26, the GDPR defines anonymised information as not relating to ‘an identified or identifiable natural person’ or as ‘data rendered anonymous in such a manner that the data subject is not or no longer identifiable’. This allows firms to engage in various data practices that purport to use anonymised data.Footnote 63 They argue they do not collect or process ‘personal information’, thus avoiding the application of the rules, and regulatory enforcement.Footnote 64 Moreover, consumers to whom privacy policies are addressed believe that practices focusing on information that does not directly identify them have no impact on their privacy.Footnote 65 This in turn may mean privacy policies are misrepresenting data practices to consumers, which could potentially invalidate their consent.Footnote 66
There is an inherent inconsistency between privacy and data protection rules and the uses and benefits that ADM tools using big data analytics promise. The principles of purpose limitation and data minimisationFootnote 67 require entities to delimit, quite strictly and in advance, how the data collected are going to be used, and prevent them from collecting and processing more data than necessary for that specific purpose. However, this is not how big data analytics, which fuels ADM and AI models, works.Footnote 68 Big data means that ‘all data is credit data’, incentivising Automated Banks to collect as much data as possible, for any possible future purpose, potentially not yet known.Footnote 69 The exclusion of anonymised or de-identified data from the scope of the protection frameworks opens the door for firms to take advantage of enhanced analytics powered by new technologies. The contentious question is at which point information becomes, or ceases to be, personal information. If firms purchase, collect, and aggregate streams of data, producing inferences that allow them to describe someone in great detail – their age, preferences, dislikes, the size of clothes they wear, the health issues they suffer from, their household size, and their income levelFootnote 70 – but do not link this profile to the person’s name, email, physical address, or IP address, would it be personal information? Such a profile, it could be argued, represents a theoretical, ‘model’ person or consumer, built for commercial purposes through the aggregation of demographic and other available information.Footnote 71
De-identified data may still allow a financial firm to achieve more detailed segmentation and profiling of their clients. There are risks of harms in terms of ‘loss of privacy, equality, fairness and due process’ even when anonymised data is used.Footnote 72 Consumers are left unprotected against profiling harms due to such ‘narrow interpretation of the right to privacy as the right to anonymity’.Footnote 73
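This profiling risk can be made concrete with a minimal sketch. In the toy example below, all records, field names, and ‘segment’ labels are invented: two data sets that each contain no name, address, or account number are joined on a handful of demographic fields, which in practice can be enough to re-attach a marketing profile to an identifiable customer.

```python
# An "anonymised" marketing data set: no direct identifiers at all.
marketing = [
    {"postcode": "2000", "age_band": "30-39", "gender": "F",
     "segment": "high-interest credit seeker"},
    {"postcode": "3051", "age_band": "60-69", "gender": "M",
     "segment": "low-risk saver"},
]

# A lender's own customer records (internal IDs, no names).
customers = [
    {"customer_id": "c-481", "postcode": "2000", "age_band": "30-39",
     "gender": "F"},
    {"customer_id": "c-112", "postcode": "4000", "age_band": "20-29",
     "gender": "M"},
]

def quasi_key(record):
    # Postcode + age band + gender acts as a quasi-identifier: each field
    # is "anonymous" alone, but the combination often singles a person out.
    return (record["postcode"], record["age_band"], record["gender"])

# Join the two "de-identified" data sets on the quasi-identifier.
segments = {quasi_key(m): m["segment"] for m in marketing}
enriched = {c["customer_id"]: segments.get(quasi_key(c)) for c in customers}
print(enriched)
# {'c-481': 'high-interest credit seeker', 'c-112': None}
```

No ‘personal information’, narrowly construed, changes hands at any step, yet customer c-481 ends up profiled as a ‘high-interest credit seeker’.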
There is also discussion as to the status of inferences under data and privacy protection laws. Credit scoring processes are often based on inferences, where a model predicts someone’s features (and ultimately their riskiness or value as a client) on the basis of other characteristics that they share with others deemed risky by the model.Footnote 74 AI models may thus penalise individuals for ‘shopping at low-end stores’, membership in particular communities or families, and affiliations with certain political, religious, and other groups.Footnote 75 While AI-powered predictions about people’s characteristics are often claimed to be more accurate than those made by humans,Footnote 76 they may also be inaccurate.Footnote 77 The question is whether such inferences are considered personal information protected by privacy and data laws.
Entities using consumers’ data, such as technology companies, are resisting the express inclusion of inferred information in the scope of data and privacy protections. For example, Facebook openly admitted that ‘[t]o protect the investment made in generating inferred information and to protect the inferred information from inappropriate interference, inferred information should not be subject to all of the same aspects of the [Australian Privacy Act] as personal information’.Footnote 78 The ‘inappropriate interference’ mentioned refers to extending data correction and erasure rights to inferred information.
Further, there is an inherent clash between the operation of privacy and data protection rules and the inference processes AI tools are capable of carrying out. Any information, including sensitive information, may be effectively used by an ADM system, even though it only materialises as an internal encoding of the model and is not recorded in a human-understandable way. The lack of explicit inclusion of inferred information, and its use, within privacy and data protection frameworks provides another layer of opacity shielding financial firms (as well as other entities) from scrutiny of their ADM tools.
When information is ‘depersonalised’ in some way – deliberately de-identified through the elimination of strictly personal identifiers,Footnote 79 through the use of anonymous ‘demographic’ data, through ‘pseudonymisation’ practices, or because it is inferred from data held (either personal or already de-identified) – the result is the same: privacy and data protection rules do not apply. Firms take advantage of that exclusion, sometimes balancing on the thin line between legal and illegal data processing, and making their data practices non-transparent to avoid scrutiny by consumers and regulators.
As a US judge in a recent ruling put it: ‘[i]t is well established that there is an undeniable link between race and poverty, and any policy that discriminates based on credit worthiness correspondingly results in a disparate impact on communities of color’.Footnote 80 The data used in large-scale AI and ADM models is often de-identified or anonymised, but it inherently mirrors historical inequalities and biases, thus allowing the Automated Banks to claim impartiality and avoid responsibility for the unfairness of data used.
The reason why privacy and data protection rules lack clear consideration of certain data practices and processes enabled by AI may be that these tools and processes are relatively new and poorly understood phenomena.Footnote 81 This status quo is, however, very convenient for companies, which will often argue that ‘innovation’ will suffer if more stringent regulation is introduced.Footnote 82
4.3 Rules That Incentivise the Use of ADM Tools by Financial Entities
In addition to offering direct pathways allowing Automated Banks to evade scrutiny of their AI and ADM models, legal systems and markets in the developed world have also evolved to incentivise the use of automated technology by financial entities. In fact, the use of ADM and AI tools is encouraged, or sometimes even mandated,Footnote 83 by legal and regulatory frameworks. After all, the fact that financial entities are told either to use the technology, or to achieve outcomes that can effectively only be reached by applying it, provides them with a very convenient excuse. Though this is mainly an unintended effect of the rules, it should not be ignored.
In this section, we discuss two examples of rules that increase the secrecy of AI or ADM tools used in the context of risk scoring: financial products governance rules and ‘open banking’ regimes.
4.3.1 Financial Products Governance Rules
Financial firms have always been concerned with collecting and using data about their consumers to differentiate between more and less valuable customers. For example, insurance firms, even before AI profiling tools were invented (or at least before they were applied at a greater scale), were known to engage in practices referred to as ‘cherry-picking’ and ‘lemon-dropping’ – setting up offices on higher floors of buildings with no lifts, so that it would be harder for disabled (potential) clients to reach them.Footnote 84 There is a risk that widespread data profiling and the use of AI tools may exacerbate issues relating to consumers’ access to financial products and services. AI tools may introduce new biases or replicate historical biases present in data,Footnote 85 doing so more efficiently, in a way that is more difficult to discover, and at a greater scale than was possible previously.Footnote 86
An additional disadvantage resulting from opaque risk scoring systems is that consumers may miss out on the opportunity to improve their score (for example, through the provision of counterfactual explanations, or the use of techniques including ‘nearby possible worlds’).Footnote 87 In instances where potential customers who would have no trouble paying back loans are given low risk scores, two key issues arise: first, the bank misses out on valuable customers, and second, there is a risk that these customers’ rejections, if used as input data to train the selection algorithm, will reinforce existing biases.Footnote 88
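What a counterfactual explanation would look like can be shown with a deliberately transparent toy model. Everything below is invented for illustration: the linear scoring rule, its coefficients, the approval threshold, and the applicant. This is a brute-force sketch of the general idea, not the ‘nearby possible worlds’ technique itself. The search asks what minimal change to one input would have flipped the decision – exactly the kind of feedback opaque scoring systems withhold.

```python
def toy_score(income, existing_debt, years_employed):
    # An invented, fully transparent linear rule standing in for a
    # real (and opaque) credit model.
    return (0.5 * (income / 1000)
            - 0.5 * (existing_debt / 1000)
            + 5 * years_employed)

def counterfactual_debt(applicant, threshold=45.0, step=500.0):
    """Smallest debt reduction that flips the decision, i.e. an answer
    to: 'you would have been approved had your debt been $X lower'."""
    income, debt, years = applicant
    reduction = 0.0
    while debt - reduction >= 0:
        if toy_score(income, debt - reduction, years) >= threshold:
            return reduction
        reduction += step
    return None  # the decision cannot be flipped on this axis alone

applicant = (80_000, 30_000, 2)   # income, existing debt, years employed
print(toy_score(*applicant))       # 35.0 -> below the 45.0 threshold
print(counterfactual_debt(applicant))  # 20000.0 -> '$20,000 less debt'
```

With a trade-secret-protected model, neither the applicant nor, often, the lender itself can run a search like this, which is why the opportunity to improve one’s score is lost.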
Guaranteeing the suitability of financial services is a notoriously complicated task for policymakers and regulators. With disclosure duties alone proving largely unsuccessful in addressing the issue of consumers being offered financial products that are unfit for purpose, policymakers in a number of jurisdictions, such as the EU and its Member States, the United Kingdom, Hong Kong, Australia, and Singapore, have started turning to product governance regimes.Footnote 89 An important component of these financial product governance regimes is an obligation placed on financial firms which issue and distribute financial products to ensure their products are fit for purpose and to adopt a consumer-centric approach in the design and distribution of the products. In particular, a number of jurisdictions require financial firms to delimit the target market for their financial products directed at retail customers, and to ensure the distribution of the products within this target market. Such a target market is a group of consumers of a certain financial product who are defined by some general characteristics.Footnote 90
Guides issued by regulators, such as the European Securities and Markets AuthorityFootnote 91 and the Australian Securities and Investments Commission,Footnote 92 indicate which consumer characteristics are to be taken into account by financial firms. The consumers for whom the product is intended are to be identified according to their ‘likely objectives, financial situation, and needs’,Footnote 93 or five ‘categories’: the type of client, their knowledge and experience, financial situation, risk tolerance, and objectives and needs.Footnote 94 For issuers or manufacturers of financial products these considerations are mostly theoretical: as they might not have direct contact with clients, they need to prepare a potential target market, aimed at theoretical consumers and their likely needs and characteristics.Footnote 95 Both issuers and distributors need to take reasonable steps to ensure that products are distributed within the target market, which then translates into the identification of real consumers with specific needs and characteristics that should be compatible with the potential target markets identified. Distributors have to hold sufficient information about their end clients to be able to assess whether they can be included in the target market,Footnote 96 including:
– indicators about the likely circumstances of the consumer or a class of consumers (e.g. concession card status, income, employment status);
– reasonable inferences about the likely circumstances of the consumer or a class of consumers (e.g. for insurance, information inferred from the postcode of the consumer’s residential address); or
– data that the distributor may already hold about the consumer or similar consumers, or results derived from analyses of that data (e.g. analysis undertaken by the distributor of common characteristics of consumers who have purchased a product).Footnote 97
Financial products governance frameworks invite financial firms to collect data on consumers’ vulnerabilities. For example in Australia, financial firms need to consider vulnerabilities consumers may have, such as those resulting from ‘personal or social characteristics that can affect a person’s ability to manage financial interactions’,Footnote 98 as well as those brought about by ‘specific life events or temporary difficulties’,Footnote 99 in addition to vulnerabilities stemming from the product design or market actions.
The rationale of product governance rules is to protect financial consumers, including vulnerable consumers,Footnote 100 yet the same vulnerable consumers may be disproportionately affected by data profiling, thus inhibiting their access to financial products. Financial law is actively asking firms to collect even more data about their current, prospective, and past customers, as well as the general public. It provides more than a convenient excuse to carry out digital profiling and collect data for even more precise risk scoring – it actually mandates this.
4.3.2 How ‘Open Banking’ Increases Opacity
The use of AI and ADM tools, together with the ever-increasing data collection feeding data-hungry models,Footnote 101 is promoted as beneficial to consumers and markets, and endorsed by companies and governments. Data collection is thus held out as a necessary component of fostering AI innovation. Companies boast about how AI insights allow them to offer personalised services, ‘tailored’ to an individual consumer’s needs. The consulting firm McKinsey hails ‘harnessing the power of external data’, noting how ‘few organizations take full advantage of data generated outside their walls. A well-structured plan for using external data can provide a competitive edge’.Footnote 102
Policymakers use the same rhetoric of promoting ‘innovation’ and encourage data collection through schemes such as open banking.Footnote 103 The aim of open banking is to give consumers the ability to direct companies that hold financial data about them to make it available to financial (or other) companies of the consumer’s choice. It thus makes it possible for organisations to get access to consumer information they could never obtain from a consumer directly, such as, for example, their transaction data for the past ten years.
Jurisdictions such as the EU, United Kingdom, Australia, and Hong Kong have recently adopted regulation promoting open banking, or ‘open finance’ more generally.Footnote 104 The frameworks are praised by the industry as ‘encourag[ing] the development of innovative products and services that help consumers better engage with their finances, make empowered decisions and access tailored products and services’.Footnote 105
While open banking is making it possible for financial firms to develop new products for consumers, the jury is still out as to whether the scheme’s implications for consumers and markets are universally positive.Footnote 106 One thing that is clear, however, is that by its very nature, open banking contributes to the information and power asymmetry between consumers and Automated Banks.
Traditionally, in order to receive a financial product, such as a loan or an insurance product, consumers had to actively provide the relevant data, answering questions or prompts about their income, spending, age, history of loan repayments, and so on. Open banking – or open finance more broadly – means that consumers can access financial products without answering any questions. Yet those questions provided a level of transparency to consumers: they knew what they were being asked, and were likely to understand why they were being asked such questions. When an individual instead shares their ‘bulk’ data, such as their banking transaction history, through the open banking scheme, do they really know what a financial firm is looking for and how the data is being used? At the same time, consumers in such a setting are deprived of control over which data to share (for example, they cannot hide transaction data on payments made to merchants such as liquor stores or pharmacies). The transparency available to financial firms when data is shared is therefore significantly higher than in ‘traditional’ settings – but for consumers the process becomes more opaque.Footnote 107
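To illustrate this asymmetry, consider a minimal sketch of how a firm might mine shared transaction history for attributes the consumer never knowingly disclosed. The merchant names, categories, and inferred features below are entirely hypothetical:

```python
from collections import Counter

# Hypothetical transactions shared through open banking.
# Merchants and categories are illustrative only.
transactions = [
    {"merchant": "City Liquor", "category": "alcohol", "amount": 42.50},
    {"merchant": "Corner Pharmacy", "category": "pharmacy", "amount": 18.00},
    {"merchant": "QuickCash Loans", "category": "short_term_credit", "amount": 200.00},
    {"merchant": "GroceryMart", "category": "groceries", "amount": 96.10},
]

def infer_profile(transactions):
    """Derive features the consumer never explicitly disclosed."""
    counts = Counter(t["category"] for t in transactions)
    return {
        "visits_liquor_stores": counts["alcohol"] > 0,
        "uses_payday_lenders": counts["short_term_credit"] > 0,
        "pharmacy_spend": sum(t["amount"] for t in transactions
                              if t["category"] == "pharmacy"),
    }

profile = infer_profile(transactions)
```

None of these derived attributes were ever asked of the consumer directly, and the consumer has no way of knowing which of them the firm actually computes.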
4.4 Can Corporate Secrecy Coexist with Consumer Rights? Possible Regulatory Solutions
ADM tools help maintain the corporate secrecy of Automated Banks, and, as we argue in this chapter, legal systems perpetuate, encourage, and feed that opacity further. The opacity in turn increases the risk of consumer harm, such as discrimination, which becomes more difficult to observe and more challenging to prove.
In this section we briefly outline potential interventions that may protect against AI-facilitated harms, particularly if applied in combination. The discussion does not aim to be exhaustive, but rather to show that something can be done to combat the opacity and the harms it produces.
Interventions described in academic and grey literature can be divided into three broad categories: (1) regulations that prevent businesses from using harmful AI systems in financial markets, (2) regulations that aid consumers to understand when ADM systems are used in financial markets, and (3) regulations that facilitate regulator monitoring and enforcement against AI-driven harms in financial markets. Approaches to design (including Transparency by DesignFootnote 108) are not included in this list, and while they may contribute to improved consumer outcomes, they are beyond the scope of this chapter.
The somewhat provocative title of this section asks whether corporate secrecy is the real source of AI-related harms in the described context. The interventions outlined below focus on preventing harms, but can the harms really be prevented if the opacity of corporate practices and processes is not addressed first? Corporate secrecy is the major obstacle to accountability and scrutiny, and consumer rights, including the right to non-discrimination, cannot be guaranteed in an environment as opaque as the current one. We submit that the regulatory interventions most urgently needed are those that prevent secrecy first and foremost. AI and ADM tools will continue to evolve, and technology as such is not a good regulatory targetFootnote 109 – the focus must be on harm prevention. Harms can only be prevented if the practices of financial firms, such as the credit scoring discussed in this chapter, are transparent and easily monitored by both regulators and consumers.
4.4.1 Preventing Automated Banks from Designing Harmful AI Systems
International and national bodies in multiple jurisdictions have recently adopted, or are currently debating, various measures with an overarching aim of protecting consumers from harm. For example, the US Federal Trade Commission has provided guidance to businesses using AI, explaining that discriminatory outcomes resulting from the use of AI would contravene federal law.Footnote 110 The most comprehensive approach to limiting the use of particular AI tools can be found in the EU’s proposed Artificial Intelligence Act. Its Recital 37 specifically recommends that ‘AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems’. This proposal is a step towards overcoming some opaque practices, through the provision of ‘clear and adequate information to the user’ along with other protections that enable authorities to scrutinise elements of ADM tools in high-risk contexts.Footnote 111 Early criticisms of the proposed Act note that while a regulatory approach informed by the context in which ADM is used has some merit, it does not cover potentially harmful practices such as emotion recognition and remote biometric identification,Footnote 112 which could be used across a range of contexts, generating data sets that may later be used in other markets such as financial services.
An alternative approach to regulating AI systems before they are used in markets is to limit the sources of information that can be used by ADM tools, or to restrict the ways in which information can be processed. In addition to privacy protections, some jurisdictions have placed limitations on the kinds of information that can be used to calculate a credit score. For example, in Denmark, the financial services sector can use consumers’ social media data for marketing purposes but is explicitly prohibited from using this information to determine creditworthiness.Footnote 113 Similarly, the EU is considering a Directive preventing the use of personal social media and health data (including cancer data) in the determination of creditworthiness.Footnote 114 Such prohibitions are, however, a rather tricky solution: it may be difficult for regulation to keep up with a growing list of data that should be excluded from analysis.Footnote 115 One way of overcoming this challenge would be to avoid focusing on restricted data sources and instead create a list of acceptable data sources, a solution applied, for example, in some types of health insurance.Footnote 116
Imposing limits on how long scores can be kept and/or relied on by Automated Banks is another important consideration. In Australia, credit providers are bound by limits that stipulate the length of time that different pieces of information are held on a consumer’s file: credit providers may only keep financial hardship information for twelve months from the date the monthly payment was made under a financial hardship arrangement, whereas court judgements may be kept on record for five years after the date of the decision.Footnote 117 In Denmark, where the credit reporting system operates as a ‘blacklist’ of people deemed more likely to default, a negative record (for instance, an unpaid debt) is deleted after five years, regardless of whether or not the debt has been paid.Footnote 118 A challenge with these approaches is that time limits on particular categories of data may not account for proxy data, purchased data sets, and/or proprietary scoring and profiling systems that group consumers according to complex predictions that are impossible to decode.
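The category-based retention limits described above can be sketched as a simple purging rule. The record structure and helper function below are hypothetical, and the retention figures are simply those stated in the text (actual statutory periods should be checked before relying on them):

```python
from datetime import date

# Days each record category may be retained, per the Australian
# limits described in the text (illustrative figures only).
RETENTION_DAYS = {
    "financial_hardship": 365,       # twelve months
    "court_judgement": 5 * 365,      # five years
}

def purge_expired(records, today):
    """Keep only records still within their category's retention window."""
    kept = []
    for r in records:
        limit = RETENTION_DAYS.get(r["category"])
        if limit is None or (today - r["recorded_on"]).days <= limit:
            kept.append(r)
    return kept

records = [
    {"category": "financial_hardship", "recorded_on": date(2022, 1, 10)},
    {"category": "court_judgement", "recorded_on": date(2020, 6, 1)},
]
# As of mid-2023 the hardship record has expired; the judgement has not.
current = purge_expired(records, today=date(2023, 6, 1))
```

The weakness the text identifies is visible here: a record outside the listed categories (`limit is None`), such as a purchased proxy data set or a proprietary score derived from expired data, is never purged at all.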
4.4.2 Helping Consumers Understand When ADM Systems Are Used in Financial Services
Despite the development of many principles-based regulatory initiatives by governments, corporates, and think tanks,Footnote 119 few jurisdictions have legislated protections that require consumers to be notified if and when they have been assessed by an automated system.Footnote 120 In instances where consumers are notified, they may be unable to receive an understandable explanation of the decision-making process, or to seek redress through timely and accessible avenues.
Consumers face a number of challenges in navigating financial markets, such as understanding credit card repayment requirementsFootnote 121 and accurately assessing their own credit.Footnote 122 For individuals, it is crucial to understand how they are being scored, as this makes it possible for them to identify inaccuraciesFootnote 123 and question decisions made about them. Credit scoring is notoriously opaque and difficult to understand, so consumers are likely to benefit from requirements for agencies to simplify and harmonise how scores are presented.Footnote 124 An example of a single scoring system can be found in Sri Lanka, where credit ratings, or ‘CRIB Scores’, are provided by the Credit Information Bureau of Sri Lanka, a public-private partnership between the nation’s Central Bank and a number of financial institutions that hold equity in the Bureau. The Bureau issues CRIB Score reports to consumers in a consistent manner, using an algorithm to produce a three-digit number ranging from 250 to 900.Footnote 125 In Sri Lanka’s case, consumers are provided with a single rating from a central agency, and although this rating may change over time, there is no possibility of consumers receiving two different credit scores from separate providers.
Providing consumers with the opportunity to access their credit scores is another (and in many ways complementary) regulatory intervention. A number of jurisdictions provide consumers with the option to check their credit report and/or credit score online. For example, consumers in CanadaFootnote 126 and AustraliaFootnote 127 are able to access free copies of their credit reports by requesting this information directly from major credit bureaus. In Australia, consumers are able to receive a free copy of their credit report once every three months.Footnote 128
However, such approaches have important limitations. Credit ratings are just one of many automated processes within the financial services industry, and Automated Banks with access to enough data can create their own tools that operate outside the well-established credit rating systems. Moreover, it is consumers who are forced to carry the burden of correcting inaccurate information used to make consequential decisions about them, while often being required to pay for the opportunity to do so.Footnote 129
In addition, explainability challenges are faced in every sector that uses AI, and considerable investigation lies ahead to determine the most effective ways of explaining automated decisions in financial markets. It has been suggested that a good explanation is provided when the receiver ‘can no longer keep asking why’.Footnote 130 The recent EU Digital Services ActFootnote 131 emphasises such an approach by noting that recipients of online advertisements should have access to ‘meaningful explanations of the logic used’ for ‘determining that specific advertisement is to be displayed to them’.Footnote 132
Consumer experience of an AI system will depend on a number of parameters, including the format of explanations (visual, rule-based, or highlighted key features), their complexity and specificity, the application context, and variations suiting users’ cognitive styles (for example, providing some users with more complex information, and others with less).Footnote 133 The development of consumer-facing explainable AI tools is an emerging area of research and practice.Footnote 134
A requirement to provide meaningful feedback to consumers, for example through counterfactual demonstrations,Footnote 135 would make it possible for individuals to understand what factors they might need to change to receive a different decision. It would also give Automated Banks an incentive to be more transparent.
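A counterfactual demonstration of this kind can be sketched against a toy scoring model. The weights, threshold, and single-feature search below are invented for illustration and do not reflect any lender's actual model:

```python
def score(a):
    # Illustrative linear scoring rule - weights are invented.
    return 0.5 * a["income_k"] - 20 * a["missed_payments"] + 300

THRESHOLD = 400  # hypothetical approval cut-off

def counterfactual(applicant, feature, step, limit=100):
    """Find the smallest change to one feature that flips a refusal
    into an approval, by stepping the feature until the score passes."""
    candidate = dict(applicant)  # do not mutate the original
    for _ in range(limit):
        if score(candidate) >= THRESHOLD:
            return {feature: candidate[feature]}
        candidate[feature] += step
    return None  # no counterfactual found within the search budget

applicant = {"income_k": 50, "missed_payments": 2}
# score(applicant) = 25 - 40 + 300 = 285, below the threshold:
# the counterfactual tells the applicant what income would suffice.
suggestion = counterfactual(applicant, "income_k", step=10)
```

Real counterfactual-explanation methods search over multiple features and minimise the overall change, but even this toy version shows the consumer something actionable ('an income of X would have led to approval') rather than a bare refusal.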
4.4.3 Facilitating Regulator Monitoring and Enforcement of ADM Harms in Financial Services
The third category of potential measures relies on empowering regulators, thus shifting the burden away from consumers. For example, regulators need to be able to ‘look under the hood’ of any ADM tools, including those of a proprietary nature.Footnote 136 This could take the form of using explainable AI tools, access to raw code, or the ability to use dummy data to test the model. A certification scheme, such as quality standards, is another option, though it carries the risk of a ‘set and forget’ approach. Yet another way of providing regulators with insight into industry practices is the establishment of regulatory sandboxes, which nevertheless have limitations.Footnote 137
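Testing a model with dummy data might look like the following sketch, in which a hypothetical black-box approval function is probed with synthetic applicants to compare approval rates across a proxy attribute. Everything here, including the ‘hidden’ postcode penalty, is invented for illustration:

```python
import random

def black_box_approve(applicant):
    # Stand-in for a proprietary model under audit; the logic is
    # invented solely so the probe below has something to run against.
    score = 0.5 * applicant["income_k"] - 10 * applicant["defaults"]
    if applicant["postcode_group"] == "B":  # hidden proxy penalty
        score -= 15
    return score >= 20

def audit_approval_gap(model, n=10_000, seed=0):
    """Probe a model with synthetic applicants and compare approval
    rates across a suspected proxy attribute (postcode group)."""
    rng = random.Random(seed)
    rates = {}
    for group in ("A", "B"):
        approved = 0
        for _ in range(n):
            applicant = {
                "income_k": rng.uniform(20, 120),
                "defaults": rng.randint(0, 3),
                "postcode_group": group,
            }
            approved += model(applicant)  # True counts as 1
        rates[group] = approved / n
    return rates

rates = audit_approval_gap(black_box_approve)
```

The point of the sketch is that a regulator needs only query access, not the model's source code, to surface a systematic gap in outcomes between otherwise identical synthetic applicants, which can then trigger closer scrutiny.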
Financial institutions could also be required to prove a causal link between the data they use to generate consumer scores and the likely risk. Such an approach would likely reduce the use of certain categories of data where correlations between data points are not supported by a valid causal relationship. For example, Android phone users are reportedly safer drivers than iPhone users,Footnote 138 but such a rule would prevent insurers from taking this into account when offering a quote on car insurance (while we do not suggest they are currently doing so, in many legal systems they could). In practice, some regulators are exploring this solution. For example, while not going as far as requiring a direct causal link, the New York State financial regulator requires a ‘valid explanation or rationale’ for the underwriting of life insurance where external data or external predictive models are used.Footnote 139 However, such an approach could encourage financial services providers to collect even more data, simply to be able to prove the causal link,Footnote 140 which may again further disadvantage consumers and introduce more, not less, opacity.
4.5 Conclusions
Far from being unique to credit scoring, the secrecy of ADM tools is a problem affecting multiple sectors and industries.Footnote 141 Human decisions are also unexplainable and opaque, and ADM tools are often presented as a potentially fairer and more transparent alternative. But the problem is that secrecy increases, not decreases, with automation.Footnote 142
There are many reasons for this, including purely technological barriers to explainability. It is also simply cheaper and easier not to design and deploy transparent systems. As we argue in this chapter, opacity is a choice made by organisations, often deliberately, as it allows them to evade scrutiny and hide their practices from the public and regulators. The opacity of ADM and AI tools is a logical consequence of the secrecy of corporate practices.
Despite the many harms caused by opacity, legal systems and market practices have evolved to enable or even promote the secrecy surrounding AI and ADM tools, as we have discussed using examples of rules applying to Automated Banks. The opacity and the resulting harms could, however, be prevented through some of the potential solutions discussed in this chapter. The question is whether there is sufficient motivation to achieve positive social impact with automated tools, rather than focusing solely on optimisation and profits.