Posthumous data use policy, within the broader scope of navigating postmortem data privacy, is a procedurally complex landscape. Our study addresses this by exploring patterns in individuals’ willingness to share their data with health researchers after death and by developing practical recommendations.
Methods:
An electronic survey was conducted in April 2021 among adults (≥18 years of age) registered in ResearchMatch (www.researchmatch.org), a national health research registry. Descriptive statistics were used to observe trends in willingness to donate some, all, or no data to researchers after death, and multinomial logistic regression analyses (with 95% confidence intervals) were conducted to determine the association between that willingness and participants’ demographics (education level, age range, duration of use of online medical websites, and annual frequency of illness).
Results:
Of 399 respondents, most were willing to donate health data after death (electronic medical record data [67%], prescription history data [63%], genetic data [54%], and fitness tracker data [53%]). Among 397 respondents, individuals were more likely to donate some data after death (vs. no data) if they had used online medical websites for longer (adjusted relative risk ratio = 1.22, p = 0.04, 95% CI: 1.01 to 1.48). No additional significant associations were observed between willingness to donate all, some, or none of their data after death and the other demographic factors.
Conclusions:
Engaging patients in online medical websites may be one potential mechanism to encourage or inspire individuals to participate in posthumous data donation for health research purposes.
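As an illustration of the kind of analysis described in the Methods above, the following is a minimal sketch of fitting a multinomial logistic regression and reporting relative risk ratios with 95% confidence intervals in Python using statsmodels. The data file, column names, and numeric encodings are hypothetical placeholders, not the study's actual variables or code.

# Minimal sketch: multinomial logistic regression with relative risk ratios (RRRs).
# The file and column names (willingness, education, age_range, website_years,
# illness_per_year) are assumed placeholders, already numerically encoded.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

# Outcome: willingness to donate "none", "some", or "all" data after death.
y = pd.Categorical(df["willingness"], categories=["none", "some", "all"])

# Predictors: demographic covariates, plus an intercept term.
X = sm.add_constant(df[["education", "age_range", "website_years", "illness_per_year"]])

model = sm.MNLogit(y.codes, X)   # "none" (code 0) serves as the reference category
result = model.fit(disp=False)

rrr = np.exp(result.params)      # exponentiated coefficients = relative risk ratios
ci = np.exp(result.conf_int())   # 95% confidence intervals on the RRR scale
print(result.summary())
print(rrr)
print(ci)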
Mental health apps (MHAs) are increasingly popular in India due to rising mental health awareness and app accessibility. Despite their benefits, like mood tracking, sleep tools and virtual therapy, MHAs lack regulatory oversight. India's framework, including the Central Drugs Standard Control Organization (CDSCO) and Medical Device Rules 2017, does not cover standalone health apps, raising concerns about data privacy and accuracy. Establishing a centralised regulatory body with guidelines for MHAs is essential for user safety and efficacy. This paper examines the current regulatory landscape, compares international approaches and proposes a tiered regulatory framework to foster responsible innovation while safeguarding user interests in digital mental health services.
Scientists must be ethical and conscientious, always. Data bring with them much promise to improve our understanding of the world around us, and to improve our lives within it. But there are risks as well. Scientists must understand the potential harms of their work and follow norms and standards of conduct to mitigate those concerns. But network data are different. As we discuss in this chapter, network data are some of the most important but also most sensitive data. Before we dive into the data, we discuss the ethics of data science in general and of network data in particular. The ethical issues we face often do not have clear solutions; they require thoughtful approaches and an understanding of complex contexts and difficult circumstances.
The past decade has seen a marked shift in the regulatory landscape of UK higher education. Institutions are increasingly assuming responsibility for preventing campus sexual misconduct, and are responding to its occurrence through – amongst other things – codes of (mis)conduct, consent and/or active bystander training, and improved safety and security measures. They are also required to support victim-survivors in continuing with their education, and to implement fair and robust procedures through which complaints of sexual misconduct are investigated, with sanctions available that respond proportionately to the seriousness of the behaviour and its harms. This paper examines the challenges and prospects for the success of university disciplinary processes for sexual misconduct. It focuses in particular on how to balance the potentially conflicting rights to privacy held by reporting and responding parties within proceedings, while respecting parties’ rights to equality of access to education, protection from degrading treatment, due process, and the interests of the wider campus community. More specifically, we explore three key moments where private data is engaged: (1) in the fact and details of the complaint itself; (2) in information about the parties or circumstances of the complaint that arise during the process of an investigation and/or resultant university disciplinary process; and (3) in the retention and disclosure (to reporting parties or the university community) of information regarding the outcomes of, and sanctions applied as part of, a disciplinary process. We consider whether current data protection processes – and their interpretation – are compatible with trauma-informed practice and a wider commitment to safety, equality and dignity, and reflect on the ramifications for all parties where that balance between rights or interests is not struck.
The Dobbs opinion emphasizes that the state’s interest in the fetus extends to “all stages of development.” This essay briefly explores whether state legislators, agencies, and courts could use the “all stages of development” language to expand reproductive surveillance by using novel developments in consumer health technologies to augment those efforts.
Data is the lifeblood of the digital economy. Much of the data in use today is generated by the everyday activities of consumers as they communicate, shop, travel, work, or engage in routine interactions with other consumers, businesses, and government entities through digital systems, platforms, and media. This has led to an enormous accumulation of data about individual consumers that can directly or indirectly provide information about their characteristics, preferences, activities, or behaviors.
Digital technologies are reshaping the global economy and complicating cooperation over its governance. Innovations in technology and business propel a new, digitally driven phase of globalization defined by the expansion of cross-border information flows that is provoking political conflict and policy discord. This Element argues that the activities of digital value chains (DVCs), the central economic actors in digital globalization, complicate international economic relations. DVC activities can erode individual privacy, shift tax burdens, and cement monopoly positions. These outcomes generate a new politics of globalization, and governments are responding with increasing restrictions on cross-border data flows. This monograph: 1) explains the new sources of political division stemming from digital globalization; 2) documents policy barriers to digital trade; 3) presents a framework to explain digital trade barriers across countries; and 4) assesses the prospects for international cooperation on digital governance, which requires that countries move beyond coordinated liberalization and toward coordinated regulation.
The EU’s actions show how the exercise of extraterritorial jurisdiction by one actor (the EU) based on local approaches to human rights standards and specific values (privacy and the protection of personal data) could lead to the convergence of values and laws on the global stage. Likely directions are decreasing territorialism or broad interpretations of ‘territory’, increasing elevation of fundamental rights to the disadvantage of certain competing interests, and the EU acting to set a high global data protection norm, enabled by the fundamental right to data protection conditioning its exercise of extraterritorial jurisdiction. Convergence could be resisted. If, however, the EU’s reach were strong enough to avoid or counter resistance, this would ultimately lead to fewer conflicts in jurisdiction as global standards would converge and, even in the EU–US data privacy law interface, commonalities and shared approaches to rights protection would emerge.
This chapter explores the ways in which personal data and digital assets might be misused by strangers and third parties in the context of asset management. As global asset management markets move increasingly to handling investments with the aid of autonomous platforms and algorithms, it is important to be able to identify entities, investors and their agents. Decision-making and transactions need to be trackable. Asset and wealth managers also gather, store and use the personal information of their customers. At the heart of these arrangements is trust. However, the threat of cyberattack or systemic failures gives rise to the question of who should be made liable when losses arise from malicious or negligent data breaches. Often, it is not possible to identify a nefarious individual. Importantly, in the context of asset management, liability for knowing receipt, knowing dealing or knowing assistance may attach to persons dealing with trustees and other types of fiduciaries. This is not an area of law to which a universal regime of absolute liability applies. For this reason, it is useful to identify types of risk and the means to address them.
The transition to open data practices is conceptually straightforward but surprisingly challenging to implement, largely due to cultural and policy issues. A general data sharing framework is presented, along with two case studies that highlight these challenges and offer practical solutions that can be adjusted depending on the type of data collected, the country in which the study is initiated, and the prevailing research culture. Embracing the constraints imposed by data privacy considerations, especially for biomedical data, must be emphasized for data collected outside of the United States until data privacy laws are established at the federal and/or state level.
Against the backdrop of an evolving landscape of data-driven research, this article discusses the role of data protection laws in shaping a free flow of research data. In particular, the analysis inquires whether European data protection law hampers or encourages data-driven research. The analysis critically challenges the shared belief that the more severe data protection regime laid down by the European legislator adversely affects data flows and, with them, data-driven research, in contrast to the United States, where a more fragmented and less developed data protection framework facilitates data flows and related innovation patterns. We show how research objectives pursued through data re-usability have very recently been given primary importance in the GDPR, where they find a formidable ally enabling the re-use of public data by businesses and of private data by public institutions, for either public interest-related research purposes or commercially oriented innovation purposes. We argue that the GDPR promotes research-valuable data flows in different ways, consistent with an emerging principle of free movement of personal data. To ground this statement, our analysis links to this principle the three directional research regimes emerging from the GDPR.
This final chapter does not cover any new principles; instead it presents case studies that have a huge global impact in terms of both managerial and government decision making. These case studies relate to: the role of big tech firms in the economy and the opportunities and threats that they present; the problems that the Covid-19 pandemic has posed for governments at the global level; and the problems that climate change is posing for both governments and firms, again at the global level. The last two cases involve geopolitical issues that go beyond the scope of the text, but it is important for managers to have a general appreciation of these issues in order to anticipate government policy and respond appropriately. The questions at the end of the case studies are intended to prompt students to utilize principles explained throughout the text to develop an understanding of the relevant issues and determine optimal courses of action.
In this article, we focus on data trust and data privacy, and how attitudes may be changing during the COVID-19 period. On balance, it appears that Australians are more trusting of organizations with regards to data privacy and less concerned about their own personal information and data than they were prior to the spread of COVID-19. The major determinant of this change in trust with regards to data was changes in general confidence in government institutions. Despite this improvement in trust with regards to data privacy, trust levels are still low.
This chapter demonstrates the extent of the data protection problems in China, and the public’s growing concern about loss of privacy and abuse of their personal data. It proceeds to show that under China’s Cyber Security Law, the government has responded to this issue by strengthening ‘data protection’ from abuse by private companies but without shielding ‘data privacy’ from government intervention. In particular, enforced real-name user registration for online services potentially allows the Chinese government to demand access to the local data of any person who uses an online service in China, for national security or criminal investigation purposes. The chapter argues that this internal contradiction within the Cyber Security Law – increased data protection while demanding real-name user registration – may also benefit AI development. This is due, in part, to the vagueness of key terms within the Cyber Security Law, and the accompanying fuzzy logic within the Privacy Standards issued under that law, which allow both tech firms and government regulators considerable discretion in how they comply with and enforce data protection provisions. In the final part of the chapter, it is argued that due to the potential benefits of AI in solving serious governance problems, the Chinese government will only selectively enforce the data privacy provisions in the Cyber Security Law, seeking to prevent commercial abuse without hindering useful technological advances.
The key provisions of China’s Cyber Security Law relating to data localisation and data exits still allow for competing interpretations by regulators, which makes compliance difficult, even in 2021. The further attempt to include ‘backdoor’ keys to encryption in this law is also noted, although foreign companies have managed to exert some influence on this point and other implementation issues. The Cyber Security Law is an important and high-profile development in Chinese cyber policy history. It created much more controversy than the Anti-Terrorism Law explained in the previous chapter. In recent years, China has gradually adopted a series of laws, regulations and macro policies in the field of cyber security and data protection aimed at turning the country into a ‘cyber superpower’ and boosting its digital economy. The Cyber Security Law, which came into partial effect from 1 June 2017 (with an official 18-month phase-in period for the data localisation provisions), is a milestone in the development of China’s legal framework for cyber security and data protection. The law also provides further evidence of the inherent tensions underlying the innovation policies described in Chapter 3. However, vague regulations allow regulators leeway to adjust their aims in response to broader economic and political trends by means of implementing rules. Finally, clarifying the vaguest provisions in the Cyber Security Law through more transparent rules may provide an opportunity for the Chinese government to decide which way it is heading: towards further innovation or further restriction beyond the ongoing US–China trade war.
Recent findings have shown that the continued expansion of the scope and scale of data collected in electronic health records is making the protection of personally identifiable information (PII) more challenging and may inadvertently put our institutions and patients at risk if not addressed. As clinical terminologies expand to include new terms that may capture PII (e.g., Patient First Name, Patient Phone Number), institutions may start using them in clinical data capture (and in some cases, they already have). Once in use, PII-containing values associated with these terms may find their way into laboratory or observation data tables via extract-transform-load jobs intended to process structured data, putting institutions at risk of unintended disclosure. Here we aim to inform the informatics community of these findings and to put out a call to action for remediation by the community.
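As a purely illustrative, hypothetical sketch of one safeguard such remediation might include, the Python snippet below scans free-text values in an observation table for PII-like patterns (phone numbers, e-mail addresses) before an extract-transform-load job propagates them downstream. The table and column names are assumptions, and a real deployment would need far more robust detection than these simple regular expressions.

# Hypothetical sketch: flag observation rows whose free-text values look like PII
# (phone numbers or e-mail addresses) before they reach downstream research tables.
# Column names ("term", "value_text") are assumed, not from any real schema.
import re
import pandas as pd

PHONE_RE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def looks_like_pii(value) -> bool:
    """Return True if a free-text observation value matches a simple PII pattern."""
    if not isinstance(value, str):
        return False
    return bool(PHONE_RE.search(value) or EMAIL_RE.search(value))

def flag_pii_rows(observations: pd.DataFrame, value_column: str = "value_text") -> pd.DataFrame:
    """Return the subset of rows whose value column appears to contain PII."""
    mask = observations[value_column].map(looks_like_pii)
    return observations[mask]

# Example usage with a toy table (not real patient data):
obs = pd.DataFrame({
    "term": ["Patient Phone Number", "Hemoglobin"],
    "value_text": ["(615) 555-0100", "13.5 g/dL"],
})
print(flag_pii_rows(obs))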
Much has changed since CompuServe introduced its “Electronic Mall” as the first major retail e-commerce platform in 1984. Digital technologies have driven down costs and improved access and opportunities for producers and consumers, manufacturers and farmers, and above all, users. Digital technologies have transformed how a large part of the global economy operates. A broad range of goods are now digital and thereby intangible, being made up of bytes. Likewise, many services that previously required costly face-to-face contact between the firm and consumer are now available remotely.
Technology has opened international trade – in the form of e-commerce – to a broad range of firms and sectors that beforehand would have been the sole domain of larger multinational companies. Most importantly, technology and the Internet reduced the transaction costs and information asymmetries associated with international trade. Social networks like Facebook and Twitter and search engines like Google and Bing give large and small businesses alike easy ways to advertise services to people around the world.