The circulation of data ranked high among the objectives adopted by CGIAR at its founding in 1971. This chapter considers how agricultural experts attempted to realize a desired “full exchange of information” among scientists working at geographically distant sites, in different languages and cultural contexts, and with different organisms and research interests, from the 1970s to the early 2000s. The chapter focuses on the historical development of “crop descriptors,” today defined as providing an “international format and a universally understood language for plant genetic resources data.” Developers of descriptors aspire to agree on traits and terms that will allow users from diverse institutions and backgrounds to contribute to and extract information from an integrated data infrastructure. The chapter examines crop descriptors as a critical component of CGIAR’s earliest efforts to create “system-wide” research tools and agendas, emphasizing the scientific and political agendas that shaped this top-down systematizing work. It finds that this work provided an opportunity for CGIAR to instantiate and consolidate its central position in a web of international development initiatives.
Methods for analyzing and visualizing literary data receive substantially more attention in digital literary studies than the digital archives with which literary data are predominantly constructed. When discussed, digital archives are often perceived as entirely different from nondigital ones, and as passive – that is, as novel and enabling (or disabling) settings or backgrounds for research rather than active shapers of literary knowledge. This understanding produces abstract critiques of digital archives, and risks conflating events and trends in the histories of literary data with events and trends in literary history. By contrast, an emerging group of media-specific approaches adapts traditional philological and media archaeological methods to explore the complex and interdependent relationship between literary knowledges, technologies, and infrastructures.
Delirium is a severe neuropsychiatric syndrome caused by physical illness, associated with high mortality. Understanding risk factors for delirium is key to targeting prevention and screening. Whether severe mental illness (SMI) predisposes people to delirium is not known. We aimed to establish whether pre-existing SMI diagnosis is associated with higher risk of delirium diagnosis and mortality following delirium diagnosis.
Methods
A retrospective cohort and nested case–control study using linked primary and secondary healthcare databases from 2000 to 2017. We identified people diagnosed with SMI, matched to non-SMI comparators. We compared incidence of delirium diagnoses between people with SMI diagnoses and comparators, and between SMI subtypes: schizophrenia, bipolar disorder, and ‘other psychosis’. We compared 30-day mortality following a hospitalisation involving delirium between people with SMI diagnoses and comparators, and between SMI subtypes.
Results
We identified 20 566 people with SMI diagnoses, matched to 71 374 comparators. Risk of delirium diagnosis was higher for all SMI subtypes, with a higher risk conferred by SMI in the under-65 group (aHR 7.65, 95% CI 5.45–10.7) than in the ⩾65 group (aHR 3.35, 95% CI 2.77–4.05). Compared to people without SMI, people with an SMI diagnosis overall had no difference in 30-day mortality following a hospitalisation involving delirium (OR 0.66, 95% CI 0.38–1.14).
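For readers less familiar with these effect measures, the sketch below shows how an unadjusted odds ratio and its Wald 95% confidence interval are computed from a 2×2 table; the counts are invented for illustration and are not the study’s data.

    import math

    # Hypothetical 2x2 table (illustrative counts only, not the study's data):
    # exposure = SMI diagnosis, outcome = death within 30 days
    a, b = 12, 88     # SMI: died, survived
    c, d = 150, 850   # no SMI: died, survived

    odds_ratio = (a * d) / (b * c)                 # (12*850)/(88*150) ~ 0.77
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")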
Conclusions
We found an association between SMI and delirium diagnoses. People with SMI may be more vulnerable to delirium when in hospital than people without SMI. There are limitations to using electronic healthcare records and further prospective study is needed to confirm these findings.
Since the 1970s, hundreds of khipus—Andean knotted-string recording devices—have been named after academic researchers. This practice disassociates individual khipus from their places of origin and reifies scientific inequity. Here, a new convention of the form KH#### (e.g., KH0125) is proposed, which we believe represents a more neutral, direct, and accurate nomenclature. The change is implemented in the Open Khipu Repository (OKR), the largest khipu database.
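As a minimal illustration of the proposed convention, the following sketch validates and formats KH-style identifiers; only the “KH” plus zero-padded four-digit pattern comes from the abstract, and the helper functions are hypothetical.

    import re

    # The convention from the abstract: "KH" plus a zero-padded four-digit
    # number, e.g. KH0125. Helper names are hypothetical.
    KH_PATTERN = re.compile(r"^KH\d{4}$")

    def is_valid_khipu_id(identifier: str) -> bool:
        """Check whether a string matches the KH#### convention."""
        return bool(KH_PATTERN.match(identifier))

    def make_khipu_id(number: int) -> str:
        """Format a catalogue number as a KH#### identifier."""
        if not 0 <= number <= 9999:
            raise ValueError("catalogue number must fit in four digits")
        return f"KH{number:04d}"

    print(make_khipu_id(125))            # KH0125
    print(is_valid_khipu_id("KH0125"))   # True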
Our knowledge of the institutional features of local government in Canadian cities is surprisingly fragmentary. The academic literature has long identified dominant tendencies in Canadian local institutions, but systematic empirical data has been missing. In this article, we address this gap in knowledge in two ways. We introduce the Canadian Municipal Attributes Portal (CMAP), a new open-access database that contains information on dozens of institutional features of local government for nearly 100 of the most populous municipalities in Canada. We then propose a new multidimensional index of authority concentration, which is designed to capture variation in the local structure of decision-making authority in a systematic and nuanced manner. We apply this index to a systematic pan-Canadian subsample of 65 CMAP municipalities. The result is a rich portrait of institutional variety, one that both corroborates and substantially extends our current understanding of the shape of municipal institutions in Canadian cities.
Edited by
Rob Waller, NHS Lothian; Omer S. Moghraby, South London & Maudsley NHS Foundation Trust; Mark Lovell, Esk and Wear Valleys NHS Foundation Trust
As the use of big data in psychiatry continues to expand, it is crucial to involve patients and the public in decisions about its development and application. Mental Health Data Science Scotland has co-produced a best practice checklist involving both researchers and people with lived experience. This guidance emphasises the need for data to be securely accessible and carefully anonymised and for processes and analyses to be transparent, with participants or patients prioritised throughout.
The theoretical and practical problems of constructing normal databases within and between clinics are discussed. Pooling of data is probably a more attractive proposition in theory than in practice. The importance of standardisation of techniques is stressed, whilst caution in the over-interpretation of borderline results is counselled.
The aim of this paper is to identify scientific content and compilations related to circus arts available in subscription databases and in renowned and free academic information systems. After providing terminological definitions for circus and circus arts, the article describes the search strategies applied and the issues which emerged during the searches, and then introduces quantitative results, thereby also identifying the major periodicals and the most often referenced articles of the topic. The analysis provides useful input for representatives of other arts related to circus arts (e.g., performing arts, theatre arts, visual arts, musical arts) and of other academic fields (e.g., literary studies, history, media science); but first of all, it serves as an unparalleled library information service guide for navigating between electronic information sources.
The nineteenth-century Australian novel has predominantly been understood in terms of the dominance of Britain, both as the place where most books were published and as the source of literary traditions. But this account presumes and maintains the status of the book as the primary vehicle for transmission of literature, whereas the vast majority of Australian novels were serialised (either before or after book publication) and a great many were only ever published in serial form. A history of the early Australian novel that recognises the vital role of serialisation, as distinct from but also in relation to book publication, brings to light new trends in authorship, publication, circulation and reception. This history also uncovers new Australian novelists as well as previously unrecognised features of their fiction. In particular, a number of literary historians argue that early Australian novelists replicated the legal lie of terra nullius in excluding Aboriginal characters from their fiction. Considering fiction serialised in Australian newspapers indicates that these characters were actually widely depicted and suggests the need for a new account of the relationship between nineteenth-century Australian novels and colonisation.
Though there have been longstanding discussions of the value of ethics in health technology assessment (HTA), there is less awareness of ethics information retrieval methods. This study aimed to scope available evidence and determine current practices for ethics information retrieval in HTA.
Methods
Literature searches were conducted in Ovid MEDLINE, LISTA, Scopus, and Google Scholar. Once a list of relevant articles was determined, citation tracking was conducted via Scopus. HTA agency websites were searched for published guidance on ethics searching, and for reports which included ethical analyses. Methods sections of each report were analyzed to determine the databases, subject headings, and keywords used in search strategies. The team also reached out to information specialists for insight into current search practices.
Results
Findings from this study indicate that there is still little published guidance from HTA agencies, few HTAs that contain substantial ethical analysis, and even less information on the methodology for ethics information retrieval. The researchers identified twenty-five relevant HTAs. Ten of these reports did not utilize subject-specific databases outside the health sciences. Eight reports published their ethics search strategies, with significant overlap in subject headings and text words.
Conclusions
This scoping study of current practice in HTA ethics information retrieval highlights findings of previous studies—while ethics analysis plays a crucial role in HTA, methods for literature searching remain relatively unclear. These findings provide insight into the current state of ethics searching, and will inform continued work on filter development, database selection, and grey literature searching.
The European Union Agency for Law Enforcement Cooperation (Europol) is competent to support action by the EU Member States’ law enforcement authorities and strengthen their cooperation in the fight against cross-border crime. Europol is not a ‘European FBI’ as it does not have executive powers. Nevertheless, its contribution to the activities of national police authorities is increasingly appreciated by practitioners, especially since the Agency is in an ideal position to process and exchange enormous amounts of personal data that are relevant for criminal investigations. This chapter examines Europol’s history, structure, competence and powers, as well as its relations with partners and the rules on its accountability. It also focuses on the crucial role that Europol plays in shaping EU criminal justice thanks to its Serious and Organised Crime Threat Assessments, which set in motion a process at the European level by which the EU periodically identifies its priorities for the fight against serious international crime (the Policy Cycle-EMPACT). This chapter also analyses the forthcoming revision of Europol’s legal framework, which aims to ensure that the Agency can efficiently perform its tasks in an ever-changing security landscape.
This paper corresponds to an invited oral contribution to session 5A, organised by the IAU inter-commission B2-B5 working group (WG) “Laboratory Astrophysics Data Compilation, Validation and Standardization: from the Laboratory to FAIR usage in the Astronomical Community” at the IAU 2022 General Assembly (GA; Rengel 2022). This WG provides a platform to discuss the Findability, Accessibility, Interoperability, Reuse (FAIR) usage of laboratory Atomic and Molecular (A&M) data in astronomy and astrophysics.
A&M data play a key role in understanding the physics and chemistry of processes across several research topics, including planetary science and interdisciplinary research, in particular the atmospheres of planets and planetary exploration. Databases, compilations of spectroscopic parameters, and facility tools are used by computer codes to interpret spectroscopic observations and to simulate them. In this talk I presented existing A&M databases of interest to the planetary community, focusing on their access, organisation, infrastructures, limitations, and open issues.
Daily data collection during archaeological fieldwork forms the basis for later interpretation and analysis. Across the world, we observe a wide variety of digital data collection methods and tools employed during fieldwork. Here, we detail the daily practices at four recent survey and excavation projects in the South Caucasian country of Armenia. As archaeology continues to become ever more digital, it is useful to consider these day-to-day recording processes at a typical field project. We provide details on both the types of data collected and the ways they are collected so as to foreground these topics. Finally, we reflect on how our work is currently impacted by digital changes and how it may continue to change in the future.
This introduction provides an overview of the collection of thirteen chapters on the life and works of Hildegard of Bingen (1098–1179). The editor compares the content and style of this volume with two earlier multiauthored collections of essays on Hildegard of Bingen (Voice of the Living Light and Brill’s A Companion to Hildegard of Bingen) and enumerates the range of publications, both in print and online, which necessitates an updated study. The volume is organized into three main sections: Hildegard’s life and monastic context, considering the education of women religious in medieval Germany; her writings and reputation, focusing on her visionary and theological output (Scivias, Liber vitae meritorum, and Liber divinorum operum), her extensive correspondence, her sermonizing, her scientific and medical texts, and the reception of her works in subsequent centuries; and finally her music, manuscripts, illuminations and scribes, engaging with the materiality of the transmission of Hildegard’s output. The author closes by discussing potential new areas of Hildegard research, brought to light in various chapters throughout the volume.
Health apps are software programs that are designed to prevent, diagnose, monitor, or manage conditions. Inconsistent terminology for apps is used in research literature and bibliographic database subject headings. It can therefore be challenging to retrieve evidence about them in literature searches. Information specialists at the United Kingdom's National Institute for Health and Care Excellence (NICE) have developed novel validated search filters to retrieve evidence about apps from MEDLINE and Embase (Ovid).
Methods
A selection of medical informatics journals was hand searched to identify a “gold standard” (GS) set of references about apps. The GS set was divided into a development and validation set. The filters’ search terms were derived from and tested against the development set. An external development set containing app references from published NICE products was also used to inform the development of the filters. The filters were then validated using the validation set. Target recall was >90 percent. The filters’ overall recall, specificity, and precision were calculated using all the references identified from the hand search.
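As a reminder of how these three metrics are defined, the sketch below computes them from a labelled reference set; it is not NICE’s code, and the counts are invented.

    # Illustrative counts only (not the NICE filter data):
    tp = 70    # gold-standard app references retrieved by the filter
    fn = 1     # gold-standard app references the filter missed
    fp = 240   # non-app references the filter retrieved
    tn = 600   # non-app references correctly excluded

    recall = tp / (tp + fn)        # proportion of relevant records retrieved
    specificity = tn / (tn + fp)   # proportion of irrelevant records excluded
    precision = tp / (tp + fp)     # proportion of retrieved records that are relevant

    print(f"recall {recall:.1%}, specificity {specificity:.1%}, "
          f"precision {precision:.1%}")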
Results
Both filters achieved 98.6 percent recall against their validation sets. Overall, the MEDLINE filter had 98.8 percent recall, 71.3 percent specificity, and 22.6 percent precision. The Embase filter had 98.6 percent recall, 74.9 percent specificity, and 24.5 percent precision.
Conclusions
The NICE health apps search filters retrieve evidence about apps from MEDLINE and Embase with high recall. Information professionals, researchers, and clinicians can apply them to literature searches to retrieve evidence about these interventions.
This chapter is about finding the law. Research skills are expected of Australian law graduates; indeed, you need these skills to practise law competently. As Chapter 1 highlighted, the law is so immense that we cannot possibly know it all and, besides that, it changes all the time! By the time you enter into legal practice, the law you learned at university may have changed or may no longer apply.
In clinical and translational research, data science is often and fortuitously integrated with data collection. This contrasts with the typical position of data scientists in other settings, where they are isolated from data collectors. Because of this, effective use of data science techniques to resolve translational questions requires innovation in the organization and management of these data.
Methods:
We propose an operational framework that respects this important difference in how research teams are organized. To maximize the accuracy and speed of the clinical and translational data science enterprise under this framework, we define a set of eight best practices for data management.
Results:
In our own work at the University of Rochester, we have strived to utilize these practices in a customized version of the open source LabKey platform for integrated data management and collaboration. We have applied this platform to cohorts that longitudinally track multidomain data from over 3000 subjects.
Conclusions:
We argue that this has made analytical datasets more readily available and lowered the bar to interdisciplinary collaboration, enabling a team-based data science that is unique to the clinical and translational setting.
We review Big Data in Astronomy and its role in Astronomy Education. At present, all-sky and large-area astronomical surveys and their catalogued data span the whole range of the electromagnetic spectrum, from gamma-ray to radio, alongside the most important surveys providing optical images, proper motions, variability, and spectroscopic data. The most important astronomical databases and archives are presented as well. They are powerful sources for many-sided, efficient research using the Virtual Observatory (VO) environment. It is shown that the use and analysis of the Big Data accumulated in astronomy lead to many new discoveries. These data give a significant advantage for Astronomy Education, owing both to their attractiveness and to the strong interest of the younger generation in computer science and technology. Computer science itself benefits from data coming from the Universe, and a new interdisciplinary science, Astroinformatics, has been created to manage these data.
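As one concrete illustration of how such catalogued survey data are queried in the VO environment, here is a minimal sketch using the astroquery package to search VizieR; the target, catalogue, and search radius are arbitrary examples not named in the text.

    from astropy import units as u
    from astropy.coordinates import SkyCoord
    from astroquery.vizier import Vizier

    # Resolve a target name to coordinates via the CDS Sesame service.
    coord = SkyCoord.from_name("M31")

    # Query the 2MASS point-source catalogue (VizieR II/246) around the
    # target; the catalogue and radius are illustrative choices.
    vizier = Vizier()
    vizier.ROW_LIMIT = 10
    tables = vizier.query_region(coord, radius=2 * u.arcmin, catalog="II/246")
    print(tables)         # summary of the returned tables
    print(tables[0][:5])  # first rows of the first table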
The development of chronologies relies on integrating information from a number of different sources. In addition to direct dating evidence, such as radiocarbon dates, researchers will have contextual information, which might be an environmental sequence or the context within an archaeological site. This information can be combined through Bayesian or other types of age-model. Once a chronology has been developed, it can be used to estimate, for example, chronological uncertainties, rates of change, or the age of material that has not been directly dated.
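As a toy illustration of that last point, the sketch below estimates the age of undated material by interpolating a simple age-depth model; the depths and ages are invented, and a real analysis would propagate uncertainties through a Bayesian model such as those in OxCal.

    import numpy as np

    # Toy age-depth model: linear interpolation between dated depths.
    # Depths (cm) and modelled ages (yr BP) are invented for illustration.
    depths = np.array([10.0, 55.0, 120.0])    # directly dated depths
    ages = np.array([900.0, 2100.0, 4300.0])  # their modelled ages

    def age_at(depth: float) -> float:
        """Estimate the age of an undated depth from the model."""
        return float(np.interp(depth, depths, ages))

    print(age_at(80.0))  # age estimate for an undated sample at 80 cm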
Dealing with the information associated with chronology building is complicated, and re-evaluation of chronologies often requires structured information that is hard to access. Although there are many databases with primary dating information, these often do not contain all of the information needed for a chronology. The Chronological Query Language (CQL) developed for OxCal was intended to be a convenient way of pulling such information together for Bayesian analysis. However, even this does not include much of the associated information required for reusing data in other analyses.
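For readers unfamiliar with CQL, here is a minimal sketch of the kind of model it expresses, assembled as a Python string; the structure (Sequence, Phase, Boundary, R_Date) is standard CQL, while the sample names, dates, and file name are invented.

    # A minimal OxCal CQL model assembled as a string.
    cql_model = """
    Sequence("Example")
    {
     Boundary("Start");
     Phase("Phase 1")
     {
      R_Date("Sample-1", 3050, 25);
      R_Date("Sample-2", 2990, 30);
     };
     Boundary("End");
    };
    """

    with open("example_model.oxcal", "w") as f:
        f.write(cql_model)  # load this file into OxCal to run the analysis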
The IntChron initiative builds on the framework set up for the INTIMATE (Integrating Ice core, Marine and Terrestrial Records) chronological database (Bronk Ramsey et al. 2014) and is primarily an information exchange format and data visualization tool which enables users to pull together the types of information needed for chronological analysis. It is intended for use with multiple dating methodologies and, while it will be integrated with OxCal, it is intended to be an open format suitable for use with other software tools. The file format is JSON, which is easily readable in software such as R, Python, and MATLAB. IntChron is not primarily intended to be a data depository but rather an index of sites where information is stored in the relevant format. As an initial step, databases of radiocarbon dates from the Oxford Radiocarbon Accelerator Unit (including those for the NERC radiocarbon facility), the RESET tephra database, the INTIMATE chronological database, and regional radiocarbon databases for Egypt and Southern Africa are all linked. The intention is that users of OxCal will also be able to make published data accessible to others and to store working data, visible only to the user, to be used with the associated analysis tools. The IntChron site allows data from third-party sources to be accessed through a representational state transfer (REST) application programming interface (API) in a number of different formats (JSON, csv, txt, oxcal), with associated bibliographic information in BibTeX format.
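Under the stated REST interface, retrieving a record might look like the following sketch; the URL path is hypothetical, since the text specifies the formats but not the exact endpoint scheme.

    import requests

    # Placeholder record path: the text specifies only that IntChron serves
    # records over a REST API in JSON, csv, txt, and oxcal formats; the
    # exact URL scheme here is a hypothetical example.
    url = "https://intchron.org/record/Example/Site"

    resp = requests.get(url, headers={"Accept": "application/json"})
    resp.raise_for_status()
    record = resp.json()  # JSON payload, equally readable from R or MATLAB

    print(sorted(record))  # inspect top-level keys before extracting dates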
The aim of the IntChron initiative is to make it easy for users to provide data (in the single JSON format with limited minimum requirements) as well as to access data and tools, while promoting robust chronologies including realistic estimates of uncertainties. It is hoped that this will help to bring the chronological research communities to a point where data access is as easy as it is in some other fields. This is particularly important for Early Career Researchers and for those seeking to use large datasets in novel ways.
Since the beginning of the century, the digitization of medieval manuscripts has been a major concern of institutions in possession of such material. This has led to the massive production of digital surrogates for online display. Preservation conditions and temporal and spatial limitations are no longer restrictions on accessing these objects, making them easily available to a potentially larger public than before. The databases created for hosting the surrogates are designed for different categories of audience, with various standards in mind and different levels of technical sophistication. Although primarily accessed for the texts they bear, the digital surrogates of manuscripts are also the object of study of a specialized group of users interested in their physical features. This review discusses whether databases of digital surrogates of Greek New Testament manuscripts built by different types of institutions are effective in addressing the needs of this admittedly small audience. I examine questions of content, interface, organization, and the rationales behind the choices of their creators.