This chapter establishes what it means to do discourse analysis. It does so by defining discourse analysis and providing examples of discourse. The chapter offers a practical overview of how the discourse in discourse analysis fits within the research process. The examples of discourse introduced in this chapter are grammar, actions and practices, identities, places and spaces, stories, ideologies, and social structures. After reading the chapter, readers will know what discourse analysis is; understand that there are many types of discourse; know that discourse is an object of study; and understand how an object of study fits within a research project.
Chapter 5 addresses a major demographic puzzle concerning thousands of New York slaves who seem to have gone missing in the transition from slavery to freedom, and it asks whether, and how, these slaves were sold South. The keys to solving this puzzle include estimates of common death rates, census undercounting, changing gender ratios in the New York black population, and, most importantly, a proper interpretation of the 1799 emancipation law and its effects on how the children of slaves were counted in the census. Based on an extensive analysis of census data, using various demographic techniques for understanding how populations change over time, I conclude that a large number of New York slaves (between 1,000 and 5,000) were sold South, but likely not as many as some previous historians have suggested. A disproportionate number of these sold slaves came from Long Island and Manhattan.
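The residual logic sketched in this abstract can be made concrete with a toy calculation. The figures, survival rate, and undercount factor below are invented for illustration and are not the chapter's estimates; this is a minimal sketch of the accounting idea, not the author's actual procedure.

```python
# Toy residual accounting for an enslaved population between two censuses.
# All numbers below are hypothetical placeholders, not the chapter's figures;
# the real analysis uses county-level census data, estimated death rates,
# undercount corrections, and the 1799 law's effect on counting children.

census_1800 = 20_000      # enslaved population at first census (hypothetical)
census_1810 = 12_000      # enslaved population at second census (hypothetical)
annual_survival = 0.98    # assumed crude annual survival rate
years = 10
undercount_factor = 1.05  # assumed correction for census undercounting
manumissions = 2_000      # assumed manumissions over the decade

# Survivors we would expect to find in 1810 if none were sold or freed
expected = census_1800 * annual_survival ** years

# Residual not explained by deaths, manumissions, or undercounting:
# a candidate estimate of slaves sold out of the state
residual = expected - manumissions - census_1810 * undercount_factor
print(f"Unexplained shortfall: {residual:,.0f}")
```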
The chapter demonstrates that selecting an object of study is a consequential part of doing discourse analysis. Doing so requires considering many planning and analytic issues that are often neglected in introductory books on discourse analysis. This chapter reviews many of these issues, including how to organize and present data. After reading the chapter, readers will know how to structure an analysis; understand what data excerpts are and how to introduce them in an analysis; be able to create and present an object of study as smaller data excerpts; and know how to sequence an analysis.
Chapter 3 establishes that the Dutch had economic incentives to continue holding slaves. Slavery in Dutch New York was not just a cultural choice; it was reinforced by economic considerations. From archival sources and published secondary sources, I have compiled a unique dataset of prices for over 3,350 slaves bought, sold, assessed for value, or advertised for sale in New York and New Jersey. These data have been coded by sex, age, county, price, and type of record, among other categories. To my knowledge, it is the only database of slave prices in the Northern states yet assembled. Regression analysis allows us to compute the average price of Northern slaves over time, the relative price difference between male and female slaves, the price trend relative to known prices in the American South, and other variables such as the price differential between New York City slaves and slaves in other counties of the state. Slave prices in New York and New Jersey appear relatively stable over time, though they declined in the nineteenth century. The analysis shows that slaveholders in Dutch New York were motivated by profit, and they sought strength and youth in purchasing slaves.
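To illustrate the kind of regression analysis described here, a minimal sketch follows. The file name slave_prices.csv, its column names, and the exact specification are assumptions for illustration, not the chapter's actual dataset or model.

```python
import numpy as np  # used by the formula below
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout of a slave-price dataset; the file name and column
# names are placeholders, not the author's actual schema.
df = pd.read_csv("slave_prices.csv")
# expected columns: price, sex, age, county, year, record_type

# Log-price regression: sex, county, and record type as categorical factors;
# age entered with a quadratic term to capture the premium on youth;
# year as a linear trend to trace prices over time.
model = smf.ols(
    "np.log(price) ~ C(sex) + age + I(age**2) + C(county) + C(record_type) + year",
    data=df,
).fit()
print(model.summary())
```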
This article interrogates three claims made about the use of data in relation to peace: that more data, faster data, and impartial data will lead to better policy and practice outcomes. Taken together, this data myth relies on a lack of curiosity about the provenance of data and the infrastructure that produces it and asserts its legitimacy. Our discussion is concerned with issues of power, inclusion, and exclusion, and particularly with how knowledge hierarchies attend the collection and use of data in conflict-affected contexts. We therefore question the axiomatic nature of these data myth claims and argue that the structure and dynamics of peacebuilding actors perpetuate the myth. We advocate a fuller reflection on the data wave that has overtaken us and echo calls for an ethics of numbers. In other words, this article is concerned with the evidence base for evidence-based peacebuilding. Mindful of the policy implications of our concerns, the article puts forward five tenets of good practice in relation to data and the peacebuilding sector. The concluding discussion further considers the policy implications of the data myth in relation to peace, and particularly the consequences of casting peace and conflict as technical issues that can be “solved” without recourse to human and political factors.
Focusing on methods for data that are ordered in time, this textbook provides a comprehensive guide to analyzing time series data using modern techniques from data science. It is specifically tailored to economics and finance applications, aiming to provide students with rigorous training. Chapters cover Bayesian approaches, nonparametric smoothing methods, machine learning, and continuous time econometrics. Theoretical and empirical exercises, concise summaries, bolded key terms, and illustrative examples are included throughout to reinforce key concepts and bolster understanding. Ancillary materials include an instructor's manual with solutions and additional exercises, PowerPoint lecture slides, and datasets. With its clear and accessible style, this textbook is an essential tool for advanced undergraduate and graduate students in economics, finance, and statistics.
This chapter introduces what a time series is and defines the important decomposition into trend, seasonal, and cyclical components that guides our thinking. We introduce a number of datasets used in the book and plot them to show their key features in terms of these components.
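To make the decomposition concrete, here is a minimal sketch using a synthetic monthly series and a standard additive decomposition; the data are invented, and the book's own datasets and methods may differ.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly series: linear trend + annual seasonal pattern + noise.
rng = np.random.default_rng(0)
idx = pd.date_range("2000-01", periods=120, freq="MS")
trend = 0.1 * np.arange(120)
seasonal = 2 * np.sin(2 * np.pi * idx.month / 12)
y = pd.Series(trend + seasonal + rng.normal(scale=0.5, size=120), index=idx)

# Additive decomposition: y_t = trend_t + seasonal_t + remainder_t
result = seasonal_decompose(y, model="additive", period=12)
print(result.trend.dropna().head())
print(result.seasonal.head(12))  # the estimated repeating seasonal pattern
```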
This chapter explores the knowledge-creation aspect of contemporary tax reforms in Nigeria. It offers a historical perspective on this process, which lets us see today’s reforms not only as the re-creation of long-retreated systems of state taxation-led ordering, but also against the backdrop of what intervened in the meantime: a four-decade late-twentieth-century interregnum in which revenue reliance on oil profits created a very different distributive system of government-as-knowledge. Today’s system of tax-and-knowledge is not just a reform but an inversion of what came before.
This chapter introduces the main research themes of this book, which explores two current global developments. The first concerns the increased use of algorithmic systems by public authorities in ways that raise significant ethical and legal challenges. The second concerns the erosion of the rule of law and the rise of authoritarian and illiberal tendencies in liberal democracies, including in Europe. While each of these developments is worrying in its own right, I argue in this book that the combination of their harms is currently underexamined. By analysing how the former development might reinforce the latter, this book seeks to provide a better understanding of how algorithmic regulation can erode the rule of law and lead to algorithmic rule by law instead. It also evaluates the current EU legal framework, which is inadequate to counter this threat, and identifies new pathways forward.
The risks emanating from algorithmic rule by law lie at the intersection of two regulatory domains: regulation pertaining to the protection of the rule of law (the EU’s rule of law agenda), and regulation pertaining to the protection of individuals against the risks of algorithmic systems (the EU’s digital agenda). Each of these domains encompasses a broad range of legislation, including not only primary and secondary EU law but also soft law. In what follows, I confine my investigation to those areas of legislation that are most relevant to the identified concerns. After addressing the EU’s competences to take legal action in this field (Section 5.1), I examine, in turn, the safeguards provided by regulation pertaining to the rule of law (Section 5.2), to personal data (Section 5.3), and to algorithmic systems (Section 5.4), before concluding (Section 5.5).
The World Health Organisation describes micronutrient deficiencies, or hidden hunger, as a form of malnutrition that occurs due to low intake and/or absorption of minerals and vitamins, putting human development and health at risk. In many cases, emphasis, effort, and even policy revolve around preventing deficiency of one particular micronutrient in isolation. This is understandable, as that micronutrient may be among a group of nutrients of public health concern. Vitamin D is a good exemplar. This review will highlight how the actions taken to tackle low vitamin D status have been highly dependent on the generation of new data and/or new approaches to analysing existing data, to help develop the evidence base, inform advice/guidelines, and, in some cases, translate into policy. Beyond the focus on individual micronutrients, there has also been increasing international attention to hidden hunger, or deficiencies of a range of micronutrients, which can exist unaccompanied by obvious clinical signs but can adversely affect human development and health. A widely quoted estimate of the global prevalence of hidden hunger is a staggering two billion people, but that estimate is now over 30 years old. This review will outline how strategic data sharing and generation is seeking to address this key knowledge gap concerning the true prevalence of hidden hunger in Europe, a key starting point towards defining sustainable, cost-effective, food-based strategies for its prevention. The availability of data on prevalence and food-based strategies can help inform public policy to eradicate micronutrient deficiency in Europe.
The Introduction sets out the central puzzle that the book seeks to solve. Descriptively, it asks whether primaries have been transformed in the twenty-first century, using a series of case studies to illustrate the central descriptive argument of change. It then frames the importance of the second half of the book, justifying the focus on elite partisan positioning and ideological change in relation to recent primary elections as a (potential) mechanism. Next, it clarifies the data collection process and the sources used. Finally, it examines partisan differences between the Republican and Democratic parties before providing an outline of the book’s structure.
What is literary data? This chapter addresses this question by examining how the concept of data functioned during a formative moment in academic literary study around the turn of the twentieth century and again at the beginning of electronic literary computing. The chapter considers the following cases: Lucius Adelno Sherman’s Analytics of Literature (1893), the activities of the Concordance Society (c.1906–28), Lane Cooper’s A Concordance to the Poems of William Wordsworth (1911), and the work of Stephen M. Parrish c.1960. The chapter explains how the concept of literary data was used by literature scholars to signal a commitment to a certain epistemological framework that was opposed to other ways of knowing and reading in the disciplinary field.
While it is important to be able to read and interpret individual papers, the results of a single study will never provide the complete answer to a question. To move towards one, we need to review the literature more widely. There can be a number of reasons for doing this, some of which require a more comprehensive approach than others. If the aim is simply to increase our personal understanding of a new area, then a few papers might provide adequate background material. Traditional narrative reviews have value for exploring areas of uncertainty or novelty, but they give less emphasis to complete coverage of the literature and tend to be more qualitative, so it is harder to scrutinise them for flaws. Scoping reviews are more systematic but still exploratory: they are conducted to identify the breadth of evidence available on a particular topic, clarify key concepts and identify the knowledge gaps. In contrast, a major decision regarding policy or practice should be based on a systematic review, and perhaps a meta-analysis, of all the relevant literature, and it is this approach that we focus on here.
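To make the meta-analytic step concrete, the short sketch below pools hypothetical study effects with standard fixed-effect, inverse-variance weighting; the numbers are invented for illustration only and do not come from any review discussed here.

```python
import numpy as np

# Hypothetical effect estimates (e.g., log risk ratios) and their standard
# errors from five studies; invented numbers, for illustration only.
effects = np.array([0.20, 0.35, 0.10, 0.28, 0.15])
se = np.array([0.10, 0.15, 0.08, 0.12, 0.09])

# Fixed-effect inverse-variance pooling: weight each study by 1 / variance.
weights = 1 / se**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```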
Datafication—the increase in data generation and advancements in data analysis—offers new possibilities for governing and tackling worldwide challenges such as climate change. However, employing data in policymaking carries various risks, such as exacerbating inequalities, introducing biases, and creating gaps in access. This paper articulates 10 core tensions related to climate data and its implications for climate data governance, ranging from the diversity of data sources and stakeholders to issues of quality, access, and the balancing act between local needs and global imperatives. Through examining these tensions, the article advocates for a paradigm shift towards multi-stakeholder governance, data stewardship, and equitable data practices to harness the potential of climate data for the public good. It underscores the critical role of data stewards in navigating these challenges, fostering a responsible data ecology, and ultimately contributing to a more sustainable and just approach to climate action and broader social issues.
Epidemiology is fundamental to public health, providing the tools required to detect and quantify health problems and to identify and evaluate solutions. Essential Epidemiology is a clear, engaging and methodical introduction to the subject. Now in its fifth edition, the text has been thoroughly updated. Its trademark clear and consistent pedagogical structure makes challenging topics accessible, while local and international examples, including from the COVID-19 pandemic, encourage students to apply theory to real-world cases. Statistical analysis is explained simply, with more challenging concepts presented in optional advanced boxes. Each chapter includes information boxes, margin notes highlighting supplementary facts, and question prompts to enhance learners' understanding. The end-of-chapter questions and accompanying guided solutions promote the consolidation of knowledge. Written by leading Australian academics and researchers, Essential Epidemiology remains a fundamental resource and reference text for students and public health practitioners alike.
This chapter introduces the National Security Institutions Data Set, an original cross-national resource offering the first systematic measurement of national security decision-making and coordination bodies across the globe from 1946 to 2015. The chapter leverages these data to probe the theory quantitatively, yielding three findings that are consistent with the theory’s propositions. First, it shows that national security institutions are more malleable than previous scholarship has suggested. Second, it finds that integrated institutions tend to perform better than institutional alternatives. Third, it shows that institutional change is associated with domestic environments in which leaders have political incentives to weaken the bureaucracy.
Quantification can be a double-edged sword. Converting lived experience into quantitative data can be reductive, coldly condensing complex thoughts, feelings, and actions into numbers. But it can also be a powerful tool to abstract from isolated instances into patterns and groups, providing empirical evidence of systemic injustice and grounds for collectivity. Queer lives and literatures have contended with both these qualities of quantification. Statistics have been used to pathologize queer desire as deviant from the norm, but they have also made it clear how prevalent queer people are, enabling collective action. Likewise for queer literature, which has sometimes regarded quantification as its antithesis, and at other times as a prime representational resource. Across the history of queer American literature, this dialectical tension between quantification as reduction and as resource has played out in various ways, in conjunction with the histories of science, sexuality, and literary style. This chapter covers the history of queer quantification in literature, from the singular sexological case study through the gay minority to contemporary queerness trying to transcend the countable.
Do your communication skills let you down? Do you struggle to explain and influence, persuade and inspire? Are you failing to fulfil your potential because of your inability to wield words in the ways you'd like? This book has the solution. Written by a University of Cambridge Communication Course lead, journalist and former BBC broadcaster, it covers everything from the essentials of effective communication to the most advanced skills. Whether you want to write a razor-sharp briefing, shine in an important presentation, hone your online presence, or just get yourself noticed and picked out for promotion, all you need to know is here. From writing and public speaking to the beautiful and stirring art of storytelling, and even using smartphone photography to help convey your message, this invaluable book will empower you to become a truly compelling communicator.
For many, public speaking is nothing less than terrifying. But the art is indispensable if you want to get on in life, and it can be mastered by learning certain techniques. These include how to start and end a talk, effective structures, the use of slides and data, and how to incorporate your character to help make presentations come alive.