Objective:
Child undernutrition is a global public health problem with serious implications. In this study, we build predictive models of the determinants of childhood undernutrition using various machine learning (ML) algorithms.
Design:
This study draws on data from the 2016 Ethiopian Demographic and Health Survey. Five ML algorithms, namely eXtreme gradient boosting (xgbTree), k-nearest neighbours (k-NN), random forest, neural network and generalised linear models, were considered to predict the socio-demographic risk factors for undernutrition in Ethiopia.
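As a rough illustration of this model-comparison step, the sketch below fits the five algorithm families and ranks them by cross-validated AUC. This is an assumption-laden sketch, not the study's code: the file name, predictor columns and hyperparameters are hypothetical, and the xgbTree label in the paper suggests the original analysis used R's caret package rather than Python.

```python
# Minimal sketch: compare five classifier families on a binary stunting
# indicator. The data file and column names below are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

df = pd.read_csv("dhs_ethiopia_2016_children.csv")   # hypothetical extract
X = df[["child_age_months", "birth_size", "maternal_bmi",
        "time_to_water_min", "anaemia_history"]]      # illustrative predictors
y = df["stunted"]                                     # 1 = stunted, 0 = not

models = {
    "xgbTree (XGBoost)": XGBClassifier(n_estimators=200, max_depth=4,
                                       eval_metric="logloss"),
    "k-NN": KNeighborsClassifier(n_neighbors=15),
    "random forest": RandomForestClassifier(n_estimators=500),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    "GLM (logistic)": LogisticRegression(max_iter=1000),
}

# Rank the algorithms by 10-fold cross-validated AUC, one common yardstick.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name:20s} mean AUC = {auc:.3f}")
```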
Setting:
Households in Ethiopia.
Participants:
A total of 9471 children under 5 years of age were included in this study.
Results:
The descriptive results show substantial regional variation in child stunting, wasting and underweight in Ethiopia. Among the five ML algorithms, the xgbTree algorithm shows better predictive ability than the generalised linear model. The best-performing algorithm (xgbTree) identifies diverse important predictors of undernutrition across the three outcomes, including time to water source, anaemia history, child age greater than 30 months, small size at birth and maternal underweight.
Conclusions:
The xgbTree algorithm performed best among the ML algorithms considered in this study for predicting childhood undernutrition in Ethiopia. The findings support improving access to water supply, food security and fertility regulation, among other interventions, in the quest to substantially improve childhood nutrition in Ethiopia.
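For readers curious how the important predictors reported above are typically extracted, the fragment below continues the earlier sketch by reading off the fitted XGBoost model's built-in feature importances. Again a hedged illustration: it reuses the hypothetical `models`, `X` and `y` from the sketch above, and the paper does not state which importance measure it used.

```python
# Sketch (continued): rank the illustrative predictors by the model's
# built-in importance scores (gain-based by default in recent xgboost).
import pandas as pd

xgb = models["xgbTree (XGBoost)"].fit(X, y)
importances = pd.Series(xgb.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```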
Data-driven algorithms are increasingly used by penal systems across western jurisdictions to predict risks of recidivism. This chapter draws on a Foucauldian analysis of the epistemic power of discourse to demonstrate how the algorithms operate as truth or knowledge producers through the construction of risk labels that determine degrees of penal governance and control. Some proponents emphasise the technical fairness of the algorithms, highlighting their predictive accuracy. But in its exploration of the algorithms and their design configurations, as well as their structural implications, the chapter unravels the distinctions between a criminal justice and a social justice perspective on algorithmic fairness. It argues that whilst the former focuses on the technical, the latter emphasises broader structural consequences. These include impositions of algorithmic risk labels that operate as self-fulfilling prophecies, triggering future criminalisation and consequently undermining the perceived legitimacy of risk assessment and prediction. Through its theorisation of these issues, the chapter expands the parameters of current scholarship on the predictive algorithms applied by penal systems.
Chapter 3 examines the challenges of applying the cost–benefit analysis theory given the current legal standards used by courts. The cost–benefit analysis theory requires quantified costs and benefits, while the current legal system uses broad, descriptive standards to evaluate searches. The chapter notes that the current legal standards are inconsistently applied and thus provide inadequate guidance to police who are attempting to follow them. It also points out a dissonance between how judges apply the current standards and how lay people believe the standards should be applied. The solution is to quantify the legal standards, making them more transparent, allowing for a greater range of standards, and allowing judges to use data from predictive algorithms as formal factors in deciding whether to permit a certain type of surveillance. This will also allow courts and policymakers to use the cost–benefit analysis theory more accurately and efficiently.
Chapter 2 focuses on the benefits side of the cost–benefit analysis equation and notes that the rise of big data's predictive algorithms allows law enforcement to measure the likely success rate of their surveillance with far greater precision than in the past. These predictive algorithms have the potential to revolutionize criminal investigations in many ways, making them cheaper, more accurate, and less biased. However, the surveillance technologies must be designed in ways that ensure they meet the Fourth Amendment's requirement of particularized suspicion and do not rely on tainted data.