The Dutch Children’s Food Literacy Questionnaire (DCFLQ) was developed and validated to assess food literacy among children aged 8 to 12 years. The DCFLQ is structured around farm-to-fork principles, including questions on food production, distribution, consumption, waste, and sustainability.
Design
After initial item pool creation, the DCFLQ was developed in collaboration with experts and children. The validation process included assessments of reliability and construct validity, as well as a test–retest evaluation in a subgroup of children.
Setting
The expert panel consisted of domain-related researchers, a pedagogue, a paediatrician, dietitians, and a primary school teacher. Children were recruited via primary schools and a sports club.
Participants
A total of 11 experts and 27 children participated in the development process; 608 children participated in the validation process.
Results
The final questionnaire comprised 29 questions and demonstrated good internal consistency (Cronbach’s α = 0.80) and test-retest reliability (ICC = 0.81). DCFLQ scores positively correlated with age, indicating that food literacy is higher in older children.
Conclusions
The DCFLQ is a valuable tool for assessing the effectiveness of nutrition intervention programs and monitoring Dutch children’s food literacy over time. International expert consensus on developing food literacy instruments is needed, as diversity in assessment tools impedes cross-cultural comparisons.
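As a rough illustration of how the internal consistency (Cronbach's α) and test–retest (ICC) figures reported in the DCFLQ results above are commonly computed, here is a minimal NumPy/pandas sketch. The `items` data frame and the `time1`/`time2` score vectors are hypothetical placeholders, and ICC(3,1) is only one common choice of ICC form; none of this is taken from the study itself.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def icc_3_1(time1, time2) -> float:
    """ICC(3,1): two-way mixed-effects, single-measure consistency for two occasions."""
    scores = np.column_stack([np.asarray(time1, float), np.asarray(time2, float)])
    n, k = scores.shape
    subject_means = scores.mean(axis=1)
    occasion_means = scores.mean(axis=0)
    grand_mean = scores.mean()
    ms_subjects = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    residual = scores - subject_means[:, None] - occasion_means[None, :] + grand_mean
    ms_error = np.sum(residual ** 2) / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

# Hypothetical usage with a (children x 29 items) response table and two total scores per child:
# alpha = cronbach_alpha(item_responses)
# icc = icc_3_1(total_score_time1, total_score_time2)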
To test and validate a measure of primary health care (PHC) engagement in the Australian remote health context.
Background:
PHC principles include quality improvement, community participation and orientation of health care, patient-centred continuity of care, accessibility, and interdisciplinary collaboration. Measuring the alignment of services with the principles of PHC provides a method of evaluating the quality of care in community settings.
Methods:
A two-stage design was used: initial content and face validity evaluation by a panel of experts, followed by pilot-testing of the instrument via survey methods. Twelve experts from clinical, education, management and research roles within the remote health setting evaluated each item in the original instrument. Panel members evaluated the representativeness and clarity of each item for face and content validity. Qualitative responses were also collected and included suggestions for changes to item wording. The modified tool was pilot-tested with 47 remote area nurses. Internal consistency reliability of the Australian Primary Health Care Engagement scale was evaluated using Cronbach’s alpha. Construct validity of the Australian scale was evaluated using exploratory factor analysis and principal component analysis.
Findings:
Modifications to suit the Australian context were made to 8 of the 28 original items. This modified instrument was pilot-tested, yielding 47 complete responses. Overall, the scale showed high internal consistency reliability. The subscale constructs ‘Quality improvement’, ‘Accessibility-availability’ and ‘Population orientation’ showed low levels of internal consistency reliability. However, their mean inter-item correlations were 0.31, 0.26 and 0.31, respectively, which fall within the recommended range of 0.15 to 0.50 and indicate that the items are correlated and measure the same construct. The Australian PHCE scale is recommended as a tool for the evaluation of health services. Further testing on a larger sample may provide clarity over some items which may be open to interpretation.
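As a side note on the mean inter-item correlation check mentioned in the findings above, a minimal sketch follows; the `subscale` data frame of item responses is a hypothetical placeholder rather than data from the Australian pilot.

import numpy as np
import pandas as pd

def mean_inter_item_correlation(subscale: pd.DataFrame) -> float:
    """Mean of the off-diagonal entries of the item correlation matrix."""
    corr = subscale.corr().to_numpy()
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return float(off_diagonal.mean())

# Values within the commonly cited 0.15-0.50 range (such as the 0.31, 0.26 and 0.31
# reported above) suggest the items cohere without being redundant.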
In this chapter, we review the quantitative measurement of critical consciousness that has emerged within developmental and applied research over the last few decades. We provide a brief history and offer an overview of the current status of critical consciousness measurement. We also introduce four “phases” of critical consciousness measurement, which we refer to as (1) proxy measurement; (2) scale development; (3) scale expansion and (re)specification; and (4) scale refinement and adaptation. Due to their central role in critical consciousness measurement, we pay particular attention to instruments appearing in phase two, the scale development phase. After summarizing each phase, we identify opportunities for advancement and innovation in critical consciousness measurement and point to important new directions for measurement work in this area of scholarship – many of which are addressed more extensively in subsequent chapters of this volume.
The concluding chapter highlights the contributions of this edited volume’s chapters in terms of advancing and expanding critical consciousness theory and measurement. We recap the two parts of the volume – one focused on issues relevant to theory and the other focused on issues relevant to measurement – and briefly review the ways in which each chapter appearing in the volume addresses key issues related to theory and measurement.
This introductory chapter introduces critical consciousness and articulates the rationale for an edited volume on critical consciousness theory and measurement. We highlight the structure of the book, which has two parts: one focused on issues relevant to theory and the other focused on issues relevant to measurement. A brief review of each of the chapters appearing in the volume’s two sections is provided. The chapter concludes by presenting a "schema" to support navigating the contents of this volume – and other critical consciousness scholarship – and by explicating how this schema represents some of the most complex and challenging issues faced by scholars working in critical consciousness theory and measurement today.
Critical consciousness represents the analysis of inequitable social conditions, the motivation to effect change, and the action taken to redress perceived inequities. Scholarship and practice in the last two decades have highlighted critical consciousness as a key developmental competency for those experiencing marginalization and as a pathway for navigating and resisting oppression. This competency is more urgent than ever given the current sociopolitical moment, in which longstanding inequity, bias, discrimination, and competing ideologies are amplified. This volume assembles leading scholars to address some of the field's most urgent questions: How does critical consciousness develop? What theories can be used to complement and enrich our understanding of the operation of critical consciousness? How might new directions in theory and measurement further enhance what is known about critical consciousness? It offers cutting-edge ideas and answers to these questions that are of critical importance to deepen our critical consciousness theory and measurement.
The usual operationalization of the psycholexical approach to personality becomes inefficient when dealing with languages with a limited lexicography or languages that are inaccessible to the investigator. These conditions, which probably hold for most of the world’s languages, call for different approaches to uncover the complexities of personality. This chapter describes the mixed-method approach in designing the South African Personality Inventory (SAPI). We describe (1) the dynamics and lessons learned from the extensive qualitative research conducted in 11 languages, (2) the challenges of item development and reduction with a focus on cultural comparability and validity, and (3) the complexities in extracting and validating the SAPI factor structure across language versions. We discuss how the diversification of methods can enrich the understanding of personality in understudied contexts and inform the debate between universal models (with their lure of cross-cultural comparability) and indigenous models (with their claim to ecological validity).
Although the science of team science is no longer a new field, the measurement of team science and its standardization remain in relatively early stages of development. To describe the current state of team science assessment, we conducted an integrative review of measures of research collaboration quality and outcomes.
Methods:
Collaboration measures were identified using both a literature review based on specific keywords and an environmental scan. Raters abstracted details about the measures using a standard tool. Measures related to collaborations with clinical care, education, and program delivery were excluded from this review.
Results:
We identified 44 measures of research collaboration quality, which included 35 measures with reliability and some form of statistical validity reported. Most scales focused on group dynamics. We identified 89 measures of research collaboration outcomes; 16 had reliability and 15 had a validity statistic. Outcome measures often only included simple counts of products; publications rarely defined how counts were delimited, obtained, or assessed for reliability. Most measures were tested in only one venue.
Conclusions:
Although models of collaboration have been developed, in general, strong, reliable, and valid measurements of such collaborations have not been conducted or accepted into practice. This limitation makes it difficult to compare the characteristics and impacts of research teams across studies or to identify the most important areas for intervention. To advance the science of team science, we provide recommendations regarding the development and psychometric testing of measures of collaboration quality and outcomes that can be replicated and broadly applied across studies.
The past 20 years have seen the development of instruments designed to specify standards and evaluate the adequacy of published studies with respect to the quality of study design, the quality of findings, as well as the quality of their reporting. In the field of psychometrics, the first minimum set of standards for the review of psychometric instruments was published in 1996 by the Scientific Advisory Committee of the Medical Outcomes Trust. Since then, a number of tools have been developed with similar aims. The present paper reviews basic psychometric properties (reliability, validity and responsiveness), compares six tools developed for the critical appraisal of psychometric studies and provides a worked example of using the COSMIN checklist, the Terwee statistical quality criteria, and the levels-of-evidence synthesis using the method of Schellingerhout and colleagues (2012). This paper will aid users and reviewers of questionnaires in the quality appraisal and selection of appropriate instruments by presenting available assessment tools, their characteristics and utility.
The study purpose was to provide evidence of validity for the Primary Health Care Engagement (PHCE) Scale, based on exploratory factor analysis and reliability findings from a large national survey of regulated nurses residing and working in rural and remote Canadian communities.
Background
There are currently no published provider-level instruments to adequately assess delivery of community-based primary health care, relevant to ongoing primary health care (PHC) reform strategies across Canada and elsewhere. The PHCE Scale reflects a contemporary approach that emphasizes community-oriented and community-based elements of PHC delivery.
Methods
Data from the pan-Canadian Nursing Practice in Rural and Remote Canada II (RRNII) survey were used to conduct an exploratory factor analysis and evaluate the internal consistency reliability of the final PHCE Scale.
Findings
The RRNII survey sample included 1587 registered nurses, nurse practitioners, licensed practical nurses, and registered psychiatric nurses residing and working in rural and remote Canada. Exploratory factor analysis identified an eight-factor structure across 28 items overall, and good internal consistency reliability was indicated by an α estimate of 0.89 for the final scale. The final 28-item PHCE Scale includes three of four elements in a contemporary approach to PHC (accessibility/availability, community participation, and intersectoral team) and most community-oriented/based elements of PHC (interdisciplinary collaboration, person-centred, continuity, population orientation, and quality improvement). We recommend additional psychometric testing in a range of health care providers and settings, as the PHCE Scale shows promise as a tool for health care planners and researchers to test interventions and track progress in primary health care reform.
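For orientation only, a brief sketch of an exploratory factor analysis of the kind described above (eight factors across 28 items), using scikit-learn (the `rotation` argument assumes a reasonably recent version); the simulated `responses` matrix and the varimax rotation are assumptions for illustration and do not reflect the RRNII analysis choices.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Simulated stand-in for a (respondents x 28 items) matrix of Likert-type scores.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(1587, 28)).astype(float)

X = StandardScaler().fit_transform(responses)             # standardise items
fa = FactorAnalysis(n_components=8, rotation="varimax")   # extract eight rotated factors
fa.fit(X)

loadings = fa.components_.T                               # (28 items x 8 factors)
print(np.round(loadings, 2))                              # inspect which items load on which factor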
To report the development and psychometric evaluation of a scale to measure rural and remote (rural/remote) nurses’ perceptions of the engagement of their workplaces in key dimensions of primary health care (PHC).
Background
Amidst ongoing PHC reforms, a comprehensive instrument is needed to evaluate the degree to which rural/remote health care settings are involved in the key dimensions that characterize PHC delivery, particularly from the perspective of professionals delivering care.
Methods
This study followed a three-phase process of instrument development and psychometric evaluation. A literature review and expert consultation informed instrument development in the first phase, followed by an iterative process of content evaluation in the second phase. In the final phase, a pilot survey was undertaken and item discrimination analysis employed to evaluate the internal consistency reliability of each subscale in the preliminary 60-item Primary Health Care Engagement (PHCE) Scale. The 60-item scale was subsequently refined to a 40-item instrument.
Findings
The pilot survey sample included 89 nurses in current practice who had experience in rural/remote practice settings. Participants completed either a web-based or paper survey from September to December 2013. Following item discrimination analysis, the 60-item instrument was refined to a 40-item PHCE Scale consisting of 10 subscales, each including three to five items. Alpha estimates of the 10 refined subscales ranged from 0.61 to 0.83, with seven of the subscales demonstrating acceptable reliability (α ≥ 0.70). The refined 40-item instrument exhibited good internal consistency reliability (α = 0.91). The 40-item PHCE Scale may be considered for use in future studies regardless of locale, to measure the extent to which health care professionals perceive their workplaces to be engaged in key dimensions of PHC.
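A minimal sketch of one common form of item discrimination analysis, the corrected item-total correlation, which is the sort of procedure used when trimming a 60-item pool to 40 items; the `subscale` data frame and the 0.30 cut-off are illustrative assumptions, not the study's actual criteria.

import pandas as pd

def corrected_item_total_correlations(subscale: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the remaining items in its subscale."""
    return pd.Series(
        {item: subscale[item].corr(subscale.drop(columns=item).sum(axis=1))
         for item in subscale.columns}
    )

# Items with weak corrected item-total correlations (e.g. below an assumed 0.30 cut-off)
# are typical candidates for removal when refining an item pool.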
Juggling the demands of work and family is becoming increasingly difficult in today's world. As dual-earners are now a majority and men's and women's roles in both the workplace and at home have changed, questions have been raised regarding how individuals and couples can balance family and work. Nevertheless, research addressing work-family conciliation strategies remains limited to a conflict-driven approach, and context-specific instruments are scarce. This study develops an instrument for assessing how dual-earners manage their multiple roles, moving away from a conflict point of view and highlighting the work-family conciliation strategies put forward by these couples. Through qualitative and quantitative procedures, the Work-Family Conciliation Strategies Scales were developed; the instrument is composed of five factors: Couple Coping, Positive Attitude Towards Multiple Roles, Planning and Management Skills, Professional Adjustments, and Institutional Support, with good model fit (χ²/df = 1.22; CFI = .90; RMSEA = .04; SRMR = .08) and good reliability coefficients (from .67 to .87). The developed scale contributes to research because of its specificity to the work-family framework and its focus on the proactive nature of balancing work and family roles. The results support further use of this instrument.
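For readers less familiar with the fit indices quoted above (χ²/df, RMSEA), a small sketch of how two of them can be derived from a model's chi-square statistic; the input numbers are placeholders, and CFI and SRMR would normally be taken directly from the SEM software output rather than recomputed by hand.

import math

def chi_square_ratio(chi2: float, df: int) -> float:
    """Relative chi-square; values close to 1-2 are conventionally read as good fit."""
    return chi2 / df

def rmsea(chi2: float, df: int, n: int) -> float:
    """One common sample formula: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Placeholder inputs only; a chi2/df around 1.22 and an RMSEA around .04,
# as reported above, indicate close fit by conventional rules of thumb.
print(chi_square_ratio(244.0, 200), rmsea(244.0, 200, 138))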
The Mt Merapi volcanic eruption in October 2010 claimed more than 386 lives, injured thousands of survivors, and devastated the surrounding environment. No instrument was available in Indonesia to assess the psychosocial impact on survivors of environmental degradation caused by such natural disasters. We developed, translated, and tested an Indonesian version of the Environmental Distress Scale (EDS) for use as a tool to reliably measure environmental distress related to environmental damage in Indonesia.
Method
The EDS, a prospective translation and psychometric study, was modified for use in a volcano disaster setting in Indonesia; translated into Indonesian; and pilot tested to determine meaning and cultural appropriateness. A test-retest study with 80 survivors of the 2010 Mt Merapi volcanic eruption measured the reliability of the tool.
Results
The Indonesian version of the EDS (I-EDS) captured the content of the original EDS with appropriate adaptations for cultural differences of Indonesian natural disaster survivors.
Conclusions
The I-EDS can be considered a reliable tool for assessing the psychosocial impact of environmental degradation from natural disasters such as volcanic eruptions, which might be useful for Indonesian researchers. (Disaster Med Public Health Preparedness. 2014;0:1–10)
To address limitations in measuring the preparedness capacities of health departments, we developed and tested the Local Health Department Preparedness Capacities Assessment Survey (PCAS).
Methods
Preexisting instruments and a modified 4-cycle Delphi panel process were used to select instrument items. Pilot test data were analyzed using exploratory factor analysis. Kappa statistics were calculated to examine rater agreement within items. The final instrument was fielded with 85 North Carolina health departments and a national matched comparison group of 248 health departments.
Results
Factor analysis identified 8 initial domains: communications, surveillance and investigation, plans and protocols, workforce and volunteers, legal infrastructure, incident command, exercises and events, and corrective action. Kappa statistics and z scores indicated substantial to moderate agreement among respondents in 7 domains. Cronbach α coefficients ranged from 0.605 for legal infrastructure to 0.929 for corrective action. Mean scores and standard deviations were also calculated for each domain and ranged from 0.41 to 0.72, indicating sufficient variation in the sample to detect changes over time.
Conclusion
The PCAS is a useful tool to determine how well health departments are performing on preparedness measures and identify opportunities for future preparedness improvements. Future survey implementation will incorporate recent Centers for Disease Control and Prevention's Public Health Preparedness Capabilities: National Standards for State and Local Planning. (Disaster Med Public Health Preparedness. 2013;7:578–584)
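In the spirit of the kappa statistics mentioned in the PCAS results above, a brief sketch of Cohen's kappa for two raters using scikit-learn; the rating vectors are invented examples, not PCAS data, and the study's own agreement analysis may have used a different kappa variant.

from sklearn.metrics import cohen_kappa_score

# Invented binary ratings from two respondents on the same ten items.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

kappa = cohen_kappa_score(rater_a, rater_b)   # 0.6 here, often labelled 'moderate to substantial'
print(f"Cohen's kappa: {kappa:.2f}")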
Scope – The Collaborative Spectrum Project aims to define subthreshold and atypical conditions not sufficiently characterized in the current diagnostic nomenclature and for which adequate assessment instruments are not available. This paper reports on the development and validation of new instruments to assess the spectrum of five psychiatric disorders. Design – Three multicenter studies and one single-site study were conducted in Italy to assess the validity and reliability of the five spectrum interviews. Another cross-sectional study to validate the panic-agoraphobia spectrum was conducted in Pittsburgh. Setting – Outpatients attending various university clinics, university students and, in one Italian study, gym attenders were recruited for the studies. Main outcome measures – Five structured clinical interviews to assess the spectra of panic-agoraphobia (SCI-PAS), mood (SCI-MOODS), social phobia (SCI-SHY), obsessive-compulsive disorder (SCI-OBS) and eating disorders (SCI-ABS) were administered along with a diagnostic interview and a number of self-report and interviewer-rated instruments. Results – All the domains of the interviews showed high test-retest reliability (intraclass correlation coefficient >0.61) and satisfactory internal consistency. Mean domain scores were significantly higher in cases than in controls and in patients with the disorder of interest than in patients with other disorders. Convergent validity was satisfactory for panic-agoraphobia, social phobia and obsessive-compulsive spectrum domains. Differences emerged between SCI-ABS and self-report instruments assessing eating disorders. A cut-off score for the panic-agoraphobia spectrum was defined and its clinical validity was tested. Conclusions – The psychometric properties of the five spectrum interviews are very satisfactory, and studies are currently ongoing to test the clinical validity of all the spectra. Subthreshold and atypical symptoms deserve attention in epidemiological investigation.
Edited by
Frederick P. Rivara, Harborview Injury Prevention and Research Center, Seattle; Peter Cummings, Harborview Injury Prevention and Research Center, Seattle; Thomas D. Koepsell, Harborview Injury Prevention and Research Center, Seattle; David C. Grossman, Harborview Injury Prevention and Research Center, Seattle; Ronald V. Maier, Harborview Injury Prevention and Research Center, Seattle
The study question, together with logistical, sampling, and budget issues, determines the data collection method. This chapter considers the suitability of various methods for different research questions. It addresses issues of feasibility, the logistics of instrument design, and methods to enhance response rates and data quality. The design and use of data collection instruments are central activities in quantitative studies. Data collection instruments can be fully structured or semistructured; semistructured instruments allow both quantitative and qualitative analysis. The chapter discusses three major categories of data collection: self-administered surveys, in-person interviews, and direct observation. The computerized self-administered questionnaire (CSAQ) or audio computer-assisted self-interviewing (audio-CASI) approach to data collection has recently been developed to facilitate the private collection of sensitive data by allowing respondents to interact with a computer rather than an interviewer. Computer-assisted telephone interviewing (CATI) systems are also available to improve the efficiency of data collection tasks.