Personality tests often consist of a set of dichotomous or Likert items. These response formats are known to be susceptible to an agreeing-response bias called acquiescence. The common assumption in balanced scales is that the sum of appropriately reversed responses should be reasonably free of acquiescence. However, inter-item correlation (or covariance) matrices can still be affected by variance due to acquiescence. To analyse these correlation matrices, we propose a method that is based on an unrestricted factor analysis and can be applied to multidimensional scales. This method obtains a factor solution in which acquiescence response variance is isolated in an independent factor. It is therefore possible, without the potentially confounding effect of acquiescence, to: (a) examine the dominant factors related to content latent variables; and (b) estimate participants' factor scores on content latent variables. This method, which is illustrated with two empirical data examples, has proved useful for improving the simplicity of the factor structure.
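The contamination mechanism described above can be illustrated numerically. The sketch below is a toy NumPy simulation, not the authors' unrestricted-factor-analysis method: it generates a balanced four-item scale with an acquiescence tendency, shows that the balanced sum is essentially free of acquiescence while the inter-item correlations are not, and applies a crude person-mean correction for comparison (all parameter values are illustrative assumptions).

```python
import numpy as np

# Toy simulation: a balanced 4-item scale (two positively, two negatively
# keyed items) contaminated by an acquiescence tendency 'acq'.
rng = np.random.default_rng(0)
n = 2000
content = rng.normal(size=n)                  # content trait
acq = rng.normal(scale=0.5, size=n)           # acquiescence tendency

keys = np.array([1.0, 1.0, -1.0, -1.0])       # item keying
raw = content[:, None] * keys + acq[:, None] + rng.normal(size=(n, 4))

reversed_items = raw * keys                   # reverse-score negative items
total = reversed_items.sum(axis=1)            # balanced sum

# The balanced sum is essentially uncorrelated with acquiescence ...
print(np.corrcoef(total, acq)[0, 1])
# ... but the inter-item correlations of the reversed items are still biased:
R_contaminated = np.corrcoef(reversed_items, rowvar=False)

# Crude correction (NOT the authors' factor-analytic method): estimate each
# person's acquiescence as the mean raw response on the balanced set, then
# partial it out of every item before correlating.
acq_hat = raw.mean(axis=1)                    # content cancels: keys sum to 0
X = reversed_items - reversed_items.mean(axis=0)
a = acq_hat - acq_hat.mean()
beta = X.T @ a / (a @ a)                      # per-item regression slopes
cleaned = X - np.outer(a, beta)
R_clean = np.corrcoef(cleaned, rowvar=False)

# Correlations between opposite-keyed items are depressed by acquiescence
# before the correction and recover after it.
print(R_contaminated[0, 2], R_clean[0, 2])
```

In this toy setup the cross-keyed correlation is visibly attenuated before the correction; the factor-analytic method in the abstract achieves a cleaner separation by isolating acquiescence in its own factor rather than regressing it out item by item.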
We propose an index for assessing the degree of factor simplicity in the context of principal components and exploratory factor analysis. The new index, which is called Loading Simplicity, is based on the idea that the communality of each variable should be related to few components, or factors, so that the loadings in each variable are either zero or as far from zero as possible. This index does not depend on the scale of the factors, and its maximum and minimum are only related to the degree of simplicity in the loading matrix. The aim of the index is to enable the degree of simplicity in loading matrices to be compared.
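The Loading Simplicity index has a specific scale-free definition given in the article and is not reproduced here. As a related, widely used quantity capturing the same intuition — each variable's communality should concentrate on few factors — Hofmann's (1978) row complexity can be computed as follows (a sketch, not the authors' index):

```python
import numpy as np

def row_complexity(L):
    """Hofmann's (1978) complexity of each row of a loading matrix:
    1.0 when a variable loads on a single factor (perfectly simple),
    m when its loadings are spread evenly over m factors."""
    sq = np.asarray(L, float) ** 2
    return sq.sum(axis=1) ** 2 / (sq ** 2).sum(axis=1)

simple = np.array([[0.8, 0.0], [0.0, 0.7]])
mixed = np.array([[0.5, 0.5], [0.5, -0.5]])
print(row_complexity(simple))   # [1. 1.]
print(row_complexity(mixed))    # [2. 2.]
```

Averaging such row-wise quantities gives one crude way to compare the simplicity of two loading matrices; the article's index is designed to do this comparison in a scale-free way with known maximum and minimum.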
The factor analysis (FA) model does not permit unique estimation of the common and unique factor scores. This weakness is notorious as the factor indeterminacy in FA. Luckily, some part of the factor scores can be uniquely determined. Thus, as a whole, they can be viewed as a sum of determined and undetermined parts. The paper proposes to select the undetermined part, such that the resulting common factor scores have the following feature: the rows (i.e., individuals) of the common factor score matrix are as well classified as possible into few clusters. The clear benefit is that we can easily interpret the factor scores simply by focusing on the clusters. The procedure is called clustered common factor exploration (CCFE). An alternating least squares algorithm is developed for CCFE. It is illustrated with real data examples. The proposed approach can be viewed as a parallel to the rotation techniques in FA. They exploit another FA indeterminacy, the rotation indeterminacy, which is resolved by choosing the rotation that transforms the loading matrix into the ‘most’ interpretable one according to a pre-specified criterion. In contrast to the rotational indeterminacy, the factor indeterminacy is utilized to achieve well-clustered factor scores by CCFE. To the best of our knowledge, such an approach to the FA interpretation has not been studied yet.
The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both maximum likelihood estimation and ordinary least squares estimation are considered.
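The infinitesimal jackknife formulas themselves are involved; as a rough numerical point of comparison (a nonparametric bootstrap, not the article's method), the sketch below resamples rows, re-extracts and varimax-rotates principal-component loadings, aligns each bootstrap solution to a reference to handle sign and order indeterminacy, and takes standard deviations as standard errors. All function names, the loading values, and the use of principal-component extraction are illustrative assumptions.

```python
import numpy as np

def varimax(L, iters=100, tol=1e-8):
    """Varimax rotation of a loading matrix (no Kaiser normalization)."""
    p, k = L.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(iters):
        Lr = L @ R
        u, s, vt = np.linalg.svd(L.T @ (Lr ** 3 - Lr * (Lr ** 2).sum(0) / p))
        R = u @ vt
        if s.sum() - crit < tol:
            break
        crit = s.sum()
    return L @ R

def pc_loadings(R, k):
    """First k principal-component loadings of a correlation matrix."""
    w, V = np.linalg.eigh(R)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(w[idx])

def align(L, ref):
    """Match column order and signs of L to a reference solution
    (rotated loadings are defined only up to column permutation and sign)."""
    C = ref.T @ L
    perm = [int(np.argmax(np.abs(row))) for row in C]   # greedy matching
    L = L[:, perm]
    signs = np.sign((L * ref).sum(axis=0))
    signs[signs == 0] = 1.0
    return L * signs

# Illustrative two-factor data (population loadings are assumptions).
rng = np.random.default_rng(1)
n = 300
Lam = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
Psi = 1 - (Lam ** 2).sum(1)
X = rng.normal(size=(n, 2)) @ Lam.T + rng.normal(size=(n, 6)) * np.sqrt(Psi)

ref = varimax(pc_loadings(np.corrcoef(X, rowvar=False), 2))
boot = np.stack([
    align(varimax(pc_loadings(
        np.corrcoef(X[rng.integers(0, n, n)], rowvar=False), 2)), ref)
    for _ in range(200)
])
se = boot.std(axis=0)   # bootstrap standard errors of rotated loadings
```

The alignment step matters: without it, arbitrary sign flips and factor reorderings across bootstrap samples would grossly inflate the apparent standard errors. The infinitesimal jackknife avoids resampling entirely by differentiating the estimator, which is what makes it attractive in this setting.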
The purpose of this study was to investigate and compare the performance of a stepwise variable selection algorithm to traditional exploratory factor analysis. The Monte Carlo study included six factors in the design: the number of common factors; the number of variables explained by the common factors; the magnitude of factor loadings; the number of variables not explained by the common factors; the type of anomaly evidenced by the poorly explained variables; and sample size. The performance of the methods was evaluated in terms of selection and pattern accuracy, and bias and root mean squared error of the structure coefficients. Results indicate that the stepwise algorithm was generally ineffective at excluding anomalous variables from the factor model. The poor selection accuracy of the stepwise approach suggests that it should be avoided.
Based on the usual factor analysis model, this paper investigates the relationship between improper solutions and the number of factors, and discusses the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis. The consistency of the Ihara and Kano estimator is shown to hold even for an overestimated number of factors, which provides a theoretical basis for the rare occurrence of improper solutions and for a new method of choosing the number of factors. The comparative study of their estimator and that based on maximum likelihood is carried out by a Monte Carlo experiment.
The likelihood ratio test is widely used in exploratory factor analysis to assess model fit and determine the number of latent factors. Despite its popularity and clear statistical rationale, researchers have found that when the dimension of the response data is large compared to the sample size, the classical Chi-square approximation of the likelihood ratio test statistic often fails. Theoretically, it has remained an open problem to characterize when this failure occurs as the dimension of the data increases; practically, the effect of high dimensionality is under-examined in exploratory factor analysis, and there is no clear statistical guideline on the validity of the conventional Chi-square approximation. To address this problem, we investigate the failure of the Chi-square approximation of the likelihood ratio test in high-dimensional exploratory factor analysis and derive the necessary and sufficient condition to ensure the validity of the Chi-square approximation. The results yield simple quantitative guidelines to check in practice and also provide useful statistical insights into the practice of exploratory factor analysis.
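Under the standard setup, the statistic in question is T = (n − 1)·F_ML, referred to a Chi-square distribution with df = ((p − k)² − (p + k))/2 degrees of freedom. A minimal NumPy sketch follows; for simplicity it evaluates T at the data-generating parameters, whereas in practice Λ and Ψ would be the maximum likelihood estimates (all numerical values are illustrative assumptions):

```python
import numpy as np

def lrt_stat(S, Lam, Psi, n):
    """Likelihood ratio statistic T = (n - 1) * F_ML for testing the k-factor
    model Sigma = Lam Lam' + diag(Psi) against a saturated covariance,
    with df = ((p - k)^2 - (p + k)) / 2."""
    p, k = Lam.shape
    Sigma = Lam @ Lam.T + np.diag(Psi)
    A = np.linalg.solve(Sigma, S)                 # Sigma^{-1} S
    F = -np.linalg.slogdet(A)[1] + np.trace(A) - p
    df = ((p - k) ** 2 - (p + k)) // 2
    return (n - 1) * F, df

# Illustration with a one-factor model.
rng = np.random.default_rng(2)
p, k, n = 6, 1, 200
Lam = np.full((p, k), 0.7)
Psi = 1 - (Lam ** 2).sum(1)
X = rng.normal(size=(n, k)) @ Lam.T + rng.normal(size=(n, p)) * np.sqrt(Psi)
S = np.cov(X, rowvar=False)
T, df = lrt_stat(S, Lam, Psi, n)
print(T, df)   # df = 9 for p = 6, k = 1
```

The article's point is precisely that referring T to this Chi-square can break down when p grows with n; the discrepancy F itself is always nonnegative regardless of dimension.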
A construction method is given for all factors that satisfy the assumptions of the model for factor analysis, including partially determined factors where certain error variances are zero. Various criteria for the seriousness of indeterminacy are related. It is shown that Green's (1976) conjecture holds: For a linear factor predictor the mean squared error of prediction is constant over all possible factors. A simple and general geometric interpretation of factor indeterminacy is given on the basis of the distance between multiple factors. It is illustrated that variable elimination can have a large effect on the seriousness of factor indeterminacy. A simulation study reveals that if the mean square error of factor prediction equals .5, then two thirds of the persons are “correctly” selected by the best linear factor predictor.
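A closely related standard quantity — Guttman's factor score determinacy, not a construction from this paper — measures how well a factor can be predicted linearly, and numerically illustrates the variable-elimination effect mentioned above (loading values are illustrative assumptions):

```python
import numpy as np

def determinacy(Lam, Psi):
    """Guttman's factor score determinacy rho_j = sqrt(lam_j' Sigma^{-1} lam_j)
    for orthogonal factors with unit variance. The best linear factor
    predictor has mean squared error 1 - rho_j**2."""
    Sigma = Lam @ Lam.T + np.diag(Psi)
    M = Lam.T @ np.linalg.solve(Sigma, Lam)
    return np.sqrt(np.diag(M))

Lam = np.array([[.8], [.7], [.6], [.5]])
Psi = 1 - (Lam ** 2).sum(1)

rho_full = determinacy(Lam, Psi)[0]
rho_drop = determinacy(Lam[:3], Psi[:3])[0]   # eliminate the last variable
mse_full = 1 - rho_full ** 2
print(rho_full, rho_drop)   # determinacy drops when a variable is eliminated
```

In the abstract's terms, an MSE of .5 corresponds to ρ² = .5, the regime in which the best linear factor predictor "correctly" selects about two thirds of persons.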
A new factor analysis (FA) procedure has recently been proposed which can be called matrix decomposition FA (MDFA). All FA model parameters (common and unique factors, loadings, and unique variances) are treated as fixed unknown matrices. The MDFA model then simply becomes a specific data matrix decomposition. The MDFA parameters are found by minimizing the discrepancy between the data and the MDFA model. Several algorithms have been developed and some properties have been discussed in the literature (notably by Stegeman in Comput Stat Data Anal 99:189–203, 2016), but MDFA as a whole has not yet been fully studied. A number of new properties are discovered in this paper, and some existing ones are derived more explicitly. The properties provided concern the uniqueness of results; covariances among common factors, unique factors, and residuals; and assessment of the degree of indeterminacy of common and unique factor scores. The properties are illustrated using a real data example.
Spearman (Am J Psychol 15(1):201–293, 1904. https://doi.org/10.2307/1412107) marks the birth of factor analysis. Many articles and books have extended his landmark paper in permitting multiple factors and determining the number of factors, developing ideas about simple structure and factor rotation, and distinguishing between confirmatory and exploratory factor analysis (CFA and EFA). We propose a new model implied instrumental variable (MIIV) approach to EFA that allows intercepts for the measurement equations, correlated common factors, correlated errors, standard errors of factor loadings and measurement intercepts, overidentification tests of equations, and a procedure for determining the number of factors. We also permit simpler structures by removing nonsignificant loadings. Simulations of factor analysis models with and without cross-loadings demonstrate the impressive performance of the MIIV-EFA procedure in recovering the correct number of factors and in recovering the primary and secondary loadings. For example, in nearly all replications MIIV-EFA finds the correct number of factors when N is 100 or more. Even the primary and secondary loadings of the most complex models were recovered when the sample sizes were at least 500. We discuss limitations and future research areas. Two appendices describe alternative MIIV-EFA algorithms and the sensitivity of the algorithm to cross-loadings.
Grammatical complexity has been considered an important research construct closely related to second language (L2) writing development. Although theoretical models have been developed to describe what grammatical complexity is, few studies have analyzed how this construct is represented from an empirical perspective. This chapter presents a data-driven investigation of the representation of grammatical complexity using an exploratory factor analysis (EFA) — a statistical approach for uncovering the underlying structure of a phenomenon, which fits this research purpose well. The investigation is based on a corpus of scientific research reports written by Hong Kong undergraduate students in an English Medium Instruction (EMI) scientific English course. After corpus cleaning, the Second Language Syntactic Complexity Analyzer software was used to compute fourteen effective measures of grammatical complexity for running the EFA in SPSS; step-by-step instructions are provided in the chapter. The final model includes three latent factors: clausal (subordination) complexity, nominal phrasal complexity, and coordinate phrasal complexity. This EFA model is generally consistent with the argument for investigating grammatical complexity as a multidimensional construct (Biber et al., 2011; Norris & Ortega, 2009). Finally, we highlight the research and pedagogical implications that readers should consider when applying EFA in other EMI contexts.
Informal digital learning of English (IDLE) is a promising way of learning English that has received growing attention in recent years. It has positive effects on English as a foreign language (EFL) learners and also creates valuable opportunities for EFL teachers to improve their teaching skills. However, there has been a lack of a valid and reliable scale to measure IDLE among teachers in EFL contexts. To address this lacuna, this study aims to develop and validate a scale to measure IDLE for EFL teachers in Iran. For this purpose, a nine-step rigorous validation procedure was undertaken: administering pilot interviews; creating the first item pool; running expert judgment; running interviews and think-aloud protocol; running the pilot study; performing exploratory factor analysis, Cronbach’s alpha, and confirmatory factor analysis; creating the second item pool; conducting expert reviews; and performing translation and translation quality check. Findings yielded a 41-item scale with six subscales: IDLE-enhanced benefits (12 items), IDLE practice (five items), support from others (nine items), authentic L2 experience (three items), resources and cognition (four items), and frequency and device (eight items). The scale demonstrated satisfactory psychometric properties such that it can be used for research and educational purposes in future.
Floods are the most frequent natural disasters and account for a significant share of disaster-related mortality. Preparedness can decrease flood mortality by at least 50%. This paper presents the psychometric properties of a scale developed to evaluate flood-preparedness behavior in Sudan and similar settings.
Methods:
In this methodological scale development study, experts assessed the content validity of the items of the developed scale. Data were collected from key persons of 413 households living in neighborhoods affected by the 2018 floods in Kassala City in Sudan. A pre-tested questionnaire of sociodemographic data and the Flood Preparedness Behavior Scale (FPBS) were distributed to the participants’ houses and recollected. Construct validity of the scale was checked using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Internal consistency of the scale was checked using Cronbach’s alpha. Test-retest reliability was assessed by Pearson’s correlation coefficient. Item analyses and tests of significance of the difference in the mean scores of the highest and lowest score groups were carried out to ensure discriminatory power of the scale items.
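Of the reliability checks listed above, internal consistency via Cronbach's alpha has a simple closed form. A minimal NumPy sketch on simulated data (illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of the total score), for an n x k matrix of item responses."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative data: five items sharing one common component.
rng = np.random.default_rng(4)
common = rng.normal(size=(400, 1))
items = common + rng.normal(size=(400, 5))
alpha = cronbach_alpha(items)
print(alpha)   # around 0.8 for this simulation
```

Alpha rises toward 1 as items share more common variance; the study's criterion of > 0.7 per factor is the conventional threshold for acceptable internal consistency.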
Results:
Experts agreed on the scale items. Construct validity of the scale was achieved using EFA by removing 34 items and retaining 25 items structured in three factors, corresponding to measures taken before, during, and after a flood. Confirmatory factor analysis confirmed the construct obtained by EFA. The loadings of the items on their factors in both EFA and CFA were all > 0.3, with significant associations and acceptable fit indices obtained from CFA. The three factors were found to be reliable in terms of internal consistency (Cronbach's alpha coefficients for all factors were > 0.7) and test–retest reliability. In item analysis, the corrected item–total correlations for all items were > 0.3, and significant differences in the means of the highest and lowest score groups indicated good item discrimination power.
Conclusion:
The developed 25-item scale produces valid and reliable measures of flood-preparedness behavior in Sudan and similar settings.
There is no universal tool for measuring disaster preparedness in the general population. This study aimed to provide a summary of the domains and psychometric properties of the available scales that assess preparedness for disasters, or one of its main types, among individuals or households.
Methods:
This study is a systematic review of the literature on disaster preparedness tools. Studies published up to December 2022 were identified through a systematic search of four databases: Google Scholar, PubMed, Scopus, and Web of Science. Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) were used to review and evaluate the psychometric properties. The Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines were used to report this article.
Results:
Twelve articles met the inclusion criteria. Among them, five scales measured general disaster preparedness, five measured earthquake preparedness, one measured flood preparedness, and one measured bushfire preparedness. The number of dimensions per scale ranged from one to six. The most common item topics in the included scales were as follows: having an evacuation plan (n = 7), information source (n = 7), fire extinguisher (n = 6), and emergency kit (n = 5). The scales were rated sufficient for content validity (n = 10), structural validity (n = 5), internal consistency (n = 5), and test–retest reliability (n = 6). One scale was checked for criterion validity and was rated as insufficient according to the COSMIN guidelines.
Conclusion:
The findings suggest the need to improve the psychometric properties of the scales, expand their contents, and develop scales relevant to target populations. This study provides useful information for researchers to develop comprehensive assessment tools and valuable sources of items for future scales.
This chapter provides an overview of exploratory factor analysis (EFA) from an applied perspective. We start with a discussion of general issues and applications, including definitions of EFA and the underlying common factors model. We briefly cover history and general applications. The most substantive part of the chapter focuses on six steps of EFA. More specifically, we consider variable (or indicator) selection (Step 1), computing the variance–covariance matrix (Step 2), factor-extraction methods (Step 3), factor-retention procedures (Step 4), factor-rotation methods (Step 5), and interpretation (Step 6). We include a data analysis example throughout (with example code for R), with full details in an online supplement. We hope the chapter will provide helpful guidance to applied researchers in the social and behavioral sciences.
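The chapter's worked example uses R; as a language-agnostic illustration of the factor-retention step (Step 4), the sketch below implements Horn's parallel analysis in NumPy — retain factors whose sample eigenvalues exceed those of random data of the same shape (a sketch under assumed simulation settings, not the chapter's code):

```python
import numpy as np

def parallel_analysis(X, n_sims=100, q=0.95, seed=0):
    """Horn's parallel analysis: retain factors whose sample correlation
    eigenvalues exceed the q-quantile of eigenvalues obtained from random
    normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        Z = rng.normal(size=(n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    thresh = np.quantile(sims, q, axis=0)
    above = obs > thresh
    return p if above.all() else int(np.argmin(above))

# Illustrative two-factor data: four marker variables per factor.
rng = np.random.default_rng(3)
n = 500
Lam = np.zeros((8, 2))
Lam[:4, 0] = 0.8
Lam[4:, 1] = 0.8
X = rng.normal(size=(n, 2)) @ Lam.T \
    + rng.normal(scale=np.sqrt(1 - 0.64), size=(n, 8))
n_retain = parallel_analysis(X)
print(n_retain)   # 2 for this two-factor structure
```

Parallel analysis is one of several retention procedures the chapter covers; it is generally preferred to the bare Kaiser eigenvalue-greater-than-one rule because it calibrates the eigenvalue threshold to sampling noise.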
In this chapter, we review the quantitative measurement of critical consciousness that has emerged within developmental and applied research over the last few decades. We provide a brief history and offer an overview of the current status of critical consciousness measurement. We also introduce four “phases” of critical consciousness measurement, which we refer to as (1) proxy measurement; (2) scale development; (3) scale expansion and (re)specification; and (4) scale refinement and adaptation. Due to their central role in critical consciousness measurement, we pay particular attention to instruments appearing in phase two, the scale development phase. After summarizing each phase, we identify opportunities for advancement and innovation in critical consciousness measurement and point to important new directions for measurement work in this area of scholarship – many of which are addressed more extensively in subsequent chapters of this volume.
This study was conducted to determine the validity and reliability of the Turkish version of the Sustainable and Healthy Eating (SHE) Behaviors Scale. The original scale included eight factors and thirty-four items related to the SHE behaviors of adults. The research was carried out in three stages with a total of 586 participants aged 19 to 50 years. The Cronbach α coefficient was used to evaluate internal consistency reliability, and the test–retest method was applied. Exploratory factor analysis (EFA) was performed to determine the factor structure. The model obtained with EFA was evaluated with confirmatory factor analysis (CFA). The Cronbach α coefficient of the scale was found to be excellent at 0·912, and the intra-class correlation coefficient was found to be good at 0·832 using the test–retest method. Considering the suitability of the data for factor analysis, the Kaiser–Meyer–Olkin coefficient was 0·859, and the significance level of the Bartlett test of sphericity was less than 0·05 (χ2 = 3·803,25; P < 0·05). As a result of EFA, the items of the scale were found to be distributed across seven factor dimensions. The factor loadings of the items were between 0·516 and 0·890, and the factors explained 67 % of the variance. Considering the fit indices obtained from the analysis of this model with CFA, the model had an acceptable fit (χ2/sd = 2·593, comparative fit index = 0·915, Tucker–Lewis index = 0·902, standardised root mean square residual = 0·0754 and root mean square error of approximation = 0·067). In conclusion, the Turkish version of the SHE Behaviors Scale has credible reliability and construct validity to assess the sustainable and healthy eating behaviours of the Turkish adult population.
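The two factorability checks reported above — the Kaiser–Meyer–Olkin measure and Bartlett's test of sphericity — have simple closed forms. A NumPy sketch using the standard formulas on simulated one-factor data (the data and loading value are illustrative assumptions, not the study's):

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity: chi2 = -(n - 1 - (2p + 5) / 6) * ln|R|,
    df = p(p - 1)/2, testing whether R is an identity matrix."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.linalg.slogdet(R)[1]
    return chi2, p * (p - 1) // 2

def kmo(R):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy: sum of
    squared correlations over (that sum + sum of squared partial corrs)."""
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.diag(Rinv))
    P = -Rinv / np.outer(d, d)        # partial correlation matrix
    np.fill_diagonal(P, 0.0)
    r2 = (R ** 2).sum() - R.shape[0]  # off-diagonal sum (diag of R is 1)
    return r2 / (r2 + (P ** 2).sum())

# Illustrative one-factor data with loadings of 0.7 on six items.
rng = np.random.default_rng(5)
n = 300
lam = 0.7
X = lam * rng.normal(size=(n, 1)) \
    + np.sqrt(1 - lam ** 2) * rng.normal(size=(n, 6))
R = np.corrcoef(X, rowvar=False)
chi2, df = bartlett_sphericity(R, n)
print(chi2, df, kmo(R))
```

By the usual rules of thumb, a KMO above roughly 0.8 (as in the study's 0·859) indicates the correlation matrix is well suited to factor analysis, and a significant Bartlett statistic rejects the hypothesis of uncorrelated items.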
The present study explored whether motivational constructs for diet and physical activity (PA) cluster and how these motivational constructs relate to dietary and PA behaviour. Data of 1142 participants were used from a randomised controlled trial examining the effects of a web-based diet and PA promotion intervention based on self-determination theory and motivational interviewing. Motivation was assessed using the Treatment Self-Regulation Questionnaire and Behavioural Regulation in Exercise Questionnaire. The dietary outcomes were measured using an adapted Food Frequency Questionnaire. PA was assessed using the Short QUestionnaire to ASsess Health. Spearman rank-order correlations showed large correlation coefficients (rs ≥ 0⋅63) between similar motivational constructs between the two lifestyle domains, except for intrinsic motivation where a medium correlation coefficient was found (rs = 0⋅41). Furthermore, the exploratory factor analysis illustrated that more self-determined forms of motivation seem to be more domain-specific. In contrast, non-self-determined forms of motivation seem to be domain-independent. Last, regression analyses demonstrated that intrinsic motivation towards PA was the only motivational construct significantly positively associated with all PA sub-behaviours (standardised regression coefficients ranging from 0⋅17 to 0⋅28, all P < 0⋅0125). Intrinsic motivation to eat healthily was significantly positively associated with fruits, vegetables and fish intake (standardised regression coefficients ranging from 0⋅11 to 0⋅16, all P < 0⋅0125), but not with unhealthy snacks. Insight of this exploratory study is useful for understanding the interrelationships of motivational induced behaviours, the development of interventions targeting multiple behaviours, and the construction of questionnaires.
Research on environmental policy support utilises different categorisations of policies, for example, differentiating between policies assumed to be perceived as rewarding or punishing. Do citizens' perceptions of environmental policies also lend themselves to this categorisation? Based on an exhaustive sample of active policies in Sweden, this study presents a taxonomy of environmental policy support in Sweden. A fairly representative Swedish sample (N = 2911) rated the acceptability of 44 environmental policies. Exploratory factor analysis indicated that participants' acceptability ratings of policies form three categories: push policies consisting of regulatory and market-based disincentives, pull policies consisting mainly of market-based incentives, and informational policies, such as ecolabeling. Sociodemographics had small but consistent effects on attitudes towards the three categories, while political ideology had a larger effect across the categories. This study indicates that current academic categorisations may not adequately capture laypeople's perceptions, and discusses the importance of research on the driving mechanisms behind the current taxonomy.
Rowland Universal Dementia Assessment Scale (RUDAS) is a brief cognitive test, appropriate for people with a minimal completed level of education and sensitive to multicultural contexts. It could be a good instrument for cognitive impairment (CI) screening in Primary Health Care (PHC). It comprises the following areas: recent memory, body orientation, praxis, executive functions, and language.
Research Objective:
The objective of this study is to assess the construct validity of the RUDAS by analysing its internal consistency and factorial structure.
Method:
Internal consistency will be calculated using ordinal Cronbach's α, which reflects the average inter-item correlation and, as such, increases when correlations between the items increase. Exploratory factor analysis will be used to arrange the variables into domains using principal components extraction. The factorial analysis will include the extraction of five factors reflecting the neuropsychological areas assessed by the test. The result will be rotated using the varimax procedure to ease interpretation.
The analysis will include the Kaiser–Meyer–Olkin measure of sampling adequacy and Bartlett's test of sphericity. Estimations will be based on Pearson's correlations between indicators using a principal component analysis and later replicated with a tetrachoric correlation matrix. The variance in the tetrachoric model will be analysed to identify convergent iterations and their explicative power.
Preliminary results of the ongoing study:
RUDAS is being administered to 321 participants older than 65 years, from seven PHC physicians’ consultations in O Grove Health Center. The data collection will be finished by August 2021 and in this poster we will present the final results of the exploratory factor analysis.
Conclusions:
We expect the results of the exploratory factor analysis to replicate previous studies of the construct validity of the test, in which factor weights were between 0.57 and 0.82 and all were above 40%. Confirmation that the RUDAS has a strong factor construct, with high factor weights and variance ratio, and that the six-item model is appropriate for measurement would support its recommendation as a valid screening instrument for PHC.