
Missing data in FFQs: making assumptions about item non-response

Published online by Cambridge University Press:  07 December 2016

Karen E Lamb*
Affiliation:
Institute for Physical Activity and Nutrition, School of Exercise and Nutrition Sciences, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia; Department of Paediatrics, University of Melbourne, Parkville, VIC, Australia
Dana Lee Olstad
Affiliation:
Institute for Physical Activity and Nutrition, School of Exercise and Nutrition Sciences, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
Cattram Nguyen
Affiliation:
Department of Paediatrics, University of Melbourne, Parkville, VIC, Australia; Clinical Epidemiology and Biostatistics Unit, Murdoch Childrens Research Institute, Royal Children’s Hospital, Parkville, VIC, Australia
Catherine Milte
Affiliation:
Institute for Physical Activity and Nutrition, School of Exercise and Nutrition Sciences, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
Sarah A McNaughton
Affiliation:
Institute for Physical Activity and Nutrition, School of Exercise and Nutrition Sciences, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
*Corresponding author: Email karen.lamb@deakin.edu.au

Abstract

Objective

FFQs are a popular method of capturing dietary information in epidemiological studies and may be used to derive dietary exposures such as nutrient intake or overall dietary patterns and diet quality. As FFQs can involve large numbers of questions, participants may fail to respond to all questions, leaving researchers to decide how to deal with missing data when deriving intake measures. The aim of the present commentary is to discuss the current practice for dealing with item non-response in FFQs and to propose a research agenda for reporting and handling missing data in FFQs.

Results

Single imputation techniques, such as zero imputation (assuming no consumption of the item) or mean imputation, are commonly used to deal with item non-response in FFQs. However, single imputation methods make strong assumptions about the missing data mechanism and do not reflect the uncertainty created by the missing data. This can lead to incorrect inference about associations between diet and health outcomes. Although the use of multiple imputation methods in epidemiology has increased, these have seldom been used in the field of nutritional epidemiology to address missing data in FFQs. We discuss methods for dealing with item non-response in FFQs, highlighting the assumptions made under each approach.

Conclusions

Researchers analysing FFQs should ensure that missing data are handled appropriately and clearly report how missing data were treated in analyses. Simulation studies are required to enable systematic evaluation of the utility of various methods for handling item non-response in FFQs under different assumptions about the missing data mechanism.

Type: Commentaries
Copyright: © The Authors 2016

Accurate assessment of dietary intake is challenging, given the vast range of foods available for consumption and the difficulty of recalling past food and beverage consumption. FFQs are a popular method of capturing dietary information in epidemiological studies as they are relatively inexpensive and simple to conduct. FFQs estimate usual consumption frequencies of foods, which are commonly used to derive usual intakes of nutrients, dietary patterns and diet quality, and are particularly useful for ranking participants according to intake(1,2). However, FFQs can be burdensome for study participants to complete as detailed assessment may require inclusion of many food items, with up to 350 questions used in some cases(3). As a consequence, participants may fail to complete all questions (termed ‘item non-response’), leaving researchers to decide how best to deal with these missing data. Although many studies have examined the validation and design of FFQs(4), few have considered how various methods used to deal with missing data in FFQs can influence study findings. Potential approaches for dealing with missing data in nutritional epidemiology have been described elsewhere as part of a discussion of wider data challenges in this field(5). However, particular issues when dealing with missing FFQ data require further elaboration.

Given recognition of the importance of overall dietary patterns, as opposed to individual foods or nutrients, in influencing health and disease(6,7), FFQs are increasingly being used to measure diet quality through the calculation of composite scores. These scores are commonly derived by adding sub-scores reflecting consumption of foods or nutrients consistent with current dietary guidelines or recommendations aimed at reducing chronic disease, resulting in a single value representing the healthfulness of the total diet. Missing data in FFQs are problematic in calculations of diet quality scores or nutrient intake, as item non-response means that these dietary exposures cannot be calculated.
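To illustrate why item non-response is so problematic for composite scores, the minimal sketch below (in Python with pandas, using hypothetical item names and scoring cut-offs rather than any established index) derives a simple diet quality score by summing sub-scores; a single missing FFQ item propagates to a missing total score unless the missing value is handled in some way.

```python
# Minimal sketch with hypothetical FFQ items and scoring cut-offs:
# a composite diet quality score is the sum of sub-scores, so one
# missing item leaves the whole score undefined.
import numpy as np
import pandas as pd

ffq = pd.DataFrame({
    "veg_per_day":   [3.0, 1.0, np.nan],  # servings/day; NaN marks item non-response
    "fruit_per_day": [2.0, 0.5, 2.0],
    "ssb_per_week":  [0.0, 7.0, 1.0],     # sugar-sweetened beverages
})

# Illustrative sub-scores, each worth up to 10 points
score = (
    ffq["veg_per_day"].clip(upper=5) / 5 * 10
    + ffq["fruit_per_day"].clip(upper=2) / 2 * 10
    + (1 - ffq["ssb_per_week"].clip(upper=7) / 7) * 10
)
print(score)  # the third participant's total score is NaN
```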

The purpose of the present commentary is to review and critique strategies for dealing with item non-response in FFQs. We conclude by proposing a research agenda that can help to determine the most appropriate methods for dealing with item non-response in FFQs.

Methods for dealing with item non-response in FFQs

Reported methods for dealing with item non-response in FFQs have varied, with many studies failing to report the approach adopted(8,9). Most commonly, investigators have used complete case analysis (described below) or assumed that missing responses indicate no consumption of that particular food item(10–12). Although imputing zero for missing responses is acceptable in some situations (e.g. quantity of alcohol consumed when participants previously indicated that they do not drink alcohol), it is possible for FFQ items to be missing without a known rationale. Therefore zero imputation may not be an appropriate strategy to adopt for all missing FFQ items. In other instances, investigators have imputed the median, mode or mean value from the sample for the missing food items(13). Less often, more complex statistical approaches such as multiple imputation have been used(14). The various approaches that have been used to deal with item non-response in FFQs are described below, along with their underlying assumptions.

Complete case analysis

Complete case analysis is often used in epidemiological studies as this is the standard approach that statistical software packages adopt when dealing with missing data. In complete case analysis, only participants with complete data for all study variables to be considered in the analysis are included. This means that the dietary exposure (e.g. nutrient intake or diet quality score) would not be derived for a participant who had missing data on one of the FFQ items used to compute the exposure; instead, this participant’s data would be completely discarded from the analysis. This method can lead to biased findings if the missing data are not missing completely at random (MCAR). Data can be assumed to be MCAR when there are no systematic differences between the missing data and the observed data; that is, the probability of data being missing is unrelated to observed values of other variables or to the unobserved values themselves (e.g. FFQ data could be assumed to be MCAR if a participant accidentally skipped over a question). In addition to potential biases, complete case analyses can lead to a large reduction in sample size if missing data are common, resulting in reduced statistical precision and power. Furthermore, it is inefficient to exclude participants from the analysis who provided a substantial amount of complete data for many FFQ items but failed to respond to a small number of items.
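As a concrete illustration, the sketch below (assuming a hypothetical pandas DataFrame holding two FFQ items and an outcome) shows how complete case analysis silently discards any participant with even one missing item.

```python
# Minimal sketch of complete case analysis on hypothetical data:
# any participant with at least one missing value is dropped entirely.
import numpy as np
import pandas as pd

data = pd.DataFrame({
    "ffq_item_1": [2.0, np.nan, 1.0, 4.0],
    "ffq_item_2": [1.0, 3.0, np.nan, 2.0],
    "outcome":    [24.1, 26.3, 22.8, 28.0],
})

complete_cases = data.dropna()  # this is what most software does by default
print(f"{len(data)} participants, {len(complete_cases)} retained for analysis")
```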

Single imputation

Single imputation approaches are appealing as these are relatively simple to conduct. This type of approach replaces each missing value with only one value in the imputation process, producing a filled-in data set that allows standard analytical methods to be used. Single imputation approaches include zero imputation (where missing values are replaced with zero to represent no consumption of that food item) or mean, mode and median imputation, which replace missing values with the sample mean, mode or median, respectively. More complex single imputation approaches include k nearest neighbours imputation(9). This approach identifies k participants with similar responses on other variables to that of the participant with the missing data on the FFQ item, all of whom have complete data for the FFQ item to be imputed. The average value for that particular FFQ item among those k participants is taken as the FFQ item value for the participant with the missing response.
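The sketch below illustrates these single imputation strategies on a hypothetical FFQ item matrix, using pandas for zero and mean imputation and scikit-learn's KNNImputer for k nearest neighbours imputation (the specific items and values are invented for illustration).

```python
# Minimal sketch of single imputation on hypothetical FFQ data (NaN = item non-response)
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

ffq = pd.DataFrame({
    "fruit": [2.0, np.nan, 1.0, 3.0],
    "veg":   [3.0, 2.0, np.nan, 4.0],
    "ssb":   [0.0, 5.0, 2.0, np.nan],
})

zero_imputed = ffq.fillna(0)           # assume non-response means no consumption
mean_imputed = ffq.fillna(ffq.mean())  # replace with the sample mean of each item
knn_imputed = pd.DataFrame(            # average the item over the k most similar responders
    KNNImputer(n_neighbors=2).fit_transform(ffq),
    columns=ffq.columns,
)
```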

Single imputation methods have various shortcomings, chief among them the very strong assumptions they make. For example, assuming that missing data equate to no consumption when using zero imputation can produce misleading findings if the participants with missing data on particular items (e.g. energy-dense, nutrient-poor foods) are those who did not want to report consuming them. Furthermore, single imputation methods do not reflect the uncertainty created by the missing data as they treat the imputed values as real observations. This leads to underestimation of variances and poor coverage of confidence intervals(15), which can result in incorrect inferences about associations between exposures and outcomes. Thus, these approaches for dealing with missing data are not recommended, with journals and medical research guidelines increasingly turning towards multiple imputation as a preferred method for handling missing data in epidemiological studies(16,17).

Multiple imputation

Multiple imputation, like single imputation, produces filled-in data sets that enable standard analytical methods to be used, allowing partially observed information from all participants to be included in analyses. Each missing value is replaced with multiple imputed values that are generally drawn from a statistical model (known as an ‘imputation model’). This produces multiple filled-in data sets, each of which is then analysed separately using standard methods, and the multiple results are then combined to give an overall estimate(18). A benefit of multiple imputation over single imputation is that it appropriately accounts for the uncertainty in the missing values, ensuring that the variance is not underestimated. It can also correct for bias, for example, if predictors of non-response are included in the imputation process. However, although multiple imputation has been highlighted as a potentially useful approach for dealing with item non-response in FFQs(9), it has seldom been used within nutritional epidemiology, perhaps because of the complexity of the technique and the lack of guidance about when it is suitable to use; some nutritional cohorts, such as the Adventist Health Study 2 (AHS-2) cohort, have nevertheless begun to adopt this method routinely(19,20).
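The mechanics can be sketched as follows (a minimal, hand-rolled example on simulated data with hypothetical variables; scikit-learn's IterativeImputer with sample_posterior=True is used purely as an illustrative imputation engine, not as a recommendation): m completed data sets are generated, the analysis model is fitted to each, and the estimates are pooled using Rubin's rules.

```python
# Minimal sketch of multiple imputation with Rubin's rules, on simulated data
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n = 200
data = pd.DataFrame({"fruit": rng.gamma(2, 1, n), "bmi": rng.normal(25, 3, n)})
data.loc[rng.random(n) < 0.2, "fruit"] = np.nan   # ~20% item non-response

m = 20
estimates, variances = [], []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    completed = pd.DataFrame(imputer.fit_transform(data), columns=data.columns)
    fit = sm.OLS(completed["bmi"], sm.add_constant(completed["fruit"])).fit()
    estimates.append(fit.params["fruit"])
    variances.append(fit.bse["fruit"] ** 2)

# Rubin's rules: combine within- and between-imputation variance
qbar = np.mean(estimates)            # pooled estimate
w = np.mean(variances)               # within-imputation variance
b = np.var(estimates, ddof=1)        # between-imputation variance
se = np.sqrt(w + (1 + 1 / m) * b)
print(f"pooled estimate {qbar:.3f} (SE {se:.3f})")
```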

Multiple imputation involves the specification of an imputation model, which should generally include all covariates to be considered in the underlying analytical model (of dietary exposure and health outcome), in addition to other covariates which may be associated with the variables that have missing data or that are predictive of non-response. These additional covariates, known as ‘auxiliary variables’, have been shown to increase efficiency and reduce bias(21). A particular challenge when using dietary exposures in epidemiological analyses is deciding which variables to include in the imputation model. For nutrient intake, it would seem more appropriate to impute individual food items, as each individual food will potentially contribute to the intake of multiple nutrients. Similarly, it would be ideal to retain in the imputation model all of the complete information available for the individual FFQ items on which a diet quality score is based, rather than simply the complete score. Using only the complete score is inefficient, as it fails to make use of the observed FFQ items within the score, meaning that participants with missing data for only one FFQ item would have a missing score. However, multiple imputation can be computationally intensive, particularly when many categorical variables are included or when there are substantial missing data. Imputation models may fail to converge when they involve large numbers of FFQ items(22). One solution, although less efficient, could be to impute sub-scale scores in order to retain as much information as possible within a more parsimonious model, as discussed elsewhere(23). Although options for conducting multiple imputation are now available in standard statistical software, such as Stata and SAS, appropriate ways in which multiple imputation can best be used to address item non-response in FFQ data, particularly when deriving dietary exposures, have received little attention in the literature to date.
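One way to operationalise this item-level strategy is sketched below (hypothetical items and an invented scoring function): the individual FFQ items are imputed first and the diet quality score is then derived within each completed data set, so that the observed items of a partially complete questionnaire are not thrown away.

```python
# Minimal sketch: impute FFQ items, then derive the score in each completed data set
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def diet_quality_score(items: pd.DataFrame) -> pd.Series:
    # Invented scoring: more fruit/vegetables and fewer sugar-sweetened beverages score higher
    return (items["fruit"].clip(upper=2) / 2 * 10
            + items["veg"].clip(upper=5) / 5 * 10
            + (1 - items["ssb"].clip(upper=7) / 7) * 10)

rng = np.random.default_rng(7)
n = 100
items = pd.DataFrame({"fruit": rng.gamma(2, 1, n),
                      "veg": rng.gamma(3, 1, n),
                      "ssb": rng.poisson(2, n).astype(float)})
items.loc[rng.random(n) < 0.15, "ssb"] = np.nan   # non-response on a single item

scores_per_imputation = []
for i in range(10):
    completed = pd.DataFrame(
        IterativeImputer(sample_posterior=True, random_state=i).fit_transform(items),
        columns=items.columns,
    )
    completed["ssb"] = completed["ssb"].clip(lower=0)  # frequencies cannot be negative
    scores_per_imputation.append(diet_quality_score(completed))
```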

A further challenge with multiple imputation is that most software implementations assume that the missing data are missing at random (MAR). This means that the probability that data are missing does not depend on unobserved data (i.e. the missing values themselves), but may depend on observed data (i.e. the non-missing data)(24). An example of MAR in the context of dietary assessment is that males may be more likely to have missing fruit and vegetable intake; if sex is observed, then within each sex category there will be no systematic differences between observed and missing data. However, if data are missing not at random (MNAR), meaning the probability that an FFQ response is missing depends on the unobserved data (i.e. the missing values themselves)(24), then multiple imputation can give misleading results (as will most complete case analyses and single imputation methods). Given known issues affecting assessment of dietary intake, such as social desirability bias(25,26), it may be the case that FFQ data are MNAR if participants who have missing data on ‘unhealthy’ FFQ items (e.g. energy-dense, nutrient-poor foods) are those with high levels of consumption. Unfortunately, it is not possible to determine from the data whether the missingness is MAR or MNAR. Therefore, as highlighted by Sterne et al.(24), researchers must consider the possible reasons why the missing data could have occurred in order to decide whether MNAR could be a problem in their study. Strategies have been proposed to recover missing data when the MAR assumption may not be reasonable, such as guided multiple imputation, where a random sub-sample of participants with missing data is re-contacted to recover the missing data and this information is then used in the imputation model(27). However, it is not always possible to re-contact a sub-sample of study participants to obtain this information. Therefore, it is important to consider potential strategies for dealing with missing FFQ data in this context, as well as to perform sensitivity analyses to examine the robustness of results to departures from the assumptions made about the missing data mechanism. Commonly, researchers employ a combination of these methods. For example, principles of complete case analysis may be applied in the first instance, with participants missing a large number of items excluded from the analysis. This has often been recommended as a form of global assessment for ensuring questionnaire fidelity and to allow exclusion of poorly completed questionnaires that would affect the validity of the dietary assessment(1). Subsequently, the remaining missing data may be imputed, with zero imputation a common approach(9).
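Where MNAR is plausible, one simple sensitivity analysis is a delta adjustment (a pattern-mixture style approach), sketched below on simulated data with hypothetical variable names: imputations are generated under MAR, the imputed values of an 'unhealthy' item are then shifted upwards by a range of deltas to mimic under-reporting, and the pooled estimate is inspected for how much it moves.

```python
# Minimal sketch of a delta-adjustment sensitivity analysis for MNAR
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
n = 300
snack = rng.gamma(2, 2, n)                        # weekly snack frequency (hypothetical)
bmi = 22 + 0.4 * snack + rng.normal(0, 2, n)
data = pd.DataFrame({"snack": snack, "bmi": bmi})
missing = rng.random(n) < 0.25
data.loc[missing, "snack"] = np.nan

for delta in [0.0, 0.5, 1.0, 2.0]:                # delta = assumed degree of under-reporting
    estimates = []
    for i in range(10):
        imp = IterativeImputer(sample_posterior=True, random_state=i)
        completed = pd.DataFrame(imp.fit_transform(data), columns=data.columns)
        completed.loc[missing, "snack"] += delta  # shift the imputed values only
        fit = sm.OLS(completed["bmi"], sm.add_constant(completed["snack"])).fit()
        estimates.append(fit.params["snack"])
    print(f"delta = {delta}: pooled slope = {np.mean(estimates):.3f}")
```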

Prior investigations of methods for dealing with item non-response in FFQs

Very few studies have compared approaches for dealing with missing data in FFQs. Those that have done so have typically compared nutrient intakes estimated when analyses include only individuals with no missing FFQ responses with intakes estimated when initially missing responses were subsequently filled in by resurveying participants up to 2 years later(28). Findings from these studies tend to suggest that the common approach of assuming that non-response equates to not eating that particular food can introduce bias(28–30). One study examined a sub-sample of participants followed up 3 months after the initial FFQ to recover missing data in order to determine the influence of assuming that non-response equated to no consumption. Nutrient intake was derived using an eighty-four-item FFQ and results showed that average energy intake, among other dietary measures, increased following resurvey(29). A second study followed up a sub-sample of participants approximately 1 year after the initial FFQ and used a guided multiple imputation approach to derive complete data for those with missing observations for the eighty foods considered. This study showed that the resurveyed values of those who had missing data for commonly consumed foods tended not to be zero, so the potential for bias may be greater when focusing on these foods(30). Both studies highlighted that their findings were influenced by the number of items with missing responses: one noted that the greater the number of missing food items, the greater the likelihood that they truly reflected no consumption of that item(30), while the other noted that the greater the number of items missing, the higher the increase in average energy intake(29). A third study involving a sixty-item FFQ noted that 37 % of the items that were initially missing were subsequently reported to be consumed at least monthly, so assuming no consumption would not be accurate for these participants(28). However, the authors concluded that the assumption of zero intake was reasonable if examining nutrient intake expressed as a percentage of energy. One drawback of these comparison studies is that they assume that resurveying participants at a later time point captures true consumption at the earlier time point. This assumption may not be valid in practice given the well-known difficulties associated with accurately recalling dietary intake(31). It is therefore difficult to use these studies as a basis to infer the likely impact of assuming that item non-response equates to zero consumption of the food item in question, although re-contacting participants may at least provide an idea of the direction of potential biases (i.e. whether participants with missing data were more likely to have consumed those foods).

To date, Parr et al.(9) have provided the most comprehensive comparison of methods for dealing with item non-response in FFQs. In addition to recovering missing data by resurveying participants with missing data 3 months after they were originally surveyed, they compared various single imputation approaches for dealing with missing FFQ data, including zero imputation, mode imputation, median imputation and k nearest neighbours imputation. The authors were particularly interested in the implications of zero imputation, as their review of the literature suggested that this was the most common method used when dealing with item non-response. In the context of the Norwegian Women and Cancer study, Parr et al. concluded that assuming no consumption of a food item for which no response was provided may lead to underestimation and misclassification of dietary intake (measured as calculated intakes of food groups and a range of nutrients, rather than dietary patterns or quality), as the resurvey values suggested that it was not accurate to assume zero consumption of missing FFQ items. However, a major limitation of the study, acknowledged by the authors, is that they were unable to conclude which imputation method provided the most accurate estimate of consumption since they did not know the true values for the missing data. Thus, that study is unable to provide guidance on the optimal approach to address item non-response. Although the authors noted that an evaluation of imputation methods would require a simulation study with a complete data set as reference, to our knowledge no study of this type has been conducted. Furthermore, the authors did not examine the influence of different missing data assumptions on exposure and outcome associations involving FFQ measures. Therefore, the effect of these assumptions in epidemiological studies in practice is unclear.

Summary and research agenda

There are many approaches for dealing with item non-response in FFQs and all make quite different assumptions about how the missing data occurred. When deciding how to deal with missing FFQ responses, it is important to consider the potential reasons for the missing data. However, although existing studies that have examined missing FFQ data have provided useful information about potential biases when assuming missing data equate to no consumption of that item, they have not been able to clearly demonstrate the mechanisms behind the missing data, such as social desirability bias leading to a desire to avoid reporting the consumption of unhealthy foods, inability to recall consumption, questionnaire fatigue or simply randomly missing questions when completing lengthy questionnaires. Furthermore, to our knowledge, no studies have fully explored the best means of addressing item non-response in FFQs, the implications of assumptions related to missing data mechanisms and how these assumptions might affect associations between diet and health, particularly in relation to the use of diet quality scores and nutrient intake. Thus, there is a clear gap in the nutrition literature on appropriate approaches for dealing with missing FFQ data. Given the possible implications of poor handling of missing data (e.g. biased study findings), we propose this as an important research agenda; a significant issue that should be considered alongside the measurement error issues frequently discussed in this field(32,33). While researchers can adopt different missing data methods to deal with item non-response, further research is required to determine the sensitivity of research findings to the various approaches and the implications of adopting each one. To do this, a rigorous evaluation of missing data methods involving simulation studies is required, as these allow systematic evaluation of the performance of the methods under different assumptions about the missing data mechanism.
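To make the proposal concrete, the sketch below (simulated data, hypothetical variables and only three of the simpler methods) shows the skeleton of such a simulation study: complete FFQ-style data are generated, missingness is imposed under an MCAR, MAR or MNAR mechanism, each method is applied, and the resulting estimates of a diet-outcome association are compared against the full-data estimate to quantify bias.

```python
# Minimal sketch of a simulation study comparing missing data methods
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

def one_replicate(mechanism: str, n: int = 500) -> dict:
    fruit = rng.gamma(2, 1, n)                    # hypothetical FFQ item
    bmi = 27 - 0.5 * fruit + rng.normal(0, 2, n)  # outcome with a known true slope of -0.5
    full = sm.OLS(bmi, sm.add_constant(fruit)).fit().params[1]

    # Impose missingness on the FFQ item under the chosen mechanism
    if mechanism == "MCAR":
        p = np.full(n, 0.25)
    elif mechanism == "MAR":                      # depends on the observed outcome
        p = 1 / (1 + np.exp(-(bmi - 27)))
    else:                                         # MNAR: depends on the missing value itself
        p = 1 / (1 + np.exp(-(fruit - 2)))
    observed = np.where(rng.random(n) < p, np.nan, fruit)

    def slope(x):
        keep = ~np.isnan(x)
        return sm.OLS(bmi[keep], sm.add_constant(x[keep])).fit().params[1]

    return {
        "full": full,
        "complete_case": slope(observed),
        "zero_imputation": slope(np.nan_to_num(observed, nan=0.0)),
        "mean_imputation": slope(np.where(np.isnan(observed), np.nanmean(observed), observed)),
    }

results = pd.DataFrame([one_replicate("MNAR") for _ in range(200)])
bias = results.drop(columns="full").mean() - results["full"].mean()
print(bias.round(3))  # average bias of each method relative to the full-data estimate
```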

In addition, given the potential influence of the chosen missing data method on study findings, it is important that researchers working with FFQs are transparent in the reporting and handling of missing FFQ responses. There are clear guidelines for reporting missing data and the details of multiple imputation analyses(24). Journals, such as the New England Journal of Medicine(16), are increasingly recognising the need to handle missing data using principled methods, and this is reflected in reporting guidelines such as the Consolidated Standards of Reporting Trials (CONSORT)(34) and Strengthening the Reporting of Observational studies in Epidemiology (STROBE)(35) statements. Despite this, missing data do not appear to be appropriately handled or adequately described in the field of nutrition. Moving forward, we propose a need for better reporting of how missing FFQ data were treated in analyses, consistent with these guidelines. Researchers should report both the amount of missing data in their studies and how the missing data were dealt with in analyses so that the assumptions are clear. Furthermore, they should report any sensitivity analyses conducted to assess the robustness of research findings to the assumptions being made. Specific guidelines relating to nutritional epidemiology (STROBE-nut) are currently under development(36) and may assist in strengthening the reporting of studies, including the handling of missing data. With advances in missing data methods and their availability in statistical software packages routinely used in nutritional epidemiology, such as Stata and SAS, it is now possible for researchers to make use of techniques such as multiple imputation to deal with missing FFQ data.

Conclusions

Missing data may be unavoidable when using long, self-administered questionnaires that can prove burdensome for participants to complete. The present commentary provided an overview of common techniques used to deal with item non-response in FFQs and the assumptions underlying each one. Researchers should take care to use robust techniques to address missing data in FFQs and to clearly and fully report the details of these analyses. Further research is required to systematically evaluate the performance of approaches for dealing with missing data in FFQs.

Acknowledgements

Financial support: K.E.L. is supported by a Deakin University Alfred Deakin Postdoctoral Research Fellowship (grant number RM 27751). D.L.O. is supported by a Research Fellowship from the Canadian Institutes of Health Research. S.A.M. is supported by a National Health and Medical Research Council Career Development Fellowship Level 2 (grant number ID1104636). The funders had no role in the design, analysis or writing of this article. Conflict of interest: None. Authorship: K.E.L. conceived the idea for the commentary and took the lead role in drafting this manuscript. D.L.O., C.M. and S.A.M. provided guidance on FFQ, diet quality scores and missing data approaches used in this field. C.N. provided guidance on statistical methods used to deal with missing data. All authors contributed to drafting this manuscript. Ethics of human subject participation: Not applicable.

References

1. Willett, W (2013) Nutritional Epidemiology, 3rd ed. Oxford: Oxford University Press.
2. Subar, AF, Freedman, LS, Tooze, JA et al. (2015) Addressing current criticism regarding the value of self-report dietary data. J Nutr 145, 2639–2645.
3. Molag, ML, de Vries, JHM, Ocke, MC et al. (2007) Design characteristics of food frequency questionnaires in relation to their validity. Am J Epidemiol 166, 1468–1478.
4. Cade, JE, Burley, VJ, Warm, DL et al. (2004) Food-frequency questionnaires: a review of their design, validation and utilisation. Nutr Res Rev 17, 5–22.
5. Abellana Sangra, R & Farran Codina, A (2015) The identification, impact and management of missing values and outlier data in nutritional epidemiology. Nutr Hosp 31, Suppl. 3, 189–195.
6. Wirfalt, E, Drake, I & Wallstrom, P (2013) What do review papers conclude about food and dietary patterns? Food Nutr Res 2013, 57.
7. Liese, AD, Krebs-Smith, SM, Subar, AF et al. (2015) The dietary patterns methods project: synthesis of findings across cohorts and relevance to dietary guidance. J Nutr 145, 393–402.
8. Hansson, LM & Galanti, MR (2000) Diet-associated risks of disease and self-reported food consumption: how shall we treat partial nonresponse in a food frequency questionnaire? Nutr Cancer 36, 1–6.
9. Parr, CL, Hjartaker, A, Scheel, I et al. (2008) Comparing methods for handling missing values in food-frequency questionnaires and proposing k nearest neighbours imputation: effects on dietary intake in the Norwegian Women and Cancer study (NOWAC). Public Health Nutr 11, 361–370.
10. Cade, J, Thompson, R, Burley, V et al. (2002) Development, validation and utilisation of food-frequency questionnaires – a review. Public Health Nutr 5, 567–587.
11. Johansson, I, Hallmans, G, Wikman, A et al. (2002) Validation and calibration of food-frequency questionnaire measurements in the Northern Sweden Health and Disease cohort. Public Health Nutr 5, 487–496.
12. Lioret, S, McNaughton, SA, Cameron, AJ et al. (2014) Three-year change in diet quality and associated changes in BMI among schoolchildren living in socio-economically disadvantaged neighbourhoods. Br J Nutr 112, 260–268.
13. Gaard, M, Tretli, S & Loken, EB (1995) Dietary fat and the risk of breast cancer: a prospective study of 25,892 Norwegian women. Int J Cancer 63, 13–17.
14. Barzi, F, Woodward, M, Marfisi, RM et al. (2006) Analysis of the benefits of a Mediterranean diet in the GISSI-Prevenzione study: a case study in imputation of missing values from repeated measurements. Eur J Epidemiol 21, 15–24.
15. Schafer, JL & Graham, JW (2002) Missing data: our view of the state of the art. Psychol Methods 7, 147–177.
16. Ware, JH, Harrington, D, Hunter, DJ et al. (2012) Missing data. N Engl J Med 367, 1353–1354.
17. Klebanoff, MA & Cole, SR (2008) Use of multiple imputation in the epidemiologic literature. Am J Epidemiol 168, 355–357.
18. Rubin, DB (1987) Multiple Imputation for Non-Response in Surveys. Hoboken, NJ: John Wiley & Sons, Inc.
19. Orlich, MJ, Singh, PN, Sabate, J et al. (2013) Vegetarian dietary patterns and mortality in Adventist Health Study 2. JAMA Intern Med 173, 1230–1238.
20. Rizzo, NS, Sabate, J, Jaceldo-Siegl, K et al. (2011) Vegetarian dietary patterns are associated with a lower risk of metabolic syndrome: the Adventist Health Study 2. Diabetes Care 34, 1225–1227.
21. Collins, LM, Schafer, JL & Kam, CM (2001) A comparison of inclusive and restrictive strategies in modern missing data procedures. Psychol Methods 6, 330–351.
22. White, IR, Royston, P & Wood, AM (2011) Multiple imputation using chained equations: issues and guidance for practice. Stat Med 30, 377–399.
23. Plumpton, CO, Morris, T, Hughes, DA et al. (2016) Multiple imputation of multiple multi-item scales when a full imputation model is infeasible. BMC Res Notes 9, 45.
24. Sterne, JAC, White, IR, Carlin, JB et al. (2009) Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ 338, b2393.
25. Hebert, JR, Hurley, TG, Peterson, KE et al. (2008) Social desirability trait influences on self-reported dietary measures among diverse participants in a multicenter multiple risk factor trial. J Nutr 138, issue 1, 226S–234S.
26. Miller, TM, Abdel-Maksoud, MF, Crane, LA et al. (2008) Effects of social approval bias on self-reported fruit and vegetable consumption: a randomized controlled trial. Nutr J 7, 18.
27. Fraser, G & Ru, Y (2007) Guided multiple imputation of missing data – using a subsample to strengthen the missing-at-random assumption. Epidemiology 18, 246–252.
28. Michels, KB & Willett, WC (2009) Self-administered semiquantitative food frequency questionnaires: patterns, predictors, and interpretation of omitted items. Epidemiology 20, 295–301.
29. Ahn, Y, Paik, HY & Ahn, YO (2006) Item non-responses in mailed food frequency questionnaires in a Korean male cancer cohort study. Asia Pac J Clin Nutr 15, 170–177.
30. Fraser, GE, Yan, R, Butler, TL et al. (2009) Missing data in a long food frequency questionnaire: are imputed zeroes correct? Epidemiology 20, 289–294.
31. Kipnis, V, Subar, AF, Midthune, D et al. (2003) Structure of dietary measurement error: results of the OPEN biomarker study. Am J Epidemiol 158, 14–21.
32. Keogh, RH & White, IR (2014) A toolkit for measurement error correction, with a focus on nutritional epidemiology. Stat Med 33, 2137–2155.
33. Freedman, LS, Schatzkin, A, Midthune, D et al. (2011) Dealing with dietary measurement error in nutritional cohort studies. J Natl Cancer Inst 103, 1086–1092.
34. CONSORT Transparent Reporting of Trials (2016) Home page. http://www.consort-statement.org/ (accessed April 2016).
35. STROBE Statement: Strengthening the reporting of observational studies in epidemiology (2016) Home page. http://www.strobe-statement.org/ (accessed April 2016).
36. STROBE-nut: An extension of the STROBE statement for nutritional epidemiology (2016) About page. http://www.strobe-nut.org/content/strobe-nut (accessed April 2016).