A new method known as ‘current-day dietary recall’ (current-day recall) is based on a mobile phone application called ‘electronic 12 h dietary recall’ (e-12HR). The method was designed to rank participants into categories of habitual intake for a series of key food groups. The present study compared current-day recall against a previously validated short paper FFQ.
Design
Participants recorded the consumption of selected food groups using e-12HR during twenty-eight consecutive days and then filled out a short paper FFQ at the end of the study period. To evaluate the association and agreement between both methods, Spearman’s correlation coefficients (SCC), cross-classification analysis and weighted kappa statistics (κw) were used.
Setting
Andalusia, Spain, Southern Europe.
Subjects
University students and employees over the age of 18 years.
Results
One hundred and eighty-seven participants completed the study (64·2 % female, 35·8 % male). For all participants and all food group intakes, the mean SCC was 0·70 (SCC≥0·62 were observed for all strata); the mean percentage of participants cross-classified into categories of ‘exact agreement+adjacent’ was 90·1 % (percentages≥87·8 % were observed for all strata); and the mean κw was 0·55 (κw≥0·53 in ten of the twelve strata).
Conclusions
For the whole sample and for all strata thereof, the current-day recall has good agreement with the previously validated short paper FFQ for assessing food group intakes, rendering it a useful method for ranking individuals.
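The agreement statistics used in this abstract — Spearman's correlation on ranked intakes and a weighted kappa over intake categories — can be sketched as follows. This is an illustrative Python implementation, not the study's own code; linear disagreement weights are one common choice for κw.

```python
import numpy as np

def rank(x):
    # Average ranks (ties share the mean rank), as used by Spearman's rho.
    x = np.asarray(x, float)
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average the ranks of tied values
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(x, y):
    # Spearman correlation = Pearson correlation of the ranks.
    return np.corrcoef(rank(x), rank(y))[0, 1]

def weighted_kappa(a, b, k):
    # a, b: category labels 0..k-1 from the two methods; linear weights.
    a, b = np.asarray(a), np.asarray(b)
    obs = np.zeros((k, k))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance agreement
    w = np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    return 1 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement between the two methods yields a Spearman coefficient and a weighted kappa of 1; independent classifications drive the kappa toward 0.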
Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed satisfactory performance, while non-adaptive log-linear regression was the best performing prediction method. We conclude that there is evidence that available algorithms for influenza detection and prediction can display satisfactory performance when applied to local diagnostic data during winter influenza seasons. When applied to local syndromic data, the evaluated algorithms did not display consistent performance. Further evaluation of, and research on, combining methods of these types in public health information infrastructures for ‘nowcasting’ (integrated detection and prediction) of influenza activity is warranted.
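Serfling-type detection, mentioned above, fits a cyclic regression baseline to historical counts and flags weeks whose counts exceed that baseline by a margin. A minimal illustrative sketch follows (a production version would typically exclude known epidemic weeks when fitting the baseline, which this sketch does not do):

```python
import numpy as np

def serfling_threshold(counts, period=52.0, z=1.96):
    # Fit a Serfling-style cyclic regression: intercept, linear trend,
    # and annual sine/cosine harmonics; return an epidemic threshold of
    # baseline + z * residual standard deviation for each week.
    counts = np.asarray(counts, float)
    t = np.arange(len(counts), dtype=float)
    X = np.column_stack([
        np.ones_like(t), t,
        np.sin(2 * np.pi * t / period),
        np.cos(2 * np.pi * t / period),
    ])
    beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
    baseline = X @ beta
    sd = np.std(counts - baseline)
    return baseline + z * sd
```

Weeks with `counts > serfling_threshold(counts)` would be flagged as showing increased influenza activity.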
Although dispatching ambulance crews from unaffected areas to a disaster zone is inevitable when a major disaster occurs, the effect on emergency care in the unaffected areas has not been studied. We evaluated whether dispatching ambulance crews from unaffected prefectures to those damaged by the Great East Japan Earthquake was associated with reduced resuscitation outcomes in out-of-hospital cardiac arrest (OHCA) cases in the unaffected areas.
Methods
We used the Box-Jenkins transfer function model to assess the relationship between ambulance crew dispatches and return of spontaneous circulation (ROSC) before hospital arrival or 1-month survival after the cardiac event.
Results
In a model whose output was the rate of ROSC before hospital arrival, dispatching 1000 ambulance crews was associated with a 0.474% decrease in the rate of ROSC after the dispatch in the prefectures (p=0.023). In a model whose output was the rate of 1-month survival, dispatching 1000 ambulance crews was associated with a 0.502% decrease in the rate of 1-month survival after the dispatch in the prefectures (p=0.011).
Conclusions
The dispatch of ambulances from unaffected prefectures to earthquake-stricken areas was associated with a subsequent decrease in the ROSC and 1-month survival rates in OHCA cases in the unaffected prefectures. (Disaster Med Public Health Preparedness. 2015;9:609–613)
The abrupt transition to heightened poliomyelitis epidemicity in England and Wales, 1947–1957, was associated with a profound change in the spatial dynamics of the disease. Drawing on the complete record of poliomyelitis notifications in England and Wales, we use a robust method of spatial epidemiological analysis (swash-backwash model) to evaluate the geographical rate of disease propagation in successive poliomyelitis seasons, 1940–1964. Comparisons with earlier and later time periods show that the period of heightened poliomyelitis epidemicity corresponded with a sudden and pronounced increase in the spatial rate of disease propagation. This change was observed for both urban and rural areas and points to an abrupt enhancement in the propensity for the geographical spread of polioviruses. Competing theories of the epidemic emergence of poliomyelitis in England and Wales should be assessed in the light of this evidence.
Adjustment for body weight and physical activity has been suggested as an alternative to adjusting for reported energy intake in nutritional epidemiology. We examined which of these approaches would yield stronger correlations between nutrients and their biomarkers.
Design
A cross-sectional study in which dietary fatty acids, carotenoids and retinol were adjusted for reported energy intake and, separately, for weight and physical activity using the residual method. Correlations between adjusted nutrients and their biomarkers were examined.
Setting
USA.
Subjects
Cases and controls from a nested case–control study of erythrocyte fatty acids and CHD (n 442) and of plasma carotenoids and retinol and breast cancer (n 1254).
Results
Correlations between intakes and plasma levels of trans-fatty acids were 0·30 (energy-adjusted) and 0·16 (weight- and activity-adjusted); for erythrocyte levels, the corresponding correlations were 0·37 and 0·25. Energy-adjusted intakes of linoleic acid and α-linolenic acid were more strongly correlated with their respective biomarkers than weight- and activity-adjusted intakes, but the differences were not significant except for linoleic acid (erythrocyte). Weight- and activity-adjusted DHA intake was slightly more strongly correlated with its plasma biomarker than energy-adjusted intake (0·37 v. 0·34). Neither method made a difference for DHA (erythrocyte), carotenoids and retinol.
Conclusions
The effect of energy adjustment depends on the nutrient under investigation, and adjustment for energy calculated from the same questionnaire used to estimate nutrient intakes improves the correlation of some nutrients with their biomarkers appreciably. For the nutrients examined, adjustment using weight and physical activity had at most a small effect on these correlations.
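The residual method referred to in this abstract regresses nutrient intake on total energy (or, in the alternative approach, on weight and physical activity) and uses the residuals, re-centred on the mean intake, as the adjusted values. A minimal sketch of the energy version, with illustrative variable names:

```python
import numpy as np

def energy_adjust(nutrient, energy):
    # Residual method: regress nutrient intake on total energy intake and
    # keep the residuals, re-centred on the mean nutrient intake so the
    # adjusted values stay on the original scale. Swapping `energy` for
    # weight and physical activity columns gives the alternative approach.
    nutrient = np.asarray(nutrient, float)
    energy = np.asarray(energy, float)
    X = np.column_stack([np.ones_like(energy), energy])
    beta, *_ = np.linalg.lstsq(X, nutrient, rcond=None)
    residuals = nutrient - X @ beta
    return residuals + nutrient.mean()
```

By construction the adjusted intakes are uncorrelated with energy intake while preserving the mean, which is why correlations with biomarkers can change after adjustment.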
Methods for conducting dietary assessment in the United States date back to the early twentieth century. Assessment methods have encompassed dietary records, written and spoken dietary recalls, and FFQs administered on paper and, more recently, via computer and internet applications. Emerging innovations use camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records, and two mobile phone applications that use crowdsourcing. The techniques under development show promise for improving the accuracy of food records.
Societal and technological changes render traditional study designs less feasible for investigation of outbreaks. We compared results obtained from case-case and case-control designs during the investigation of a Salmonella Enteritidis PT14b (SE14b) outbreak in Britain to provide support for validation of this approach. Exposures of cases were compared to concurrent non-Enteritidis Salmonella cases and population controls recruited through systematic digit phone dialling. Infection with SE14b was associated with eating in oriental restaurants [odds ratio (OR) 35·8, 95% confidence interval (CI) 4·4–290·9] and consuming eggs away from home (OR 13·8, 95% CI 1·5–124·5) in the case-case study and was confirmed through a concurrent case-control study with similar effect estimates and microbiological findings of SE14b in eggs from a specific chicken flock on a Spanish farm. We found that the case-case design was feasible, quick and inexpensive, potentially minimized recall bias and made use of already interviewed cases with subtyping results. This approach has potential for use in future investigations.
To assess race-specific validity of food and food group intakes measured using an FFQ.
Design
Calibration study participants were randomly selected from the Adventist Health Study-2 (AHS-2) cohort by church, and then by subject-within-church. Intakes of forty-seven foods and food groups were assessed using an FFQ and then compared with intake estimates measured using six 24 h dietary recalls (24HDR). We used two approaches to assess the validity of the questionnaire: (i) cross-classification by quartile and (ii) de-attenuated correlation coefficients.
Setting
Seventh-day Adventist church members geographically spread throughout the USA and Canada.
Subjects
Members of the AHS-2 calibration study (550 whites and 461 blacks).
Results
The proportion of participants with exact quartile agreement in the FFQ and 24HDR averaged 46 % (range: 29–87 %) in whites and 44 % (range: 25–88 %) in blacks. The proportion of quartile gross misclassification ranged from 1 % to 11 % in whites and from 1 % to 15 % in blacks. De-attenuated validity correlations averaged 0·59 in whites and 0·48 in blacks. Of the forty-seven foods and food groups, forty-three in whites and thirty-three in blacks had validity correlations >0·4.
Conclusions
The AHS-2 questionnaire has good validity for most foods in both races; however, validity correlations tend to be higher in whites than in blacks.
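De-attenuated correlations of the kind reported above correct the observed FFQ-vs-recall correlation for within-person ("day to day") variation in the replicate recalls. A minimal sketch of the standard correction, assuming the within- and between-person variance components of the recalls have already been estimated:

```python
import numpy as np

def deattenuate(r_obs, within_var, between_var, n_reps):
    # Correct an observed FFQ-vs-recall correlation for within-person
    # variation in the reference recalls: the more replicate recalls
    # (n_reps), the smaller the correction. Callers would normally cap
    # the result at 1.0, since the correction can overshoot.
    return r_obs * np.sqrt(1 + (within_var / between_var) / n_reps)
```

For example, with six replicate recalls and a within:between variance ratio of 2, an observed correlation of 0·4 de-attenuates to roughly 0·46.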
To develop a method to validate an FFQ for reported intake of episodically consumed foods when the reference instrument measures short-term intake, and to apply the method in a large prospective cohort.
Design
The FFQ was evaluated in a sub-study of cohort participants who, in addition to the questionnaire, were asked to complete two non-consecutive 24 h dietary recalls (24HR). FFQ-reported intakes of twenty-nine food groups were analysed using a two-part measurement error model that allows for non-consumption on a given day, using 24HR as a reference instrument under the assumption that 24HR is unbiased for true intake at the individual level.
Setting
The National Institutes of Health–AARP Diet and Health Study, a cohort of 567 169 participants living in the USA and aged 50–71 years at baseline in 1995.
Subjects
A sub-study of the cohort consisting of 2055 participants.
Results
Estimated correlations of true and FFQ-reported energy-adjusted intakes were 0·5 or greater for most of the twenty-nine food groups evaluated, and estimated attenuation factors (a measure of bias in estimated diet–disease associations) were 0·4 or greater for most food groups.
Conclusions
The proposed methodology extends the class of foods and nutrients for which an FFQ can be evaluated in studies with short-term reference instruments. Although violations of the assumption that the 24HR is unbiased could be inflating some of the observed correlations and attenuation factors, results suggest that the FFQ is suitable for testing many, but not all, diet–disease hypotheses in a cohort of this size.
The authors automated the selection of foods in a computer system that compiles and processes tailored FFQs. Several methods are available for selecting food items. The aim of the present study was to compare food lists made by MOM2, which identifies the food items with the highest between-person variance in intake of the nutrients of interest without taking other items into account, with food lists made by forward regression. The name MOM2 refers to the variance, which is the second moment of the nutrient intake distribution. Food items were selected for the nutrients of interest from 2 d of recorded intake in 3524 adults aged 25–65 years. Food lists covering 80 % of variance by MOM2 were compared with those covering 80 % of explained variance by forward regression with respect to the number and type of food items, and were evaluated on (1) the percentage of explained variance and (2) the percentage contribution to population intake computed for the selected items on the food list. MOM2 selected the same food items as forward regression for Ca, a few more for fat and vitamin C, and a few fewer for carbohydrates and dietary fibre. Food lists by MOM2 based on 80 % of variance in intake covered 75–87 % of the variance explained by regression for the different nutrients and contributed 53–75 % to total population intake. In conclusion, for developing FFQ food lists, it appears sufficient to select food items based on their contribution to variance in nutrient intake, without taking covariance into account.
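The MOM2 criterion — rank items by between-person variance in their contribution to the nutrient of interest, ignoring covariances between items — can be sketched as follows. The data layout (a persons × items matrix of each item's nutrient contribution) and the function name are illustrative, not taken from the study:

```python
import numpy as np

def select_items_by_variance(contrib, coverage=0.80):
    # contrib: persons x items matrix of each food item's contribution to
    # the nutrient of interest. Rank items by between-person variance and
    # keep the smallest set whose variances sum to `coverage` of the total
    # item variance. Covariances between items are deliberately ignored,
    # which is the MOM2 simplification this abstract evaluates.
    variances = contrib.var(axis=0, ddof=1)
    order = np.argsort(variances)[::-1]            # highest variance first
    cum = np.cumsum(variances[order]) / variances.sum()
    n_keep = int(np.searchsorted(cum, coverage) + 1)
    return order[:n_keep]
```

Forward regression would instead add items one at a time by explained variance, implicitly accounting for covariance with items already selected.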
To validate a 204-item quantitative FFQ for measurement of nutrient intake in the Adventist Health Study-2 (AHS-2).
Design
Calibration study participants were randomly selected from the AHS-2 cohort by church, and then subject-within-church. Each participant provided two sets of three weighted 24 h dietary recalls and a 204-item FFQ. Race-specific correlation coefficients (r), corrected for attenuation from within-person variation in the recalls, were calculated for selected energy-adjusted macro- and micronutrients.
Setting
Adult members of the AHS-2 cohort geographically spread throughout the USA and Canada.
Subjects
Calibration study participants included 461 blacks of American and Caribbean origin and 550 whites.
Results
Calibration study subjects represented the total cohort very well with respect to demographic variables. Approximately 33 % were males. Whites were older, had higher education and lower BMI compared with blacks. Across fifty-one variables, average deattenuated energy-adjusted validity correlations were 0·60 in whites and 0·52 in blacks. Individual components of protein had validity ranging from 0·40 to 0·68 in blacks and from 0·63 to 0·85 in whites; for total fat and fatty acids, validity ranged from 0·43 to 0·75 in blacks and from 0·46 to 0·77 in whites. Of the eighteen micronutrients assessed, sixteen in blacks and sixteen in whites had deattenuated energy-adjusted correlations ≥0·4, averaging 0·60 and 0·53 in whites and blacks, respectively.
Conclusions
With few exceptions validity coefficients were moderate to high for macronutrients, fatty acids, vitamins, minerals and fibre. We expect to successfully use these data for measurement error correction in analyses of diet and disease risk.
To develop and evaluate a pictorial, web-based version of the NCI diet history questionnaire (Web-PDHQ).
Design
The Web-PDHQ and the paper version of the DHQ (Paper-DHQ) were administered 4 weeks apart, with 218 participants randomised to order. Dietary data from the Web-PDHQ and Paper-DHQ were validated against a randomly selected 4 d food-record period (including a weekend day) and two randomly selected 24 h dietary recalls completed during the 4 weeks intervening between the two diet history administrations.
Setting
Research office in Reston, VA, USA.
Participants
Computer-literate men and women recruited through newspaper advertisements.
Results
Mean correlation of energy and the twenty-five examined nutrients between the Web-PDHQ and Paper-DHQ was 0·71 and 0·51, unadjusted and energy-adjusted by the residual method, respectively. Moderate mean correlations (unadjusted 0·41 and 0·38; energy-adjusted 0·41 and 0·34) were obtained between both the Web-PDHQ and Paper-DHQ with the 4 d food record on energy and nutrients, but the correlations between the Web-PDHQ and Paper-DHQ with the 24 h recalls were modest (unadjusted 0·31 and 0·29; energy-adjusted 0·37 and 0·26). A subset of participants (n 48) completing the Web-PDHQ at the initial visit performed a retest on the same questionnaire 1 week later to determine repeatability, and the unadjusted mean correlation was 0·82.
Conclusions
These data indicate that the Web-PDHQ has comparable repeatability and validity to the Paper-DHQ but did not improve the relationship of the DHQ to other food intake measures (e.g. food records, 24 h recall).
Use of well persons as the comparison group for laboratory-confirmed cases of sporadic salmonellosis may introduce ascertainment bias into case-control studies. Data from the 1996–1997 FoodNet case-control study of laboratory-confirmed Salmonella serogroups B and D infection were used to estimate the effect of specific behaviours and foods on infection with Salmonella serotype Enteritidis (SE). Persons with laboratory-confirmed Salmonella of other serotypes acted as the comparison group. The analysis included 173 SE cases and 268 non-SE controls. SE was associated with international travel, consumption of chicken prepared outside the home, and consumption of undercooked eggs prepared outside the home in the 5 days prior to diarrhoea onset. SE phage type 4 was associated with international travel and consumption of undercooked eggs prepared outside the home. The use of ill controls can be a useful tool in identifying risk factors for sporadic cases of Salmonella.
To compare intake estimates, validity and reliability of two summary questions to measure fish consumption with information from a detailed semi-quantitative food-frequency questionnaire (FFQ) on fish consumption.
Design
Population-based, cross-sectional study. Participants completed an FFQ and provided blood samples for erythrocyte membrane eicosapentaenoic acid (EPA) analysis. Aggregate measures of consumption of fresh/frozen/canned fish (fresh fish) and smoked/salted/dried fish (preserved fish) were generated from the FFQ and were compared with responses to the summary questions regarding intakes of similar items. Both methods were tested for validity, using correlation and linear regression techniques with EPA, and retest reliability.
Setting
Perth metropolitan area, Western Australia.
Subjects
One hundred and nine healthy volunteers of both sexes, aged 21–75 years.
Results
The summary fresh fish measure underestimated frequency and grams per week given by the aggregate question by about 50%, while estimates from the summary preserved fish measure were approximately three times that of the aggregate measure. Multiple linear regression analysis suggested that the aggregates accounted for more of the variation in EPA levels, but the difference was minimal. Intra-class correlations confirmed that both methods were reliable.
Conclusions
Our study indicates that extensive questioning yields different absolute intakes of fish compared with brief questioning, but does not add any information when ranking individuals according to overall fish consumption.
We evaluated the performance of the food-frequency questionnaire (FFQ) administered to participants in the US NIH–AARP (National Institutes of Health–American Association of Retired Persons) Diet and Health Study, a cohort of 566 404 persons living in the USA and aged 50–71 years at baseline in 1995.
Design
The 124-item FFQ was evaluated within a measurement error model using two non-consecutive 24-hour dietary recalls (24HRs) as the reference.
Setting
Participants were from six states (California, Florida, Pennsylvania, New Jersey, North Carolina and Louisiana) and two metropolitan areas (Atlanta, Georgia and Detroit, Michigan).
Subjects
A subgroup of the cohort consisting of 2053 individuals.
Results
For the 26 nutrient constituents examined, estimated correlations with true intake (not energy-adjusted) ranged from 0.22 to 0.67, and attenuation factors ranged from 0.15 to 0.49. When adjusted for reported energy intake, performance improved; estimated correlations with true intake ranged from 0.36 to 0.76, and attenuation factors ranged from 0.24 to 0.68. These results compare favourably with those from other large prospective studies. However, previous biomarker-based studies suggest that, due to correlation of errors in FFQs and self-report reference instruments such as the 24HR, the correlations and attenuation factors observed in most calibration studies, including ours, tend to overestimate FFQ performance.
Conclusion
The performance of the FFQ in the NIH–AARP Diet and Health Study, in conjunction with the study’s large sample size and wide range of dietary intake, is likely to allow detection of moderate (≥1.8) relative risks between many energy-adjusted nutrients and common cancers.
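An attenuation factor of the kind reported above can be estimated as the slope from regressing the reference measurement on FFQ-reported intake (regression calibration). A minimal illustrative sketch, under the assumption that the reference instrument is unbiased for true intake:

```python
import numpy as np

def attenuation_factor(ffq, reference):
    # Attenuation factor: the slope from regressing the reference
    # measurement (e.g. the mean of replicate 24HRs, standing in for true
    # intake) on FFQ-reported intake. Observed log relative risks from the
    # FFQ are roughly the true ones multiplied by this factor, so values
    # well below 1 indicate bias toward the null.
    ffq = np.asarray(ffq, float)
    reference = np.asarray(reference, float)
    return np.cov(ffq, reference, ddof=1)[0, 1] / ffq.var(ddof=1)
```

As the abstract notes, correlated errors between the FFQ and a self-report reference such as the 24HR tend to make factors estimated this way look better than they truly are.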
To compare two approaches to analysing energy- and nutrient-converted data from dietary validation (and relative validation) studies – conventional analyses, in which the accuracy of reported items is not ascertained, and reporting-error-sensitive analyses, in which reported items are classified as matches (items actually eaten) or intrusions (items not actually eaten), and reported amounts are classified as corresponding or overreported.
Design
Subjects were observed eating school breakfast and lunch, and interviewed that evening about that day's intake. For conventional analyses, reference and reported information were converted to energy and macronutrients; then t-tests, correlation coefficients and report rates (reported/reference) were calculated. For reporting-error-sensitive analyses, reported items were classified as matches or intrusions, reported amounts were classified as corresponding or overreported, and correspondence rates (corresponding amount/reference amount) and inflation ratios (overreported amount/reference amount) were calculated.
Subjects
Sixty-nine fourth-grade children (35 girls) from 10 elementary schools in Georgia (USA).
Results
For energy and each macronutrient, conventional analyses found that reported amounts were significantly less than reference amounts (every P < 0.021; paired t-tests); correlations between reported and reference amounts exceeded 0.52 (every P < 0.001); and median report rates ranged from 76% to 95%. Analyses sensitive to reporting errors found median correspondence rates between 67% and 79%, and that median inflation ratios, which ranged from 7% to 17%, differed significantly from 0 (every P < 0.0001; sign tests).
Conclusions
Conventional analyses of energy and nutrient data from dietary reporting validation (and relative validation) studies may overestimate accuracy and mask the complexity of dietary reporting error.
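The reporting-error-sensitive quantities defined above — correspondence rates and inflation ratios — can be sketched as follows. The item-level split of each reported amount into corresponding and overreported portions is an illustrative simplification, and the dict-based data layout is assumed, not taken from the study:

```python
def report_accuracy(reference, reported):
    # reference, reported: dicts mapping food item -> amount (e.g. kcal).
    # Reported items also in `reference` are matches; items not actually
    # eaten are intrusions. The correspondence rate counts reported amounts
    # up to the reference amount; the inflation ratio counts everything
    # reported beyond the reference (including intrusions in full).
    ref_total = sum(reference.values())
    corresponding = sum(min(amt, reference[item])
                        for item, amt in reported.items() if item in reference)
    overreported = sum(amt - min(amt, reference.get(item, 0))
                       for item, amt in reported.items())
    return corresponding / ref_total, overreported / ref_total
```

Unlike a conventional report rate, an accurate-looking total here cannot arise from intrusions cancelling out omissions, which is the masking effect the abstract warns about.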
To evaluate the accuracy of a simplified inventory procedure for assessing nutrient intake from vitamin and mineral supplements.
Design
Participants brought their supplements to a clinic. An interviewer conducted the supplement inventory procedure, which consisted of recording data on the type of multiple vitamin and single supplements used. For the multiple vitamins, the interviewer recorded the exact dose for a subset of nutrients (vitamin C, calcium, selenium). For other nutrients, we imputed the dose in multiple vitamins. The dose of all single supplements was recorded. Labels of the supplements were photocopied and we transcribed the exact nutrient label data for the criterion measure. Spearman correlation coefficients were used to assess precision of nutrient intakes from the simplified inventory compared to the criterion measure.
Setting/subjects
Data are from 104 adult vitamin supplement users in Washington state.
Results
Correlation coefficients between nutrient intake estimated from the simplified inventory compared to the criterion measure were high (0.8–1.0) for those nutrients (vitamin C, calcium, selenium) for which the interviewer recorded the exact dose contained in multiple vitamins. However, for nutrients for which imputations were made regarding dose in multiple vitamins, correlation coefficients ranged from good (0.8 for vitamin E) to poor (0.3 for iron).
Conclusions
The simplified inventory is rapid (4–5 min) and practical for large-scale studies. The precision of nutrient estimates using this procedure was variable, although excellent for the subset of nutrients for which the dose was recorded exactly. This study illustrates many of the challenges of collecting high quality supplement data.
To compare different statistical methods for assessing the relative validity of a self-administered, 150-item, semi-quantitative food-frequency questionnaire (FFQ) with 4-day weighed diet records (WR).
Design
Subjects completed the Scottish Collaborative Group FFQ and carried out a 4-day WR. Relative agreement between the FFQ and WR for energy-adjusted nutrient intakes was assessed by Pearson and Spearman rank correlation coefficients, the percentages of subjects classified into the same and opposite thirds of intake, and Cohen's weighted kappa.
Subjects
Forty-one men, mean age 36 (range 21-56) years, and 40 women, mean age 33 (range 19-58) years, recruited from different locations in Aberdeen, Scotland.
Results
Spearman correlation coefficients tended to be lower than Pearson correlation coefficients, and were above 0.5 for 10 of the 27 nutrients in men and 17 of the 27 nutrients in women. For nutrients with Spearman correlation coefficients above 0.5, the percentage of subjects correctly classified into thirds ranged from 39 to 78%, and weighted kappa values ranged from 0.23 to 0.66.
Conclusions
Both Spearman correlation coefficients and weighted kappa values are useful in assessing the relative validity of estimates of nutrient intake by FFQs. Spearman correlation coefficients above 0.5, more than 50% of subjects correctly classified and less than 10% of subjects grossly misclassified into thirds, and weighted kappa values above 0.4 are recommended for nutrients of interest in epidemiological studies.
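The cross-classification into thirds used above can be sketched as follows (illustrative Python; tertile cut-points are taken from each distribution's own quantiles):

```python
import numpy as np

def tertile_classification(x, y):
    # Classify each subject into thirds of each method's distribution and
    # report the percentage classified into the same third and into
    # opposite thirds (gross misclassification).
    def thirds(v):
        v = np.asarray(v, float)
        cuts = np.quantile(v, [1 / 3, 2 / 3])
        return np.searchsorted(cuts, v, side="right")   # labels 0, 1, 2
    tx, ty = thirds(x), thirds(y)
    same = np.mean(tx == ty) * 100
    opposite = np.mean(np.abs(tx - ty) == 2) * 100
    return same, opposite
```

Against the benchmarks recommended above, one would look for more than 50 % in the same third and fewer than 10 % in opposite thirds.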
Because the percentage of missing portion sizes was large in the Aerobics Center Longitudinal Study (ACLS), careful consideration of the accuracy of standard portion sizes was necessary. The purpose of the present study was to investigate the consequences of using standard portion sizes instead of reported portion sizes on subjects' nutrient intake.
Methods
In 2307 men and 411 women, nutrient intake calculated from a 3-day dietary record using reported portion sizes was compared with nutrient intake calculated from the same record in which standard portion sizes were substituted for reported portion sizes.
Results
The standard portion sizes provided significantly lower estimates (by more than 20 %) of energy and nutrient intakes than the reported portion sizes. Spearman correlation coefficients between the two methods were high, ranging from 0.67 to 0.93, and the agreement between the methods was fairly good. Thus, in the ACLS the use of standard portion sizes rather than reported portion sizes did not appear suitable for assessing absolute intake at the group level, but did appear to rank individuals well according to nutrient intake. These results were confirmed in the Continuing Survey of Food Intakes by Individuals (CSFII), in which portion size assessment was optimal. When the standard portion sizes were adjusted using a correction factor, their ability to assess absolute nutrient intake at the group level was considerably improved.
Conclusions
This study suggests that the adjusted standard portion sizes may be able to replace missing portion sizes in the ACLS database.
We examine (1) the extent to which seasonal diet assessments correctly classify individuals with respect to their usual nutrient intake, and (2) whether the magnitude of true variation in intake between individuals is seasonal. These effects could lead, respectively, to bias in estimates of relative risk for associations between usual nutrient exposure and disease, and to an increase in required sample size.
Subjects and setting
One hundred and twenty-seven families in four regions of the Japan Public Health Center (JPHC) Cohort Study.
Design
On average, 48 weighed daily food records were collected per family over six seasons of 1994 and 1995.
Results
A random slopes regression model was used to predict the correlation between seasonal and annual average intakes, and to estimate true between-person variation in intakes by season. Mean vitamin C intake was greatest in summer and autumn, and seasonal variation was attributable to the consumption of fruit and vegetables. Predicted correlations between seasonal and annual average vitamin C intake ranged from 0.62 to 0.87, with greatest correlations in summer and autumn. True between-person variation in vitamin C intake was also strongly seasonal, ranging from 45 to 78% of total variance, and was again greatest in summer and autumn. These effects were less seasonal among energy and 13 other nutrients.
Conclusions
It may be possible to substantially reduce both the seasonal misclassification of individuals with respect to their usual vitamin C intake and the required sample size by asking subjects to report high-season intake of fruit and vegetables in the JPHC Study.