Quality: the reliability of information, assessed against a set of defined criteria that usually include assessment of financial disclosures, citation of references, transparency and provision of balanced and unbiased information(Reference Zhang, Sun and Xie10).
Accuracy: the factual correctness of information, typically in comparison to scientific literature or guidelines published by an authoritative group.
Publisher: the entity that has published information on a website or social media, for example, government or commercial organisation.
Dietary patterns have a significant influence on human health, and poor diet quality is the leading preventable risk factor contributing to the global burden of non-communicable disease(Reference Afshin, Sur and Fay1). Dietary behaviours are complex and have many influences that extend beyond physiological cues such as hunger and taste preferences(Reference Freeland-Graves and Nitzke2). Social and built nutrition environments also exert an influence on dietary behaviours, including nutrition information environments, which encompass the media and advertising(Reference Glanz, Sallis and Saelens3). Online environments are virtual, computer-based environments connected by the Internet, including websites and social media. They are now a prominent part of the media: 60 % of the global population has Internet access, with higher rates observed in high-income countries(Reference Johnson4). The WHO has outlined that such online environments can influence dietary behaviours through the provision of services and information(5).
In recent years, the prevalence and spread of health misinformation on online platforms have become a significant problem. In 2013, the World Economic Forum identified digital misinformation as one of the most dangerous trends of the era(6). Since then, the spread of health misinformation online has contributed to vaccine hesitancy and the ‘anti-vax’ movement, and likely contributed to the spread of COVID-19(Reference Merchant, South and Lurie7,Reference Hussain, Ali and Ahmed8). Internet and social media users can instantaneously publish information on any topic, regardless of their expertise or qualifications. Consequently, consumers are presented with an abundance of online information of variable quality and veracity(Reference Wang, McKee and Torbica9,Reference Zhang, Sun and Xie10). Furthermore, consumers typically have low levels of media literacy and critical evaluation skills(Reference Barton11,Reference Chen, Conroy and Rubin12). These factors have created a scenario in which time-poor consumers are inundated with online information that they are unable to adequately scrutinise(Reference Rubin13).
Dietitians, public health nutritionists and organisations have raised concerns about the potential for nutrition-related misinformation to cause harm(5,14) and to act as a barrier to healthy eating behaviours(15). Consumers are increasingly relying on the Internet and social media for nutrition-related information(Reference Pollard, Pulker and Meng16–Reference Goodman, Hammond and Pillo-Blocka20), which puts them at risk of being misinformed. Further, the public’s trust in nutrition science and authoritative voices in the field has been eroded(Reference Garza, Stover and Ohlhorst21,Reference Penders, Wolters and Feskens22). Numerous factors have contributed to this erosion of trust, including scientific uncertainty(Reference Holmberg23), failure to disclose conflicts of interest(Reference Garza, Stover and Ohlhorst21,Reference Penders, Wolters and Feskens22), insufficient context in nutrition communication and contradictory messaging about nutrition issues(14). Exposure to nutrition information that lacks context or seems contradictory can lead to confusion and backlash among consumers(Reference Nagler24,Reference Chang25). In turn, consumers are less likely to accept nutrition information from authoritative experts and may rely on information from less credible and qualified sources, further increasing their risk of being misinformed(Reference Nagler24,Reference Chang25).
The quality and accuracy of health information on websites and social media have been extensively researched. Numerous systematic reviews have summarised the literature on the quality or accuracy of health information on the Internet and social media to provide a more comprehensive overview of the information landscape(Reference Zhang, Sun and Xie10,Reference Eysenbach, Powell and Kuss26–Reference Daraz, Morrow and Ponce28). These reviews are able to capture large amounts of data about the quality or accuracy of online health information, which is not feasible in a single study due to the time-intensive process of quality and accuracy assessments, the plethora of information online and the continuous cycle of information being published, updated and deleted. However, to date, no systematic review has summarised the quality or accuracy of online information specific to nutrition. Therefore, the aims of the current review were to systematically search the literature in order to: (1) summarise the level of quality and accuracy of nutrition-related information in online environments and (2) determine whether the quality and accuracy of nutrition-related information varied between websites and social media or between different publishers of information.
Methods
The protocol for this systematic review was registered in PROSPERO: CRD42021224277 (https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=224277) in January 2021, and the review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines(Reference Page, McKenzie and Bossuyt29) and the PRISMA literature search extension (PRISMA-S)(Reference Rethlefsen, Kirtley and Waffenschmidt30). The PRISMA checklist is included in online Supplementary Table 1.
Inclusion and exclusion criteria
Peer-reviewed content analysis studies published in English after January 1989 that evaluated the quality and/or accuracy of nutrition-related information in online environments (websites or social media) were eligible for inclusion. For the purposes of this review, nutrition-related information was defined as information regarding healthy eating, dietary patterns, nutrients, nutritional requirements, nutritional composition of foods, nutritional supplements, health outcomes associated with foods and dietary patterns, food safety, food ethics and cooking. This definition was developed to incorporate key components of food literacy as defined by Vidgen and Gallegos(Reference Vidgen and Gallegos31). The year 1989 was selected because it is the year the World Wide Web was invented(32). Studies that evaluated information from only one website or information intended for health professionals or experts were excluded. Conference abstracts, theses, unpublished works, editorials, perspectives, commentaries, systematic reviews and original research that used methods other than content analysis were excluded. Studies that focused specifically on online advertising were also excluded because food and nutrition-related advertising has been extensively researched and is beyond the scope of this review.
Search strategy
CINAHL (EBSCOhost), MEDLINE Complete (EBSCOhost), Embase (Ovid), Global Health (EBSCOhost) and Academic Search Complete (EBSCOhost) were systematically searched on 15 January 2021. Each database was searched individually. Study titles and abstracts were searched, and the search strategy included terms related to four concepts: nutrition; AND online environments; AND quality/accuracy; AND information. Terminology was adapted to include subject headings relevant to the database being searched. The databases and search terms were selected after extensive pilot testing and consultation with a health librarian. Searches were limited to peer-reviewed journals and articles published after January 1989. To ensure that no relevant articles were missed, backwards and forwards searching of included articles was performed through hand searching of reference lists and Scopus searches of citing articles. Scopus searches were performed on 25 October 2021. See online Supplementary File 1 for further details of the search strategy.
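To illustrate how a four-concept strategy of this kind is assembled, the minimal sketch below ORs terms within each concept block and ANDs the blocks together into a single title/abstract query. The individual terms shown are hypothetical placeholders, not the terms used in this review; the actual terms and database-specific subject headings are given in Supplementary File 1.

```python
# Illustrative sketch only: builds a Boolean title/abstract query from four
# concept blocks. The terms below are placeholder examples (assumptions),
# not the review's actual search terms (see Supplementary File 1).
concepts = {
    "nutrition": ["nutrition", "diet*", "food*"],
    "online environments": ["internet", "website*", '"social media"'],
    "quality/accuracy": ["quality", "accura*", "misinformation"],
    "information": ["information", "content"],
}

def build_query(concept_terms):
    """OR terms within each concept block, then AND the blocks together."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in concept_terms.values()]
    return " AND ".join(blocks)

if __name__ == "__main__":
    print(build_query(concepts))
    # (nutrition OR diet* OR food*) AND (internet OR website* OR "social media") AND ...
```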
Screening
Results from database searches were downloaded and saved in an EndNote library (version X9), which was imported into Covidence software (Veritas Health Innovation). Duplicates were automatically removed during the import, and title and abstract screening was conducted in Covidence. Two researchers (ED and SM/RL) independently screened each article to determine its eligibility. Title and abstract screening disagreements were resolved by the researcher who did not initially screen the disputed reference. Full-text articles were also independently reviewed by two authors (ED and SM/RL). Disagreements were discussed among all authors until consensus was reached.
Data extraction
A data extraction template was developed, informed by a previous scoping of the literature and the aims of the systematic review. One author (ED) extracted data from all included references in Microsoft Excel (version 2108). If an included study contained components that were unclear or difficult to extract, the paper was circulated to all authors, who met to discuss it until the issue was resolved. The following data were extracted: study details (year of publication, country of origin, title, corresponding author’s contact details, aim, online environment investigated, nutrition-related topic of interest), methods (search strategy, inclusion and exclusion criteria, method of quality and/or accuracy evaluation, method of assessing inter-rater reliability), results (sample size, findings about information quality and/or accuracy and inter-rater reliability) and conflicts of interest. If a study focused on a broad health topic, only information relevant to nutrition was extracted.
Data synthesis
To assist in the interpretation of quality and accuracy findings, a classification framework developed for previous systematic reviews on health information quality was adapted(Reference Zhang, Sun and Xie10,Reference Eysenbach, Powell and Kuss26). Quality or accuracy was coded as: (1) poor, if the authors’ overall tone about the quality or accuracy of the information was cautious or unfavourable; (2) good, if the authors spoke positively and did not express concerns about the quality or accuracy of the information; (3) moderate, if the authors concluded with neither a negative nor a positive tone and discussed the risks and benefits of the information; or (4) varied, if it was explicitly stated that the information evaluated was of variable quality or accuracy(Reference Zhang, Sun and Xie10,Reference Eysenbach, Powell and Kuss26). All included studies evaluated quality or accuracy, and therefore all were eligible for synthesis with the framework. One author (ED) classified all articles, and 20 % were randomly selected to be classified by a second author (SM) for reliability, achieving 76 % agreement. Disagreements were resolved through discussion until consensus was reached.
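As a minimal sketch of how this four-category framework maps a reviewer's judgement to a classification, the hypothetical function below encodes the rules described above. The tone judgement itself remains a human decision; the label strings are assumptions chosen for illustration only.

```python
# Minimal sketch of the four-category synthesis framework described above.
# The tone judgement is made by a human reviewer; this function only records
# how that judgement maps to a classification (labels are illustrative).
def classify_findings(tone, explicitly_variable=False):
    """Map the authors' overall tone to poor/good/moderate/varied."""
    if explicitly_variable:
        # Authors explicitly stated the information was of variable quality/accuracy.
        return "varied"
    mapping = {
        "cautious_or_unfavourable": "poor",
        "positive_without_concerns": "good",
        "neutral_risks_and_benefits": "moderate",
    }
    try:
        return mapping[tone]
    except KeyError:
        raise ValueError(f"Unrecognised tone judgement: {tone}")

print(classify_findings("neutral_risks_and_benefits"))  # -> moderate
print(classify_findings("positive_without_concerns", explicitly_variable=True))  # -> varied
```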
Risk of bias
The Academy of Nutrition and Dietetics Quality Criteria Checklist was used to conduct the risk of bias assessments(33). This tool contains fourteen questions (four relevance and ten validity questions), and studies receive an overall rating of positive, neutral or negative, where a positive rating indicates low risk of bias and a negative rating indicates high risk of bias(33). Due to the design of the included studies, a number of questions in the tool were not relevant. Therefore, most consideration was given to questions one, two and seven, as specified for descriptive studies in the tool’s manual for use(33). For a study to receive an overall positive rating, questions one, two and seven must all have received a positive response. If one or more of these questions was rated as neutral or negative, a neutral or negative overall score was awarded, respectively. All risk of bias assessments were performed by one author (ED), and a random 20 % were independently reviewed by another author (RL) for reliability. Eighty-five per cent agreement was achieved, and disagreements were discussed until consensus was reached.
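A minimal sketch of this overall-rating rule is shown below, assuming each of questions one, two and seven has been answered 'positive', 'neutral' or 'negative' by a reviewer, and assuming a negative response takes precedence over a neutral one when both occur (the latter point is an interpretation, not stated explicitly in the checklist's manual).

```python
# Sketch of the overall risk of bias rating rule applied in this review
# (assumption: a negative response outweighs a neutral one when both occur).
def overall_rating(q1, q2, q7):
    """Return the overall rating from responses to questions 1, 2 and 7."""
    answers = (q1, q2, q7)
    if all(a == "positive" for a in answers):
        return "positive"   # low risk of bias
    if "negative" in answers:
        return "negative"   # high risk of bias
    return "neutral"

print(overall_rating("positive", "positive", "positive"))  # -> positive
print(overall_rating("positive", "neutral", "positive"))   # -> neutral
```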
Results
Description of included studies
Sixty-four studies, published between 1996 and 2021, were included in this review (Fig. 1). The number of studies published each year shows a generally increasing trend (Fig. 2). The first study to examine social media content was published in 2015, and at least two studies per year included social media data in subsequent years, except 2021, because the literature searches were run in January of that year. Reported data collection periods ranged from February 1996 to August 2020, and 16 (34·0 %) studies did not report when data were collected(Reference Alfaro-Cruz, Kaul and Zhang34–Reference Zarnowiecki, Mauch and Middleton50). A summary of extracted data for studies evaluating websites and social media is provided in online Supplementary Tables 2 and 3, respectively.
Characteristics of the included studies are reported in Table 1. Studies were fairly evenly split between those assessing quality and those assessing accuracy. The majority of included studies (82·8 %) evaluated information published on websites, and a wide range of nutrition topics were covered. Most studies (54·7 %) did not focus on information published in a specific region, and those that did generally evaluated information published in high-income countries. The number of websites, webpages and/or social media posts included in the studies’ samples varied greatly; the mean sample size was 165·7 (sd 359·1) and ranged from 4 to 2770.
* Studies may fall under more than one category.
Risk of bias assessments
Most studies were rated as negative (28·1 %) or neutral (51·6 %) for risk of bias; thirteen studies (20·3 %) received an overall positive rating (online Supplementary Tables 2 and 3). Negative or neutral ratings were typically given because of risk of bias in sample selection; for example, it was uncommon for more than one researcher to be involved in screening content, and reporting of inclusion/exclusion criteria and search methods often lacked detail. Negative or neutral ratings were also given because of risk of bias in the evaluation of information quality and accuracy; for example, in three studies a single rater performed all quality or accuracy evaluations with no measure of reliability, and thirteen studies did not report the number of raters involved.
Quality and accuracy assessment methods
Methods used to evaluate information quality varied across the forty-one studies that assessed quality (Table 2). The most common quality assessment methods were use of study-specific criteria developed by the study authors (23·4 %), the DISCERN Instrument (17·2 %) and the JAMA Benchmarks (10·9 %). The application of the JAMA Benchmarks was consistent across the studies that used this tool; however, the application of the DISCERN Instrument varied.
HONCode, Health on the Net Code; JAMA, Journal of the American Medical Association; Health Information Technology Institute; EQIP, Ensuring Quality Information for Patients; IPDAS, International Patient Decision Aid Standards; LIDA, Minervation validation instrument; MARS, Mobile App Rating Scale.
* Studies may fall under more than one category.
The majority of studies evaluating information accuracy assessed correctness against authoritative guidelines (n 16, 34·0 %)(Reference Keaver, Callaghan and Walsh41,Reference Shahar, Shirley and Noah45,Reference Shaikh and Scott46,Reference Sidnell and Nestel51–Reference Batar, Kermen and Sevdin63) , academic literature (n 13, 27·7 %)(Reference Dawson and Piller36,Reference Jimenez-Liñan, Edwards and Abhishek40,Reference Michael, Corey and Timothy43,Reference Lambert, Mullan and Mansfield58,Reference Neunez, Goldman and Ghezzi64–Reference Toth, O’Connor and Hartman72) or national dietary guidelines (n 12, 25·5 %)(Reference Sutherland, Wildemuth and Campbell48,Reference Zarnowiecki, Mauch and Middleton50,Reference Hirasawa, Yachi and Yoshizawa56,Reference Cardel, Chavez and Bian66,Reference Hopkins, Meedya and Ivers73–Reference Cannon, Lastella and Vincze80) . A scoring system was used for accuracy evaluations in 16 (34·0 %) studies(Reference Gholizadeh, Papi and Ashrafi-Rizi37,Reference Keaver, Callaghan and Walsh41,Reference Shaikh and Scott46,Reference Sutherland, Wildemuth and Campbell48,Reference Bernard, Cooke and Cole55,Reference McNally, Donohue and Newton59,Reference Post and Mainous60,Reference England and Nicholls62,Reference Batar, Kermen and Sevdin63,Reference Cardel, Chavez and Bian66,Reference Reddy, Kearns and Alvarez-Arango70,Reference Temple and Fraser71,Reference Htet, Cassar and Boyle75,Reference Joshi, Bhangoo and Kumar81,Reference Hires, Ham and Forsythe82) . Fourteen (29·8 %) studies included an evaluation of the comprehensiveness of information in accuracy assessments(Reference Jimenez-Liñan, Edwards and Abhishek40,Reference Shahar, Shirley and Noah45,Reference Shaikh and Scott46,Reference Sidnell and Nestel51,Reference Agricola, Gesualdo and Pandolfi53,Reference McNally, Donohue and Newton59–Reference Dornan and Oermann61,Reference Hoffman, Bross and Hamilton67,Reference Temple and Fraser71,Reference da Silva Gomes Monteiro, Macário de Assis and Alvim Leite74,Reference Htet, Cassar and Boyle75,Reference Cannon, Lastella and Vincze80,Reference Joshi, Bhangoo and Kumar81) . Accuracy was evaluated as a component of quality in seven (14·9 %) studies(Reference Gholizadeh, Papi and Ashrafi-Rizi37,Reference Keaver, Callaghan and Walsh41,Reference Sutherland, Wildemuth and Campbell48,Reference Hires, Ham and Forsythe82–Reference Rhoades and Ellis85) . Forty-seven studies (67·2 %) did not mention ethics or that approval from an ethics committee was not required.
Quality and accuracy results
Quality and accuracy coding classifications are presented in Table 3. Overall, 48·8 % of studies that investigated information quality were coded as poor. Of the studies that evaluated information quality on websites and social media, 47·1 % and 62·5 % were classified as poor, respectively. The distribution of poor, good and moderate classifications was otherwise broadly similar between studies evaluating websites and those evaluating social media. One study investigated both websites and YouTube content and found a slightly larger proportion of low-quality information on YouTube(Reference Lambert, Mullan and Mansfield58). Higher proportions of poor classifications for quality were observed for studies evaluating information about weight loss (n 5, 100 %) and supplements (n 3, 75 %), and a greater proportion of good classifications for information about child and maternal nutrition (n 2, 40 %), although the number of studies that evaluated these topics was small.
Overall, 48·9 % of studies assessing accuracy were coded as poor. Similar results were observed between studies that evaluated accuracy on websites and social media, with 47·7 % and 50 % classified as poor, respectively. One study compared the accuracy of website content with YouTube content, finding that accuracy was significantly higher for websites than for YouTube (P < 0·0001)(Reference Lambert, Mullan and Mansfield58). Higher proportions of poor classifications for accuracy were observed for studies evaluating information about weight loss (n 4, 100 %) and supplements (n 3, 100 %), although the number of studies that evaluated these topics was small. For some topics (immune function and sports nutrition), only one study was available, and it was rated as poor.
Findings about the quality and accuracy of information from different publishers varied between studies. Three found that government websites had lower quality scores compared with other categories, such as news sites and non-government organisations(Reference Joshi, Bhangoo and Kumar81,Reference Aslam, Gibbons and Ghezzi86,Reference Hirasawa, Saito and Yachi87) and one study found government sites provided the least accurate information(Reference Joshi, Bhangoo and Kumar81). Conversely, government websites received some of the highest scores for quality in four studies(Reference Sutherland, Wildemuth and Campbell48,Reference Hopkins, Meedya and Ivers73,Reference Lobo, Lucas and Herbert88,Reference Ng, Ahmed and Zhang89) and accuracy in one(Reference Sidnell and Nestel51). Commercial websites’ information quality or accuracy was poorer than other publishers in six(Reference Shaikh and Scott46,Reference Neunez, Goldman and Ghezzi64,Reference Modave, Shokar and Peñaranda69,Reference Aslam, Gibbons and Ghezzi86,Reference Hirasawa, Saito and Yachi87,Reference Ng, Ahmed and Zhang89) and four studies, respectively, while two studies found commercial entities published the highest quality information(Reference McNally, Donohue and Newton59,Reference Lobo, Lucas and Herbert88) , and one found commercial health channels published the most accurate information(Reference Ostry, Young and Hughes78). Blogs provided the poorest quality information in three studies(Reference Cardel, Chavez and Bian66,Reference Lobo, Lucas and Herbert88,Reference El Jassar, El Jassar and Kritsotakis90) and least accurate information in two(Reference Agricola, Gesualdo and Pandolfi53,Reference Cardel, Chavez and Bian66) , although blogs were found to provide the most accurate information in one study(Reference Modave, Shokar and Peñaranda69). Organisations and/or academic institutions received the most favourable quality assessments in four studies(Reference Shaikh and Scott46,Reference Sutherland, Wildemuth and Campbell48,Reference Joshi, Bhangoo and Kumar81,Reference Herth, Kuenzel and Liebl91) and provided the most accurate information in five studies(Reference Shaikh and Scott46,Reference Sidnell and Nestel51,Reference Agricola, Gesualdo and Pandolfi53,Reference Davison and Guan79,Reference Joshi, Bhangoo and Kumar81) . Two studies evaluated information published by nutritionists and dietitians, both stating that information from dietitians was of higher quality and accuracy(Reference Toth, O’Connor and Hartman72,Reference Hires, Ham and Forsythe82) . Two studies focused solely on Wikipedia, one was coded as good for quality and accuracy(Reference Cabrera-Hernández, Wanden-Berghe and Curbelo Castro92) and one coded as moderate for accuracy(Reference Temple and Fraser71). No differences in the quality or accuracy of information by different publisher categories were observed in two and five studies(Reference Gholizadeh, Papi and Ashrafi-Rizi37,Reference Smekal, Gil and Donald93) , respectively(Reference Jimenez-Liñan, Edwards and Abhishek40,Reference McNally, Donohue and Newton59,Reference Batar, Kermen and Sevdin63,Reference Hoffman, Bross and Hamilton67,Reference Htet, Cassar and Boyle75) .
A breakdown of results for each quality criterion was not always reported. Among studies that reported results for each criterion, the most consistently reported contributor to poor quality scores was a lack of reference to the original source of information, which was reported in eleven (26·8 %) studies(Reference Basch, Mongiovi and Berdnik35,Reference Sabbagh, Boyland and Hankey44,Reference Sutherland, Wildemuth and Campbell48,Reference Young, Bhulabhai and Papadopoulos52,Reference Hirasawa, Yachi and Yoshizawa56,Reference Cardel, Chavez and Bian66,Reference Modave, Shokar and Peñaranda69,Reference Toth, O’Connor and Hartman72,Reference Rhoades and Ellis85,Reference Ng, Ahmed and Zhang89,Reference El Jassar, El Jassar and Kritsotakis90). Two articles examined the correlation between information quality and accuracy: one observed a weak correlation (r = 0·250, P < 0·05)(Reference Shahar, Shirley and Noah45) and one observed no correlation (r = 0·18, P > 0·05)(Reference England and Nicholls62). In another study, almost half of the websites deemed low quality contained accurate information(Reference Lambert, Mullan and Mansfield58).
Discussion
This systematic review included content analysis studies that investigated the quality and/or accuracy of nutrition-related information published on websites and social media. Half of the included studies found that the quality and/or accuracy of the nutrition-related information examined was suboptimal. There was some variation in quality and accuracy between nutrition-related topics but very little consistency in findings about the level of quality or accuracy from different publishers of information. These results are discussed below under four substantive observations.
Overall quality and accuracy
A major finding of this review was the high prevalence of poor-quality information in online environments. This finding is consistent with three systematic reviews that investigated the quality of health information on websites and found it to be suboptimal(Reference Zhang, Sun and Xie10,Reference Eysenbach, Powell and Kuss26,Reference Daraz, Morrow and Ponce28). Further, a systematic review investigating the use of social media for communicating health information found that one of its biggest limitations was the lack of quality and reliability of the health information shared(Reference Moorhead, Hazlett and Harrison94). A slightly higher proportion of social media studies than website studies received an overall poor classification for quality, suggesting that information quality may be more of a problem on social media. Further research that evaluates and compares the quality of information from both websites and social media is required to confirm whether information quality is worse on social media.
Findings from the included studies indicate that there is a large amount of inaccurate nutrition information on websites and social media. These results are not surprising, given the widespread concerns about the prevalence and propagation of online health and nutrition misinformation(5,6,14,15). Findings about accuracy in this review are also consistent with Eysenbach et al.(Reference Eysenbach, Powell and Kuss26) and Zhang et al.(Reference Zhang, Sun and Xie10), who included accuracy as a component of quality in their systematic reviews of health information on websites, both concluding that, overall, the standard of information was poor. Further, a systematic review investigating the prevalence of health misinformation on social media identified that diet misinformation is present in greater amounts than misinformation about other health topics(Reference Suarez-Lledo and Alvarez-Galvez27).
Quality and accuracy by topic
Studies that evaluated information about weight loss or supplements received a larger proportion of poor classifications for quality and accuracy compared with other topics. Weight loss and supplements are large commercial industries(95,96). Assessment of financial and conflict of interest disclosures is a prominent component of quality assessment tools(Reference Zhang, Sun and Xie10), which may explain why information about weight loss and supplements was more frequently rated as poor quality. Further, the high rate of inaccuracies about these topics in online sources may mirror the high rate of misleading claims made about related products and services(Reference Rhodes and Wilson97). Consistent with findings about the accuracy of weight loss information in this review, Suarez-Lledo et al.(Reference Suarez-Lledo and Alvarez-Galvez27) found that misinformation about weight loss diets and promotion of eating disorders was present on social media in moderate amounts. Further, restrictive eating practices have been claimed to be healthy on websites and blogs(Reference Toth, O’Connor and Hartman72,Reference Ramachandran, Kite and Vassallo76). Inaccurate online information about weight loss diets may be a particular concern because dieting has been identified as a risk factor for the development of eating disorders, and engagement with health-related online content can contribute to poor body image, body dissatisfaction and restrictive eating(Reference Easton, Morton and Tappy98–Reference Haines and Neumark-Sztainer101). Therefore, inaccurate weight loss information in online environments may exacerbate the potential for harm and warrants further investigation.
Quality and accuracy by publisher typology
Included studies had contradictory findings about the quality and accuracy of information published by government agencies, academic institutions, blogs and commercial entities. These findings are concerning because consumers consider the publisher an indicator of nutrition information’s credibility(Reference Jung, Walsh-Childers and Kim102), and typically view organisations, academic institutions and government agencies as trustworthy, and commercial entities, Wikipedia and social media as less trustworthy, when selecting health information(Reference Sun, Zhang and Gwizdka103). As such, consumers may perceive nutrition information as credible because its publisher is considered credible, even if the information itself is of poor quality or inaccurate. Findings from this review suggest that the publisher may not always be a reliable indicator of the quality or accuracy of online nutrition-related information, and relying on the publisher alone to judge credibility may put consumers at risk of being misinformed.
Evaluation methods and limitations of included studies
Quality and accuracy assessment methods varied between studies, particularly for studies investigating information quality. The use of a range of different quality assessment methods has also been observed in other systematic reviews and creates difficulty in comparing findings about information quality because quality principles are not consistently measured(Reference Zhang, Sun and Xie10,Reference Eysenbach, Powell and Kuss26,Reference Daraz, Morrow and Ponce28) . Some studies measured accuracy as a component of information quality, while others did not consider accuracy at all. There was little evidence of a relationship between information quality and accuracy. This suggests that quality and accuracy should both be assessed when evaluating information so that all factors are considered when drawing conclusions about the overall credibility of information.
It was common for accuracy measures to include an assessment of comprehensiveness. Some studies classified missing information in the same way as inaccurate information. While it is important to provide complete information(14), the absence of information is not necessarily the same as the presence of inaccurate information. Accuracy measures that did not distinguish between inaccurate and incomplete information may therefore have overstated the presence of inaccurate information. Differing treatment of information completeness in the accuracy measures of included studies may also account for some of the variation in conclusions about which publishers provide accurate and inaccurate information. In future studies, accuracy measures that evaluate comprehensiveness should clearly distinguish between missing and inaccurate information.
Many of the studies included in this review had common limitations. First, most studies did not address ethical issues in their design or reporting. While ethics approval may not have been required because publicly available data were used, research in online environments, including social media, can raise ethical issues such as identifying included websites or social media profiles, particularly if those sites or profiles identify individuals. Second, it was rare for more than one researcher to be involved in sample screening, which is a potential source of bias; future studies of online health or nutrition information should aim to minimise this risk by involving more than one researcher in screening. Finally, no included study that evaluated the quality of social media content used a tool developed specifically for social media. Quality assessment tools that were not designed for social media may be inappropriate for measuring information on these platforms because they may not account for unique characteristics such as brief content and covert advertising(Reference Afful-Dadzie, Afful-Dadzie and Egala104,Reference Denniss, Lindberg and McNaughton105). A quality assessment tool for social media-based health information that considers these characteristics has recently been developed(Reference Denniss, Lindberg and McNaughton105) and is recommended for future studies examining the quality of health information on social media.
Strengths, limitations and directions for future research
This systematic review has several strengths, including the large number of studies included (n 64) and the wide range of nutrition-related topics examined. Further, it provides an analysis of research examining the quality and accuracy of online nutrition-related information since the Internet became widely available. This review also has limitations. First, a number of studies that examined information related to a broad health topic encompassing nutrition were excluded because data specific to nutrition could not be extracted; in these instances, authors were contacted, but most did not provide data. Second, readability is often considered a component of quality(Reference Zhang, Sun and Xie10); however, no search terms related to readability were included in the search strategy. Such terms were not included because it is common for studies to focus only on readability, and studies that considered only one aspect of information quality were beyond the scope of this review. Third, although the risk of bias assessment tool used was the most appropriate option available, several items were not relevant, and the application of the tool was modified for the purposes of this review. Finally, due to the different assessment methods employed in the included studies, a previously established coding framework was used to assist in the interpretation of findings. Findings were coded based on the authors’ overall tone about the quality or accuracy of information; however, the interpretation of results was not always consistent between studies, and in some studies the authors’ tone about accuracy was negative because information was incomplete rather than inaccurate. Additionally, agreement in coding decisions was low (76 %); however, disagreements were mainly between varied and moderate categorisations and therefore were unlikely to significantly impact the findings.
Findings from this review have implications for future research and practice. Few studies investigated the quality or accuracy of nutrition information on social media, and some popular social media platforms, such as TikTok and Instagram, are yet to be studied. Future research should focus on social media, particularly platforms that have not been evaluated. Online health misinformation is a complicated problem and effectively combatting it will likely require a range of solutions. Improving the eHealth and media literacy of consumers may be one such solution; however, more research about how eHealth and media literacy skills can be improved in various demographic groups is needed to inform future policy actions(Reference Griebel, Enwald and Gilstad106). Greater regulation and moderation of information published on online platforms have also been identified as a possible solution, particularly on social media(Reference Kington, Arnesen and Chou107). There has been a push for social media giants to accept greater responsibility for the publication and propagation of harmful health misinformation on their platforms; however, thus far, there has been limited progress(Reference Kington, Arnesen and Chou107,Reference Puri, Coomes and Haghbayan108) . Communication has been identified as a core nutrition competency and harnessing the Internet and social media for efficient, effective nutrition communication is recommended in Australia’s decadal plan for nutrition(Reference Lepre, Mansfield and Ray109,110) . Nutrition professionals and experts can counteract nutrition misinformation by publishing accurate and high-quality nutrition information online and avoiding common mistakes, such as the omission of reference to original source material. Utilising methods such as search engine optimisation, to ensure that credible information is visible, and referring to resources such as Guidance for Professional Use of Social Media in Nutrition and Dietetics Practice (Reference Klemm111), to ensure information is of high quality, are also recommended.
Conclusion
This systematic review found that poor-quality and inaccurate nutrition-related information is prevalent on websites and social media. These high rates of suboptimal nutrition-related information are concerning because the public increasingly relies on the Internet to source information about food and nutrition and is likely to encounter misleading information when doing so. Results from this review also indicate that the publisher of information is not a good indicator of its quality or accuracy. Consumers typically use the publisher of information as an indicator of its credibility, which puts them at greater risk of being misinformed when seeking nutrition information online. Future research should investigate methods to improve the public’s eHealth and media literacy to lessen the potential for harm caused by nutrition- and health-related misinformation. To improve the quality and accuracy of nutrition-related information available on websites and social media, credentialed experts and nutrition professionals should publish and promote their own high-quality and accurate information, and greater moderation, regulation and fact-checking of information should be carried out by social media companies and other online platforms that publish nutrition- and health-related information.
Acknowledgements
Acknowledgements: The authors would like to thank librarian Rachel West for her valuable input to the search strategy and selection of relevant databases. Authorship: This review was undertaken as part of the first author’s (E.D.) PhD studies. As such, E.D. was responsible for each aspect of the review, including conception, design, screening of articles, data extraction, data synthesis and writing of the first draft of the manuscript. Co-authors (R.L. and S.A.M.) supervised the research process, contributed to conception, decision making, screening of titles, abstracts and full texts and revising the manuscript. S.A.M. double coded 20 % of the included studies using the synthesis framework for reliability. R.L. conducted the risk of bias assessments on 20 % of the included studies for reliability. Ethics of human subject participation: N/A.
Financial support:
The first author (E.D.) was supported by a Deakin University Post-Graduate Research Scholarship (DURP 0000018830).
Conflict of interest:
There are no conflicts of interest.
Supplementary material
For supplementary material accompanying this paper visit https://doi.org/10.1017/S1368980023000873