The changing human diet
The human diet has undergone profound changes throughout human history and prehistory. The change in nutrient availability made possible by the use of fire in the preparation of food was so great that it has been proposed as a key driver for the evolution of the human brain and intelligence. Over the past 10 000 years the human diet has changed significantly as mankind has moved from a hunter-gatherer subsistence to that of an agriculturalist(1). The astonishing pace of change in the development of global agriculture and food distribution systems over the past century has resulted in further changes to the human diet. In the USA in the 20th century, intakes of added oils, shortening, meat, cheese and frozen dairy products increased significantly, with more recent increases in added sweeteners, fruit, fruit juices and vegetables(2). This has resulted in important changes in the intakes of individual nutrients; for example, the ratio of n-6 linoleic acid to n-3 α-linolenic acid has increased while the intakes of total n-3 and n-6 long-chain polyunsaturates as a percentage of energy have fallen(3). Rapid changes have also occurred in the diet in the UK over the past century (Fig. 1). Since 1940, when reliable figures were first collected, there has been a modest increase in the intake of meat but significant changes in the type of meat consumed, with a reduction in mutton and lamb and an increase in poultry(4). Over the same period vegetable consumption has decreased, whereas fruit consumption has increased. These and other changes in food consumption patterns have resulted in significant changes in the consumption of individual nutrients in the UK during the 20th century.
More recent advances have increased the potential for deliberate modification of the nutritional composition of the human diet. This trend is driven partly by pragmatism on the part of food producers and processors (efficiency and cost-effectiveness of production) and partly by the desire to make claims, explicit and implicit, for the beneficial effects of foods and food products. These different goals tend to lead to socioeconomic stratification: highly processed low-cost foods v. high-value premium products defined by health claims. Innovations in food technology, new ways of producing and processing foods and the increasing use of artificial vitamins and novel ingredients are changing the human diet in ways with which our dietary monitoring systems struggle to keep pace. Since our nutritional monitoring systems are largely based on the estimated intakes of foods and food groups, they have a limited ability to detect changes in nutrient intake that result from changes to the composition of foods. There is some monitoring of blood samples for nutrient status in national surveys, such as the National Health and Nutrition Examination Survey in the USA and the National Diet and Nutrition Survey in the UK, but the range of nutrients measured is relatively limited, the time spans over which change is measured are modest, and the changing methodologies used to measure nutrient status complicate the interpretation of trend data.
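This blind spot can be illustrated with a minimal sketch, assuming hypothetical foods and illustrative folic acid values (none of these figures come from the surveys discussed above). A survey that multiplies reported food intakes by a fixed composition table cannot register reformulation: the estimated nutrient intake is unchanged even when actual exposure has risen sharply.

```python
# A minimal sketch of why food-based monitoring misses reformulation.
# All foods and nutrient values below are hypothetical illustrations.

# Folic acid content (ug per 100 g) assumed in the survey's composition table
table_composition = {"breakfast_cereal": 170, "fat_spread": 0}

# Actual content after voluntary fortification/reformulation
actual_composition = {"breakfast_cereal": 330, "fat_spread": 1000}

# Reported daily consumption (g) from a food-frequency questionnaire
consumption_g = {"breakfast_cereal": 40, "fat_spread": 10}

def daily_intake_ug(composition):
    """Estimated intake = sum over foods of amount eaten x nutrient content."""
    return sum(consumption_g[food] / 100 * composition[food]
               for food in consumption_g)

print(f"Estimated intake: {daily_intake_ug(table_composition):.0f} ug/d")
print(f"Actual intake:    {daily_intake_ug(actual_composition):.0f} ug/d")
# Identical reported food intakes, very different nutrient exposure.
```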
Technologies such as genetic modification have the potential to further increase the artificiality of the human diet. Such technologies are currently not allowed in Europe for foods destined for human consumption, but the pressure to permit them is increasing in order to alleviate the food supply problems that will develop as a result of the linked issues of human population growth and climate change. An example of this dilemma is provided by ‘Golden Rice’. This variety of rice has been genetically engineered to produce β-carotene to help combat vitamin A deficiency, a significant public health problem worldwide, particularly in developing countries(5). This rice has recently been modified to further increase its content of β-carotene(6). β-Carotene is a naturally occurring nutrient and, while this food could help to reduce the prevalence of vitamin A deficiency worldwide, there have been concerns over the potential for β-carotene to promote the development of lung cancer among high-risk individuals such as smokers and asbestos workers(7).
Dietary supplements also have the potential to profoundly influence the intake of individual nutrients, and there has been a steady increase in the use of supplements in industrialised countries over recent decades. In the USA, the National Health and Nutrition Examination Survey (NHANES) II estimated supplement use at 28% among men and 38% among women aged 20 years and over in the period 1971–1975(8). By 1976–1980 it was 32% among men and 43% among women in the same age group. By 1988–1994 over 40% of adults were using one or more dietary supplements, and by 2003–2006 over 50% of adults were using supplements(8). Dietary supplements can contain nutrients in amounts as high as or higher than the Institute of Medicine's Dietary Reference Intakes, and it is acknowledged that they may already be contributing substantially to total nutrient intake(8). Multivitamins and multiminerals were the most frequently reported dietary supplements across all NHANES years, and the effect of supplements on total intake is likely to span a broad spectrum of nutrients. The UK is slightly behind the USA in terms of supplement use, but the trend is in the same direction. In the UK, it is estimated that 40% of women and 29% of men take dietary supplements(9). There is also social stratification in supplement use in the UK, with a higher frequency of use in non-manual than in manual groups. Cod liver oil and other fish-based supplements were the most commonly consumed supplements. Multivitamins and multiminerals were taken by 35% of those taking supplements, with 12% taking multivitamins with no minerals and the same proportion taking minerals with no vitamins(9).
The nutrient content of foods may also be manipulated for a variety of reasons. Efforts to improve population health through diet have led to the introduction of a number of regulatory measures over the past century, which have required the addition of individual nutrients to certain foods. ‘Restoration’ is the term applied to the addition of nutrients to foods to ensure that any nutritional losses during storage, handling and manufacturing are made good. An example of this in the UK is the requirement that white and brown flour, unlike wholemeal flour, must be fortified with thiamin, niacin, calcium and iron. ‘Substitution’ is relevant to the production of substitute foods. An example of this is the substitution of margarine for butter: it has been a legal requirement in the UK since 1967 to fortify margarine with vitamins A and D so that the levels are comparable with those in butter. ‘Fortification’ refers to the addition of vitamins or minerals irrespective of whether these nutrients were present originally, and it may be mandatory or voluntary. An example of the former is the introduction, in the USA, Canada and a number of other countries, of mandatory fortification of enriched grain products with folic acid. This is one of the most significant public health nutrition measures to be enacted in recent decades. Folic acid consumption by women is known to reduce the risk of neural tube defects in pregnancy, and women who intend to become pregnant are currently recommended to take folic acid supplements periconceptionally and up until 12 weeks of gestation. However, many pregnancies are unplanned, and fortification was introduced in order to reduce the incidence of neural tube defects in these pregnancies and in those groups who were not following the advice on supplement use. This measure produced significant changes in population folate status. The introduction of mandatory fortification in the USA was completed in early 1998 and resulted in an estimated 215–240 μg/d increase in the intake of folates(10) and a 144% increase in plasma folate concentration in the female population(11). Nutrients such as folic acid are also added, on a voluntary basis, to a wide range of foods such as breakfast cereals, spreads and a number of other product groups. When such foods are consumed in combination, or individually in large amounts, the result can be very high intakes. In some cases, the intake may exceed the upper level of folic acid intake considered to be safe (1 mg/d for adults)(12). In 2006 the UK Scientific Advisory Committee on Nutrition estimated that approximately 127 000 people in the UK exceeded the upper limit for folic acid, and that 86% of these excess intakes were attributable to consumption of foods, largely fat spreads and breakfast cereals, voluntarily fortified with folic acid(12).
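How voluntarily fortified foods can stack up against the upper level is simple arithmetic, sketched below with hypothetical products and illustrative folic acid contents (none of these figures are taken from the surveys or reports cited above):

```python
# A minimal sketch of cumulative folic acid intake from several sources.
# Products and per-serving amounts are illustrative assumptions only.

UPPER_LEVEL_UG = 1000  # adult upper level of 1 mg/d, expressed in ug/d

daily_sources_ug = {
    "fortified breakfast cereal (2 servings)": 2 * 200,
    "fortified fat spread": 100,
    "multivitamin supplement": 400,
    "fortified malted drink": 200,
}

total = sum(daily_sources_ug.values())
print(f"Total folic acid: {total} ug/d")
if total > UPPER_LEVEL_UG:
    # Each source looks modest on its own; the excess emerges in combination.
    print(f"Exceeds the {UPPER_LEVEL_UG} ug/d upper level "
          f"by {total - UPPER_LEVEL_UG} ug/d")
```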
Most consumers can exercise choice in relation to the foods they consume, but the increasing sophistication of food manufacture and processing may actually reduce our ability in practice to regulate our dietary intake of nutrients. It is difficult, but feasible, to optimise a diet for a few nutrients (e.g. in an effort to follow current recommendations on saturated fat, salt and energy), but the widespread addition of nutrients and novel ingredients to foods makes the process of product selection a formidable mathematical optimisation task, which may well be impractical for most consumers (see the sketch below). Even if individuals were prepared to spend time attempting this, the addition of individual nutrients and novel ingredients to foods in which they do not normally occur means that every single product has to be checked for a large number of nutrients. In addition, many in society (e.g. those in schools, care homes, hospitals, prisons, or even those who choose not to prepare their own food) have very little control over the products they eat. The net result is that, in practice, consumers have only a limited ability to resist industry-driven changes in the nutrient composition of foods even if they wished to.
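To make the optimisation claim concrete, the sketch below frames product selection as a linear programme over a tiny hypothetical product table (all prices and compositions are illustrative assumptions, not survey data). Even with four products and three constraints the solution is not obvious by inspection; a real shopping basket with dozens of fortified products and added nutrients scales this well beyond mental arithmetic.

```python
# A minimal sketch of diet selection as linear programming.
# Foods, compositions and prices are hypothetical illustrations.
import numpy as np
from scipy.optimize import linprog

foods = ["bread", "cheese", "cereal", "beans"]
# Rows: energy (kcal), saturated fat (g), salt (g), all per 100 g
nutrients = np.array([
    [265, 400, 380, 81],
    [0.6, 21.0, 1.0, 0.1],
    [1.0, 1.8, 1.1, 0.6],
])
cost = np.array([0.15, 0.90, 0.45, 0.10])  # price per 100 g

# Constraints: at least 2000 kcal/d, at most 20 g saturated fat, 6 g salt.
# "At least" becomes "<=" by negating the energy row.
A_ub = np.vstack([-nutrients[0], nutrients[1], nutrients[2]])
b_ub = np.array([-2000, 20, 6])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 10)] * len(foods), method="highs")
if res.success:
    for food, amount in zip(foods, res.x):
        print(f"{food}: {amount * 100:.0f} g/d")
# Every added nutrient or novel ingredient contributes another constraint
# row and another label to check on every product, which is why the text
# calls unaided product selection a formidable optimisation task.
```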
The nutrient composition of the human diet varies between populations and ethnic groups, and across geographical regions. Mankind has been able to adapt to these diets over time, but the current pace and nature of change in the human diet is new. The dietary changes described earlier, together with technological manipulation of foods and the increasing use of nutritional supplements, are resulting in mixtures of nutrients never before experienced in human evolutionary history, and this trend towards artificiality shows every sign of accelerating in the 21st century. The ‘single nutrient-deficiency symptom’ model has historically been very helpful in alleviating nutritional problems, but it has had little success in relation to the complex diseases that are now the main concern in industrialised countries. There is a growing awareness of the importance of nutrient mixtures and recognition that it may actually be multiple nutrient exposures and nutrient interactions that are the key to health. Whatever the mechanism, we have little understanding of how the profound changes in nutrient intakes already observed will affect the health of current and future generations. Our emerging understanding of the field of epigenetics, and of the way it is affected by nutrition, makes it particularly relevant to this issue.
Epigenetics
Epigenetics is emerging as perhaps the most important mechanism through which diet and nutrition can directly influence the genome(13). This is not surprising as the two key groups involved in epigenetic modification of the histones and DNA (methyl and acetyl groups) are at the heart of nutritional metabolism. Numerous studies have demonstrated effects on DNA methylation of alcohol(14–20), the B vitamins(21–29), protein(30–33), micronutrients(34–37), functional food components(38–42) and general nutritional status(43–45) (Table 1).
(Table 1 abbreviations: IGF2, insulin-like growth factor 2; LINE-1, long interspersed nuclear element-1; PEG3, paternally expressed gene 3; SNRPN, small nuclear ribonucleoprotein polypeptide N.)
At its most fundamental level, epigenetics is about information, and specifically the information present in the genome over and above that coded in the DNA sequence. This epigenetic information determines how, when and where the sequence information is used(13). Epigenetics is also about time and the way in which exposures can result in metastable epigenetic marks that persist for variable amounts of time (Fig. 2) and can influence biological function and health(13,46–50). It is this aspect of epigenetics that makes it particularly relevant to the rapid pace of change in the human diet.
Much of the work on basic epigenetic mechanisms has focused on reproduction, and this has led to a particular interest in the possibility that epigenetic status may be influenced by specific environmental factors such as nutrition in the critical period before birth, and even before conception. Epigenetics has been defined as ‘heritable changes in gene function that cannot be explained by changes in DNA sequence’(51), and many studies in animal models(15,25,26,32,33,36,39,45) and human subjects(21,29,34,43,44) have examined the effect of nutritional exposures during pregnancy on epigenetic status in the offspring. Nutritional factors at key life stages can result in relatively stable epigenetic marks that persist over decades, or even more than one lifetime, and have functional consequences for health. Most of the pregnancy studies have investigated nutrient exposure in mothers, but transgenerational epigenetic programming is also relevant to fathers and to the nutrients they consume during the epigenetic programming of the sperm that provide one half of the DNA of the offspring(20,52).
Epigenetic marking that is particularly relevant to the changing nutritional environment occurs in imprinted genes and the repeat elements. Imprinting refers to parent-of-origin-specific regulation of gene expression(53–55). The imprint is set early in development and passed down through the somatic cell lineage(53–55). Some imprinted regions remain stable over decades(56,57) but there is variation between individuals in the level of imprinting methylation(29,56,58,59). This variation, and how it comes about, is of considerable interest, as the process of imprinting, and imprinting status, is thought to be important in health and disease(46,47). Human imprinting syndromes, where the normal process of imprinting is disrupted, result in a wide range of phenotypes(60–62) including obesity(63) and diabetes(64). Loss of imprinting within the insulin-like growth factor 2 (IGF2) gene is characteristic of many cancers(65), and this even occurs in non-tumour tissue of individuals with cancer or at high risk of cancer(65).
Recent work from the Encyclopedia of DNA Elements (ENCODE) project has highlighted the importance of epigenetic control of the genome at large scale(66), and a large proportion (about 45%) of the genome is made up of repeat elements such as the long interspersed nuclear elements (LINE-1) and the short interspersed nuclear elements (SINE), including the Alu family of human SINE elements(49,50). These are frequently found in or near genes, and the chromatin conformation formed at retrotransposons may spread and influence the transcription of nearby genes(49,50). They can generate insertions, mutations and genomic instability and are responsible for sixty-five known genetic disorders(49,50). Methylation has the effect of repressing transposition(49,50). Like the imprinted genes, transposable elements are characterised by developmental stage-dependent epigenetic marking, and they are thought to play important roles in health and disease(48–50). The epigenetic status of repeat elements such as the intracisternal A particle (IAP) is resistant to reprogramming during primordial germ cell and pre-implantation development, and this has been proposed as a mechanism by which epigenetic status may be passed between generations through the germline(67). Dietary intake of the phytoestrogen genistein during pregnancy in animals alters the methylation status of IAP, and these changes appear to confer some protection against obesity in the offspring(39,46).
The ultimate methyl donor for epigenetic methylation reactions is S-adenosylmethionine, which is produced by the methylation cycle, and it has been reported that periconceptional folic acid use alters the level of methylation within IGF2(21). A larger study of human pregnancy also observed an effect of folic acid use on IGF2 methylation in the offspring, but the effect was restricted to folic acid use after 12 weeks of gestation, when women are no longer recommended to take the supplement(29). Late gestation use of folic acid was also associated with reduced LINE-1 methylation and altered paternally expressed gene 3 (PEG3) methylation. Three of the four significant associations with folic acid use and folate status were negative and one was positive, suggesting that it may be naive to assume that this is a simple substrate limitation effect or that the supply of nutrients involved in the methylation cycle will affect all genes equally. Imprinting occurs before fertilisation, but changes in imprinting methylation in response to nutritional exposures have been demonstrated in animal models into the early post-natal period for IGF2, after which the imprint is apparently fixed(68).
Imprinting requires removal of the epigenetic mark of the previous generation followed by sex-specific epigenetic marking in the gametes(62,69,70), although it is thought that some repetitive elements, such as LINE-1 and IAP, may only be partially demethylated in the primordial germ cells(62). Such retention of epigenetic information could be one way in which maternal exposures during key stages of development result in epigenetic changes in the offspring. There are also differences in timing, with some paternal alleles acquiring methylation before maternal alleles in the male germline and vice versa in the female germline(62), and variation in the process of imprinting by gene(62,71). There are also significant differences in the stage of development at which male and female gametes acquire imprints(62,70,72). There are reports that the balance of maternal and paternal imprints in the offspring may have functional significance(73). The difference in the timing of maternal and paternal imprinting and other epigenetic processes in relation to life stage is another way in which changing nutritional exposures could influence biological function and health.
Life stage-specific epigenetic marking is not restricted to the period before birth; there is evidence that it changes with age across the life-course. There is a loss of global DNA methylation with age(74,75), and this is reflected in a fall in methylation in some repeat elements(75–77) but not all(77). There are also reports of increases in CpG island methylation, and decreases in methylation in regions outside CpG islands, with age in solid tissues and blood-derived DNA(78). The picture in relation to individual genes is complicated(79), with methylation at some genes increasing(75,80,81) and at others decreasing(76) with age.
Implications
The human diet has undergone profound changes over recent generations and this trend is likely to accelerate in the 21st century. There is a growing awareness of the importance of diet and nutrition to human health but little understanding of how these temporal changes in diet are likely to affect the health of current and future generations. One problem is that our understanding of nutrient effects on health is largely based on observational studies in populations consuming diets representative of a particular time and location. Even intervention studies are carried out against a background intake of nutrients that may not be wholly relevant to future populations.
Epigenetic change has been demonstrated in response to a wide range of foods and nutrients and epigenetic status is emerging as a critical determinant of the response of the organism to the environment and its biological function and disease susceptibility. Dietary change may act directly on the epigenetic processes that result in health/disease but it can also programme metabolism and the future response to nutrition itself. There is a growing body of evidence, largely based on animal studies, demonstrating that nutritional exposures during particular life stages, and developmental windows, can influence epigenetic status, biology and physiology throughout life. Supporting evidence is also beginning to emerge from studies in human subjects.
Transgenerational programming is proposed to have developed in human subjects to confer flexibility of response to the environment: the hypothesis is that it allows the offspring genome to be optimally programmed in response to the maternal environment before birth, making it better fitted to respond metabolically to the environment it will experience. However, the profound dietary changes already occurring within less than a human life span, and the apparent acceleration of that change, mean that the nutritional environment experienced by the mother during pregnancy may not reflect the one in which the offspring will live. The concept of epigenetic programming is not limited to the period before birth; it also applies to nutritional effects across the life course.
We need better monitoring of changing nutrient intakes in the population, particularly in vulnerable sub-groups, but the rapid pace of change in food reformulation and fortification, and the increasing use of novel ingredients, present a challenge to our current food-based national monitoring systems. We need to understand better the consequences of intakes of novel mixtures of nutrients and their effect on health. Epigenetic programming, and specifically the concept of persistence of functional epigenetic states following a nutritional exposure, is particularly relevant to the issue of dietary change. We need to better understand the susceptibility of the genome to epigenetic marking, the critical temporal windows when this occurs, the persistence of these marks in time, and their effect on biological function and the response to diet.
Acknowledgements
The author is grateful to the Scottish Government (RESAS).
Financial Support
The Scottish Government (RESAS) provided support.
Conflicts of Interest
None.
Authorship
P. H. conceived the paper, carried out the analysis, and wrote the manuscript.