
Tue Feb 19, 2019, 11:04 AM

Appalling Evolutionist ignorance on mitochondrial Eve.

Once again, note the dates in the footnotes! They come long after the supposedly flawed study our evolutionist cherry-picking colleague has cried about!


Genetic Clocks Verify Recent Creation
BY JEFFREY P. TOMKINS, PH.D. * | MONDAY, NOVEMBER 30, 2015

The idea of an evolutionary genetic clock in which DNA sequences steadily change, like a clock ticking off time, has played a major role in the ideas shaping modern biology. As employed by evolutionists, this time-measuring technique compares DNA sequences between different species to estimate supposed rates of evolution based on the number of changes in individual DNA letters (A, T, C, or G). When two totally different types of creatures are compared (e.g., horses and chickens), their differences are made to match up with evolutionary time through a procedure that calibrates the data with deep-time estimates taken from paleontology.1 While scientists who work in the field know this, the general public is completely unaware of this little trick.
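
The calibration step described above can be sketched in a few lines of Python. All numbers below are illustrative, not taken from any cited study; the factor of 2 reflects the standard assumption that substitutions accumulate independently on both lineages after a split.

```python
# Sketch of a calibrated molecular clock (illustrative numbers only).
# A substitution rate is first derived from a paleontology-based date,
# then applied to a second species pair.

def rate_from_calibration(prop_diff, divergence_years):
    """Substitutions per site per year, assuming changes accumulate
    independently on both lineages (hence the factor of 2)."""
    return prop_diff / (2 * divergence_years)

def divergence_time(prop_diff, rate):
    """Years since two lineages split, under the same model."""
    return prop_diff / (2 * rate)

# Calibrate on a pair assumed (from fossils) to have split 300 Myr ago
# and to differ at 12% of aligned sites -- hypothetical values.
rate = rate_from_calibration(0.12, 300e6)   # ~2e-10 subs/site/yr

# Apply the calibrated rate to a second pair differing at 3% of sites.
print(divergence_time(0.03, rate))          # ~75 million years
```

The key point, and the one the article objects to, is that every date produced this way inherits whatever deep-time date was fed in at the calibration step.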

Despite the fact that the genetic clock data are clearly manipulated to conform to vast amounts of evolutionary time, the results rarely support the overall evolutionary story. In fact, the following problems are often encountered.

Different genes give widely different evolutionary rates.
Different types of organisms exhibit different rates for the same type of gene sequences.
Genetic-clock dates that describe when these creatures supposedly split off to form new creatures (called divergence) commonly disagree with paleontology’s timescale despite being calibrated by it.1
What kind of data would researchers get if the assumptions of evolution and deep time were not used to bias the molecular-clock models? Would the DNA sequence variation actually provide usable information to help test creationist predictions about origins? Interestingly, we have a variety of reported studies from both secular scientists and creationist researchers in which DNA clocks were measured empirically—without deep-time calibrations—and yielded ages of only 5,000 to 10,000 years, not millions. Each of these test cases is discussed below, but first let’s visit the closely related concept of genetic entropy.

Genomic Entropy and Genetic Clocks

During the production of egg and sperm, DNA mutations can occur and be passed on to the next generation. When these are empirically measured within a family’s pedigree, an estimate of the mutation rate can be obtained. Scientists have measured this rate in humans in a number of studies and found it to be between 75 and 175 mutations per generation.2-6

Using this known data about mutation rates, a variety of researchers have used computer simulations to model the accumulation of mutations in the human genome over time.7-13 It was found that over 90% of harmful mutations fail to be removed over time and are passed on to subsequent generations. Because this buildup of mutations would eventually reach a critical level, it was postulated that humans would eventually go extinct at a point called error catastrophe.14,15 This incessant process of genome degradation over time with each successive generation is called genetic entropy.14,15 More amazingly, the process of genetic entropy is closely mirrored by the trend of declining human life-span documented in the Bible, especially in the 4,300 years since the global Flood.12,15-17 In addition to these genetic simulation studies, prominent evolutionists have shown that the problem of mutation accumulation in the human genome is accompanied by the inability of natural selection to remove them—an aspect of genetics completely contrary to evolutionary assumptions.5,18
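
The forward-time simulations cited above can be illustrated with a toy model. This is not the Mendel's Accountant program itself; the population size, mutation rate, and per-mutation fitness cost are invented for illustration. The point it demonstrates is the one at issue: when each mutation's effect is very small, selection barely slows the accumulation of the load.

```python
import random

# Toy forward-time mutation-accumulation sketch (illustrative only;
# not the published Mendel's Accountant simulation).
random.seed(1)

POP = 200            # individuals per generation
U = 100              # mean new mutations per individual per generation
EFFECT = -1e-4       # fitness cost per mutation (all slightly deleterious)
GENERATIONS = 50

# Each individual is represented only by its mutation count (load).
pop = [0] * POP

def fitness(load):
    # Additive fitness model; floor at zero.
    return max(0.0, 1.0 + EFFECT * load)

for _ in range(GENERATIONS):
    # Selection: parents drawn in proportion to fitness.
    weights = [fitness(load) for load in pop]
    parents = random.choices(pop, weights=weights, k=POP)
    # Reproduction: offspring inherit the parent's load plus a
    # binomial (approximately Poisson) number of new mutations.
    pop = [p + sum(random.random() < 0.5 for _ in range(2 * U))
           for p in parents]

mean_load = sum(pop) / POP
print(mean_load)   # close to U * GENERATIONS despite selection
```

With these parameters the mean load after 50 generations stays near 100 × 50 = 5,000 mutations: selection removes almost none of them, because the fitness differences between individuals are tiny compared to the sampling noise.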

The conclusions of these studies in modeling genetic entropy have been spectacularly confirmed by two additional secular studies based on empirical data that provided the same results, along with a timescale that paralleled biblical history.4,5 Both studies examined the number of rare single-nucleotide differences in the protein-coding regions (exons) of the human genome called the exome.19,20 One study analyzed 2,440 individuals and the other 6,515. Over 80% of the rare variability was considered to be harmful (associated with heritable disease), and researchers attributed the presence of these mutations to “weak purifying selection.”19 This essentially means that the alleged ability of natural selection to remove these harmful variants from human populations was somehow powerless to do so—the exact same results observed in the computer simulation studies discussed above.8,11-13

A major benefit of this type of genetic data is the fact that protein-coding regions are less tolerant of mutation than other parts of the genome, providing more reliable historical genetic information about human populations than more common types of variability. In addition, this type of data can be conveniently integrated into demographic models over known historical time and geographical space. When the researchers did this, they discovered a very recent and massive burst of human genetic diversification primarily associated with genetic entropy. One of the research papers stated, “The maximum likelihood time for accelerated growth was 5,115 years ago.”19 The other paper uncovered a similar timeline, which places the beginning of human genetic diversification close to the Genesis Flood and subsequent dispersion of people groups at the Tower of Babel. Importantly, this recent explosion of rare genetic variants clearly associated with genetic entropy also follows the same pattern of human life expectancy rapidly declining after the Flood.15,17

Mitochondrial DNA Variability and Genetic Clocks

One other important realm of molecular-clock research demonstrating a recent creation comes from examining mutation rates in mitochondrial genomes.21 The mitochondrial DNA (mtDNA) of an animal is typically inherited from the mother’s egg cell, and the mtDNA mutation rates can accurately be measured in pedigrees to produce a specific clock for that species. When these clocks are calibrated not by evolutionary timescales but by using the organism’s known generation time, a more realistic and unbiased estimate of that creature’s genetic clock can be obtained. By comparing these mitochondrial clocks in fruit flies, roundworms, water fleas, and humans, one creation scientist demonstrated that a creation event for all of these organisms (including humans) occurred not more than 10,000 years ago.21

Other creation scientists also conducted a study of human mtDNA variation in which they statistically analyzed over 800 different sequences and reconstructed a close approximation of Eve’s original mitochondrial genome.15,22 They found that “the average human being is only about 22 mutations removed from the Eve sequence, although some individuals are as much as 100 mutations removed from Eve.”15 The most recent empirical estimate of the mutation rate in human mitochondria is about 0.5 per generation.23 Based on this rate, even for the most mutated mitochondrial sequences, it has been determined that “it would only require 200 generations (less than 6,000 years) to accumulate 100 mutations.”15
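
The back-of-envelope arithmetic in the quotes above is simply mutation count divided by per-generation rate, times a generation time. The generation times below (25 and 20 years) are assumed values for illustration; the source quotes only state the resulting year totals.

```python
# Mutation-count clock arithmetic using the figures quoted in the text.
# Generation times (in years) are assumptions, not from the source.

def generations_needed(mutations, rate_per_generation):
    return mutations / rate_per_generation

def elapsed_years(mutations, rate_per_generation, generation_time):
    return generations_needed(mutations, rate_per_generation) * generation_time

# mtDNA: 100 mutations at ~0.5 per generation -> 200 generations.
print(generations_needed(100, 0.5))    # 200.0
# At an assumed 25-year generation time:
print(elapsed_years(100, 0.5, 25))     # 5000.0
# Y chromosome: 300 mutations at ~1 per generation, 20-year generations:
print(elapsed_years(300, 1.0, 20))     # 6000.0
```

The same two-line calculation covers both the mtDNA and the Y-chromosome figures cited later in the article.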

Lest critics say that these mtDNA studies are suspect because they were performed by creationists, it should be noted that evolutionists were actually the first to document these biblically supportive timeframes. Buried within a secular research paper back in 1997, the same trends recently observed by creationists regarding human mtDNA mutation rates were first reported but received little attention in the evolutionary community. The authors of the paper stated, “Using our empirical rate to calibrate the mtDNA molecular clock would result in an age of the mtDNA MRCA of only ~6,500 years.”24

One year later, another secular researcher remarked on this study, stating,

Regardless of the cause, evolutionists are most concerned about the effect of a faster mutation rate. For example, researchers have calculated that “mitochondrial Eve”—the woman whose mtDNA was ancestral to that in all living people—lived 100,000 to 200,000 years ago in Africa. Using the new clock, she would be a mere 6000 years old.25

The article went on to note that the new findings of faster mutation rates pointing to mitochondrial Eve about 6,000 years ago also contributed to the development of mtDNA research guidelines used in forensic investigations adopted by the FBI. Now, over 17 years later and using even more mtDNA data, creation scientists are spectacularly confirming this previously unheralded discovery.

In addition to the mtDNA clock data, scientists have also analyzed the Y chromosomes of modern men, which they found to differ from the consensus sequence of a Y-chromosome Adam by only about 300 mutations on average.15 The researchers state that “even if we assume a normal mutation rate for the Y chromosome (about 1 mutation per chromosome per generation), we would only need 300 generations (about six thousand years), to get 300 mutations.”15 As with the previous mtDNA work, this is the most straightforward way to apply the DNA clock concept, which also provides data in perfect agreement with a biblical timeframe for the origins of man.

Perhaps the most remarkable data supporting a young creation were recently published by a large group of secular scientists who are involved with mapping DNA variation across the entire human genome.26 This massive effort has just produced a huge dataset that the researchers call “a global reference for human genetic variation.” In their report, they state:

Analysis of shared haplotype lengths around f2 variants suggests a median common ancestor ~296 generations ago (7,410 to 8,892 years ago), although those confined within a population tend to be younger, with a shared common ancestor ~143 generations ago (3,570 to 4,284 years ago).26

Amazingly, these are fairly accurate dates for both the original creation event and the Babel dispersion after the Flood. The confined populations are descended from the people groups created at the Tower of Babel when the languages became confused. Of course, the median common ancestor of all humans would represent Adam and Eve.
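
The year ranges in the quoted passage follow from the generation counts via an assumed generation-time range. Solving backwards recovers that assumption; this is a consistency check, not a step stated in the paper itself.

```python
# Back out the generation time implied by the quoted figures
# (a consistency check; the assumed 25-30 years/generation is
# inferred here, not stated in the source).

def implied_generation_time(years, generations):
    return years / generations

low  = implied_generation_time(7410, 296)    # just over 25 years
high = implied_generation_time(8892, 296)    # just over 30 years
print(round(low), round(high))               # 25 30
```

So "~296 generations ago (7,410 to 8,892 years ago)" corresponds to taking roughly 25 to 30 years per generation, and the same range reproduces the 143-generation figure.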

Conclusion

The evolutionary paradigm of a molecular clock is deeply flawed in that it assumes evolution on a grand scale and literally involves conducting the whole analysis as a hypothetical exercise rather than as an empirical experiment. In contrast, creation scientists and even some secular researchers have taken a straightforward empirical approach without any assumptions about time, and the results yield dates of not more than about 6,000 to 10,000 years. Thus, when the mythical evolutionary restrictions are removed and the data are analyzed empirically, biblical timescales are the result.

References

Tomkins, J. P. and J. Bergman. 2015. Evolutionary molecular genetic clocks—a perpetual exercise in futility and failure. Journal of Creation. 29 (2): 26-35. Much of the content of this current article was published previously at a more technical level. See Tomkins, J. P. 2015. Empirical genetic clocks give biblical timelines. Journal of Creation. 29 (2): 3-5.
Nachman, M. W. and S. L. Crowell. 2000. Estimate of the mutation rate per nucleotide in humans. Genetics. 156 (1): 297-304.
Kondrashov, A. S. 2003. Direct estimates of human per nucleotide mutation rates at 20 loci causing Mendelian diseases. Human Mutation. 21 (1): 12-27.
Xue, Y. et al. 2009. Human Y Chromosome Base-Substitution Mutation Rate Measured by Direct Sequencing in a Deep-Rooting Pedigree. Current Biology. 19 (17): 1453-1457.
Lynch, M. 2010. Rate, molecular spectrum, and consequences of human mutation. Proceedings of the National Academy of Sciences. 107 (3): 961-968.
Campbell, C. D. and E. E. Eichler. 2013. Properties and rates of germline mutations in humans. Trends in Genetics. 29 (10): 575-584.
Sanford, J. et al. 2007. Mendel’s Accountant: A biologically realistic forward-time population genetics program. Scalable Computing: Practice and Experience. 8 (2): 147-165.
Sanford, J. et al. 2007. Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load. Lecture Notes in Computer Science. 4488: 386-392.
Sanford, J. C. and C. W. Nelson. 2012. The Next Step in Understanding Population Dynamics: Comprehensive Numerical Simulation. Studies in Population Genetics. M. C. Fusté, ed. InTech, 117-136.
Brewer, W. H., J. R. Baumgardner, and J. C. Sanford. 2013. Using Numerical Simulation to Test the “Mutation-Count” Hypothesis. Biological Information: New Perspectives. R. J. Marks III et al, eds. Hackensack, NJ: World Scientific Publishing, 298-311.
Gibson, P. et al. 2013. Can Purifying Natural Selection Preserve Biological Information? Biological Information: New Perspectives. R. J. Marks III et al, eds. Hackensack, NJ: World Scientific Publishing, 232-263.
Nelson, C. W. and J. C. Sanford. 2013. Computational Evolution Experiments Reveal a Net Loss of Genetic Information Despite Selection. Biological Information: New Perspectives. R. J. Marks III et al, eds. Hackensack, NJ: World Scientific Publishing, 338-368.
Sanford, J. C., J. R. Baumgardner, and W. H. Brewer. 2013. Selection Threshold Severely Constrains Capture of Beneficial Mutations. Biological Information: New Perspectives. R. J. Marks III et al, eds. Hackensack, NJ: World Scientific Publishing, 264-297.
Sanford, J. 2008. Genetic Entropy and the Mystery of the Genome, 3rd ed. Waterloo, NY: FMS Publications.
Sanford, J. C. and R. W. Carter. 2014. In Light of Genetics...Adam, Eve, and the Creation/Fall. Christian Apologetics Journal. 12 (2): 51-98.
Osgood, J. 1981. The Date of Noah’s Flood. Creation. 4 (1): 10-13.
Sanford, J., J. Pamplin, and C. Rupe. Genetic Entropy Recorded in the Bible? FMS Foundation. Posted on kolbecenter.org July 2014.
Crow, J. F. 1997. The high spontaneous mutation rate: Is it a health risk? Proceedings of the National Academy of Sciences. 94 (16): 8380-8386.
Tennessen, J. A. et al. 2012. Evolution and Functional Impact of Rare Coding Variation from Deep Sequencing of Human Exomes. Science. 337 (6090): 64-69.
Fu, W. et al. 2013. Analysis of 6,515 exomes reveals the recent origin of most human protein-coding variants. Nature. 493 (7431): 216-220.
Jeanson, N. 2013. Recent, Functionally Diverse Origin for Mitochondrial Genes from ~2700 Metazoan Species. Answers Research Journal. 6: 467-501.
Carter, R. W. 2007. Mitochondrial diversity within modern human populations. Nucleic Acids Research. 35 (9): 3039-3045.
Madrigal, L. et al. 2012. High mitochondrial mutation rates estimated from deep-rooting Costa Rican pedigrees. American Journal of Physical Anthropology. 148 (3): 327-333.
Parsons, T. J. et al. 1997. A high observed substitution rate in the human mitochondrial DNA control region. Nature Genetics. 15 (4): 363-368.
Gibbons, A. 1998. Calibrating the Mitochondrial Clock. Science. 279 (5347): 28-29. Emphasis added.
The 1000 Genomes Project Consortium. 2015. A global reference for human genetic variation. Nature. 526 (7571): 68-74.
* Dr. Tomkins is Research Associate at the Institute for Creation Research and received his Ph.D. in genetics from Clemson University.

Cite this article: Jeffrey P. Tomkins, Ph.D. 2015. Genetic Clocks Verify Recent Creation. Acts & Facts. 44 (12).

4 replies, 121 views

Replies to this discussion thread (4 replies):
nolidad, Feb 19 (OP)
  Troll2, Feb 19 (#1)
    nolidad, Feb 20 (#3)
    nolidad, Feb 20 (#4)
  SatansSon666, Feb 19 (#2)

Response to nolidad (Original post)

Tue Feb 19, 2019, 11:21 AM

1. Improved Calibration of the Human Mitochondrial Clock Using Ancient Genomes

Reliable estimates of the rate at which DNA accumulates mutations (the substitution rate) are crucial for our understanding of the evolution and past demography of virtually any species. In humans, there are considerable uncertainties around these rates, with substantial variation among recent published estimates. Substitution rates have traditionally been estimated by associating dated events to the root (e.g., the divergence between humans and chimpanzees) or to internal nodes in a phylogenetic tree (e.g., first entry into the Americas). The recent availability of ancient mitochondrial DNA sequences allows for a more direct calibration by assigning the age of the sequenced samples to the tips within the human phylogenetic tree. But studies also vary greatly in the methodology employed and in the sequence panels analyzed, making it difficult to tease apart the causes for the differences between previous estimates. To clarify this issue, we compiled a comprehensive data set of 350 ancient and modern human complete mitochondrial DNA genomes, among which 146 were generated for the purpose of this study and estimated substitution rates using calibrations based both on dated nodes and tips. Our results demonstrate that, for the same data set, estimates based on individual dated tips are far more consistent with each other than those based on nodes and should thus be considered as more reliable.

https://academic.oup.com/mbe/article/31/10/2780/1015730

Introduction
Accurate estimates of mutation rates are crucial for a thorough investigation of the evolutionary history of virtually any species (Ho and Larson 2006; Ho et al. 2008), including ours (Scally and Durbin 2012). Because differences between the DNA of any two individuals correspond to mutations accumulated since their common ancestor, knowing the rate at which such changes arise allows estimating the time since divergence between any two stretches of DNA. This approach, referred to as “the molecular clock,” has been frequently applied to date key chapters in human evolutionary history, such as the dawn of humankind millions of years ago (Sarich and Wilson 1967; Hasegawa et al. 1985; Goodman 1999; Carroll 2003) or the expansion of anatomically modern humans (AMHs) from an African cradle some 100 k years ago (Stringer and Andrews 1988; Ingman et al. 2000; Stringer 2002; Cavalli-Sforza and Feldman 2003; Relethford 2008; Tattersall 2009). Mitochondrial DNA (mtDNA) has often been the marker of choice for this kind of investigation thanks to attractive characteristics such as high copy number, apparent lack of recombination, and high substitution rate (Ingman et al. 2000).

Accurately estimating substitution rates is not a trivial affair. For humans, different methodologies have produced disparate and sometimes irreconcilable estimates, thus raising doubts about the timescale of major evolutionary events (Ho and Endicott 2008; Endicott et al. 2009; Scally and Durbin 2012). Despite recent statistical advances allowing for fine modeling of rate heterogeneity among sites and lineages (Welch and Bromham 2005), further research is still needed to improve our confidence in molecular estimates of mutation rates (Hipsley and Muller 2014). In particular, the most reliable calibration points for converting molecular genetic divergences among branches into absolute times remain to be identified.

Phylogenetic rate estimates have historically been based on a single, external human–chimpanzee divergence calibration point at the root of the tree (Ovchinnikov et al. 2000; Tang et al. 2002; Mishmar et al. 2003; Soares et al. 2009), but recent evidence (Ho et al. 2005; Ho, Shapiro, et al. 2007; Ho and Endicott 2008) has strengthened previous suspicions (Stoneking et al. 1992; Bandelt et al. 2006) that calibration points within the human tree would be preferable. Consequently, several estimates of rates have been generated following the adoption of a series of nodes (branching points in the tree) calibrated with archaeological and/or biogeographic information (Atkinson et al. 2008; Endicott and Ho 2008; Ho and Endicott 2008; Henn et al. 2009; Pereira et al. 2010).

Even more recently, the increasing availability of sequence data from ancient DNA (aDNA) (Green et al. 2008; Krause, Briggs, et al. 2010; Fu, Meyer, et al. 2013; Fu, Mittnik, et al. 2013) has allowed phylogenetic trees to be calibrated from their tips (Rambaut 2000; Drummond et al. 2002). As a first attempt, Krause, Fu, et al. (2010) combined the use of the human–chimpanzee divergence calibration with tip calibrations from whole-genome sequencing of two extinct hominids (one Neanderthal and one Denisovan 38,310 and 40,000 years old, respectively). Interestingly, their results indicate that the estimated divergence times are largely dominated by the prior assumption on the root and are not sensitive to the two external tips. More recently, Brotherton et al. (2013) and Fu, Mittnik, et al. (2013) both made use of two distinct panels of ancient AMHs (aAMHs) whole-mtDNA sequences to perform the first estimations of mtDNA substitution rates from tip calibrations alone.

Previously published phylogenetic rates are not readily comparable because they were generated from different sets of sequences, used different genes or mitochondrial-genome sections, or assumed different combinations of calibration methods (root-, node-, and tip based). Additional statistical methodological differences between studies include 1) consideration of rate heterogeneity among sites and lineages, 2) correction for multiple substitutions at the same site (i.e., saturation), 3) incorporation of uncertainty around the age of calibration points, and 4) different demographic models. All the factors listed above are known to potentially influence substitution rate and timescale estimates (Welch and Bromham 2005; Drummond et al. 2006; Ho and Larson 2006; Henn et al. 2009; Nielsen and Beaumont 2009; Sauquet et al. 2012; Molak et al. 2013).

In this study, we compared rates and timescale estimates obtained using various calibration approaches (root, internal nodes, and tips) but using the same data set and statistical framework. To this aim, we assembled a large, worldwide data set of 30 high-quality ancient and 320 contemporary whole human mtDNA genomes. Of the contemporary sequences, 146 were generated for the purpose of this study.

...

Discussion
In this study, we investigated the influence of the calibration strategy on estimates of human mitochondrial substitution rates and divergence times using whole mitochondrial genomes of 320 modern and 26 ancient samples. Comparisons of tip- versus node-based phylogenetic estimates have been performed in a number of other species before (Ho, Kolokotronis, et al. 2007; Gilbert et al. 2008; Ho et al. 2008). However, they all relied on comparisons of internal (radiocarbon ages of ancient sequences) to external (split between the species under study and an outgroup) calibration points. Because of our species' detailed archaeological record, humans offer a unique opportunity to compare calibration points based both on internal tips and nodes over a comparable timescale.

Our results show that the calibration strategy has no significant influence on the reconstructed topology but has a notable effect on substitution rate and divergence time estimates. Despite using the same analytical steps on a single data set, we obtained slower substitution rates (by a factor of ∼0.65) for tip-based estimates than for node-based ones (see supplementary appendix S12, Supplementary Material online). This result is at odds with observations made in numerous other species where the use of aDNA sequences yielded faster rate estimates (Ho, Kolokotronis, et al. 2007; Ho, Shapiro, et al. 2007; Millar et al. 2008; Ho, Lanfear, Bromham, et al. 2011). The faster rate for calibration based on ancient sequences has been explained by the “time-dependency of molecular rates” hypothesis, which postulates an acceleration over recent times in coding sequences due to the time needed for selection to purge slightly deleterious mutations (Ho, Lanfear, Bromham, et al. 2011). However, in our study the ranges of ages for calibrated nodes and tips largely overlap, which impedes any theoretical prediction of which rates should be faster. Internal nodes may nevertheless lead to a systematic overestimation of rates if genetic divergence precedes population divergence, as discussed by Peterson and Masel (2009).

Our tip-based estimates of substitution rates are highly consistent with the values recently published by Brotherton et al. (2013) and Fu, Mittnik, et al. (2013), who used two very different panels of aDNA sequences to calibrate the human mtDNA tree. Consistent with the differences between tip- and node-based calibrations we report here, our tip-based rate estimates are slower (by a factor of ∼0.63) than the ones obtained using internal node calibration by Endicott and Ho (2008) but faster (by a factor of ∼1.5) than previous fossil-calibrated rates (Green et al. 2008; Soares et al. 2009). Although we cannot know which of these estimates is closest to the “real” long-term substitution rate of human mtDNA, we can evaluate which calibration approach leads to the most consistent estimates. The answer to this question is unambiguous: the variance over individually calibrated substitution rates is 11 times smaller for tips than for internal nodes. Moreover, all 21 substitution rates estimated from aAMH sequences had overlapping 95% HPDs (fig. 3). The situation is strikingly different for node-based calibrations, where substitution rate estimates strongly depended on the demographic episode used for dating, with only four out of ten individually calibrated rates having overlapping HPDs.
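
The consistency comparison in this passage boils down to comparing the spread of individually calibrated rate estimates. A minimal sketch of that comparison, with invented rate values (the paper's actual estimates and its 11× figure are not reproduced here):

```python
from statistics import variance

# Hypothetical per-calibration substitution-rate estimates; the values
# are invented for illustration (units could be subs/site/Myr).
tip_rates  = [2.4, 2.5, 2.5, 2.6, 2.4, 2.5]   # tightly clustered
node_rates = [1.8, 3.9, 2.2, 3.1, 4.5, 1.5]   # widely scattered

# A much larger node variance is the paper's argument that
# tip calibrations are the more internally consistent ones.
ratio = variance(node_rates) / variance(tip_rates)
print(f"node/tip variance ratio: {ratio:.1f}")
```

A full analysis would also compare the 95% HPD intervals of each estimate, as the authors do in their figure 3, rather than raw variances alone.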

These results strongly suggest that tip-calibrated estimates are far more consistent than internal node-based ones. However, tip-based calibrations also point to slower mean substitution rates than those based on internal nodes. Thus, one important question we need to answer is whether tip-based calibrations are affected by some systematic bias that might lead to slow (and homogeneous) substitution rates. Such a bias could arise for various reasons, including 1) sequencing errors and/or PMDs in ancient sequences (Ho et al. 2005; Ho, Heupink, et al. 2007; Ho, Kolokotronis, et al. 2007; Rambaut et al. 2009), 2) data sets with low information content (Debruyne and Poinar 2009; Ho, Lanfear, Phillips, et al. 2011), 3) demographic and/or genetic model misspecification (Emerson 2007; Navascues and Emerson 2009; Ho, Lanfear, Bromham, et al. 2011), and 4) other factors such as a clumped distribution of ancient sample ages (Ho, Lanfear, Bromham, et al. 2011; Ho, Lanfear, Phillips, et al. 2011). However, we feel confident that none of the sources of bias listed above is affecting our rate estimates.

A bias generated by sequencing errors and/or PMDs in the ancient samples seems unlikely for a series of reasons. First, the sequences were generated by several laboratories, using different sequencing technologies, and were post-processed with different contamination assays. The read depth of all the included sequences is high, thus minimizing the risk of incorrectly called SNPs. Second, our new method devised to detect an excess/deficit of deaminated singletons suggests that none of the included sequences was affected by detectable levels of postmortem DNA damage. This was confirmed by the application of the PMD model in BEAST, which did not significantly influence the estimation of the substitution rates.

Our ancient sequence age randomization test clearly indicated that our data set contains a strong enough signal to accurately calibrate the reconstructed human mtDNA phylogeny. This, combined with the fact that convergence was systematically achieved in BEAST, disqualifies both the “low information content” and the “restricted ancient sample age distribution” hypotheses. Finally, we did not detect any significant influence of varying demographic models and prior choices on our Bayesian inferences (results not shown), which argues against the remaining listed systematic biases. Moreover, confidence in our tip-based estimates is bolstered by the results of two recent independent analyses (Brotherton et al. 2013; Fu, Mittnik, et al. 2013). Both articles report tip-based rate estimates very similar to ours despite using different genetic/demographic models in BEAST, as well as sequences of different ages and geographic origins. Taken together, all the available evidence suggests that tip-based calibrations of human mtDNA are fairly immune to systematic biases, constituting a strong case for them as the most biologically relevant.

The superior performance of tip calibration over node calibration is not entirely surprising when we consider the various sources of uncertainties associated with both calibration strategies. The uncertainty associated with tip-dating simply mirrors the uncertainty in the estimated age of the sequences—in this case, the error of the C-14 radiocarbon dating technology (Bowman 1990; Molak et al. 2013). This is a well-characterized source of error which can and should be integrated into phylogenetic inference (Drummond et al. 2006; Ho and Phillips 2009), even if most recent studies still rely on point values (Krause, Fu, et al. 2010; Fu, Mittnik, et al. 2013).

In contrast, the uncertainty around dated nodes is far more complex and multifactorial and is likely to lead to different degrees of reliability associated with each node. First, there is generally considerable uncertainty associated with the age of the colonization/migration event, including error in the dating of the archaeological, anthropological, and historical evidence. The age of the oldest evidence for human presence is unlikely to coincide exactly with the demographic expansion. Very generally, we would predict a delay in the appearance of traces of human presence after the expansion of AMHs into any new area (Signor and Lipps 1982). However, we could also think of cases where the initial settlement that left the archaeological evidence was followed by local population extinction and thus predates the age of the clade in the phylogenetic reconstruction; such a scenario has been suggested for the Qafzeh–Skhul early AMHs, who are generally believed to have been part of an early, failed out-of-Africa exit (Oppenheimer 2012).

Second, even if a demographic event had been accurately dated, the age of the node in the phylogenetic tree might not coincide with it for a number of reasons (Edwards and Beerli 2000; Ho and Phillips 2009; Balloux 2010; Firth et al. 2010; Crandall et al. 2012). For instance, the phylogenetic node of interest may correspond to the most recent common ancestor (MRCA) of the sampled sequences rather than the split of the population of interest. But even if the population was accurately sampled, we might still envision cases where the age of the coalescence does not coincide with the demographic event. Such a mismatch could arise if the founding population was already polymorphic for the marker under study, so that the estimated coalescence event is older than the population split (i.e., incomplete lineage sorting). A similar situation could arise when multiple waves of colonists contributed to a demographic expansion. Conversely, the population might have experienced a reduction in size later on, so that the TMRCA could coincide with this subsequent population bottleneck. We could think of additional scenarios, and the situation would become even more complex if we considered a possible effect of natural selection. To summarize, node calibration can be affected by many sources of error, and it is thus nearly impossible to model the age uncertainty around nodes satisfactorily.

There is an extensive debate in the literature on the existence and significance of time-dependent rates of molecular evolution (Ho, Shapiro, et al. 2007; Ho, Lanfear, Bromham, et al. 2011). This acceleration in substitution rates in recent generations has previously been described and discussed in several species (Ho, Shapiro, et al. 2007; Ho, Lanfear, Bromham, et al. 2011; Duchene et al. 2014), including humans (Henn et al. 2009). It is generally ascribed to the time needed for natural selection to weed out slightly deleterious mutations from the population (Endicott et al. 2009; Soares et al. 2009). We observed a subtle but significant negative linear correlation between the age of the ancient sequence used for calibration and the substitution rate estimated (supplementary appendix S13, Supplementary Material online), which is consistent with this prediction. Stronger evidence for purifying selection comes in the form of the stark difference in the substitution rates of first and second codons (PC1+2) versus third codon (PC3) (fig. 4A). Although PC3 mutations accumulate linearly with time, we see a clear acceleration in the rate starting at around 30 ka for the mostly nonsynonymous mutations at PC1+2.
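The negative correlation described above, between the age of the calibration sequence and the estimated substitution rate, can be sketched with synthetic numbers. All values below (baseline rate, decline per ka, noise level) are assumptions chosen for illustration; none come from the study:

```python
# Sketch of the time-dependent rate effect using synthetic data: younger
# calibration points yield higher apparent substitution rates because
# purifying selection has not yet weeded out slightly deleterious mutations.
# Baseline, decline, and noise level are all assumed values, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

ages_ka = rng.uniform(1, 40, 50)   # ages of hypothetical calibration sequences (ka)
baseline = 2.5e-8                  # assumed present-day rate (subs/site/year)
decline = -0.005e-8                # assumed change in apparent rate per ka of age
rates = baseline + decline * ages_ka + rng.normal(0.0, 0.02e-8, 50)

slope, intercept = np.polyfit(ages_ka, rates, 1)
print(f"fitted slope: {slope:.2e}  (negative: rate falls with calibration age)")
```

A simple linear fit like this recovers the negative slope, mirroring the subtle correlation the authors report in their supplementary appendix.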

There has also been a vigorous debate in the literature regarding the extent to which positive selection and local adaptation may have shaped the current distribution of human mitochondrial DNA, with a particular focus on the possible role of climate (Mishmar et al. 2003; Kivisild et al. 2006; Balloux et al. 2009; Soares et al. 2012). The fact that a relaxed clock fits our data best would be in line with the idea that rate variation in human mtDNA is subject to positive selection. Alternatively, this variation in evolutionary rates could stem from purifying selection on mtDNA being more effective in some human populations than others due to differences in population size and past demography. However, the effect we detect is extremely subtle and only statistically significant thanks to the large data set. We also failed to find any ecological or demographic factor that could satisfactorily explain inter-individual rate variation (results not shown). Strict and relaxed rates are extremely close, and the distributions of TMRCAs for all pairs of sequences estimated with either rate are essentially interchangeable (R2 = 0.98; fig. 4B) and would lead to near-identical results when using either rate to date past human demographic events.

Using both anatomically modern and archaic ancient mtDNA sequences to calibrate the tree, we obtain a rate of 0.75 × 10−8, 3.32 × 10−8, and 2.14 × 10−8 substitutions per site per year for the coding region PC1+2, PC3, and the whole molecule, respectively. The rates changed little when we ignored the external calibration tips (archaic humans). This result points to external calibration tips having limited influence on rate estimates, as previously discussed by Krause, Fu, et al. (2010).
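To make the reported rates concrete, the back-of-envelope arithmetic behind strict-clock dating is: two lineages that split t years ago are expected to differ at roughly d ≈ 2μt of their sites, so t ≈ d / (2μ). The rate below is the whole-molecule figure quoted in the text; the divergence value is a hypothetical example, not data from the study:

```python
# Minimal strict-clock arithmetic. mu is the whole-molecule rate quoted
# in the text; d is a hypothetical pairwise divergence chosen purely
# for illustration.
mu = 2.14e-8   # substitutions per site per year (whole mtDNA molecule)
d = 0.006      # hypothetical fraction of sites differing between two sequences

t = d / (2 * mu)   # years back to the common ancestor
print(f"implied TMRCA: {t:,.0f} years")   # ~140,000 years for this example
```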

Using tip calibration, we also estimated the coalescence dates of various nodes of interest in the tree. We obtained a value of 4.1 Ma (95% HPD: 2.9–5.4) for the divergence between Hominins and chimpanzees. This estimate may appear too young when compared with the dates that are generally derived from the fossil record. The Toumaï fossil (Sahelanthropus tchadensis), dated to 6–7 My (Brunet et al. 2002), is usually interpreted as being on the Hominin line and setting a minimum date for the divergence (Vignaud et al. 2002). The apparent conflict between our tip-based estimate and the fossil record could be explained if Toumaï were somewhat younger than previously reported (Brunet et al. 2005) or if we assumed more complex speciation scenarios where an initial split was followed by an extended period of gene flow before the final separation, as suggested by Patterson et al. (2006).

We estimated a split time between Homo neanderthalensis and H. sapiens mtDNAs of 389 ka (95% HPD: 295–498). This is consistent with the 407 ka (95% HPD: 315–506) estimates of Endicott et al. (2010) which used the radiocarbon dates of the same five Neanderthal genomes but younger than the 550 ka (95% HPD: 496–604) estimates of Soares et al. (2009) obtained assuming a “6.5 Ma human–chimpanzee divergence”-based calibration. Considering the widely held view that H. neanderthalensis evolved from H. heidelbergensis in Europe (Sawyer et al. 2007), one would expect the appearance of H. heidelbergensis in Europe to predate the time of the split between the two lineages. The Boxgrove tibia (from Sussex, England), attributed to H. heidelbergensis, dates to approximately 500 ka (Roberts et al. 1994), which would be in line with our estimate for the timing of the split and this interpretation of the fossil record.

Our estimate of 143 ka (95% HPD: 112–180) for the TMRCA of all modern human mtDNA is slightly younger than but highly consistent with the 157 ka (95% HPD: 120–197) value obtained by Fu, Mittnik, et al. (2013). We estimate the coalescence of the L3 haplogroup (the lineage from which all non-African mtDNA haplogroups descend), often used to date the “out-of-Africa” event, to 72 ka (95% HPD: 54–93), a value also consistent with the Fu, Mittnik, et al. (2013) estimate of 78 ka (95% HPD: 62–95). This estimate places a conservative upper bound of 93 ka on the time of the last major gene exchange between non-African and sub-Saharan African populations. As pointed out by Fu, Mittnik, et al. (2013), it is important to recognize that this divergence time may merely represent the most recent gene exchanges between the ancestors of non-Africans and the most closely related sub-Saharan Africans and thus may reflect only the most recent population split in a long, drawn-out process of population separation (Scally and Durbin 2012).

Finally, our results also allowed us to check whether the coalescence dates of some major haplogroups associated with human migrations (table 2) are consistent with the archaeological evidence (supplementary appendix S14). For six of the ten colonization/migration events considered (Postglacial expansion, Sahul, Sardinia, Japan, Madagascar, and Europe settlement), we observed 95% age HPD distributions of haplogroups overlapping with the archaeological dates. However, in the case of the Canary Islands, Remote Oceania, New Zealand, and the Americas, the estimated coalescence times were systematically older than the archaeological evidence. Potential explanations for such discrepancies include ancestral polymorphism in the founding population or complex demographic histories involving multiple waves of colonists.


In conclusion, our results demonstrate that the recent availability of high-quality ancient mtDNA genomes offers a powerful tool to robustly date past evolutionary events of our own species. Using the ages of ancient sequences leads to far more reproducible inferences and circumvents the large number of assumptions behind node and root calibration, which in turn should improve the estimation of human mitochondrial substitution rates. It should be possible to obtain increasingly narrow and precise substitution rate estimates by including additional ancient genomes in the analyses as they become available. In this context, ancient isolates from geographic regions that are not yet represented, such as Africa and Australia, would be particularly helpful, as these would allow fine calibration of further clades in the human mitochondrial genome. From a more general point of view, the growing availability of ancient sequences due to improvements in sequencing technology should allow reliable tip-calibrated phylogenetic rate and divergence time estimates to be obtained in many species for which internal node split-time information is presently not available.



Response to Troll2 (Reply #1)

Wed Feb 20, 2019, 12:19 PM

3. Several glaring errors to point out immediately.

One is the assumption that requires the use of pongid DNA.

Two, they have H. neanderthalensis and H. sapiens sapiens diverging over 350,000 years ago!

Three, they base their results only on specified areas instead of larger regions of mtDNA!

Four, they diverge chimps and humans at 6.5 mya.



Response to Troll2 (Reply #1)

Wed Feb 20, 2019, 02:48 PM

4. Now for a few more errors.

First, I am heartened to see that these scientists used far more samples than is the norm!

Second, they correctly pointed out how hard it is to establish a norm for generational mutations, so the consensus figure is between 100-300 mutations/generation!

Third, the ugly truth: the convergence of two divergent genera (ape and man) is not based on genetic research or sampling per se, but on algorithms created to crunch basic data and reach conclusions.

The simple command for the algorithm is this:

Given the data programmed, calculate how long it would take to find a common ancestor! (Now, that is English and simplified, but a correct representation of what the algorithm is designed for.)

The computer will spit out an answer. It is not concerned whether the answer is based on reality or not - it just works out the math and gives an answer based on the garbage in! You know, GIGO?

Anyone with the requisite expertise can do the same with the genome of man and that of a daffodil - all it will do is calculate based on the scant info programmed and tell you how long it would take to find the momma of both man and daffodil!
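The calculation being described can indeed be sketched in a few lines; a strict-clock estimator returns a number for any pair of inputs whatsoever, which is the garbage-in-garbage-out point. All values below are placeholders, not data from any study:

```python
# A naive strict-clock TMRCA calculator: time = divergence / (2 * rate).
# It will happily return an answer whether or not its inputs are
# biologically meaningful - the "garbage in, garbage out" point above.
def naive_tmrca(diff_per_site: float, rate_per_year: float) -> float:
    """Naive molecular-clock estimate of time to a common ancestor, in years."""
    return diff_per_site / (2 * rate_per_year)

print(naive_tmrca(0.012, 2.14e-8))  # plausible-looking human mtDNA inputs
print(naive_tmrca(0.75, 2.14e-8))   # nonsense inputs still produce a number
```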



Response to nolidad (Original post)

Tue Feb 19, 2019, 01:12 PM

2. Lmfao.

Didn't you just lightly insult cold warrior for copying you?

Now you do the same fucking thing.

Mr. 162 IQ

Lmao.. indeed.

