Bacteria are becoming increasingly resistant to antibiotics; some butterflies and fish are developing new colours for better camouflage; and a series of laboratory experiments has revealed small but significant changes in various microorganisms. Are these phenomena conclusive evidence of evolution unfolding before our eyes?

In the first part of this article, we presented some significant examples of “evolution before our very eyes”[1] from an evolutionary perspective. These examples illustrated the observable and testable phenomenon of microevolution, and were offered as pointing towards macroevolution, which is assumed to be the inevitable long-term result of microevolution.

The creationist interpretation of the evidence presented in the first part of this series is that, while microevolution is real, it is limited and cannot produce macroevolution in the long term. In what follows, we will investigate microevolutionary processes more deeply to test their limits.

What is information?

Firstly, it is important to clarify a central issue in the creation-evolution debate: disagreement does not concern the existence of a complex “design” or “technical plan” governing the development and life of every organism. Everyone agrees that this “blueprint for construction and operation” exists in DNA and other molecular mechanisms. In fact, the existence of a plan for the composition of life is immediately obvious to anyone merely observing these extremely complex, self-replicating mechanisms known as “living organisms.”

It is not the existence of the plan of life that is in dispute, but rather the identity of its author. Legitimately drawing on the totality of human experience with complex and functional mechanisms, creationists argue that there can be no such design without an intelligent designer. Evolutionists, on the other hand, propose naturalistic mechanisms (random changes, natural selection, and long periods of time) to explain the existence of the “technical plan” of life. They do not deny the existence of the functional plan, only the existence of its author, proposing natural laws in its place.

This clarification is essential to elucidate a term that is more difficult to define in context, namely “information”. The development and functioning of an organism can be broadly reduced to its blueprint, which is essentially just genetically encoded information. Similarly, the construction and operation of a submarine can be reduced to its technical blueprints, which is also information. However, the nature and workings of the technical blueprints of life are much harder to figure out. Unlike the technical language of engineering, which is used to draw up submarine plans, the genetic language used to draw up the plans of life is, for the most part, unknown. Despite remarkable advances in molecular biology, we are still “astronomically” far from being able to read an organism’s plan from its genetic information in the same way that an engineer reads the technical plan of a submarine.

If we understood this language and could interpret the technical plan of an organism, it might be easier to determine whether it is the work of an intelligent author or merely the consequence of purposeless natural laws. But we do not yet understand it. In this context of limited knowledge, it is easier to acknowledge that the information in the “technical plan” of life means different things to creationists and evolutionists. From a creationist perspective, information is viewed through the lens of human engineering, akin to the information present in the technical plans of a submarine; the emphasis is on determinism, high specificity, and precise goals. From an evolutionary point of view, things are viewed less strictly, owing to the belief in a naturalistic origin of information; the focus is on informational redundancy, variability, functional versatility, and the absence of a predetermined purpose, so that information has no functional value except in context.

Throughout this article, we will use the term “functional information” in its broadest sense: information is anything that describes a functional biological design, whether active or latent, at any level, from the molecular to the anatomical. For clarity, this definition deliberately disregards the question of the origin of this information—whether it is the result of intelligent design or natural processes.

Mechanisms of microevolution

It is evident that species can undergo significant changes in a relatively short period of time. These morphophysiological changes are undoubtedly generated by changes and rearrangements of genetic information in their blueprints. The mechanism by which these changes occur, especially its ability to generate new functional information, is the main point of conflict between the evolutionary and intelligent design paradigms. Regardless of this conflict, however, changes can be divided into three broad categories from a strictly informational point of view:

Microevolution as loss of information

Most mutations are rightly considered detrimental to an individual’s survival. In humans, mutations degrade the existing genetic code at such a rate that, according to some calculations, our species should have disappeared at least 100 times since its estimated appearance.[2] The neutral theory argues that, although most mutations are harmful or even lethal, such mutations rarely become fixed in a species’ genome; most mutations that do become fixed are neutral or pseudoneutral. However, the cumulative effect of acquiring pseudoneutral mutations over time, even when mitigated by sexual reproduction,[3] is still harmful to genetic information.[4] The problem of “mutation-induced extinction” remains unresolved from an evolutionary point of view, and empirical data suggest that the genetic blueprints of organisms are degrading rather than accumulating new functions and valences.
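To convey the scale of the problem, a standard population-genetics sketch of the mutation load may help. This is a textbook calculation in the spirit of Kondrashov’s argument, not a figure taken from his paper, and it assumes multiplicative fitness effects and mutation-selection balance. If U is the average number of new deleterious mutations per genome per generation, the mean fitness of the population at equilibrium is

\bar{w} = e^{-U}

so each individual would need to leave, on average, e^{U} surviving offspring just to keep the population constant: about 3 offspring for U = 1, but roughly 22,000 for U = 10. This is the kind of arithmetic behind the question in Kondrashov’s title.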

Furthermore, even when mutations do lead by chance to new functional information, this is usually achieved through a net loss of functional information. For instance, a mutation that destroys a bacterium’s ability to metabolise a certain sugar could endow it with the “new” ability to thrive in environments where that sugar would otherwise be harmful. Michael Behe[5] reviewed the main cases of new functions acquired through microevolution reported in the literature and concluded[6] that, in most cases, the reported “new” function was merely the result of the degradation of older functions, as in the sugar-metabolism example above.

While, in a broader sense, even the destruction of existing functional information can be considered new information, this mechanism cannot, in any way, underlie the process of “complexification” of life through natural evolution. The basic principle (and an exercise in common sense) is that when random changes occur in an organised mechanism, as in the case of random mutations, the degree of organisation of the system decreases—often catastrophically—in almost all cases. Only in a tiny number of cases can some benefit be observed, and this is mostly based on a net loss of information, as Behe also found. Ultimately, the disastrous ratio of harmful to beneficial mutations can only lead to species extinction, not the complexification of functional information required by macroevolution.

Microevolution as adaptability

However, there are also well-documented examples of microevolution. What could cause a change in colour and behaviour in a population of guppy fish within just a few dozen generations after predatory fish were introduced into their environment? And how did lizards transferred to a Croatian island develop new digestive tract structures (cecal valves) suited to their new, predominantly vegetarian diet in just a few decades? Isn’t this accumulation of new functional information too rapid to have resulted from random mutations?

In such cases, the changes seem too dramatic and occur too quickly to be reasonably attributed to the accumulation of random information under selective pressure. The amount of raw information required for new colouration or for new digestive structures is simply too great to accumulate in such a short time. It is much more reasonable to assume that these functions and structures are predefined in the genome of these species and that selective pressure merely leads to their “expression.” Indeed, even from an evolutionary perspective, it is reasonable to assume that guppy populations have encountered predators numerous times throughout their evolutionary history, so that phenotypic variants providing camouflage in various environments have been expressed repeatedly and retained in the species’ gene pool. Thus, when predators reappear in a guppy population’s range, they simply select from natural variations that are already present. This scenario would fully explain the ultra-rapid microevolution observed; however, in that case we are not talking about new information finding its way into the genome through microevolution, but about information that was already there.
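How fast can selection act on a variant that is already in the gene pool? A minimal sketch of the standard one-locus selection model suggests an answer. (The 5% starting frequency and 20% selective advantage below are illustrative assumptions, not measured guppy values.)

# Minimal sketch: deterministic selection on a pre-existing ("standing")
# variant, haploid one-locus model. Illustrative parameters only.

def generations_until(p0, s, threshold=0.99):
    """Generations for a variant at initial frequency p0, with selective
    advantage s, to exceed `threshold` frequency. Uses the standard
    recursion p' = p(1 + s) / (1 + p*s)."""
    p, gens = p0, 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

print(generations_until(0.05, 0.20))  # prints 42: a few dozen generations

Under these assumptions, an already-present variant sweeps through the population within a few dozen generations, whereas a change that had to wait for the right new mutation would add that waiting time on top. This is consistent with the point above: pre-existing variation is the more economical explanation for such rapid shifts.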

Similarly, a closer study of the lizards on the Croatian island reveals that cecal valves are not morphological structures foreign to this family of reptiles; they are simply less common. Is it reasonable, then, to assume that they appeared in these lizards out of nowhere in just three decades? Or is it more reasonable to assume that these structures already existed, inactive, in the genome, and that specific selective pressures reactivated them?

Based on these documented cases, we have good reason to suspect that the new information expressed through microevolution is not actually new, but inactive, “buried” information. In that case, however, its reactivation cannot properly be called microevolution; it is the working of a pre-existing mechanism of adaptability to the environment. Furthermore, from an evolutionary perspective, microevolution as adaptability can no longer be described as “evolution happening before our very eyes,” but, at most, as “evolution that happened long ago and is now being reactivated.” Unfortunately, this reliance on long-past and unverifiable events is characteristic of evolutionary theory.

Microevolution as the addition of information

So, is it possible for new, functional information to accumulate in the genome through microevolution? In theory, anything can happen by pure chance, so the answer is yes. However, even from an evolutionary perspective, we have seen that this is highly unlikely.

But what can we say, from a creationist point of view, about the main example of microevolution “before our very eyes” presented by Dawkins in his book: Richard Lenski’s long-term experiment at Michigan State University? In this experiment, one of 12 independent lines of E. coli bacteria, cultivated in the laboratory for tens of thousands of generations, spontaneously acquired the ability to metabolise citrate in the presence of oxygen.

This new function can be partly explained by adaptability: E. coli can metabolise citrate, but normally not in the presence of oxygen, as happened in the experiment. When oxygen is present, an inhibitory signal usually prevents the process, protecting the bacterium from the risks associated with aerobic citrate metabolism. We can also partly explain this microevolutionary event as a loss of function, in Behe’s sense: a safety system controlling a process dangerous to the bacterium was spontaneously deactivated. Under the strict conditions of the experiment, this proved fortunate for the organism, giving it clear advantages. However, if the same bacterium were released into the wild, it would be at a disadvantage, lacking a safety system that may well be essential there.

Therefore, we can conclude that the ability to metabolise citrate already existed in the E. coli genome but was normally inactive, and that a loss-of-function mutation unlocked it. However, detailed genetic analysis has also revealed a degree of duplication of genetic information and acquisition of function. It is difficult to determine the extent to which this can be considered new information; the problem posed by Lenski’s experiment may therefore not yet be fully explainable from a creationist perspective.

However, the evolutionary perspective also encounters problems because the ability to metabolise citrate evolved in two steps separated by over 10,000 generations of bacteria, which is highly improbable by chance alone.[7] Dawkins uses this example to invalidate the creationist argument of irreducible complexity.
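The reasoning in footnote 7 can be made concrete with two textbook population-genetics results, offered here as a sketch for context rather than as a calculation from the article. For a single new mutation in a haploid population of size N:

P_{\text{fix}}(\text{neutral}) = \frac{1}{N}, \qquad P_{\text{fix}}(\text{advantage } s) \approx 2s \ \text{(for small } s\text{)}

In bacterial populations numbering millions of cells, a selectively useless first step thus has only about a one-in-millions chance of drifting to fixation, whereas even a 1% advantage would give odds of roughly 1 in 50. This is the sense in which the survival of a useless first component across more than 10,000 generations is improbable.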

The theory of irreducible complexity argues that a biologically complex function is extremely unlikely to develop gradually through evolution, because it usually requires several biological components to be present simultaneously before it has any selective value; each component individually has no selective value and therefore cannot develop gradually. Dawkins counters this theoretical argument with Lenski’s empirical evidence. Here, an accumulation of initially useless information (the first component) “is observed” to take place, followed much later by an isolated second mutational step involving further individually useless information (the second component).

Together, these two steps bring about a new function. Dawkins does not dwell on the probability of these two steps, neither of which has selective value on its own, occurring by chance. This may be because, in other contexts, he usually combats the argument of irreducible complexity from the opposite position: that the emergence of new and complex biological functions does not actually require useless intermediate steps, and that every intermediate evolutionary step, no matter how small, has selective value. That line of argument effectively concedes that the emergence of complex functions through evolutionary steps with no individual selective value is highly improbable. Yet when an experiment such as Lenski’s appears to show exactly this, Dawkins argues the opposite. It seems, then, that a statement and its opposite can both be used by the same people to support the theory of evolution.

Artificial selection

Artificial selection is a process that has been known to humans for thousands of years. Its dramatic results can be seen today in most species of agricultural plants and domestic animals.

By selecting specimens with useful traits and breeding them, humans have created plant varieties and animal breeds that bear little resemblance to their wild counterparts. Wheat, corn, and almost all grains and vegetables look so different from their wild ancestors that they are almost unrecognisable as the same species. The same can be said of dogs and other domestic animals: just look at the difference between a Chihuahua and a Great Dane, and at how different both are from the wolf, from which they most likely originated! These variations are so striking that they formed the backdrop against which Charles Darwin developed his theory of natural selection. In On the Origin of Species, Darwin ponders what natural selection could achieve over millions of years, given that artificial selection by humans had produced such dramatic results over mere thousands of years.

Artificial selection is indeed highly relevant to evolution by natural selection, because from an evolutionary point of view the two are practically the same phenomenon: random mutations under selective pressure. The only difference is the source of that pressure: humans in one case, the natural environment in the other. This makes artificial selection a natural test bed. Below, we list the characteristics artificial selection should display if microevolution runs on random mutations, the characteristics it should display if microevolution is a predefined adaptability mechanism, and what is actually observed.

Random mutations

1) Specimens should change without well-defined limits in the direction of the selected characteristic.

2) Even if exaggerated expression of the selected trait leaves organisms unable to survive in nature, this should not be an insurmountable barrier to further artificial selection, given that the specimens are kept under favourable rather than natural conditions. In short, we should expect practically unlimited change from the artificial selection of an organism.

3) The steps of a group’s transformation are highly unlikely to be faithfully repeated in another artificial selection experiment because they are based on random mutations.

4) It is highly unlikely that the steps of group transformation are reversible.

Predefined adaptability

1) Whatever characteristic is selected, its variability should be limited.

2) It is expected that this limitation of variability will be quite strict, regardless of how favourable the conditions are in which the specimens are kept.

3) The transformation steps of a group are likely to be repeatable if an identical artificial selection experiment is attempted. In an identical experiment, we would simply be repeating a more or less deterministic adaptation process.

4) The transformation steps of a group may even be reversible. This would mean that functional information is not necessarily lost in the process of artificial selection, but rather “inactivated.”

Observed artificial selection

1) Every species shows clear limits to artificial selection, and pushing against those limits incurs costs: the more intensively artificial selection is applied to a plant variety (or animal breed) to favour a particular trait, the more susceptible that variety becomes to disease and adverse natural factors. These limits and costs go beyond the standard explanation of genetic diversity lost through selection, since such diversity can be restored by crossbreeding with earlier variants of the variety or through hybridisation.

2) The limits of variability in a species’ characteristics are like walls that artificial selection cannot overcome.

3) Artificial selection is a remarkably repeatable process in both plants and animals. Even in Lenski’s experiment, two of the 12 bacterial lines followed an identical path of adaptation, despite not being under artificial selection.

4) In some cases, it has even been possible to resurrect breeds thought extinct, as in the famous example of the quagga in South Africa*.

*Opinions are divided as to whether it is possible to “revive” an extinct species or whether it is only possible to “imitate” the original.

The limitations of microevolution

Observing artificial selection provides an excellent opportunity to test hypotheses about the limits of microevolution that would otherwise require inaccessible stretches of time to test. If microevolution is indeed based on the constant generation of new functional information through random mutations, then artificial selection should exhibit the characteristics listed in the first group above. If, instead, microevolution is based on adaptability mechanisms intelligently predefined in the genome and sensitive to selective pressures, we should observe quite different characteristics, such as those in the second group. These predictions can be compared with the actual observations of artificial selection in the third group.

Conclusions

Evolutionists are irritated when creationists accept microevolution but reject macroevolution. The objection has been put as follows: if a process (microevolution) can get you from your bedroom to your kitchen in a few seconds, why wouldn’t it be good enough to get you from Boston to Los Angeles in a year (macroevolution)? Perhaps there would be less confusion if creationists stopped using the term “microevolution” when they do not mean the same thing as evolutionists, and instead used a term describing a form of predefined adaptability or regulated variability in organisms. Of course, both microevolution in the evolutionary sense and creationist adaptability are based, at least in part, on observable, seemingly random mutations, and both models consider most mutations detrimental to the organism. The difference lies in how mutations that do not appear harmful are interpreted: as possible new information, from an evolutionary perspective, or as possible switches in the organism’s predefined adaptability mechanism.

Currently, the evidence from microevolution itself, as well as the parallels that can be drawn between natural and artificial selection, is more consistent with a model of predefined adaptability in organisms than with the evolutionary model, which in places it actually contradicts.

Footnotes
[1] Taken from Richard Dawkins, The Greatest Show on Earth, chapter “Before our very eyes,” Free Press/Transworld, 2009.
[2] A. S. Kondrashov, “Contamination of the genome by very slightly deleterious mutations: why have we not died 100 times over?”, Journal of Theoretical Biology, vol. 175, no. 4, 21 Aug. 1995, pp. 583-594.
[3] Genetic recombination is considered to be a process that reduces the degree of degradation of the genome by mutations.
[4] Jane Charlesworth and Adam Eyre-Walker, “The McDonald–Kreitman Test and Slightly Deleterious Mutations,” Molecular Biology and Evolution, vol. 25, no. 6, 14 Jan. 2008, pp. 1007-1015.
[5] Author of Darwin’s Black Box and one of the most prominent molecular biologists advocating the existence of an intelligent Creator of life.
[6] Michael J. Behe, “Experimental Evolution, Loss-of-Function Mutations, and ‘The First Rule of Adaptive Evolution’,” The Quarterly Review of Biology, vol. 85, no. 4, December 2010.
[7] The first mutational step must reach fixation in the genome in order to survive 10,000 generations, and fixation is unlikely in the absence of a selective advantage.
