In a paper published in Nature on 2023-07-05, “Evolution of a minimal cell” [full text at link], researchers report engineering a minimal cell: starting with Mycoplasma mycoides, a bacterial parasite of ruminants, they deleted genes until they arrived at a minimal set that was still self-replicating, as had earlier been done by the Minimal Genome Project to create Mycoplasma laboratorium.
They then cultured this synthetic organism, whose initial fitness was less than half that of the non-minimal bacterium from which it was derived, for 2,000 generations, comparing it with the natural organism and subjecting it to a variety of environmental challenges. Here is the abstract describing the results.
Possessing only essential genes, a minimal cell can reveal mechanisms and processes that are critical for the persistence and stability of life. Here we report on how an engineered minimal cell contends with the forces of evolution compared with the Mycoplasma mycoides non-minimal cell from which it was synthetically derived. Mutation rates were the highest among all reported bacteria, but were not affected by genome minimization. Genome streamlining was costly, leading to a decrease in fitness of greater than 50%, but this deficit was regained during 2,000 generations of evolution. Despite selection acting on distinct genetic targets, increases in the maximum growth rate of the synthetic cells were comparable. Moreover, when performance was assessed by relative fitness, the minimal cell evolved 39% faster than the non-minimal cell. The only apparent constraint involved the evolution of cell size. The size of the non-minimal cell increased by 80%, whereas the minimal cell remained the same. This pattern reflected epistatic effects of mutations in ftsZ, which encodes a tubulin-homologue protein that regulates cell division and morphology. Our findings demonstrate that natural selection can rapidly increase the fitness of one of the simplest autonomously growing organisms. Understanding how species with small genomes overcome evolutionary challenges provides critical insights into the persistence of host-associated endosymbionts, the stability of streamlined chassis for biotechnology and the targeted refinement of synthetically engineered cells.
So, even with the constraint of having lost the gene(s) that allow the bacterium to increase its size, evolution figured out how to reconfigure its internal operation and recover all of the fitness lost when genes were deleted from its natural progenitor. Further, this evolution proceeded 39% faster than that of the natural organism, which was able to increase its size (presumably because the synthetic bacteria were able to reproduce faster thanks to their smaller genome and size).
I would like to hear Stephen C. Meyer’s thoughts on this. Of course, from an informational standpoint, the information necessary for this living organism comes from the external “mind” of scientists. I also note that the body form remains that of a bacterium. Evolution within species, including artificial and accelerated evolution like this, remains something very different from creating entirely new species. The evidence Meyer presents has persuaded me that the best explanation for the arising of new species is intelligent design. The new information required for this to occur - Meyer shows convincingly - simply cannot be explained by random mutation and natural selection. New information of that magnitude, arising from random combinations, is, statistically, just about impossible. For it to have happened the multitude of times needed to account for the sheer number of species - I don’t think so. Few writers have changed my thinking as much as Meyer.
A more recent and very different perspective is presented in Andreas Wagner’s 2015 book, Arrival of the Fittest: How Nature Innovates. I described the basic argument in this comment on 2023-02-28. The essence is that the calculations Meyer and others present for the statistical improbability of even a moderate-sized functional protein arising by random modifications to an existing protein sequence are based upon a linear organisation of amino acids in a chain (or, equivalently, codons in a DNA molecule). But when you arrange these one-dimensional strings into a multi-dimensional “protein space”, you find that, due to the vastly higher dimensionality, each functional protein has enormously more neighbours reachable by a single unit replacement, and correspondingly more pathways between structures which preserve functionality along the path (a worked count follows the quoted passage below). (From my comment):
The same experiment may be repeated on proteins, and one finds that starting with an 80 amino acid protein used to bind ATP and again changing just one amino acid at a time, there were 10^93 equivalently functional proteins reachable from the starting point, all by a series of single residue changes of the kind a random mutation causes.
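To put concrete numbers on the dimensionality argument, here is a back-of-the-envelope sketch: the 10^93 functional-variant count is the experimental figure quoted above; everything else is elementary combinatorics for an 80-residue chain over the 20-letter amino acid alphabet.

```python
# Combinatorics of protein space for an 80-residue protein.
# The 10**93 figure is taken from the quoted experiment; the rest is arithmetic.

L = 80          # chain length in residues
A = 20          # size of the amino acid alphabet

total_sequences = A ** L        # size of the entire sequence space
neighbours      = L * (A - 1)   # sequences reachable by one substitution
functional      = 10 ** 93      # equivalently functional variants (quoted)

print(f"total sequence space : {total_sequences:.2e}")                # ~1.21e+104
print(f"one-step neighbours  : {neighbours}")                        # 1520
print(f"functional fraction  : {functional / total_sequences:.2e}")  # ~8.3e-12
```

So even though functional sequences make up only about one part in 10^11 of the whole space, every sequence sits within one mutation of 1,520 neighbours. What matters for evolvability is not the global density but whether each functional sequence has at least one functional neighbour among those 1,520, which is exactly what Wagner reports the experiment found.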
This means that as biology conducts its massively parallel search through protein space, it is not constrained to leap directly from one functional protein to another, which might involve so many simultaneous changes as to be statistically impossible, but can instead random-walk from one very large region of functionality to another by single unit changes without ever losing functionality. Once it arrives at a new basin of functionality, it can explore further, increasing fitness in the new function while discarding the original functionality from which it started.
Essentially, evolution defeats the combinatorial explosion of the number of possible proteins with a combinatorial explosion of its own: the number of variants of a given protein which retain its 3D folded functionality, and a commensurate explosion in the number of pathways that can lead from one to another, all by changing just one amino acid or codon at a time.
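As a crude illustration of this mechanism, here is a toy simulation, not the book’s model: the sequence, the anchor positions, and the is_functional test below are all made up for demonstration. It performs a random walk that only ever steps between single-mutation neighbours which keep the predicate true - a walk along a “neutral network”.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20-letter amino acid alphabet

def is_functional(seq):
    # Hypothetical stand-in for "folds and binds its target": demand that a
    # few anchor positions stay hydrophobic. A real genotype-phenotype map is
    # vastly more complex; this merely makes the walk concrete.
    return all(seq[i] in "AVLIFM" for i in (3, 11, 19))

def neutral_walk(seq, steps, rng=random.Random(42)):
    """Random-walk through sequence space, accepting only single-residue
    substitutions that preserve functionality (a neutral-network walk)."""
    visited = {seq}
    for _ in range(steps):
        pos = rng.randrange(len(seq))
        mutant = seq[:pos] + rng.choice(AMINO_ACIDS) + seq[pos + 1:]
        if is_functional(mutant):      # reject mutations that break function
            seq = mutant
            visited.add(seq)
    return seq, visited

start = "MKAVTGHEKLIADNSWQPRLVNEHGSTADK"
assert is_functional(start)
end, visited = neutral_walk(start, steps=10_000)
diffs = sum(a != b for a, b in zip(start, end))
print(f"distinct functional sequences visited: {len(visited)}")
print(f"residues changed from start to end   : {diffs}/{len(start)}")
```

Every sequence the walk visits is functional by construction, yet after enough steps most residues differ from the starting point. That is the sense in which a vast, connected network of functional variants lets evolution wander far through sequence space without ever losing function along the way.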
I can’t visualize how this works without reading the book. Thanks for posting this theory. I thought, though, that folding and conformation are exquisitely sensitive to sequence - though I could be wrong.