Fifty thousand generations, still improving

I take all my hats off to Richard Lenski and his team. If you’ve never heard of them, they are the group that has been running an evolution experiment with E. coli bacteria non-stop for the last 25 years. That’s over 50,000 generations of the little creatures; in human generations, that translates to ~1.5 million years. This experiment has to be one of the most amazing things that ever happened in evolutionary biology.

(Below: photograph of flasks containing the twelve experimental populations on 25 June 2008. The flask labelled A-3 is cloudier than the others: this is a very special population. Photo by Brian Baer and Neerja Hajela, via Wikimedia Commons.)

It doesn’t necessarily take many generations to see some mind-blowing things in evolution. An irreducibly complex new protein interaction (Meyer et al., 2012), the beginnings of new species and a simple form of multicellularity (Boraas et al., 1998) are only a few examples. However, a few generations only show tiny snapshots of the evolutionary process. Letting a population evolve for thousands of generations allows you to directly witness processes that you’d normally have to glean from the fossil record or from studies of their end products.

Fifty thousand generations, for example, can tell you that they aren’t nearly enough time to reach the limit of adaptation. The newest fruit of the Long-Term Evolution Experiment is a short paper examining the improvement in fitness the bacteria have experienced over the experiment’s 25 years (Wiser et al., 2013). “Fitness” is measured here as growth rate relative to the ancestral strain; the faster the bacteria are able to grow in the environment of the LTEE (which has a limited amount of glucose, E. coli’s favourite food), the fitter they are. The LTEE follows twelve populations, all from the same ancestor, evolving in parallel, so it can also determine whether something that happens to one population is a chance occurrence or a general feature of evolution.
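
(An aside on how you even measure “fitness” for bacteria in a flask: the usual approach is a head-to-head competition assay – mix the evolved strain with the ancestor, let them grow, and compare how much each expanded. Below is a minimal Python sketch of that calculation; the counts are invented, and while the ratio-of-realized-log-growth formula is the standard way such competitions are scored, the paper’s own protocol has more moving parts.)

```python
import math

# Relative fitness from a head-to-head competition assay: the ratio of the
# two strains' realized (log) growth over the same competition period.
# All counts below are invented for illustration.
def relative_fitness(evolved_start, evolved_end, ancestor_start, ancestor_end):
    m_evolved = math.log(evolved_end / evolved_start)      # realized growth, evolved strain
    m_ancestor = math.log(ancestor_end / ancestor_start)   # realized growth, ancestor
    return m_evolved / m_ancestor

# Hypothetical example: both strains start equal; the evolved one grows more.
print(round(relative_fitness(5.0e5, 9.0e7, 5.0e5, 4.5e7), 2))  # ~1.15
```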

You can draw up a plot of fitness over time for one or more populations, and then fit mathematical models to this plot. Earlier in the experiment, the group found that a simple model in which adaptation slows down over time and eventually grinds to a halt fits the data well. However, that isn’t the only promising model. Another one predicts that adaptation only slows, never stops. Now, the experiment has been running long enough to distinguish between the two, and the second one wins hands down. Thus far, even though they’ve had plenty of time to adapt to their unchanging environment, the Lenski group’s E. coli just keep getting better at living there.
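
To make the model comparison a bit more concrete, here is a small curve-fitting sketch. The two candidate shapes are, roughly, a saturating (hyperbolic) curve and a power law that keeps rising ever more slowly; the data points below are invented for illustration, not the actual LTEE measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(t, a, b):
    # Adaptation grinds to a halt: fitness saturates at 1 + a as t grows.
    return 1 + a * t / (t + b)

def power_law(t, a, b):
    # Adaptation only slows, never stops: fitness keeps rising forever.
    return np.power(np.clip(b * t + 1, 1e-12, None), a)

# Hypothetical mean-fitness measurements at various generations.
t = np.array([0, 500, 1000, 2000, 5000, 10000, 20000, 30000, 40000, 50000], float)
w = np.array([1.00, 1.11, 1.18, 1.27, 1.39, 1.48, 1.58, 1.64, 1.69, 1.73])

for name, model, p0 in [("hyperbolic", hyperbolic, (0.8, 5000.0)),
                        ("power law", power_law, (0.1, 0.01))]:
    params, _ = curve_fit(model, t, w, p0=p0, maxfev=10000)
    ssr = np.sum((w - model(t, *params)) ** 2)
    print(f"{name:10s} params={np.round(params, 4)}  SSR={ssr:.5f}")
```

Run on the real trajectories (and with proper model-selection statistics rather than a bare sum of squared residuals), this is, in spirit, the comparison that lets the authors declare a winner.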

Although the simple mathematical function that describes the behaviour of these populations doesn’t really explain what’s happening behind the scenes, the team was also able to reproduce the same behaviour by building a model from known evolutionary phenomena. For example, they incorporated the idea that two bacteria carrying two different beneficial mutations in the same bottle are going to compete and slow down overall adaptation (a phenomenon known as clonal interference). (This is a problem for asexual organisms. If the creatures were, say, animals, they might have sex and spread both mutations at the same time.) So the original model doesn’t just describe the data well, it also follows from sensible theory. The same goes for the observation that the populations which evolved higher mutation rates adapted faster.
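
If you want to get a feel for how that kind of competition, combined with diminishing returns (each new beneficial mutation helping a bit less than the last), produces a fitness curve that decelerates without flattening, a toy Wright–Fisher-style simulation does the trick. To be clear, this is my own sketch with invented parameters, not the authors’ model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy asexual population: each individual is just a fitness value. Beneficial
# mutations arise at random and lineages compete within one "bottle" (clonal
# interference); each mutation's benefit shrinks as background fitness grows
# (diminishing returns). All parameters are invented for illustration.
POP, GENS = 10_000, 2_000
MU = 1e-4           # beneficial mutation rate per individual per generation
S0, G = 0.10, 4.0   # initial effect size; how quickly returns diminish

fitness = np.ones(POP)
trajectory = []
for _ in range(GENS):
    # Selection: resample the next generation in proportion to fitness.
    fitness = rng.choice(fitness, size=POP, p=fitness / fitness.sum())
    # Mutation: a few lucky individuals gain a benefit that shrinks with fitness.
    mutants = rng.random(POP) < MU
    fitness[mutants] *= 1 + S0 / fitness[mutants] ** G
    trajectory.append(fitness.mean())

print("mean fitness at generations 100/500/1000/2000:",
      [round(trajectory[i - 1], 3) for i in (100, 500, 1000, 2000)])
```

Even in this crude version, the mean fitness should climb steeply at first and then keep creeping upward rather than hitting a ceiling.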

Now, one of the first things you learn about interpreting models is that extrapolating beyond your data is dangerous. Trends can’t go on forever. In this case, you’d eventually end up with bacteria that reproduced infinitely fast, which is clearly ridiculous. However, Wiser et al. suggest that the point where their trend gets ridiculous is very, very far in the future. “The 50,000 generations studied here occurred in one scientist’s laboratory in ~21 years,” they remind us, then continue: “Now imagine that the experiment continues for 50,000 generations of scientists, each overseeing 50,000 bacterial generations, for 2.5 billion generations total.”

If the current trend continues unchanged, they estimate that the bugs at that faraway time point will be able to divide roughly every 23 minutes, compared to 55 minutes for the ancestral strain. That is still a totally realistic growth rate for a happy bacterium!
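
Just to sanity-check those numbers: if you read relative fitness, very crudely, as a ratio of exponential growth rates, then doubling time scales inversely with it. The paper’s actual projection is more careful than this, but the back-of-the-envelope version looks like so:

```python
import math

# Crude conversion between doubling times and exponential growth rates,
# using the figures quoted above (55 min ancestor, 23 min projected).
t_ancestor = 55.0   # ancestral doubling time, minutes
t_future = 23.0     # projected doubling time after 2.5 billion generations

r_ancestor = math.log(2) / t_ancestor   # growth rate per minute
r_future = math.log(2) / t_future

print(f"growth-rate ratio (future / ancestor): {r_future / r_ancestor:.2f}")  # ~2.39
print(f"doublings per hour for the future bug: {60 / t_future:.1f}")          # ~2.6
```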

I know none of us will live to see it, but I really want to know what would happen to these little guys in 2.5 billion generations…

***

References:

Boraas ME et al. (1998) Phagotrophy by a flagellate selects for colonial prey: a possible origin of multicellularity. Evolutionary Ecology 12:153-164

Meyer JR et al. (2012) Repeatability and contingency in the evolution of a key innovation in phage lambda. Science 335:428-432

Wiser MJ et al. (2013) Long-term dynamics of adaptation in asexual populations. Science, published online 14/11/2013, doi: 10.1126/science.1243357

A difficult landscape for the RNA world?

I’m back, and right now I can’t really decide if I should be squeeful or sad about Jiménez et al. (2013).

On the side of squeeing, I have some pretty compelling arguments.

  1. It’s an RNA world paper. I’m an unabashedly biased fan of the RNA world. (Not that my opinion matters, seeing as that’s the only origin-of-life hypothesis I actually know anything about. It’s like voting for the only party whose campaign ads you’ve seen.)
  2. I find the actual experiment ridiculously cool. It’s a bit like that mutation study about heat shock protein 90 that I wrote about aaaaages ago, except these guys evaluated the relative fitness of pretty much every single possible RNA molecule of 24 nucleotides. Yes, that is 4^24 different RNA molecules, each in many copies. And they did it twice, just to make sure they weren’t mistaking statistical flukes for results [1].
  3. It explores the landscape of evolution and digs into Big Questions like, how inevitable/reproducible is evolution? Or, as Stephen Jay Gould would put it, what would happen if we replayed the tape of life?

On the other hand, the findings are a bit… bleak. So the experimental setup was to select from this huge pool of RNA sequences for ones that could bind GTP, which is basically a building block of RNA with an energy package attached. In each round of selection, RNAs that attached most strongly to GTP did best. (The relative abundances of different sequences were measured with next-generation sequencing.) The main question was the shape of the fitness landscape of these RNAs: how common functional GTP-binding sequences are, how similar they have to be to perform this function, how easily one functional sequence might mutate into another, that sort of thing.
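
(For the curious: turning sequencing read counts into a “fitness” usually boils down to an enrichment calculation – how much a sequence’s share of the pool grows over a round of selection. The Python snippet below shows that generic idea with made-up counts; I’m not claiming it is the exact estimator Jiménez et al. used.)

```python
import math

# Made-up read counts for three 24-mer sequences before and after one round
# of GTP-binding selection. "Enrichment" is the change in each sequence's
# share of the pool; a generic measure, not necessarily the paper's own.
counts_before = {"seq_A": 1200, "seq_B": 800, "seq_C": 1000}
counts_after = {"seq_A": 4500, "seq_B": 300, "seq_C": 1100}

total_before = sum(counts_before.values())
total_after = sum(counts_after.values())

for seq in counts_before:
    freq_before = counts_before[seq] / total_before
    freq_after = counts_after[seq] / total_after
    enrichment = freq_after / freq_before
    print(f"{seq}: enrichment {enrichment:.2f} (log2 {math.log2(enrichment):+.2f})")
```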

And, well.

  1. There were only 15 fitness peaks that consistently showed up in both experiments. (A fitness peak consists of a group of similar sequences that are better at the selected function than the “masses”.) That sounds like GTP-binding RNAs of this size are pretty rare.
  2. The peaks were generally isolated by deep valleys – that is, if you were an RNA molecule sitting on one peak and you wanted to cross to another, you’d have to endure lots of deleterious mutations to get there. In practical terms, that means you might never get there, since evolution can’t plan ahead [2]. (There’s a toy sketch of this peak-and-valley picture right after this list.)
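
Here is the promised toy sketch of the peak-and-valley picture. The landscape below is entirely hand-made (two sharp peaks on 4-letter sequences, nothing to do with the paper’s data); the search simply asks how low the best possible mutational route between the two peaks has to dip.

```python
from itertools import product
import heapq

ALPHABET = "ACGU"
L = 4  # toy sequence length (the real experiment used 24-mers)

PEAK1, PEAK2 = "AAAA", "UUUU"

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fitness(seq):
    # Hand-made landscape: two sharp peaks with narrow shoulders, everything
    # else a deep valley. Purely illustrative numbers.
    d = min(hamming(seq, PEAK1), hamming(seq, PEAK2))
    return {0: 1.0, 1: 0.8}.get(d, 0.1)

def neighbours(seq):
    # All sequences one point mutation away.
    for i, base in product(range(L), ALPHABET):
        if base != seq[i]:
            yield seq[:i] + base + seq[i + 1:]

# Widest-path search: find the route from PEAK1 to PEAK2 that keeps the
# minimum fitness encountered along the way as high as possible.
best = {PEAK1: fitness(PEAK1)}
heap = [(-best[PEAK1], PEAK1)]
while heap:
    neg_w, seq = heapq.heappop(heap)
    if seq == PEAK2:
        print(f"the best route from {PEAK1} to {PEAK2} still dips to fitness {-neg_w}")
        break
    for nb in neighbours(seq):
        bottleneck = min(-neg_w, fitness(nb))
        if bottleneck > best.get(nb, 0.0):
            best[nb] = bottleneck
            heapq.heappush(heap, (-bottleneck, nb))
```

On this toy landscape, the answer is 0.1: any route has to cross the low-fitness no-man’s-land between the peaks, which is exactly the kind of crossing a real population is unlikely to make.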

On the other other hand…

  1. This study considered only one function and only one environment. We have no idea how the look of the landscape would change if an experiment took into account that a primordial RNA molecule might have to do many jobs to “survive”, and it might “live” in an environment full of other molecules, ions, changing temperatures, whatever. (That would be a hell of an experiment. I think I might spontaneously explode into fireworks if someone did it.)
  2. It’s not like this is really a problem from a plausibility perspective. The early earth did have a fair amount of time and, potentially, quite a lot of RNA on its hands. I don’t think it originally would have had much longer RNA molecules than the ones in this experiment, not until RNA figured out how to make more of itself, but I’m pretty sure it had more than enough to explore sequence space.

4^24 molecules is about 2.8 x 10^14, or about half a nanomole (one mole is 6 x 10^23 molecules). One mole of 24-nt single-stranded RNA is roughly 8.5 kilos – I’d think you can fit quite a bit more than a billionth of that onto an entire planet with lots of places conducive to RNA synthesis. So I see no need to panic about the plausibility of random prebiotic RNA molecules performing useful (in origin-of-life terms) functions. (My first thought when I read this paper was “oh my god, creationism fodder,” but on closer inspection, you’d have to be pretty mathematically challenged to see it as such.)
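
Spelled out as code (the per-nucleotide mass is a rule of thumb of roughly 320-340 g/mol, so treat the final figure as an order of magnitude):

```python
AVOGADRO = 6.022e23

n_sequences = 4 ** 24                   # every possible 24-nt RNA sequence
moles_needed = n_sequences / AVOGADRO   # one copy of each sequence

# Rough mass of a 24-nt single-stranded RNA: ~330 g/mol per nucleotide.
mass_per_mole_kg = 24 * 330 / 1000

print(f"distinct 24-mers:           {n_sequences:.2e}")    # ~2.8e14
print(f"moles for one copy of each: {moles_needed:.2e}")   # ~4.7e-10, half a nanomole
print(f"mass of that much RNA:      {moles_needed * mass_per_mole_kg * 1e6:.1f} micrograms")
```

A few micrograms of RNA for one copy of every possible 24-mer, in other words – a vanishingly small amount on a planetary scale.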

So, in the end… I think I’ll settle for *SQUEEE!* After all, this is a truly fascinating experiment that doesn’t end up killing my beloved RNA world. On the question of replaying the tape, I’m not committed either way, but I am intrigued by anything that offers an insight. And this paper does – within its limited scope, it comes down on the side of evolution being very dependent on accidents of history.

Yeah. What’s not to like?

***

Notes:

[1] I’ve worked a bit with RNA, and I have nothing but admiration for folks who do it all the time. The damned molecule is a total, fickle, unstable pain in the arse. And literally everything is full of almost unkillable enzymes that eat it just to mock your efforts. Or maybe I just really suck at molecular biology.

[2] I must point out that deleterious mutations aren’t always obstacles for evolution. They can contribute quite significantly to adaptation or even brand new functions. I’m racking my brain for studies of real living things related to this issue, but all I can find at the moment is the amazing Richard Lenski and co’s experiments with digital organisms, so Lenski et al. (2003) and Covert et al. (2013) will have to do for citations.

***

References:

Covert AW et al. (2013) Experiments on the role of deleterious mutations as stepping stones in adaptive evolution. PNAS 110:E3171-3178

Jiménez JI et al. (2013) Comprehensive experimental fitness landscape and evolutionary network for small RNA. PNAS advance online publication, 26/08/2013, doi: 10.1073/pnas.1307604110

Lenski RE et al. (2003) The evolutionary origin of complex features. Nature 423:139-144

A virus with half a wing

Richard Lenski’s team is one of my favourite research groups in the whole world. If the long-term evolution experiment with E. coli was the only thing they ever did, they would already have earned my everlasting admiration. But they do other fascinating evolution stuff as well. In their brand new study in Science (Meyer et al., 2012), they explore the evolution of a novelty – in real time, at single nucleotide resolution.

For their experiments, they used a pair of old enemies: the common gut bacterium and standard lab microbe E. coli, and one of its viruses, the lambda phage. Phages (or bacteriophages, literally “bacterium eaters”) are viruses that infect bacteria. They are also some of mother nature’s funkiest-looking children. Below is an example, because if you haven’t seen one of them, you really should. I borrowed this electron micrograph of phage T4 from GiantMicrobes, where you can get a cute plushie version 😛

Phages work by latching onto specific proteins in the cell membrane of the bacterium, and literally injecting their DNA into the cell, where it can start wreaking havoc and making more viruses. Meyer et al.’s phage strain was specialised to use an E. coli protein called LamB for attachment.

The team took E. coli which (mostly) couldn’t produce LamB because one of the lamB gene’s regulators had been knocked out. Their virus normally couldn’t infect these bacteria, but a few of the bacteria managed to switch lamB on anyway, so the viruses could vegetate along in their cultures at low numbers. Perfect setup for adaptation!

Meyer and colleagues performed a lot of experiments, and I don’t want to go into too much detail about them (hey, is that me trying not to be verbose???). Here are some of their intriguing results:

First, the phages adapted to their LamB-deficient hosts. They did so very “quickly” in terms of what we usually think of as evolutionary time scales (naturally, “evolutionary time scales” mean something different for organisms with life cycles measurable in minutes). Mutations in the gene coding for their J protein (the one they use to attach to LamB) enabled them to use another bacterial protein instead. Not all experimental populations evolved this ability, but those that did succeeded in less than 2 weeks on average.

The new protein target, OmpF, is quite similar to LamB, which might explain how the viruses evolved the ability to use it so quickly. But more interesting than the speed is the how of their innovation. Amazingly, all OmpF-compatible viruses shared two specific mutations. A third mutation always occurred in the same codon – that is, it affected the same amino acid in the J protein. A fourth mutation invariably occurred in a short region near the other three. Altogether, these four mutations allowed the virus to use OmpF. Plainly, we are dealing with more than mere convergent evolution here. Often, many different mutations can achieve the same thing (see e.g. Eizirik et al., 2003), but in this case, a very specific set of them appeared necessary. I’ll briefly revisit this point later, but first we have another fascinating result to discuss!

By comparing dozens of viruses that did and didn’t evolve OmpF compatibility, the researchers determined that all four mutations were necessary for the new ability. Three were not enough; there were many viral strains with three of the four mutations that couldn’t do anything with LamB-deficient bacteria. On the surface, this sounds almost like something Michael Behe would say (see Behe and Snoke, 2004), except the requirement for more than one mutation clearly didn’t prevent innovation here. Given the distribution of J mutations, it’s also likely that they were shaped by natural selection, even in virus populations that didn’t evolve OmpF-compatibility. So what did the first three mutations do? What use was, as it were, half a new J protein?

The answer would delight the late Stephen Jay Gould: the new function was a blatant example of exaptation. Exaptations are traits that originally had one function, but were later co-opted for another. While three mutations predisposed the J gene to OmpF-compatibility, they also improved its ability to bind its original target. Thus, there was a selective advantage right from the first mutation. And, in essence, this is what we see over and over again when we look at novelties. Fish walk underwater, non-flying dinosaurs cover their eggs with feathered arms, and none of them have the first clue that their properties would become success stories for completely different reasons.

In the paper, there is a bit of discussion on co-evolution and how certain mutations in the bacteria influenced the viruses’ ability to adapt to OmpF, but I’d like to go back to the convergence/necessity point instead. I have a few half-formed thoughts here, so don’t expect me to be coherent 😉

We’ve seen cases where the same outcome stems from different causes, like in the cat colour paper cited above. Then there is this new function in a virus that seems to always come from (almost) the same set of mutations. Why? I’m thinking it has to do with (1) the complexity of the system and (2) the type of outcome needed.

Proteins interact with other proteins through very specific interfaces. Sometimes, these interactions can depend on as little as a single amino acid in one of the partners. If you want to change something like that, there is simply little choice in what you can do without screwing everything up. On the other hand, something like coat colour in mammals is controlled by a whole battery of genes, each of which may be amenable to many useful modifications. And when it comes to even more complex traits like flying (qv. aside discussing convergence and vertebrate flight/gliding in the mutations post), the possibilities are almost limitless.

So there’s that, and there is also what you “want” with a trait. There may be more ways to break a gene (e.g. to lose pigmentation) than to increase its activity. When the selectively advantageous outcome is something as specific as a particular protein-protein interaction, the options may be more restricted again. (To top that, the virus has to stick to the bacterium with a very specific part of its structure, or the whole “inject DNA” bit goes the wrong way.) Now that I read what I wrote, it sounds like there will be very few “universal laws” of evolutionary novelty (exaptation being one of them?). Hmm…

***

References:

Behe MJ and Snoke DW (2004) Simulating evolution by gene duplication of protein features that require multiple amino acid residues. Protein Science 13:2651-2664

Eizirik E et al. (2003) Molecular genetics and evolution of melanism in the cat family. Current Biology 13:448-453

Meyer JR et al. (2012) Repeatability and contingency in the evolution of a key innovation in phage lambda. Science 335:428-432