The controversy over mistakes in a key paper used to justify austerity reminded me of what seems to be another simple mathematical mistake in a high profile paper. The authors of this paper tried to tot up the total value of the world’s ecosystem services—the ways that humans benefit from the natural world. This kind of valuation is pretty controversial: critics feel that putting a dollar value on nature misses the point and leaves some ecosystems vulnerable, while advocates argue that it’s the only realistic way to include nature in a discussion that all too often just looks at ‘the economy, stupid’.
Costanza et al. ran the figures for a whole range of ecosystem services, and estimated that we get some $33 trillion of value per year. The paper was published in Nature, and according to Google Scholar, has since been cited some 9488 times. I won’t discuss how they got all the numbers and what they mean; if you’re interested, have a look at the paper itself, and critiques such as this and this. My focus today is more specific.
Looking at the breakdown of values for different services and biomes (table 2), by far the largest single value is for nutrient cycling, described in table 1 as “storage, internal cycling, processing and acquisition of nutrients”, for example “nitrogen fixation, N, P and other elemental or nutrient cycles.” That’s mostly from marine systems: the total value of nutrient cycling by marine ecosystems appears to be $15.3 trillion per year, a little under half of the grand total. To get behind that number, we need to dive into the supplementary information. The relevant section reads:
We assumed that the oceans and coastal waters are serving as sinks to all the world’s water that flows from rivers, and that the receiving marine waters provide a nutrient cycling service. If we assume that roughly one-third of this service is provided by estuaries (Nixon et al. 1996 in press) and the remainder by coastal and open ocean, (assume 1/3 by shelf and 1/3 by ocean), then the total quantity of water treated is 40 × 10¹² m³ y⁻¹. Replacement costs to remove N and P were estimated at $0.15–0.42 m⁻³ (Richard et al. 1991 as quoted in Postel and Carpenter 1997). Thus, the replacement cost for each biome’s (1/3) contribution to the total value is $2.0 × 10¹²–$5.6 × 10¹². By hectare, the value for ocean (32200 × 10⁶ ha) is then $62.1–174 ha⁻¹ y⁻¹.
First, where does that total quantity of water come from? It looks like it should be the total flow of the world’s rivers, and this Russian paper from 1993 confirms that, putting total river runoff at 42,700 km³ per year, about the same as the volume given. I’m not entirely sure about this way of calculating replacement value—if we had to recycle those nutrients ourselves, wouldn’t we find a more efficient way than filtering them out of all the world’s rivers?—but let’s accept the assumptions for now.
Multiplying the total volume by the estimated treatment costs (15 to 42 ¢ per cubic metre) gives us $6.0–16.8 trillion, a range with a midpoint of $11.4 trillion. That’s quite a bit lower than the $15.3 trillion total we got for nutrient cycling in the oceans above. Where has the extra come from?
As the quotation mentions, the authors assume that three biomes—estuaries, the continental shelf and the open ocean—each perform a roughly equal share of that nutrient cycling. So the total value is split into three chunks of $3.8 trillion, one assigned to each biome. But then a fourth chunk of $3.8 trillion turns up, assigned to seagrass & algae beds (and by itself making those the third most valuable biome per hectare). The notes for nutrient cycling in this biome just say “For calculation methods, see notes for Ocean.” I can only guess that the authors decided that seagrass and algae beds had an important role in recycling nitrogen and phosphorus, but forgot to recalculate so that the total was split over four biomes instead of three.
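The arithmetic is easy to redo. Here’s a quick sanity check in Python, using only the figures quoted from the supplementary information (the trillion-scale rounding in the comments is mine):

```python
# Back-of-envelope check of the nutrient-cycling figures, using only the
# numbers quoted from the supplementary information.

volume = 40e12                     # water treated, m^3 per year
cost_low, cost_high = 0.15, 0.42   # replacement cost, $ per m^3

total_low = volume * cost_low      # $6.0 trillion
total_high = volume * cost_high    # $16.8 trillion
midpoint = (total_low + total_high) / 2

chunk = midpoint / 3               # one biome's (1/3) share

print(f"total: ${total_low / 1e12:.1f}-{total_high / 1e12:.1f} trillion")
print(f"midpoint: ${midpoint / 1e12:.1f} trillion")       # $11.4 trillion
print(f"per biome: ${chunk / 1e12:.1f} trillion")         # $3.8 trillion
print(f"three chunks: ${3 * chunk / 1e12:.1f} trillion")  # $11.4 trillion
print(f"four chunks:  ${4 * chunk / 1e12:.1f} trillion")  # $15.2 trillion

# Per-hectare value for the open ocean, as in the supplementary information:
ocean_ha = 32200e6
print(f"ocean: ${2.0e12 / ocean_ha:.1f}-{5.6e12 / ocean_ha:.0f} per ha per yr")
```

Three chunks of $3.8 trillion reproduce the $11.4 trillion midpoint; four chunks give $15.2 trillion, which is the mysterious marine total (to within rounding).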
I’ve checked over this several times, and when I first spotted it, I bounced it off one of my professors, although unfortunately I no longer have the e-mails. If I’m wrong, I hope the authors will accept my apologies. But even in that case, I don’t think that the information in the paper and the supplementary information clearly justify these numbers, which make up the biggest part of the headline figure.
Of course, unlike in the austerity paper, this doesn’t really affect the conclusions of the study. Whether the central figure is $33 trillion or $29 trillion, it’s clearly enormous: the authors highlight that it’s more than the world’s GNP ($18 trillion in 1997, when the paper came out). And that number is just the centre of a range of $16–54 trillion, so there was already plenty of uncertainty. More importantly, it’s probably an underestimate, because of all kinds of services and factors that couldn’t be included in the analysis. It’s clear that we benefit immensely from the natural world.
So is this $3.8 trillion slip wholly unimportant? Why am I writing about it? Well, I don’t think it’s a great testament to the peer review process. All the information needed to spot this was there in the supplementary information, but even for a paper in the world’s most prominent journal, reviewers apparently didn’t reach for a calculator and check through the numbers. Happily there are moves afoot to improve reproducibility, and people are working on better tools for scientific computing. But how many other papers out there have simple mistakes in the numbers?
But I’d prefer to look at this more optimistically. Like Thomas Herndon, who dug out the Excel mistake in the now-infamous Reinhart & Rogoff paper, I spotted this while I was an undergraduate, reading about ecosystem services for my conservation module. Checking a calculation is a lot easier than finding and justifying all the numbers in the first place, and there are plenty of undergrads ;-). We might not be able to replicate every lab experiment, but re-checking published calculations is well within the realm of possibility.
The paper in question:
Costanza, R., d’Arge, R., de Groot, R., Farber, S., Grasso, M., Hannon, B., Limburg, K., Naeem, S., O’Neill, R., Paruelo, J., Raskin, R., Sutton, P., & van den Belt, M. (1997). The value of the world’s ecosystem services and natural capital. Nature, 387 (6630), 253–260. DOI: 10.1038/387253a0
The supplementary information no longer seems to be on the Nature website, but there’s a copy linked from this Duke University page.
Today, I’m venturing into the world of Arabidopsis, a plant I usually leave to the geneticists. More specifically, into it and its relatives’ evolutionary past.
DNA sequences can be used to estimate how long ago species separated. Once they separate, they stop interbreeding, and their DNA sequences start to evolve separately. So the more differences there are in their DNA, the longer it is since they split. Different bits of DNA change at different speeds: from essential genes, which evolve very slowly, to so-called ‘junk DNA’, which doesn’t code for anything and so can change much faster.
All the data needs calibrating, though. We’ve got to know how fast particular bits of DNA change, before we can use them to date anything. The key is fossils. Say you’ve got two groups of living species, one of which has a novel and distinctive feature that the other group lacks (and presumably never had). If you find a fossil that looks like your species, and you can see it has that key feature, then the two groups must have split by the time of the fossil. I won’t go into the various ways you might find the age of the fossil, but there’s info on Wikipedia.
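The logic of calibration can be sketched in a few lines. This is my own toy illustration of a strict molecular clock; the divergence percentages and fossil age are invented for the example, not taken from any real dataset:

```python
# Toy strict molecular clock. The numbers here are made up for illustration.
# After a split, differences accumulate on BOTH lineages, so:
#   divergence = 2 * rate * time,  hence  time = divergence / (2 * rate)

def split_time(divergence, rate):
    """Time (Myr) since two lineages split, given per-site divergence and a
    substitution rate in substitutions per site per Myr."""
    return divergence / (2 * rate)

# Calibration: suppose a fossil shows two groups had split by 30 Myr ago,
# and their living descendants differ at 9% of sites. The implied rate is:
rate = 0.09 / (2 * 30)    # = 0.0015 substitutions per site per Myr

# The calibrated rate then dates another pair differing at 3.9% of sites:
print(round(split_time(0.039, rate), 1))   # -> 13.0 Myr
```

Real analyses, including this paper’s, use far more sophisticated models that let rates vary between lineages, but the fossil-anchored logic is the same.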
The paper I’ve read today did this calibration for the cabbage family, Brassicaceae (which includes Arabidopsis, along with a number of vegetables including cabbage, broccoli, turnips and oilseed rape), and some related families, like that of capers, the little buds used in Mediterranean cooking. They used five fossils which have previously been found of these plants. Perhaps the most important was a fossil seed pod from a plant called Thlaspi primaevum, which lived about 30 million years ago. Under a scanning electron microscope, they could see a characteristic pattern of ridges that it shares with living Thlaspi species, meaning that they must have split off from a closely related genus at least that long ago.
Using these fossil dates, and a more flexible model of evolution, they estimated the ages of various splits between species. They put the split between the model plant Arabidopsis thaliana and its close relatives, like A. lyrata, at about 13 million years ago, three times previous estimates, and reckon that those species separated from vegetables like cabbage (Brassica) about 43 million years ago, roughly twice previous estimates. There are big margins of error around those “optimal reconstructions”, though.
So what does that tell us? Apart from forcing a re-examination of how quickly various genes evolved, it places the earliest split of the ‘Brassicales’, when the ancestors of the cabbage and the caper bush separated, back in the Cretaceous, before the mass extinction that killed the dinosaurs. That suggests that those plants survived the mass extinction and then diversified to take advantage of new opportunities. Their survival and diversification could also be connected with genome duplications, where the plants ended up with two copies of all their genes, which can open up new possibilities to evolution.
The authors also suggest that the timing matches up with speciation in a family of butterflies called the Pierids, which includes the cabbage white and the orange tip. Their caterpillars can munch on these plants, as gardeners know all too well, because they can break down the toxic chemicals which protect them (the same chemicals that give mustard its sharp flavour). That’s interesting, because it suggests that the plants and the butterflies might have diversified together. Perhaps the butterflies adapted to the diversification of their host plants, or perhaps each group drove speciation in the other.
Finally, it puts the origins of Arabidopsis thaliana, the favourite of geneticists, back in the Miocene, a somewhat warmer period than the Pliocene, when previous estimates had it splitting. That might change our ideas about how it evolved.
Beilstein, M., Nagalingum, N., Clements, M., Manchester, S., & Mathews, S. (2010). Dated molecular phylogenies indicate a Miocene origin for Arabidopsis thaliana. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0909766107
For today, I’ve dug up a paper (I forget how) from 1998, when I was still in primary school, about why people like spicy foods, and why some cultures use more spice than others. The idea that we acquired a taste for spices to keep harmful bacteria in check isn’t implausible, but the evidence in the paper is more interesting than conclusive.
First off: does it work? Do spices have antibacterial properties? Yes, according to studies that the authors found. All the spices which had been tested were effective against some bacteria, and four (garlic, onion, allspice & oregano) had an effect on all the bacteria on which they were tested. Of course, publication bias might be hiding experiments where spices didn’t affect bacteria, but the researchers found studies for 30 of the 43 spices they were looking at.
The ‘secondary compounds’ which give spices their varied, powerful flavours are probably part of the plants’ defences against pests and diseases, so it’s no great surprise that they’d work against other bacteria.
The focus of the paper is the researchers’ rather epic study of traditional recipes. From 93 traditional recipe books, they read over 4500 meat-based recipes (meat is more likely to cause food poisoning) from 36 countries, noting the spices used in each. Most recipes included at least one spice, with onions and pepper the two most common overall. They match this up with the climate in each country, to find that hotter countries have more recipes with spices in, use more spices per recipe, and use the available spices more often. This, they say, is because bacteria will multiply quicker in warm countries, so more spices will be needed to keep them in check. On the other hand, they didn’t find any connection between spice use and rainfall.
It’s an interesting idea, but the statistics they use to check it aren’t great. It all hinges on whether each country can be treated as an independent case. They spend the best part of a page trying to justify this, but in the end seem to say “if we grouped some countries together, there wouldn’t be enough to do statistics with.” I sympathise, but I reckon it might be misleading to do statistical tests at all. There’s a particular problem at the cooler end of the temperature range, where almost all the countries are in Europe. If we share a common, relatively non-spicy cuisine, that could be making the correlation look stronger than it really is.
They also look more specifically at which spices are being used: for example, they reckon that hotter countries use more of the spices that are most effective against bacteria. But this, too, is a bit misleading. Even ‘traditional’ recipes can include relatively new foods; what could be more English than roast potatoes, from a plant we knew nothing of a few centuries ago? Chilli peppers are also from South America, so couldn’t have been used in Indian cuisine before Columbus. Many of the spices they look at have been spread around the world only in the last few centuries, presumably into cultures which had already developed tastes for more or less spicy food.
The patterns are interesting, and going through thousands of recipes must have taken a fair amount of work. But I don’t think the statistical analysis is enough to close the case. Perhaps an experimental approach would complement it: how well do different combinations of spices delay bacteria growing on meat at different temperatures? And could humans hit upon this without knowing about bacteria, by associating non-spicy food with the nausea caused by food poisoning? The ethics committee might have something to say about testing the latter, but perhaps it could be done with rats.
Billing, J., & Sherman, P. (1998). Antimicrobial functions of spices: why some like it hot. The Quarterly Review of Biology, 73 (1), 3–49. DOI: 10.1086/420058
Phytoplankton—single-celled green floaters—fulfil the same role in the oceans as plants do on land. They’re the basis of the food chain, capturing energy from sunlight, and eventually feeding just about everything else. So the news that they’ve declined by about 40% since 1950 (Nature News) is rather worrying. Let’s take a look at where the number came from.
The standard way of finding the amount of phytoplankton in seawater is to measure the concentration of chlorophyll, the green pigment used in photosynthesis. Essentially, you can just test how green the water is, although modern methods are a bit cleverer. Using satellites, you can even remotely measure vast areas of ocean. But people didn’t make those measurements much until the 1960s (and not with satellites until 1979). So the researchers combined them with an even simpler method, which has been done a lot since the 1930s. The ‘Secchi disk’ is lowered into the water until it can no longer be seen, giving a measure of how clear the water is. After throwing out measurements near the shore, where mud can reduce visibility, they fit pretty well with the chlorophyll measurements.
What did they find? Well, the number the media picked up on was the global average 1% decline per year. That’s one percent of current levels, so working back gives you just under a 40% drop since 1950. That average, though, hides quite a bit of variation in the actual change:
It looks like there’s a clear decline in the Atlantic ocean and the polar oceans, a smaller decline in the Pacific, while plankton actually increased in the Indian ocean. The authors go one step further, breaking it down into large grid squares. That shows still more variation, but without a clear pattern, at least to my eye.
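That one-percent-a-year figure is easy to turn into the headline drop. Here’s a quick sketch, assuming (as the reported figure implies) that the decline is a fixed absolute amount each year, one percent of today’s level:

```python
# From "1% of current levels per year" to "just under 40% since 1950".
# Assumption: the decline is linear, losing the same absolute amount
# (1% of today's level) each year.

years = 2008 - 1950            # roughly the span of the dataset
today = 1.0                    # today's level, normalised to 1
lost = years * 0.01 * today    # total amount lost over the period

level_1950 = today + lost      # = 1.58 of today's level
drop = lost / level_1950       # fraction lost, relative to the 1950 level
print(f"{drop:.0%}")           # -> 37%
```

If instead you compounded 1% of each successive year’s level, you’d get a different (larger) drop, which is why the linear reading seems to be the one behind the headline.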
The suspected culprit is climate change, and the link works like this: when you heat a pan of water on a stove, you get convection currents, as warm water rises, cools, and sinks again. But the oceans are mostly heated from the top, by sunlight, which means a layer of warm water forms, sitting on top of a huge depth of colder water. Phytoplankton can only grow near the surface, because they need sunlight, but they quickly use up the nutrients there, and then need water mixed in from the depths to keep growing. As a result, regions of upwelling water, such as off the coasts of Peru and Antarctica, have particularly rich sea life. The large-scale currents that drive those upwellings aren’t expected to stop any time soon, but warmer temperatures at the surface could be reducing smaller-scale mixing.
This isn’t just conjecture. The scientists separated the year-by-year variation in plankton levels from the overall trends, and compared that variation to various ocean ‘oscillations’. These are roughly regular patterns in temperature and pressure, the best known of which is the El Niño/Southern Oscillation in the Pacific. In most areas with oscillations, there was less plankton in warmer years (the pattern didn’t fit for the North Indian ocean, perhaps due to the effects of the monsoon rains).
I’m a bit surprised, reading the paper, that they didn’t explore any of the other things that could be affecting the plankton. There’s a brief list of possible factors, including nutrients coming from the land, ocean circulation, and the effects of other organisms in the sea, but then only the surface temperature and the resulting ‘mixed layer depth’ are given any discussion at all.
If phytoplankton are on the decline due to global warming, that’s not just bad news for the algae. As I described above, almost* everything in the oceans ultimately relies on phytoplankton. They’re also a key part of the carbon cycle, removing CO2 from the air. That leads to a positive feedback: as we release more CO2 and warm the earth, we also slow down its absorption by life in the oceans.
Boyce, D., Lewis, M., & Worm, B. (2010). Global phytoplankton decline over the past century. Nature, 466 (7306), 591–596. DOI: 10.1038/nature09268
*Treasure your exceptions: some things living at hydrothermal vents can get all their energy from dissolved chemicals.
Most of our staple crops are annuals—plants that grow from seed, produce the next generation of seeds and then die, all in one year. In particular, the ‘big three’ crops, rice, wheat and maize, are all annuals. What would life be like if we instead grew perennials—plants that last more than one year? No more yearly ploughing and sowing.
First things first: we’ve already got plenty of perennial crops. Many fruits, such as apples, grapes and kiwis, grow on trees and vines, and plants like the tomato can grow as annuals or perennials. But they’re luxuries, not our daily bread. The cereals and pulses that we depend on are almost all annuals.
Science via YouTube today. Let’s start with some smoke rings. They go an impressively long way—much further than a simple puff of smoke fired with the same force would:
So, why might a moss need to do the same thing?
It’s all about spores. Mosses spread by spores, a bit like microscopic seeds. For peat moss (Sphagnum), growing low on the ground in bogs, the challenge is to catch the wind, getting its spores high enough that eddies in the air carry them away. It launches them, after building up 2–5 times atmospheric pressure behind them, but that by itself wouldn’t be enough: like a puff of smoke, the dust-like spores would quickly slow down, staying in the still air near the ground, and settling back to the ground. So they blow tiny smoke rings:
Could other plants use the same trick? Spores, pollen and the smallest seeds (such as those of orchids) could all potentially be ‘puffed’ like this. But most plants are high enough to catch the wind easily: the authors reckon it’s only about 10cm up in the bogs where peat moss lives. White mulberry, recently featured on QI as the ‘fastest thing in biology’, flings its pollen with a catapult mechanism (here’s the paper for subscribers) that couldn’t generate a ring vortex.
Maybe the best place to look would be fungi, some of which face a similar challenge to the peat moss when launching their spores. There are other ways to approach it, though. Take a look at these Pilobolus fungi, which use a ‘water pistol’ approach to launching spores (this paper is open access). Interestingly, the pressure they use to launch is similar to that in the moss.
Finally, to round off this post of YouTube science, let’s take a closer look at vortex rings (the technical name for smoke rings). They hold together by rolling through the air: the outside rolls backwards while the inside moves forward faster. Here it is with ink in water:
Whitaker, D., & Edwards, J. (2010). Sphagnum moss disperses spores with vortex rings. Science, 329 (5990), 406. DOI: 10.1126/science.1190179
How many species are there here? It’s a beguilingly simple question, and a fundamental one in ecology. A moment’s thought shows that the bigger here is, the more species there will be. So, if we start from a little patch of my lawn, and take successively larger heres until we’ve included the whole world, we can draw a ‘species-area curve’. It generally looks a bit like this:
It’s got three distinct parts: at local scales, the number of species increases sharply as you look at larger areas, then at regional scales it slows down, but at the very largest scales it picks up again. It’s easy to come up with possible explanations in words: at first, the number of species increases as you happen to ‘catch’ more species in your area, then it levels off because you’re mostly finding the same species again, and finally climbs as you encounter ‘exotic’ species that don’t live near your starting point. But can a mathematical model of species come up with the same sort of result?
Enter neutral theory. Laid out in a book by Stephen Hubbell, it models a group of species by ignoring all the differences between them, imagining that every individual has the same chance of dying, the same chance of reproducing, and the same (small) chance of producing a new species. This is, to say the least, controversial, but remember that it’s a model: of course reality’s not like that; what’s interesting is how well such a simple model fits ecological patterns like the species-area curve.
The very simplest version of neutral theory completely disregards where individuals are: when there’s a gap to be filled, any individual has the same chance of filling it. An extra development is the idea of a ‘metacommunity’, in which individuals die and reproduce within one population, but occasionally disperse from one population to another.
That sort of model can’t study the intricacies of species-area curves, though. Both of the studies referenced below used versions of neutral theory that do take account of where each individual is: ‘spatially explicit’ models, in the jargon.
James Rosindell and Stephen Cornell made a computer simulation, in which each individual occupied one square of a grid (it helps to imagine trees in a forest, rather than moving animals). When one dies, its square is most likely to be filled by the offspring of a nearby individual. This led to a species area curve with more or less the right shape, and by running the simulation many times, with different settings, they were able to get a decent fit to real-world data; it turned out that their original model had favoured the nearby individuals a bit too much when filling gaps, and they had to allow slightly more dispersal from farther away.
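To give a flavour of what such a simulation involves, here’s a minimal spatially explicit neutral model of my own devising, with invented parameter values; it’s a toy in the same spirit, not Rosindell and Cornell’s actual model:

```python
# Toy spatially explicit neutral model: each grid cell is one individual,
# a death is filled by a nearest neighbour's offspring, and occasionally a
# death is filled by a brand-new species. Parameters are illustrative only.
import random

random.seed(1)
SIZE = 64        # side of the (toroidal) grid
NU = 0.001       # per-birth chance of speciation

# Start with every cell as its own species and let the dynamics mix things.
grid = [[y * SIZE + x for x in range(SIZE)] for y in range(SIZE)]
next_id = SIZE * SIZE

for _ in range(200_000):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    if random.random() < NU:
        grid[y][x] = next_id                  # a new species appears here
        next_id += 1
    else:
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        grid[y][x] = grid[(y + dy) % SIZE][(x + dx) % SIZE]

# Species-area curve: count species in nested squares from one corner.
for side in (2, 4, 8, 16, 32, 64):
    count = len({grid[y][x] for y in range(side) for x in range(side)})
    print(f"{side:>2} x {side:<2}: {count} species")
```

Averaging over many runs and starting corners would smooth the curve out; the point is just that locality plus speciation is everything the model contains.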
Computer simulations are unwieldy, though. For every change, the model must be run several times over, and some changes will make it slower: Rosindell and Cornell admit that at least one possibility was too “computationally expensive” to test. So James O’Dwyer and Jessica Green set out to make a mathematical model, a set of equations, based on probabilities, to act as a shortcut between the settings and the result.
They, too, start off with a grid, except that they allow more than one individual to share a square. The equations for this I think I can understand. Then they turn it into a different type of equation (a “moment generating function”), and then work out what happens if you make the squares of the grid infinitesimally small. At this point they use some maths from quantum field theory, the proportion of Greek letters goes up, and curly Z and downward-pointing-triangle put in appearances, so I won’t pretend to understand it at all. The result, however, is a curve with three parts, and realistic numbers for things like the speciation rate do give it the right shape.
So what does this tell us? Well, it seems that the pattern we see for the number of species in different sized areas can be explained without considering either biological factors, such as competition and adaptation, or geographical ones, such as the arrangement of landmasses. And it sets the bar for anyone studying the effect of such things: can they explain the pattern better than a neutral model does?
O’Dwyer, J., & Green, J. (2010). Field theory for biogeography: a spatially explicit model for predicting patterns of biodiversity. Ecology Letters, 13 (1), 87–95. DOI: 10.1111/j.1461-0248.2009.01404.x
Rosindell, J., & Cornell, S. (2009). Species–area curves, neutral models, and long-distance dispersal. Ecology, 90 (7), 1743–1750. DOI: 10.1890/08-0661.1