Saturday, April 27, 2013

The Methane Game

A few days ago, I became aware of a potentially game-changing development in fossil carbon energy: the successful prototype extraction of methane from methane hydrate deposits, potentially "an energy source that could free not just Japan but much of the world from the dependence on Middle Eastern oil that has bedeviled politicians since Churchill’s day."

Added to the current developments in "fracking", "a technique used to release petroleum, natural gas (including shale gas, tight gas, and coal seam gas), or other substances for extraction", this spells potential access to huge amounts of fossil carbon for relatively cheap energy for a number of decades.

In a quick comment at Judith Curry's blog, I proposed that this would be "a game-changer" for issues of fossil carbon.

The question is, what plans for the future can be made under these new circumstances, and what enabling technology would be necessary to support them?

Let me start with a few assumptions. Given the behavior of virtually all major polities in the years since the Kyoto protocol was signed, there seems to be little chance of any long-term reduction in the use of energy based on fossil carbon. This is certainly a debated subject, but the efforts by the EU and Australia seem to me to be temporary, and in any event will have little effect without cooperation by the developing world, which doesn't seem forthcoming.

A good thing in my view, since the entire Industrial Revolution, and the widespread improvements in lifestyle and human happiness it entails, have been powered by energy from non-human, and essentially non-biological, sources. Wind and water power carried that load through the Middle Ages; the addition of steam, primarily enabled by the innovations of James Watt, touched off a complete revamping of the lifestyle of common people.

As the exponential increase in fossil power during the 20th century brought the advantages of Western European style life to ever-larger parts of the world, the level of CO2 in the atmosphere also increased, in very roughly equivalent exponential fashion. This has been plausibly attributed to the increased burning of fossil carbon.

There is widespread concern regarding the risks of increasing CO2 levels, especially if they continue to a point well above anything the planet has experienced in millions of years. Access to fossil methane, whether from petroleum deposits reached by "fracking" or now from methane hydrate (a clathrate) mined from the ocean floor, totally changes the game.

Efforts so far have concentrated on raising the price of energy to the point that non-fossil alternatives can compete. However, given the general unworkability of that approach (IMO), a focus on increased research and development of, and perhaps subsidies for, processes that remove CO2 from the air is likely to be the better strategy.

For the short term, reductions in production of CO2 can be achieved by building new power facilities to burn methane rather than coal (or oil). Combined with a host of possible energy-saving initiatives, this could provide a temporary reduction in the amount of CO2 entering the atmosphere, at least relative to what would happen with coal-fired power plants.

But what about the longer term? The buzz-word today is "renewable" energy, with a vague meaning something like "not using up fossil reserves that can't be replaced". Technologies like windmills, solar power, and nuclear fission are all lumped into this category. So are "biofuels"[2], including fuels produced by agriculture or engineered algae (cyanobacteria).

Biomethane

Technically, of course, "bio-" anything means something produced by biological processes. Coal and oil only escape this definition because they have been modified by geological action from the biologically produced material that was laid down hundreds of millions of years ago. When it comes to methane hydrate from the sea floors, some comes from seepage from petroleum sources, but most is essentially unmodified from the form produced during anaerobic decay of sea-floor detritus.

Despite this, I can predict with total confidence that the term "biomethane" will be used from the start to denote methane produced biologically as a "renewable" fuel. And that's what I intend to discuss in the rest of this post.

While some petroleum methane may be produced by geological action on more complex precursors, most is produced through a complex process of decay of organic matter. This process involves an even more complex ecology, with a variety of different eubacteria and archaea working together[3]. Interestingly, though, the actual creation of methane often seems to be performed by a single archaeal species, e.g. Methanobacterium bryantii strain M.O.H. The methane-forming reaction is as follows:

4H2 + CO2 → CH4 + 2H2O with an energy yield of "−131 kJ (per mole of CH4)"[3]

We have to be aware that energy yields are heavily dependent on relative concentrations; however, this reaction is very much downhill, and can draw the hydrogen down to trace levels, low enough that the reactions that produce it are also functionally downhill[3].
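For readers who want to see how strongly the yield depends on concentrations, here is a minimal Python sketch of the standard ΔG = ΔG° + RT·ln(Q) adjustment. The partial pressures used are purely illustrative assumptions, not measurements of any real sediment or reactor:

```python
# Minimal sketch: how the methanogenesis free energy shifts with reactant
# concentrations. The partial pressures below are illustrative assumptions.
import math

R = 8.314e-3     # gas constant, kJ/(mol*K)
T = 298.15       # temperature, K
DG0 = -131.0     # standard free energy, kJ per mole of CH4 (ref [3])

def delta_g(p_h2, p_co2, p_ch4):
    """4 H2 + CO2 -> CH4 + 2 H2O; water activity taken as 1."""
    q = p_ch4 / (p_h2**4 * p_co2)
    return DG0 + R * T * math.log(q)

print(delta_g(1.0, 1.0, 1.0))    # standard conditions: -131 kJ/mol
print(delta_g(1e-4, 0.1, 0.5))   # trace hydrogen: roughly -36 kJ/mol
```

Even with hydrogen drawn down to a partial pressure of 10^-4 atm, the reaction stays comfortably exergonic, which is the point made above.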

Now, how can we produce an artificial version of this process powered by modern sunlight? Many people would probably jump immediately to using photosynthetic organisms, contained in easily manufactured vessels exposed to sunlight, in analogy to the system used by Audi (above). There has certainly been some interest in creating hydrogen directly using photosynthesis[4]. Perhaps, by combining this with a hydrogen-eating methanogen, it would be possible to produce a sunlight-powered methane reactor.

In my view, however, there is a much easier way. There is already a lot of work being done on mechanisms to convert sunlight to electrical energy, especially photovoltaic (PV) and concentrated solar power (CSP). These techniques achieve conversion efficiencies much higher than normal crop plants, probably as high as anything using photosynthesis, although that's certainly open to dispute. However, an important point is that hydrogen can be produced directly from electricity through electrolysis, at very high efficiencies.
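To put very rough numbers on that chain, here is a back-of-envelope Python sketch. The PV and electrolysis efficiencies are illustrative assumptions rather than figures for any particular plant; the heating values are standard ones:

```python
# Rough sketch of the sunlight -> electricity -> hydrogen -> methane chain.
# The efficiency figures are illustrative assumptions, not measured values.
HHV_H2 = 285.8    # kJ/mol, higher heating value of hydrogen
HHV_CH4 = 890.4   # kJ/mol, higher heating value of methane

pv_eff = 0.20             # assumed sunlight-to-electricity efficiency
electrolysis_eff = 0.75   # assumed electricity-to-hydrogen efficiency

# 4 H2 + CO2 -> CH4 + 2 H2O: fraction of the hydrogen's fuel value
# that is retained in the methane
methanation_retention = HHV_CH4 / (4 * HHV_H2)   # about 0.78

overall = pv_eff * electrolysis_eff * methanation_retention
print(f"{overall:.1%} of incident sunlight ends up as methane fuel value")
```

Under those assumptions a bit over 10% of the incident sunlight ends up as methane fuel value, roughly an order of magnitude better than typical crop photosynthesis.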

This is important because oxygenic photosynthesis creates reactive oxygen species (ROS), "chemically reactive molecules containing oxygen." This, in turn, creates a very destructive environment for all the various enzymes and other compounds involved in biosynthesis. These materials have to be replaced frequently, which is wasteful in terms of energy that could otherwise be used for synthesizing methane or whatever other materials are required for civilization. When hydrogen from external sources is fed into the reaction, and the energy needed by the cell is provided by reducing CO2 to methane, neither of the processes that are the main creators of ROS needs to take place.

So, do we use Methanobacterium bryantii to convert hydrogen to methane? Well, perhaps. There is a host of other methanogen species, many of them associated with sea-floor volcanism, which may be able to perform the process much faster[1]. This seems to me to represent an opportunity for forward-looking biotech companies: starting with the variety of natural species, engineer an artificial organism that can be "programmed" to reproduce to the limits of necessary nutrients, then settle down and convert hydrogen to methane at a rapid clip. The lack of ROS would make each non-reproducing cell much more stable, with much lower replacement needs for important molecules. Because the process doesn't require sunlight, it could be carried out in self-contained reaction vessels, with a variety of possible designs.

This approach would leverage the ongoing investment in solar power technology, as well as allowing a long-term strategy based on methane-fired power plants, which would not need to go out of service during a shift to "renewable" energy, but simply use methane from a "renewable" source. It would avoid potential problems with using engineered organisms for oxygenic photosynthesis, which is wasteful in terms of the rapid replacement rate required for many important enzymes. And it would take advantage of over two centuries' experience in creating manufactured technology, with probably much simpler bio-technology needed to create high-speed methanogens.

Space Solar Power (SSP)

For the long run, the most probable source of energy will be the Sun, from power stations in space. While this may be several decades down the road, it's important to recognize that the technology proposed here will fit right in. Electricity at the surface can be dedicated to electrolysis, producing hydrogen. Unlike systems based on photosynthesis, conversion of hydrogen and CO2 to methane can be performed in large, out-of-sight reaction chambers, perhaps floating underwater. If we assume that energy from space is converted to electricity (from microwaves) at a rectenna a few kilometers across, floating in the ocean, we can place the reaction chambers underwater, out of the way but easily accessible for hydrogen from electrolysis. CO2 can be transferred from ocean water, in the form of bicarbonate, or perhaps ocean water from which the oxygen has been removed can be fed right through the reaction. Or perhaps regenerative technologies can be developed that precipitate the carbonate from the ocean water, transfer it into the (topological) environment of the reaction chamber, and re-dissolve it.

While this last option might take some development, we should note that electrolysis can produce substantial pressures of hydrogen, which means the reaction can be downhill even with extremely low CO2 pressures. This differential from the surrounding ocean could likely be used to create a good rate of diffusion or other transfer from the ocean. Key technological developments would include artificial membrane materials with appropriately high diffusivity. And, of course, the necessary bio-technology.

Overall, this seems like a very workable option, with enabling technologies that are all within our current level of technological development. Methane can be easily returned to the ocean floor as solar power finally replaces its use, although even with space solar power it might be more practical to convert the space power to methane and ship it to smaller local power stations for a while, at least in some locales.

The technology doesn't have to wait until space solar power is on-line; it will work fine with CSP or PV power, technology that is already on-line, although not yet self-supporting without subsidies.

And returning methane to the sea floor also doesn't have to wait. There's no reason why a proportion of the methane produced by this method can't be simply returned to the form of methane hydrate and deposited on the sea-floor. Moreover, it's quite possible that as future power technology develops, it will be feasible to capture the CO2 and return it to the ocean floor. The use of bio-technology to extract CO2 from sea water, and combine it with hydrogen from clean power, may end up being the most cost-effective way of capturing the output of carbon-burning power stations.

Finally, this technology isn't limited to creating "renewable" energy. Not only can it capture and "put back" carbon from the burning of fossil fuels while it's running, it can also capture carbon that was already burned. That means that, as this technology comes on-line sufficiently, it can draw down the atmospheric pCO2 to the point it was at the beginning of the Industrial Revolution, if that turns out to be appropriate once we have a better understanding of how changes to atmospheric pCO2 impact the environment.

1.  Hydrogen-limited growth of hyperthermophilic methanogens at deep-sea hydrothermal vents by Helene C. Ver Eecke, David A. Butterfield, Julie A. Huber, Marvin D. Lilley, Eric J. Olson, Kevin K. Roe, Leigh J. Evans, Alexandr Y. Merkel, Holly V. Cantin, and James F. Holden. PNAS August 21, 2012, vol. 109, no. 34, 13674-13679

2.  Microalgae for biodiesel production and other applications: A review by Teresa M. Mata, Antonio A. Martins, Nidia S. Caetano. Renew Sustain Energy Rev (2009), doi:10.1016/j.rser.2009.07.020

3.  Physiology, Ecology, Phylogeny, and Genomics of Microorganisms Capable of Syntrophic Metabolism by Michael J. McInerney, Christopher G. Struchtemeyer, Jessica Sieber, Housna Mouttaki, Alfons J. M. Stams, Bernhard Schink, Lars Rohlin, Robert P. Gunsalus Annals of the New York Academy of Sciences (2008) DOI: 10.1196/annals.1419.005

4.  Energy biotechnology with cyanobacteria by S Andreas Angermayr, Klaas J Hellingwerf, Peter Lindblad, M Joost Teixeira de Mattos Current Opinion in Biotechnology Volume 20, Issue 3, June 2009, Pages 257–263 doi: 10.1016/j.copbio.2009.05.011


Thursday, September 27, 2012

Look What I Found!

Way back in 1998 I was involved in a project called Newstrolls, an early blog created by a few people migrating from the old Hotwired threads as they died away. I participated in many of the threads, as well as writing a few articles.

I long thought that those articles had vanished into the ether, but it turns out a project called The Wayback Machine had archived Newstrolls several times. There is a complete collection of the articles I wrote, from September 1, 1998 to January 7, 1999. After that, disagreements with the management led me to stop writing them, as I was very busy with my work and didn't consider it a rewarding use of my free time.

Here's a listing of the articles, as archived by http://archive.org.

Bricks and Mortar January 7, 1999.

Just Say No! December 12, 1998.

Everything Looks Like a Nail December 4, 1998.

Counterparty Surveillance November 14, 1998.

Fools and Their Money October 13, 1998.

After the Near Miss October 3, 1998.

Cultural Engineering September 25, 1998.

The Potemkin Economy September 17, 1998.

Culture And Chaos September 10, 1998.

Margin Call for Crony Capitalism September 1, 1998.

Of course, it was a long time ago, and some of the subjects were sort of immediate: the Russian default, the Microsoft Anti-Trust Suit, Long-Term Capital Management, Internet development, and the contemporary state of political interference with the economy.

Many of the links are broken, and I've learned a lot about how humans work that I didn't know then. And, of course, they didn't have the sort of references my work from 2009 on did. But I put a lot of work into those articles, and readers who remember those times might enjoy them.

Tuesday, October 25, 2011

The Azolla Alternative

The recent pre-publication of four papers by the Berkeley Earth Surface Temperature team has raised the level of furor in at least some venues over the future (if any) of the IPCC and how to handle the issues of rising CO2.  I've discussed this in another venue, along with my confidence in remediation (CO2 draw-down) as preferable to expensive efforts at mitigation.

I'm going to put a few numbers behind this statement:
With the right approach, IMO, we could start a process today that would probably result in the ability to draw down CO2 within a 5-10 year active time, using (bio-)technology that might mature within 20 years.
Now, let's start by assuming a 100 ppm (parts per million) draw-down, which would reduce our current level of ~400 ppm to 300 ppm, corresponding to a date prior to 1960. Just how much carbon would we have to remove from the atmosphere?

The density of the atmosphere at sea level is about 1 kg/m3, and the pressure is roughly 10 tons/m2 (i.e. about 10,000 kg/m2). Now, the average molecular weight of air is about 29, while the molecular weight of CO2 is about 44. Parts per million are calculated by volume, which corresponds to number of molecules.

So 100 ppm of CO2 would weigh (44/29) × (100/1,000,000) × 10,000 kg/m2 = ~1.5 kg/m2. The carbon would be about 12/44 of that, or about 414 g/m2, which is 414 tons/km2. The Earth's surface area is about 5.1×10^8 km2, which adds up to roughly 211 GTons of carbon to be removed. Given the nuclear connotations of "gigaton", I'm going to call it Petagrams (Pg), which is common in the carbon literature.
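Here is the same arithmetic as a small Python sketch, for anyone who wants to play with the assumptions:

```python
# Back-of-envelope carbon inventory for a 100 ppm CO2 draw-down,
# following the figures in the text.
M_AIR = 29.0           # mean molecular weight of air, g/mol
M_CO2 = 44.0           # molecular weight of CO2, g/mol
M_C = 12.0             # atomic weight of carbon, g/mol
COLUMN_MASS = 10_000   # kg of air above each square metre (~10 tons/m^2)
EARTH_AREA_KM2 = 5.1e8

ppm = 100e-6
co2_kg_per_m2 = (M_CO2 / M_AIR) * ppm * COLUMN_MASS          # ~1.5 kg CO2/m^2
c_tons_per_km2 = co2_kg_per_m2 * (M_C / M_CO2) * 1e6 / 1e3   # kg/m^2 -> tons/km^2
total_pg = c_tons_per_km2 * EARTH_AREA_KM2 / 1e9             # tons -> Pg

print(round(c_tons_per_km2), "tons C per km2;", round(total_pg), "Pg C total")
# prints roughly 414 tons/km2 and ~211 Pg, matching the figures above
```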

The Azolla Event

About 50 million years ago (MYA), there occurred an event of global importance called the Eocene Azolla event. Evidence suggests it began around 49.3 MYA and continued until around 48.1 MYA, thus lasting about 1.2 million years.[1] In this event, it appears that fresh water from Arctic rivers formed a layer over the surface of the heavier salt water, and the entire Arctic (or large parts of it) experienced a massive bloom of a fern called Azolla.

Azolla is an intriguing plant, actually a symbiosis between a secondarily degenerate fern (Azolla sp.) and a blue-green alga (the cyanobacterium Anabaena azollae). It's just a few inches in size, and floats entirely on water, without normally anchoring. It's one of the fastest growing plants known, capable of producing 25-90 g/m2/day of biomass.[2] It normally expands vegetatively, although under appropriate circumstances it will reproduce sexually.[3]

The speed with which the Eocene Azolla grew appears to have been such that it reduced the atmospheric level of CO2 from 3500 ppm to 650 ppm,[4] probably within that relatively brief 1.2 million year stretch.[5] Thus, it makes a great candidate for drawing down CO2.

Let's do some more numbers. The circumstances under which Wagner grew Azolla nilotica were probably not as optimized as could be managed with modern technology, and I'm going to assume that harvesting processes could keep the Azolla growing at its maximum rate continuously. 90 g/m2/day is equivalent to the same number of tons/km2/day; multiplied by 360 days, this gives 32,400 tons of biomass/km2/year. Carbon content for various European strains of Azolla ranges from 37-42%.[6] I'm going to assume 40% (by weight), which comes to 12,960 tons, or roughly 13 kilotons of carbon/km2/year.

Technology

Now comes the technology. I'm going to assume for the moment that within a decade or two we have the technology to float a layer of fresh water on top of salt water over very large areas. Later I'll go into possible methods, but for the moment let's just assume one million square kilometers, less than 1/7th the area of Australia. This works out to 13 Pg (= gigatons)/year. Remember above we said that we have ~212 Pg to remove in order to draw down 100 ppm? Dividing 212 by 13 gives about 16.3 years. Double the area, and we're down to 8.2 years; triple it, and we're down to about 5.5 years. And that's still less than half the area of Australia.
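And the draw-down timetable as a small Python sketch, using the productivity, carbon fraction, and area figures assumed above:

```python
# Sketch of the Azolla draw-down timetable from the text's assumptions:
# 90 g dry mass/m^2/day, 40% carbon, 360 growing days per year.
growth_tons_per_km2_day = 90.0     # 90 g/m^2/day == 90 tons/km^2/day
carbon_fraction = 0.40
days_per_year = 360

c_tons_per_km2_year = growth_tons_per_km2_day * days_per_year * carbon_fraction
# ~12,960 tons C/km^2/year, i.e. roughly 13 kilotons

target_pg = 212.0                  # carbon to remove for a 100 ppm draw-down
for area_km2 in (1e6, 2e6, 3e6):
    pg_per_year = c_tons_per_km2_year * area_km2 / 1e9
    years = target_pg / pg_per_year
    print(f"{area_km2/1e6:.0f} million km2 -> {pg_per_year:.0f} Pg/yr, {years:.1f} years")
# roughly 16, 8, and 5.5 years respectively
```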

I'm not going to go into detail regarding methods. Fresh water is lighter than sea water at the same temperature, which explains how the fresh water managed to stay separate from underlying salt water during the Azolla Event. The difference is only about 2.5%, however, which is pretty small. The lower levels of the Arctic appear to have been anoxic,[7] in the same way the Black Sea is today.

It's remotely possible that simply floating a layer of fresh water on top of salt water might work, but I'm going to assume not. Given this, the easiest way I see to handle it is with an intermediate layer of some viscoelastic material, with a density intermediate between fresh and sea water. More material intensive, but perhaps cheaper, might be a sort of "air mattress" with a lot of internal tensile stiffening. Another option would be to maintain a layer of pressurized air topped with a stiff but slightly flexible layer, with water above it.

Obviously, all these options would require a good deal of engineering and development. However, consider the difference between 1991 and today. No cell phones (except huge experimental clunkers that only worked in a few areas). The Internet was just getting set up, and mostly just existed in educational environments. When we look at the difference just 20 years has made, there's no good reason to suppose we couldn't do this.

The total amount of the world's good quality agricultural land is around 16.5 million km2.[8] I've discussed using 3 million km2. The total amount of lower quality agricultural land in the world is around 43.7 million km2. This has been described in the following terms:
If there is a choice, these soils must not be used for grain crop production, particularly soils belonging to Class IV. All three Classes require important inputs of conservation management. In fact, no grain crop production must be contemplated in the absence of a good conservation plan. Lack of plant nutrients is a major constraint and so a good fertilizer use plan must be adopted. Soil degradation must be continuously monitored. Productivity is not high and so low input farmers must receive considerable support to manage these soils or be discouraged from using them. Land can be set aside for national parks or as biodiversity zones. In the semi-arid areas, they can be managed for range. Risk for sustainable grain crop production is 40-60%.[9]
As our population expands, methods to manage, control, and maintain these lands will become increasingly expensive relative to more basic types of agriculture. At the same time, technology to manage activities on the water will be coming down in cost. Very likely the two cost curves will cross at some point, after which it will be cheaper to build new prime agricultural land floating on the ocean than to continue using poorly suited terrestrial land. Long before this happens, simple technologies like those necessary for the Azolla Alternative will have become cost effective.

I'm going to leave economic and political issues for another post.

Refs:

Bocchi, S., Malgioglio, A. (2010) Azolla-Anabaena as a Biofertilizer for Rice Paddy Fields in the Po Valley, a Temperate Rice Area in Northern Italy International Journal of Agronomy Volume 2010 (2010), Article ID 152158, 5 pages doi:10.1155/2010/152158

Eswaran, H., Beinroth, F., Reich, P. (1999) Global land resources and population supporting capacity. Am. J. Alternative Agric. 14:129-136

Pearson, P.N., Palmer, M.R. (2000) Atmospheric carbon dioxide concentrations over the past 60 million years Nature 406 (6797): 695–699. doi:10.1038/35021000. PMID 10963587

Speelman, E., Damsté, J.S., März, C., Brumsack, H., Reichart, G. (2010) Arctic Ocean circulation during the anoxic Eocene Azolla event Geophysical Research Abstracts Vol. 12, EGU2010-13875, 2010

Speelman, E.N., Van Kempen, M.M., Barke, J., Brinkhuis, H., Reichart, G.J., Smolders, A.J., Roelofs, J.G., Sangiorgi, F., de Leeuw, J.W., Lotter, A.F., Sinninghe Damsté, J.S. (2009) The Eocene Arctic Azolla bloom: environmental conditions, productivity and carbon drawdown Geobiology (2009),
7, 155–170 DOI: 10.1111/j.1472-4669.2009.00195.x

Wagner, G.M. (1997) Azolla: A Review of Its Biology and Utilization The Botanical Review 63(I): 1-26, January-March 1997

Zahran, H.H., Abo–Ellil, A.H., Al Sherif, E.A. (2007) Propagation, taxonomy and ecophysiological characteristics of the Azolla-Anabaena symbiosis in freshwater habitats of Beni-Suef Governorate (Egypt) Egyptian Journal of Biology, 2007, Vol. 9, pp 1-12

Tuesday, September 13, 2011

Laccognathus embryi

I have to share this, although I suspect everybody already has read about it, or soon will. Here's the headline and lead paragraph I read (from Google News):

375M-Year-Old Predatory Fish Prowled North America Before Backboned Animals
A new species of large predatory beast of a fish, packing a powerful bite, was already on the prowl in ancient North American waterways before backboned animals existed, researchers say.


After some searching, I found this:

Predatory Fish Once Prowled Ancient Canadian Arctic

A species of fish previously thought to have only existed in Eastern Europe once prowled ancient North American waterways during the Devonian Period, before backboned animals existed on land. [my emphasis]


I just can't think of a comment fit to publish even on a blog. *Sigh!*

Tuesday, August 30, 2011

3 Chimpanzee Movies

From a couple of articles by Kimberley J. Hockings et al.: Chimpanzees Share Forbidden Fruit, and Road crossing in chimpanzees: A risky business.

Crossing a Road (movie): Note how the first male out stands watch. AFAIK the 2nd and 3rd are young males: apprentices.

Crop Raiding (movie)

Sharing raided food (movie)

I don't have time to discuss the subject at the moment, but I'm guessing these will be interesting. Compare this "hunter’s anecdotal report" quoted from Guillot, 1956, in Dr. Hockings' doctoral thesis Human-chimpanzee coexistence at Bossou, the Republic of Guinea: a chimpanzee perspective:
"I remember one strange encounter I had in the jungle. A troop of chimpanzees was crossing the jungle path ahead of me, an old male, the leader, stood glaring at us from a distance of a few paces. At intervals he intensified his gruntings to hurry up the rest of the troop, cursing the stragglers. The last chimpanzee to cross was a terrified female. Suddenly the big male gave a bound towards her, seized her and shook her and grunted at her something we could not interpret. Whatever it was, it forced her to turn back into the bush. She reappeared a moment later, and now, clinging to her back with both hands and feet, was a grimacing little baby chimpanzee, which in her terror she had abandoned. Then she leaped into the air with her baby in her arms and disappeared among the foliage of the trees. All was now in order, and the old male gave a couple of triumphant grunts, made a gesture as much to say that the path was free for me, and disappeared into the jungle, the last of his troop.”
She calls it "largely anthropomorphic and likely to be somewhat embellished, but serves to highlight the protective nature of the adult male chimpanzee during this high-risk encounter." I wonder how embellished it actually was.

Tuesday, June 7, 2011

Neurology and the Soul

I've just found time to read John Wilkins's Is the soul something we should be agnostic about?, as well as two posts he links to: Sean M. Carroll's Physics and the Immortality of the Soul and PZ Myers' Ain't no heaven, ain't no afterlife of any kind, either, say the physicists.

Are you folks kidding me? Or has physics actually discovered and verified an underlying source of determinism while my back was turned? Or is everybody missing at least one part of the big picture? (Or am I imagining things?)

The underlying assumption in all these arguments is that there's no way for something going on in "spirit space" to interact with the real world. Now, I don't claim to be the physicist Sean M. Carroll is; in fact, my understanding is amateur and predates decoherence. But my understanding is that, in practical terms, quantum indeterminacy still reigns, at least with regard to even theoretically predicting the outcome of local wave function collapse (or, if you wish, "decoherence").

Consider the situation where an action potential arrives at a synapse, and releases a certain amount of neurotransmitter. The number of molecules of neurotransmitter varies within a small range due to "indeterminacy", and the number of receptors for that neurotransmitter that are actually active will also vary, depending on many factors within the cell, many of them also slightly "indeterminate". Thus the actual size and shape of the current resulting from that action potential can vary within small limits. (In fact, even with a fixed number of molecules of neurotransmitter and receptors, there will be some variation in current due to indeterminacy of position of each neurotransmitter molecule while diffusing across the synaptic gap.)

Now, let's suppose that that one action potential is just on the border of causing the receiving neuron to fire an action potential. That is, given the current (heh) condition of the nearby dendritic arbor, the amount of current necessary to cause an action potential to fire is right in the middle of the potential variation (in current) due to indeterminacy.

Does the neuron actually fire? Or does it end up in a state of superposed states of firing and not firing? Well, I think we can state that it fires, that is that decoherence has taken place. Do we actually know the source of all the information involved in the decoherence?

We don't, of course. People who state that decoherence has proven that everything happening at the quantum level is completely deterministic are simply projecting their own prejudice (i.e. religious convictions) on what is still a highly controversial field. There's plenty of room in those little wave function collapses for huge amounts of information to flow into our universe.

We certainly don't know how many of the neurons in our brains actually balance on the head of this pin. For that matter, the calculations that go on in the dendrites to integrate the information from the current flows in the synapses also depend on distributed molecules of receptors, most of which open and close "randomly" depending on quantum processes that contain "indeterminacy".

So, is it possible for:
some sort of blob of spirit energy that takes up residence near our brain, and drives around our body like a soccer mom driving an SUV?

as Sean M. Carroll mocks and PZ Myers quotes? Well, conceivably, if we assume these "spirit" people are using the word "energy" metaphorically. (Which they probably are since they don't understand physics or thermodynamics well enough to use it in a technical sense.)

Of course the blobs of "spirit energy", actually some sort of informational phenomenon, would have to have some way of predicting the outcomes of all their interventions in these decoherences. Perhaps time and information work differently in "spirit space". Perhaps they can "see" the potential outcomes of different combinations of interventions directly, rather than having to compute it with incredibly powerful modeling. In the same way, perhaps, that a man riding a balloon can see the road ahead without having to rely on asking passing strangers about it.

Of course, this is all very interesting, and would make a great "magic system" for a fantasy novel, but is there any evidence, no matter how tenuous, that such a thing might be so?

Actually yes. Compared to other anthropoid species, humans have a third or so higher ratio of glial cells to nerve cells in at least one area of the dorsolateral prefrontal cortex (area 9L):
Based on the nonhuman species mean LS regression, humans displayed a 46% greater density of glial cells per neuron than expected.

[...]

From this prediction, glial density in humans fell within the 95% PIs (observed log glial density = 5.19; predicted = 5.02; upper PI = 5.40, lower PI = 4.63) and represented 32% more glia than expected.[1]
Perhaps the human brain has evolved, over the last few million years, to be "ridden" by a "blob of spirit energy", and supporting the receipt of information from the blob is what requires the extra glial activity.

Of course, the actual increase isn't all that great, and:
The human frontal cortex displays a higher ratio of glia to neurons than in other anthropoid primates. However, this relative increase in glia conforms to allometric scaling expectations, when taking into consideration the dramatic enlargement of the human brain. We suggest that relatively greater numbers of glia in the human neocortex relate to the energetic costs of maintaining larger dendritic arbors and long-range projecting axons in the context of a large brain.[1]
So this "evidence" is highly tenuous. But that's very different from saying it would require a new formulation of natural law.

So when PZ Myers says:
The biologists' perspective, which is a little less fundamental, is simply that there is no identifiable 'receiver' localized in the brain (no, not even the pineal gland, as Descartes believed), distributed physiological activity is associated with thought, and injury, disease, and pharmacology can all profoundly influence the mind. Furthermore, the way the brain works involves trans-membrane ion fluxes and synaptic activity — it's all electrochemistry and biochemistry. In addition to that new physics, we'd need a new chemistry to explain how spirit interacts with neurotransmitters or gene expression or protein phosphorylation.
Well, we don't need "new" physics (although we would need to add some stuff to the physics we have) and we don't need new chemistry. The receiver is distributed, just like the physiological activity.

Despite what atheists would like to believe, there are still big holes in our scientific understanding of the world; big enough to drive the biggest spirit through.

For the moment, I'd recommend agnosticism.


Notes:

1. Chet C. Sherwood, Cheryl D. Stimpson, Mary Ann Raghanti, Derek E. Wildman, Monica Uddin, Lawrence I. Grossman, Morris Goodman, John C. Redmond, Christopher J. Bonar, Joseph M. Erwin, and Patrick R. Hof Evolution of increased glia–neuron ratios in the human frontal cortex PNAS September 12, 2006 vol. 103 no. 37 13606-13611 doi: 10.1073/pnas.0605843103

Sunday, June 5, 2011

Evolutionary Theory for Creationists

I was recently informed by a creationist that "evolution is a lie!" I went to the trouble of thinking through and writing down my response, so I thought I'd share it with my readers. I created it as a .PDF so clean copies can be printed for creationists who "don't get" the internet. If you want to print it, or save it as a .PDF, click on the word "File" under "Google docs" over at the top left, and select "Print(PDF)".

Evolutionary Theory for Creationists

Can you tell it was written by an agnostic?

This document is entirely original with me, except for the "old saying", which I heard somewhere: I don't remember where, I don't know who said it first, and it's something of a paraphrase anyway. With this post I'm putting this document in the public domain as a public service: feel free to copy, modify, and use the result as you please, for profit or not. (Of course, if you claim credit, you'll be "guilty" of plagiarism, but not (AFAIK) theft.) Credit would be nice, but I don't insist on it.

AK

Wednesday, February 2, 2011

Thundersnow



Yesterday's occurrence of thundersnow in Chicago had me looking for explanations. Wiki gave a lightweight summary, with just enough technical jargon to make it hard for a typical reader. Subsequent searching led me to a bunch of good peer-reviewed data on the electrification of thunderstorms, but little of use in understanding thundersnow.[3] [4] [5] [6]

I finally found a very recent survey by David M. Schultz and R. James Vavrek,[1] which, while somewhat technical, gave me the insight I was looking for.

Summarizing everything: Thundersnow occurs when the conditions for thunderstorm-type convection are present at the same time as those for general snow (in large amounts). This includes the presence of humid air above the freezing point while general temperatures, especially at the ground, are below freezing. A high lapse rate is also necessary, in order to drive the rapid updraft that creates hail and/or graupel; substantial electrification requires this.[3]

We don't yet know for sure how this electrification is caused,[1] [2] [3] [4] [5] [6] but the best guess involves collisions between growing graupel/hail particles and small ice particles.[1] [7]

Some Personal Observation:

My difficulty finding explanations is explained: we don't even know precisely what mechanisms lead to lightning even in thunderstorms, much less thundersnow (which has been much, much less studied). I had always assumed (and you know what that does) that electrification resulted from friction of ice particles with dry air; somehow I had never previously noticed the absence of this mechanism from those considered. Since both simulations and direct experimental measurements of this process would have been easy even in the 19th century, we can presumably rule this mechanism out.

None of the articles intended for general consumption (that I read) explicitly mentioned that we don't know the mechanism for electrification, which would have saved me considerable time trying to find it. (Although the descriptions of the theories and research were well worth the reading.) This points up a general defect in science reporting: the fact that the public is being kept pretty much in the dark regarding how much isn't really known for sure in science.

Schultz, D., & Vavrek, R. (2009). An overview of thundersnow Weather, 64 (10), 274-277 DOI: 10.1002/wea.376

Links:

1  An overview of thundersnow


2  A Climatology of Thundersnow Events over the Contiguous United States Open Access


3  Thunderstorm Electrification (Semi) Open Access


4  The 29 June 2000 Supercell Observed during STEPS. Part II: Lightning and Charge Structure Open Access


5  Relationships between Convective Storm Kinematics, Precipitation, and Lightning Open Access


6  The Electrical Structure of Thunderstorms (Semi) Open Access


7  The Ice Crystal–Graupel Collision Charging Mechanism of Thunderstorm Electrification Open Access

Sunday, May 30, 2010

The Final (so far) Step in Language Evolution

A recently published paper[1] has impelled me to discuss a favorite theory of mine, involving the actual stages in which our species' language skills evolved.  Or rather, the final stage.  The paper itself, Dissociating neural subsystems for grammar by contrasting word order and inflection (by Aaron J. Newman, Ted Supalla, Peter Hauser, Elissa L. Newport, and Daphne Bavelier), reports the investigation of differential brain region usage in interpreting two different types of language:  inflected and positional.

In inflected language, a word's relationship to the rest of the sentence is determined (communicated) through case markers such as inflections (I'm including agglutinating languages in this class).  For instance, the subject of a sentence or clause is identified by different inflection (such as case endings) from the object of a verb or preposition.  By contrast, in a positional language (or construction within a language) this information is communicated by its position in the sentence, clause, or phrase.

What Newman et al. have shown is that different regions of the brain are activated when a hearer (seer, actually, in this case since they studied American Sign Language) encounters phrases or clauses determined by inflection vs. word order:

To summarize, we exploited a property of American Sign Language, unique among languages thus far studied with neuroimaging, to directly compare the neural systems involved in sentence processing when grammatical information was conveyed through word order as opposed to inflectional morphology.  Critically, this comparison was made within subjects, while tightly controlling syntactic complexity and semantic content.  Reliance on word order (serial position) cues for resolving grammatical dependencies activated a network of areas related to serial working memory.  In contrast, the presence of inflectional morphology increased activation in a broadly distributed bilateral network featuring the inferior frontal gyri, the anterior lateral temporal lobes, and the basal ganglia, which have been implicated in building and analyzing grammatical structure.  These dissociations are in accord with models of language organization in the brain that attribute specific grammatical functions to distinct neural subregions, but are most consistent with those models that attribute these mapping specificities to the particular cognitive resources required to process various types of linguistic cues.[1]
The Proposed Theory

Now my theory is that the final stage in the evolution of our language skills involves these inflected (and agglutinating) constructions.  Several facts point to this sequence.

The simplest form of language is the pidgin, a "simplified language that develops as a means of communication between two or more groups that do not have a language in common".  In its simplest form a pidgin might consist of 2-3 word sentences containing a noun and a verb, with perhaps another noun representing the object of the verb.

A common theory is that many such pidgins develop naturally into creoles, which are usually typified by

  • a lack of inflectional morphology (other than at most two or three inflectional affixes),
  • a lack of tone on monosyllabic words, and
  • a lack of semantically opaque word formation.
(Some exceptions to this standard have been noted.)  This process might take as little as one generation, leading to the widely accepted hypothesis that children come with "hard-wired" expectations of certain features in a language, and, in essence, they look for them hard enough to find them in their parents' pidgin even though they weren't really there.[5] [A1]

What I would suggest is that entirely positional languages of this type represented the previous stage of language evolution, with the development of case marking, agreement, and flexible word order as the final evolutionary step.

For any proposed evolutionary step, a selective value must be identified.  In this case, I would propose that, very simply, flexible word order enables substantially improved epic poetry,[2] [3] which in turn enables multi-generational transmission of essential myths, which in their turn can encode selectively valuable historical information.[4] [7]

I'm going to defer discussion of the adaptive value of myths, and even epic, for a moment while we take a look at how languages evolve in the presence of writing.  Although the earlier forms of a language or protolanguage can be inferred from a study of its current structure, most of our information regarding linguistic development comes from written records of some sort, which means that the culture involved had some contact with writing.

Now if writing can carry information across long time periods, it can to some extent replace epic poetry and myth, reducing the need for full flexibility of word order.  This being the case, we should expect the languages of cultures using writing (or in close contact with other cultures that use writing) to show a trend of replacing fully flexible word order constructions with more word-order dependent constructions.  And so it is, especially in that most-studied of language groups, Indo-European ("IE"): 

It has long been noticed that there are trends in language change, such that certain types of development occur often and in unrelated languages.  For instance, English is one of many languages that have formed future markers from a verb of motion.  The development of Indo-European adverbial particles to adpositions, apparently independently in its daughter languages, results from reanalysis of underlying structures and is a very early development of configurational syntax in the language family.[2]
Thus, we can see a general trend from case-marked (inflected) constructions to systems dependent on word order.  In English this process has been carried almost to completion.  We must note here that the only languages we know about in this group are those of cultures that had, or were in close contact with somebody else who had, writing.  I would predict that any IE languages that remained completely isolated from writing likely retained the fully flexible constructions of the parent language, although until we find them in contact with somebody capable of writing down reasonably large amounts of them we wouldn't know about them, and at that time we would find them already evolving towards more positional constructions.

Since the work of Lord and Parry, it's been recognized (with some debate) that the epic in its original entirely oral form was composed "on the fly" for a specific performance.[A2]  The actual basic story might stay the same, but each performance was a unique, one-time, composition made up of "formulas", "a formula being 'an expression which is regularly used, under the same metrical conditions, to express a particular essential idea'[3]". 

The language of epic was thus not an actual spoken dialect but a conventionalised form that developed in a manner typical of orally transmitted poetry and later became a prestigious literary variety.[2]
We can see, here, that the complete flexibility of word order was a tremendous help in creating formulas to fit any necessary metrical/rhyme pattern.  By the time the Arcado-Cypriot dialect of Proto-Greek was being written down, the loss of this flexibility was already under way, although the epic traditions in old Ionia (what later came to be called Achaea) may have retained the older form. 

The older IE [Indo-European] system used case to indicate spatial relations, and any accompanying P-word [prepositional and preverbal particles] added meaning without taking over the case function; but both Mycenaean and classical Greek increasingly used prepositional phrases with the preposition governing the noun phrase and determining its case, even though (from a diachronic perspective) it had been the case functions which originally determined what cases a preposition would govern.  In Homer both systems (independent case and configurational syntax) are still present, and [...] a [...] strictly synchronic view of the data is possible if the variation is viewed as the normal outcome of grammaticalisation processes, which typically generate changes which may coexist with the original constructions.  Greek speakers must have been able to recognise and produce both free and syntactically restricted uses of P-words for some time while the reanalysis of the constructions was taking place, just as English speakers can use ‘going to’ in two different syntactic constructions.[2]
Now, it's important to realize that this shift is exactly the one studied by Newman et al.  It involves the switch from a system that "increased activation in a broadly distributed bilateral network featuring the inferior frontal gyri, the anterior lateral temporal lobes, and the basal ganglia," to one that "activated a network of areas related to serial working memory."  (Since word order is an essential part of prepositional (and postpositional) phrases.) 

I would argue that as the presence of writing reduced the dependence on epic for essential long-term cultural memory, languages relaxed into the easier, because older and more thoroughly evolved, positional constructions.  This relaxation, added to the fact that pidgins and (most, if not all) creoles tend to be positional, point to it being the older, original mode.  Indeed, the fact that a well developed temporally serial working memory would have been essential for complex movement in the arboreal environment means that all the building blocks would have been there for many tens of millions of years; the evolution of language simply had to tie to these pre-existing (and mostly pre-adapted) features. 

By contrast, the brain structures and circuits needed for inflected language constructions may well have been brand new, or at least substantially adapted from some pre-existing system for handling complex transformations. 

At this point we need to consider what selective advantage the capacity for epic poetry, and the myth it transmitted, offered our evolving ancestors. 

The Adaptive Value of Myth

Any modern study of myth, IMO, should start with When They Severed Earth from Sky:  How the Human Mind Shapes Myth by Elizabeth Wayland Barber & Paul T. Barber.  Quoting from the first chapter:

[I]f people were so smart--just like us--100,000 years ago, why do the myths they passed down often seem so preposterous to us? And not just to us.  Even ancients like the Greek poet Pindar, who made his living telling such stories ca. 500 B.C., sometimes felt constrained to a disclaimer: "Don't blame me for this tale!" The narrators present these myths as "histories".  Yet how can we seriously believe that Perseus turned people to stone by showing them the snaky-locked head of a monster, or that a man named Herakles (or Hercules) held up the sky for a while, slew a nine-headed water monster, moved rivers around, and carried a three-headed dog up from the land of the dead? Or that a man named Methuselah lived for almost a millennium? That an eagle pecked for years at the liver of a god tied to a mountain, or that mortal men--Beowulf, St. George, Siegfried, and Perseus included--actually fought dragons? And how can one view people like the Greeks or the Egyptians, who each believed simultaneously in three or four sun gods, as having intelligence? Didn't they notice a contradiction there? Why did people in so many cultures spend so much time and attention on these collections of quaint stories that we know of as "myths"?

The problem lies not in differing intelligence but in differing resources for the storage and transmission of data.  Quite simply, before writing, myths had to serve as transmission systems for information deemed important; but because we--now that we have writing--have forgotten how nonliterate people stored and transmitted information and why it was done that way, we have lost track of how to decode the information often densely compressed into these stories, and they appear to us as mostly gibberish. And so we often dismiss them as silly or try to reinterpret them with psychobabble.  As folklorist Adrienne Mayor points out, classicists in particular "tend to read myth as fictional literature, not as natural history" [Mayor 2000b, 192]--not least because humanists typically don't study sciences like geology, palaeontology, and astronomy, and so don't recognize the data.  [my emphasis]
One of the most striking examples they give of long-term preservation of information involves the collapse of Mount Mazama to create Crater Lake in Oregon.  To briefly summarize: a major underground spirit fell in love with a local girl and, when his demand for marriage was rejected, had a temper tantrum (and battle with the great spirit in the sky) that blew the top off the mountain, resulting in the present caldera lake.

Geological analysis confirms that there was once a mountain on that spot, and that it erupted violently, spewing around 50 km3 of magma, ash, and lava-bombs until the emptying of its magma chamber caused the caldera walls to collapse inward, forming a pit some 4000 feet deep that later filled with water ([refs]), just as the myth says.  Since the eruption happened almost 7700 years ago ([ref]), this myth must have been carried down for nearly eight millennia.

Our own (typical) assumption, as we read something like the Klamath myth, is that since we do not agree with the Klamath explanation for this fiery occurrence, there is nothing worth looking at scientifically in the story.  But one of our problems as modern observers of myth (or even observers of events such as car accidents) is that people tend to present their observations and their assumed explanations all tangled up together.  On the other hand, if we strip away the explanations proffered but keep and investigate the observations, we can see that the observations in myths are fairly accurate (as far as they go), and at the very least they alert us to something of geological interest that happened in a particular place.  Furthermore, if we take for the moment the Kalamath step of assuming that the Curse of Fire was caused by a wilful being (more of this below), then we can see that the quite logical strategy is to placate that being—with a gift, bribe, or sacrifice—which is exactly what they did in their attempt to prevent or delay future destructive eruptions.  That is, the myth unrolls logically from its own premises—it is not haphazard.  In fact, there are many myths concerning geological events in the Pacific Northwest ([ref]), where until the nineteenth century the population remained stable, that is unreplaced by cultures that had not witnessed the events and therefore did not know what was referred to.[7]
Note that these mythic memories don't need to be geological: 

An example that can serve to illustrate the historical reality lying behind the mythological narration is provided by the famous combat between Heracles and the Hydra of Lerna.  The analysis of this famous story deserves some attention because it can provide useful insight regarding the origin and factual basis of a myth, as well as other mechanisms of myth-making ([ref]).

The slaying of the Hydra has been one of the myths most widely considered, since antiquity, to rest on natural processes.  The always regenerating many heads of the Hydra have been interpreted as a symbol of the many water-sources feeding the large swamps near Lerna, and the struggle between Hercules and the monster therefore an image of the draining effort.  After finally chopping her main ‘head’, said to be immortal, the hero buried it forever, putting a huge and heavy rock on it.  Kirk (1974), following an interpretation first proposed by Palaephatus, maintains instead that this myth more likely records ancient political events.  In a manner similar to the killing of the Minotaur in the Palace of Knossos, the killing of the Hydra at Lerna, as well as the related myth about the killing of the Nemean lion (the first two labours of Heracles, the Mycenaean hero), seems to contain memories of ancient political events in addition to references about fertility rites.

Strong connections are known to have existed between Lerna until the Early Bronze Age (Lerna III), and the Cretan civilization.  The end of Lerna III was in part evidently due to the invasion of the Indo-European Greeks in c. 2200 BC.  These patriarchal Indo-European-speaking invaders, from whom later the Mycenaeans would originate, marked the end of the Early Bronze Age in many areas of the East Mediterranean.  According to typical Minoan settlement patterns, the political and religious centre and the ‘head’ of the local community, would have been the Palace of Lerna (‘House of Tiles’).  The destruction of the Lernean Palace (2300–2200 BC) is marked by the peculiar singularity, seemingly unique in the whole of Greece, that the Palace was buried by the conquerors under an enormous funerary tumulus ([ref]), considered nevertheless an enigma by archaeologists because it contains no tombs.

This unusual tumulus, deliberately positioned above the ‘head’ of the defeated society, strictly corresponds to the huge mythological rock placed by Heracles above the head of the beast ([ref]).  As such, the facts described by tradition largely coincide with what can be observed on the site.  Even the position of the buried Palace, corresponds to the location of the head of the Hydra, buried in the myth on the side of the road to Elaeus.  The mythological account can therefore be regarded as quasi-historical, recalling an Early Bronze Age phase of the Mycenean conquest of the Greek mainland against the Lernean Minoan related settlement.  The seeming truth behind the myth, and the relevance of the tumulus itself, apparently was already forgotten by the end of the Middle Helladic period (c. eighteenth–seventeenth century BC), as indicated by the fact that the tumulus was then reoccupied by the village after being left untouched for nearly 500 years.  We can thus consider this date as the moment when the local historical memory transmitted by oral tradition became a new myth as transmitted by Hesiod, Ovid, Apollodorus and other ancient writers, because the politico-religious factual story lying behind the myth had been forgotten.[7]
This long-term cultural memory isn't limited to historic events, either.  For example, in the Iliad, the process of sacrifice and sacrificial meal is twice described in almost identical language (1.458-469 and 2.421-432).  The entire description of the process may have been a single formula. 

While we today may see little value in the long-term retention of sacrificial processes, the same technique could have been used for the hunting of animals that are temporarily unavailable, the killing of "monsters" (rare large predators that are making nuisances of themselves), even the manufacture of weapons and other tools requiring materials temporarily unavailable.  Even if the precise process has been distorted in the centuries since its last use, it would have given a creative inventor a major "leg up" in recreating it.

Overall, I think we can regard the long-term cultural memory process of myths as offering a major benefit to the population that possesses it.

Epic as a Carrier of Myth

The final link in the chain is either the easiest or the hardest:  epic poetry as the primary and most effective carrier of myth.  It's generally accepted that poetical systems such as meter, alliteration, assonance, and rhyme, aid in memorizing both complete poems and the poetic formulas used in longer epics.[3]  There is also evidence that these techniques add to the acceptance of the message by the audience.[9]  The use of the formulas in epic, in turn, eases the "on-the-fly" composition during performance.  If the story-teller wants to say "next morning", but instead recites "when rosy-fingered dawn the early one appeared", there's much more time for composing the next portion of the performance.  The latter formula would fit metric patterns the former would not.  Because it's a formula, it doesn't have to actually be composed on the fly, rather the story-teller pulls it from his (or perhaps her) mental toolbox based on its "fit" within the metrical pattern and recites it "on autopilot" while thinking about the following formulas.

We have, then, the following logical sequence: 

  • Poetic tools (meter, etc.) aid in memorization and audience acceptance,
  • composing epic "on the fly" requires a large toolbox of "formulas" with tight requirements to fit into metrical (and other) slots,
  • fully flexible word order allows a much larger collection of "formulas",
  • the use of inflection allows flexible word order,
  • therefore the capacity to use and understand inflected constructions provides the most effective transmission of any story (mythic or otherwise).
Of course, this sequence is hardly "proven", and testing it probably involves some serious challenges given the widespread influence of written material on just about everybody.

Nevertheless, it provides a coherent model for the development of inflection and the associated tools of flexible word order as an evolutionary adaptation, allowing us to perhaps glimpse the evolutionary precursor to our modern language skills.

Appendices:

A1  "... widely accepted hypothesis that children come with "hard-wired" expectations of certain features in a language":  A brief Google search or perusal of the Wiki articles will find many mentions of opposition to this idea.  I'm not going to get into this much, IMO this opposition represents the last-ditch defense of the discredited[6] "blank slate"[8] concept of human cognition, and thus falls into the same class with the last-ditch opposition to plate tectonics in the '70's, religious opposition to evolution, and the more specious of attempts to "falsify" the "Greenhouse effect".

(Yes, I know some people will feel their sacred oxen have been gored here, but science is about finding the facts, and IMO twisting the facts to support an outdated Marxist ideology of how human cognition works is just as bad as twisting them to support Biblical literalism.)

A2  "... the epic in its original entirely oral form was composed "on the fly" for a specific performance":  I'm referring here to conditions before any contact with writing had taken place.  Even writing in another culture in close contact with the one under discussion.  Obviously, we can only project what this would have been like, since writing of some sort was necessary to preserve the literature itself, at least until Milman Parry began his studies recording oral performances,[3]  and of course writing was already present in the culture he studied.  The introduction of writing not only (potentially) replaces some of the essential purpose (value) of epic and other poetry, it permits the permanent preservation of a specific poetic composition, making it available for rote memorization.

It seems likely to me that the appearance of the written Iliad, as well as any other recorded (in writing) performances contemporaneous with it, would have seriously disrupted the entire formulaic tradition, potentially distorting our model of what the original, truly pre-literate, epic tradition was like.  Barry Powell has suggested that the Greek alphabet was specifically created to record the work of Homer,[10] especially the Iliad, and while that may be plausible, I find it more likely that the alphabet was created to record the more typical shorter performances found in the Little Iliad and the other poems of the "Epic Cycle".

These performances would have lasted about 2 hours at full speed; if they were dictated at 1/3 speed an entire dictation session would have required about 6 hours, a good day's work for an aoidos and his scribe.  Based on Parry's research, only the more skilled of aoidoi would have been able to do this.  Alternatively, the inventor of the Greek alphabet might have developed some sort of shorthand allowing him to write at full performance speed.

In the beginning, I suspect that recitals of these dictated poems would have been much less popular than true oral performance:  the most likely scenario is a ship captain or other traveler who carried his written texts with him and performed them in locales where (and when) no true aoidos (or rhapsode) was available.  Or perhaps he was carrying stories popular in the eastern Aegean Sea (i.e. Chios) to the Ionian settlements in the West.

Once the Greek alphabet was created, and in use (if perhaps only by one man), it seems entirely plausible that a particularly skilled aoidos ("Homer") undertook to create a special masterpiece good for a 20-hour performance.  Thus, the Iliad.  If the same aoidos also dictated the Odyssey, it seems likely that it was many years later.
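
For what it's worth, here is the back-of-envelope arithmetic, using only the figures above; the extension to the Iliad is my own extrapolation, not something the sources state:

```latex
% A typical Epic Cycle poem (figures from the text above):
%   about 2 hours at performance speed, dictated at roughly 1/3 speed.
\[
2\ \text{h} \times 3 \approx 6\ \text{h} \quad \text{(one long day's work for aoidos and scribe)}
\]
% Applying the same ratio to a 20-hour Iliad (my extrapolation):
\[
20\ \text{h} \times 3 \approx 60\ \text{h} \approx 10\ \text{such day-long sessions}
\]
```

Which, if roughly right, underlines why the shorter poems look like the more plausible first candidates for dictation.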

We know even less about the process by which the recording of Mesopotamian and Babylonian mythologies interacted with the presence of writing.


Newman, A., Supalla, T., Hauser, P., Newport, E., & Bavelier, D. (2010). Dissociating neural subsystems for grammar by contrasting word order and inflection. Proceedings of the National Academy of Sciences, 107(16), 7539–7544. DOI: 10.1073/pnas.1003174107

Links:

1.  Dissociating neural subsystems for grammar by contrasting word order and inflection

2.  Prepositions and preverbs in Hellenistic Greek

3.  The Singer of Tales by Albert Lord

4.  When They Severed Earth from Sky:  How the Human Mind Shapes Myth by Elizabeth Wayland Barber & Paul T. Barber

5.  The language instinct: how the mind creates language (page 21) by Steven Pinker

6.  The fateful hoaxing of Margaret Mead: a historical analysis of her Samoan Research by Derek Freeman

7.  Exploring the nature of myth and its role in science

8.  The Blank Slate by Steven Pinker

9.  Birds of a feather flock conjointly (?): Rhyme as reason in aphorisms

10.  Homer and the origin of the Greek alphabet By Barry B. Powell



Wednesday, April 7, 2010

Animals Without Oxygen

The recent discovery of animals that appear to live entirely without oxygen[1] has confirmed a scenario of convergent evolution in the development of hydrogenosomes, demonstrating with near certainty that mitochondria have evolved into hydrogenosomes multiple times.  This was already pretty well demonstrated by the discovery of hydrogenosomes with mitochondrial DNA,[4] [6] [7] as well as the fact that they, and mitosomes (similar organelles that do not produce hydrogen), "all share one or more traits in common with mitochondria (Fig. 2), but no traits common to them all, apart from the double membrane and conserved mechanisms of protein import, have been identified so far."[7]

These animals are members of the phylum Loricifera, which is (distantly) related to arthropods and other members of the general taxon Ecdysozoa.[1][5]  Members of this phylum tend to have very complex life cycles (for metazoans), with at least some species having a parthenogenetic larval stage intervening between stages of adult sexual reproduction.[5]

This is an exciting discovery, both in terms of the potential discoveries in energy biochemistry and what it says regarding the overall evolution of the Eukaryotes.

Efforts to pin down the exact "evolutionary tree" of the early Eukaryotes have, more and more, shown a tangled relationship among various proteins (and their coding genes), implicating a large amount of lateral transfer.[7]  The role of mitochondria has changed during this process.  At one time the various amitochondrial Eukaryotes were regarded as (possibly) descended from the ancestral premitochondrial Eukaryote.  By now, however, it's pretty clear that most (probably all) of these lineages are descended from mitochondrial Eukaryotes.[7]

To complicate the picture, a number of (probably distantly related) lineages possess mitochondria that are facultative anaerobes.[8]  Among these lineages are several animals, including parasitic helminths such as Fasciola hepatica and Ascaris suum.  It seems plausible that the probable hydrogenosomes of these newly discovered Loricifera are descended from such facultatively anaerobic mitochondria.  (Or more precisely, descended from the original mitochondria via such facultatively anaerobic mitochondria.)

It's interesting to speculate regarding the specific metabolic pathways these species use.  One possibility is that these organelles, despite looking like hydrogenosomes, are actually using sulfate as an electron acceptor, producing hydrogen sulfide.  Another is that they actually produce hydrogen, which is then used by other organisms, for their own energy needs, to reduce sulfates to hydrogen sulfide.  Certainly the former would provide more energy for a multicellular creature.  OTOH it would also (probably) have required a longer evolutionary path to reach.  The question, I suppose, is whether a simple hydrogen-producing metabolism could have provided enough energy for such a creature.  It certainly seems plausible that a sulfate (or sulfite) reducing metabolism could have evolved in a facultative anaerobe, followed by streamlining into an obligate anaerobe.
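
To make the contrast concrete, here are textbook-style schematic reactions for the two options.  This is my own illustration: the stoichiometry is schematic, and the paper does not establish which (if either) pathway these animals actually use.

```latex
% Schematic textbook reactions, added for illustration only.
%
% (a) Hydrogenosome-style fermentation: pyruvate is oxidized to acetate with
%     roughly one ATP per pyruvate from substrate-level phosphorylation, and
%     H2 is released as waste.
\[
\mathrm{pyruvate^{-} + H_{2}O \;\longrightarrow\; acetate^{-} + CO_{2} + H_{2}}
\]
% (b) Dissimilatory sulfate reduction (written here with H2 as the electron
%     donor; inside the organelle the donor would presumably be organic):
%     sulfate is the terminal electron acceptor, reduced to sulfide, allowing
%     extra ATP from an electron-transport chain -- hence the larger yield.
\[
\mathrm{SO_{4}^{2-} + 4\,H_{2} + H^{+} \;\longrightarrow\; HS^{-} + 4\,H_{2}O}
\]
```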

I'm certainly looking forward to the publication of further research on these animals.

T/H Nick Anthis

Danovaro, R., Dell'Anno, A., Pusceddu, A., Gambi, C., Heiner, I., & Kristensen, R. (2010). The first metazoa living in permanently anoxic conditions. BMC Biology, 8(1). DOI: 10.1186/1741-7007-8-30

1 The first metazoa living in permanently anoxic conditions Open Access (Preliminary PDF)

2 Anaerobic Metazoans: No longer an oxymoron Open Access (Preliminary PDF)

3 Anaerobic animals from an ancient, anoxic ecological niche Open Access (Preliminary PDF)

4 Organelles in Blastocystis that Blur the Distinction between Mitochondria and Hydrogenosomes Open Access

5 An Introduction to Loricifera, Cycliophora, and Micrognathozoa Open Access

6 Degenerate mitochondria Open Access

7 Eukaryotic evolution, changes and challenges

8 Mitochondria as we don’t know them

Wednesday, January 20, 2010

I'm Back! (Sort of)

I've been rather quiet lately, with a bunch of projects taking up my time and sending me all over the countryside with little opportunity for the internet. Most of that time went to a recently completed project, the brag for which you may see at the upper left: I was selected to be a judge for Open Lab 2009, in the neurology division. That was sort of flattering, given that my expertise in the subject is entirely self-taught, mostly from books and technical papers.

I had (AFAIK) only one entry; it wasn't selected, which didn't surprise me. My big posts make extensive use of links (e.g. to explain vocabulary), especially to Wiki, and Open Lab is basically a book, with the selected blog posts printed on real paper. Handling links is presumably a tough issue, and posts that make extensive use of them aren't really all that appropriate.

Which brings me to the next subject of this post: blogging itself. (For reasons of time I'm not going to supply links for most of what I say.) Blogging started out (AFAIK) as mostly a way for people to put up links to interesting web pages, perhaps with a few comments. Much of it remains like this, but science blogging has evolved, in part, into something more sophisticated, including explanations of technical issues and (including here) discussions at relevant tangents.

I was marginally involved in the early days of blogging, being a contributor to HotWired Threads, and one of the original contributors to NewsTrolls. NewsTrolls was an early blog set up by a HotWired Threads contributor called Pasty Drone, as HotWired Threads was winding its way down into obscurity.[1] It focused on news items, and had a comments section much as modern blogs do. (This site has since vanished, as have the old Threads on HotWired.)

Even then, I was somewhat skeptical of this format, or rather, I didn't see it as optimal for what I wanted to write. My own purpose, usually, is to make a point based on and relating to peer-reviewed science, while most science blogging today is more a matter of explaining the technical aspects of peer-reviewed science for those who don't understand enough to get it from the paper itself.

However, Blogger makes a great platform for expressing myself without having to code the entire site by hand, so for the moment that's where I'm at.


Links

1.  Educational Blogging by Stephen Downes



Sunday, November 1, 2009

The Wheel Has Turned (No Spoilers)

The latest book in Robert Jordan's Wheel of Time series, The Gathering Storm, has appeared at B&N, although my last understanding was that it wasn't due until 11/03. I don't have time for a real review, and by the time I've created one everybody interested will probably have read the book, but herewith a few notes.

For those unfamiliar with the series, I have linked to the Wiki page, but haven't read it. It may not be completely reliable, as the series may be a contentious subject and Wiki sometimes has problems with such subjects. However, it should give you a general idea. If you want to become familiar, I suggest starting with the first book in the series ("Eye of the World") and reading forward. Don't rely on any sort of summaries. Watch out also for some half-sized books (for children) that split the first few volumes into smaller chunks. A careful perusal of the Tor website or Wiki should let you figure out which is which.

Jordan's posthumous co-writer, Brandon Sanderson, says in the preface, introduction, or whatever (I don't have the book with me as I write) that we should consider this the first third of the final book of the series ("A Memory of Light"), and I strongly agree.

He also mentions that he has written in his own style, rather than trying to imitate Jordan. I find the writing itself rather similar; however, he (IMO) takes a different approach to simultaneity, with different scenes much more separated in time than in Jordan's books. (I may be mistaken about this; I just don't have time to go back and double-check. And in any case, this doesn't include the first and last chapters of previous books, whose scenes have always been somewhat out of sync.)

I found this book much more satisfying than the last few, in the sense that many more issues are being resolved than opened. This was to be expected, but I can confirm it.

I'm not going to discuss plot details, or even list which issues have been resolved, but I will say that it's my impression that there are far fewer surprises here than in previous books. That doesn't mean the issues were resolved in precisely the ways I had anticipated, but in general the resolutions fit within my broader expectations.

I strongly recommend reading it if you're following the series, and I can say that IMO Sanderson is doing a good enough job that we can expect to enjoy the conclusion of the series almost as much as if Jordan had finished it himself.