"Melanoma is the most lethal form of skin cancer and represents a substantial cause of productive years of life lost to cancer, especially when occurring in young persons," the authors write as background information in the study. "Among non-Hispanic white girls and women aged 15 to 39 years in the United States, age-adjusted incidence rates of cutaneous melanoma among adolescents have more than doubled during a 3-decade period (1973-2004), with a 2.7 percent increase annually since 1992."
To assess the relationship between the incidence of melanoma and socioeconomic status and UV-radiation exposure, Amelia K. Hausauer, B.A., of the Cancer Prevention Institute of California and the School of Medicine, University of California, San Francisco, and colleagues examined data from the California Cancer Registry. The authors focused on melanoma diagnoses made from January 1, 1988, through December 31, 1992, and from January 1, 1998, through December 31, 2002.
Data were included from a total of 3,800 non-Hispanic white girls and women between the ages of 15 and 39, in whom 3,842 melanomas were diagnosed. Regardless of the year of diagnosis, adolescent girls and young women living in neighborhoods with the highest socioeconomic status were nearly six-fold more likely to be diagnosed with malignant melanoma than those living in neighborhoods with the lowest socioeconomic status.
When melanoma incidence was examined by socioeconomic status, diagnoses increased over time in all groups; however, these increases were statistically significant only among adolescent girls and young women in the highest three levels of socioeconomic status. Higher socioeconomic status was associated with a higher risk of developing melanoma. Additionally, higher levels of UV-radiation exposure were associated with increased melanoma rates only among adolescent girls and young women in the highest two levels of socioeconomic status.
Girls and women living in neighborhoods with the highest socioeconomic status and highest UV-radiation exposure experienced 73 percent greater melanoma incidence than those from neighborhoods with the lowest socioeconomic status and highest UV-radiation exposure, and 80 percent greater melanoma incidence than those living in neighborhoods with the lowest socioeconomic status and lowest UV-radiation exposure.
"Understanding the ways that socioeconomic status and UV-radiation exposure work together to influence melanoma incidence is important for planning effective prevention and education efforts," the authors conclude. "Interventions should target adolescent girls and young women living in high socioeconomic status and high UV-radiation neighborhoods because they have experienced a significantly greater increase in disease burden."
(http://www.sciencedaily.com/releases/2011/03/110321161908.htm)
Monday, March 21, 2011
Batteries Charge Quickly and Retain Capacity, Thanks to New Structure
Paul Braun's group at the University of Illinois developed a three-dimensional nanostructure for battery cathodes that allows for dramatically faster charging and discharging without sacrificing energy storage capacity. The researchers' findings will be published in the March 20 advance online edition of the journal Nature Nanotechnology.
Aside from quick-charge consumer electronics, batteries that can store a lot of energy, release it fast and recharge quickly are desirable for electric vehicles, medical devices, lasers and military applications.
"This system that we have gives you capacitor-like power with battery-like energy," said Braun, a professor of materials science and engineering. "Most capacitors store very little energy. They can release it very fast, but they can't hold much. Most batteries store a reasonably large amount of energy, but they can't provide or receive energy rapidly. This does both."
The performance of typical lithium-ion (Li-ion) or nickel metal hydride (NiMH) rechargeable batteries degrades significantly when they are rapidly charged or discharged. Making the active material in the battery a thin film allows for very fast charging and discharging, but reduces the capacity to nearly zero because the active material lacks volume to store energy.
Braun's group wraps a thin film into a three-dimensional structure, achieving both high active volume (high capacity) and large current. They have demonstrated battery electrodes that can charge or discharge in a few seconds, 10 to 100 times faster than equivalent bulk electrodes, yet can perform normally in existing devices.
This kind of performance could lead to phones that charge in seconds or laptops that charge in minutes, as well as high-power lasers and defibrillators that don't need time to power up before or between pulses.
Braun is particularly optimistic about the batteries' potential in electric vehicles. Battery life and recharging time are major limitations of electric vehicles. Long-distance road trips can be their own form of start-and-stop driving if the battery only lasts for 100 miles and then requires an hour to recharge.
"If you had the ability to charge rapidly, instead of taking hours to charge the vehicle you could potentially have vehicles that would charge in similar times as needed to refuel a car with gasoline," Braun said. "If you had five-minute charge capability, you would think of this the same way you do an internal combustion engine. You would just pull up to a charging station and fill up."
All of the processes the group used are also used at large scales in industry, so the technique could be scaled up for manufacturing.
The key to the group's novel 3-D structure is self-assembly. The researchers begin by coating a surface with tiny spheres, packing them tightly together to form a lattice. Trying to create such a uniform lattice by other means is time-consuming and impractical, but the inexpensive spheres settle into place automatically.
Then the researchers fill the space between and around the spheres with metal. The spheres are melted or dissolved, leaving a porous 3-D metal scaffolding, like a sponge. Next, a process called electropolishing uniformly etches away the surface of the scaffold to enlarge the pores and make an open framework. Finally, the researchers coat the frame with a thin film of the active material.
The result is a bicontinuous electrode structure with small interconnects, so the lithium ions can move rapidly; a thin-film active material, so the diffusion kinetics are rapid; and a metal framework with good electrical conductivity.
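The advantage of the thin active layer can be seen from simple diffusion scaling: the time for ions to traverse a layer grows with the square of its thickness. The sketch below is our back-of-the-envelope illustration, not from the paper; the diffusivity is an assumed order of magnitude for solid-state lithium transport.

    # Back-of-the-envelope: ion diffusion time scales as t ~ L^2 / D.
    # D is an assumed order of magnitude for Li diffusion in an oxide
    # cathode, for illustration only.
    D = 1e-14  # m^2/s (assumed)
    for L in (10e-6, 100e-9):  # 10-micron bulk particle vs 100-nm film
        t = L ** 2 / D  # characteristic diffusion time in seconds
        print(f"layer {L * 1e9:>6.0f} nm -> t ~ {t:,.0f} s")

Shrinking the active layer from roughly 10 micrometers to 100 nanometers cuts the transit time by a factor of 10,000, which is why the coated scaffold can charge in seconds while the porous metal frame supplies both the current path and the storage volume.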
The group demonstrated both NiMH and Li-ion batteries, but the structure is general, so any battery material that can be deposited on the metal frame could be used.
"We like that it's very universal, so if someone comes up with a better battery chemistry, this concept applies," said Braun, who is also affiliated with the Materials Research Laboratory and the Beckman Institute for Advanced Science and Technology at Illinois. "This is not linked to one very specific kind of battery, but rather it's a new paradigm in thinking about a battery in three dimensions for enhancing properties."
(http://www.sciencedaily.com/releases/2011/03/110320164225.htm)
Sunday, March 13, 2011
Bacteria hijack an immune signaling system to live safely in our guts
By Diana Gitig
Our immune system operates under the basic premise that "self" is different from "non-self." Its primary function lies in distinguishing between these entities, leaving the former alone while attacking the latter. Yet we now know that our guts are home to populations of bacterial cells so vast that they outnumber our own cells, and that these microbiota are essential to our own survival.
As a recent study in Nature Immunology notes, "An equilibrium is established between the microbiota and the immune system that is fundamental to intestinal homeostasis." How does the immune system achieve this equilibrium, neither overreacting and attacking the symbiotic bacteria nor being lax and allowing pathogens to get through? It turns out that our gut bacteria manipulate the immune system to keep things from getting out of hand.
Like many stories of immune regulation, this one is a tale of many interleukins (ILs). Interleukins are a subset of cytokines, signaling molecules used by the immune system to control processes such as inflammation and the growth and differentiation of different classes of immune cells. IL-22 is known to be important in defense, both ridding the intestines of bacterial pathogens and protecting the colon from inflammation.
IL-22 is produced by the subset of T cells defined by their expression of IL-17, known as TH17 cells, as well as by innate lymphoid cells. Sawa et al. report that in the intestine, most of the IL-22 is produced by a specific subset of innate lymphoid cells that live there, and not TH17 cells.
Microbiota can repress this expression of IL-22 by inducing the expression of IL-25 in the epithelial cells lining the walls of the intestine. The researchers deduced this because IL-22 expression goes down in mice after weaning, when microbial colonization of the intestine dramatically increases. When adult mice were treated with antibiotics, IL-22 production went up again. IL-22 production also increased during inflammation.
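As a purely illustrative toy model (ours, not the paper's), this feedback can be captured with two first-order equations in arbitrary units: microbial load induces epithelial IL-25, and IL-25 represses IL-22 output.

    # Toy model of the feedback loop described above (arbitrary units;
    # parameters invented for illustration, not fitted to the paper's data).
    def il22_steady_state(microbes, steps=2000, dt=0.01):
        il25, il22 = 0.0, 1.0
        for _ in range(steps):
            il25 += (microbes - il25) * dt        # microbiota induce IL-25
            il22 += (1 / (1 + il25) - il22) * dt  # IL-25 represses IL-22
        return il22

    print(il22_steady_state(0.0))  # germ-free or antibiotic-treated: IL-22 high (~1.0)
    print(il22_steady_state(4.0))  # heavy colonization after weaning: IL-22 low (~0.2)

The toy model reproduces the qualitative observations: IL-22 falls as colonization rises at weaning and rebounds when antibiotics clear the microbiota.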
Microbiota also induce the generation of TH17 cells and, even though these normally make IL-22, this induction further depresses its production. The TH17 cells end up competing with the innate lymphoid cells for the same pool of regulatory cytokines; as a result, each receives less and becomes less active.
These innate lymphoid cells thus play a critical role in maintaining intestinal homeostasis. They make IL-22, which induces the production of antibacterial peptides by the lining and protects the intestine from pathological inflammation. Symbiotic microbiota make a safe home by tamping down the production of IL-22 by inducing IL-25. The TH17 cells can contribute to this tamping down by competing for regulators. The authors conclude by stating that “this complex regulatory network… demonstrates the subtle interaction between the microbiota and the various forces of the vertebrate immune system in maintaining intestinal homeostasis.”
(http://arstechnica.com/science/news/2011/03/a-signaling-pathway-helps-the-immune-system-interact-safely-with-gut-microbes.ars)
Saturday, March 12, 2011
Scientists already making discoveries in wake of Japan's temblor
By Eryn Brown
Here's what experts have learned about the earthquake thus far.
Q: What caused it?
A: The earthquake occurred because a portion of the Pacific Plate is being pushed into and underneath the North American Plate, forming a so-called subduction zone that built up so much pressure it ruptured, slipping as much as 60 feet.
"This was a planetary monster," said Thomas Jordan, director of the Southern California Earthquake Center at USC.
The earthquake occurred along a patch of an undersea fault that's about 220 miles long and 60 miles wide. Because the fault broke at a shallow depth, it shifted the sea floor, triggering tsunamis throughout the Pacific Ocean.
Q: Was it a surprise?
A: Yes and no. Seismologists said the quake was larger than they thought was possible in that part of the world. "We thought about the Big One as an 8.5 or so," said Susan Hough, a seismologist at the U.S. Geological Survey in Pasadena, Calif. Such an earthquake would have been about one-third as strong as an 8.9 quake.
"But it's not like an 8.9 hit Kansas," she added. "We know Japan is an active subduction zone."
What tripped scientists up was a lack of recent activity in the area, Jordan said. The last earthquake of this magnitude along this plate boundary occurred in the year 869. Seismologists had been debating the fault's potential to break, but they had little data to go on.
"The question was whether that section had locked - accumulating strain - or was it slipping slowly," Jordan said. "We now know that this is a plate boundary that was locked."
Q: You mean there were no hints at all?
A: Brian Atwater, a USGS seismologist based in Seattle, said that Japanese GPS data collected since the 1990s showed that the coast of Japan was being pulled inland at a rate of about 25 feet per century, another indication that the plates were stuck and energy was building between them.
(http://www.physorg.com/news/2011-03-scientists-discoveries-japan-temblor.html)
Thursday, March 10, 2011
Scientists Discover Anti-Anxiety Circuit in Brain Region Considered the Seat of Fear
Stimulation of a distinct brain circuit that lies within a brain structure typically associated with fearfulness produces the opposite effect: Its activity, instead of triggering or increasing anxiety, counters it.
That's the finding in a paper by Stanford University School of Medicine researchers to be published online March 9 in Nature. In the study, Karl Deisseroth, MD, PhD, and his colleagues employed a mouse model to show that stimulating activity exclusively in this circuit enhances animals' willingness to take risks, while inhibiting its activity renders them more risk-averse. This discovery could lead to new treatments for anxiety disorders, said Deisseroth, an associate professor of bioengineering and of psychiatry and behavioral science.
The investigators were able to pinpoint this particular circuit only by working with a state-of-the-art technology called optogenetics, pioneered by Deisseroth at Stanford, which allows brain scientists to tease apart the complex circuits that compose the brain so these can be studied one by one.
"Anxiety is a poorly understood but common psychiatric disease," said Deisseroth, who is also a practicing psychiatrist. More than one in four people, in the course of their lives, experience bouts of anxiety symptoms sufficiently enduring and intense to be classified as a full-blown psychiatric disorder. In addition, anxiety is a significant contributing factor in other major psychiatric disorders from depression to alcohol dependence, Deisseroth said.
Most current anti-anxiety medications work by suppressing activity in the brain circuitry that generates anxiety or increases anxiety levels. Many of these drugs are not very effective, and those that are have significant side effects such as addiction or respiratory suppression, Deisseroth said. "The discovery of a novel circuit whose action is to reduce anxiety, rather than increase it, could point to an entire strategy of anti-anxiety treatment," he added.
Ironically, the anti-anxiety circuit is nestled within a brain structure, the amygdala, long known to be associated with fear. Generally, stimulating nervous activity in the amygdala is best known to heighten anxiety. So the anti-anxiety circuit probably would have been difficult if not impossible to locate had it not been for optogenetics, a new technology in which nerve cells in living animals are rendered photosensitive so that action in these cells can be turned on or off by different wavelengths of light. The technique allows researchers to selectively photosensitize particular sets of nerve cells. Moreover, by delivering pulses of light via optical fibers to specific brain areas, scientists can target not only particular nerve-cell types but also particular cell-to-cell connections or nervous pathways leading from one brain region to another. The fiber-optic hookup is both flexible and pain-free, so experimental animals' actual behavior as well as their brain activity can be monitored.
In contrast, older research approaches involve probing brain areas with electrodes to stimulate nerve cell firing. But an electrode stimulates not only the nerve cells that happen to be in the neighborhood but also fibers that are just passing through on the way to somewhere else. Thus, any effect from stimulating the newly discovered anti-anxiety circuit would have been swamped by the anxiety-increasing effects of the dominant surrounding circuitry.
In December 2010, the journal Nature Methods bestowed its "Method of the Year" title on optogenetics.
In the new Nature study, the researchers photosensitized a set of fibers projecting from cells in one nervous "switchboard" to another one within the amygdala. By carefully positioning their light-delivery system, they were able to selectively target this projection, so that it alone was activated when light was pulsed into the mice's brains. Doing so led instantaneously to dramatic changes in the animals' behavior.
"The mice suddenly became much more comfortable in situations they would ordinarily perceive as dangerous and, therefore, be quite anxious in," said Deisseroth. For example, rodents ordinarily try to avoid wide-open spaces such as fields, because such places leave them exposed to predators. But in a standard setup simulating both open and covered areas, the mice's willingness to explore the open areas increased profoundly as soon as light was pulsed into the novel brain circuit. Pulsing that same circuit with a different, inhibitory frequency of light produced the opposite result: the mice instantly became more anxious. "They just hunkered down" in the relatively secluded areas of the test scenario, Deisseroth said.
Standard laboratory gauges of electrical activity in specific areas of the mice's amygdalas confirmed that the novel circuit's activation tracked the animals' increased risk-taking propensity.
Deisseroth said he believes his team's findings in mice will apply to humans as well. "We know that the amygdala is structured similarly in mice and humans," he said. And just over a year ago a Stanford team led by Deisseroth's associate, Amit Etkin, MD, PhD, assistant professor of psychiatry and behavioral science, used functional imaging techniques to show that human beings suffering from generalized anxiety disorder had altered connectivity in the same brain regions within the amygdala that Deisseroth's group has implicated optogenetically in mice.
The study was funded by the National Institutes of Health, the National Institute of Mental Health, the National Institute on Drug Abuse, the National Science Foundation, NARSAD, a Samsung Scholarship, and the McKnight, Woo, Snyder, and Yu foundations. Kay Tye, PhD, a postdoctoral researcher in the Deisseroth laboratory, and Rohit Prakash, Sung-Yon Kim and Lief Fenno, all graduate students in that lab, shared first authorship. Other co-authors are graduate student Logan Grosenick, undergraduate student Hosniya Zarabi, postdoctoral researcher Kimberly Thompson, PhD, and research associates Viviana Gradinaru and Charu Ramakrishnan, all of the Deisseroth lab.
(http://www.sciencedaily.com/releases/2011/03/110309131930.htm)
Tuesday, March 8, 2011
NASA says 'no support' for claim of alien microbes
by Kerry Sheridan
WASHINGTON (AFP) – Top NASA scientists said Monday there was no scientific evidence to support a colleague's claim that fossils of alien microbes born in outer space had been found in meteorites on Earth.
The US space agency formally distanced itself from the paper by NASA scientist Richard Hoover, whose findings were published Friday in the peer-reviewed Journal of Cosmology, which is available free online.
"That is a claim that Mr Hoover has been making for some years," said Carl Pilcher, director of NASA's Astrobiology Institute.
"I am not aware of any support from other meteorite researchers for this rather extraordinary claim that this evidence of microbes was present in the meteorite before the meteorite arrived on Earth and and was not the result of contamination after the meteorite arrived on Earth," he told AFP.
"The simplest explanation is that there are microbes in the meteorites; they are Earth microbes. In other words, they are contamination."
Pilcher said the meteorites that Hoover studied fell to Earth 100 to 200 years ago and have been heavily handled by humans, "so you would expect to find microbes in these meteorites."
Paul Hertz, chief scientist of NASA's Science Mission Directorate in Washington, also issued a statement saying NASA did not support Hoover's findings.
"While we value the free exchange of ideas, data and information as part of scientific and technical inquiry, NASA cannot stand behind or support a scientific claim unless it has been peer-reviewed or thoroughly examined by other qualified experts," Hertz said.
"NASA also was unaware of the recent submission of the paper to the Journal of Cosmology or of the paper's subsequent publication."
He noted that the paper did not complete the peer-review process after being submitted in 2007 to the International Journal of Astrobiology.
According to the study, Hoover sliced open fragments of several types of carbonaceous chondrite meteorites, which can contain relatively high levels of water and organic materials, and examined the interiors with a powerful technique, field emission scanning electron microscopy (FESEM).
He found bacteria-like creatures, calling them "indigenous fossils" that originated beyond Earth and were not introduced here after the meteorites landed.
Hoover "concludes these fossilized bacteria are not Earthly contaminants but are the fossilized remains of living organisms which lived in the parent bodies of these meteors, e.g. comets, moons and other astral bodies," said the study.
"The implications are that life is everywhere, and that life on Earth may have come from other planets."
The journal's editor-in-chief, Rudy Schild of the Harvard-Smithsonian Center for Astrophysics, hailed Hoover as a "highly respected scientist and astrobiologist with a prestigious record of accomplishment at NASA."
The publication invited experts to weigh in on Hoover's claim, and both sceptics and supporters began publishing their commentaries on the journal's website Monday.
"While the evidence clearly indicates that the meteorites was eons ago populated with bacterial life, whether the meteorites are of actual extra-terrestrial origin might debatable," wrote Patrick Godon of Villanova University in Pennsylvania.
Michael Engel of the University of Oklahoma wrote: "Given the importance of this finding, it is essential to continue to seek new criteria more robust than visual similarity to clarify the origin(s) of these remarkable structures."
The journal did not immediately respond to requests for comment.
Pilcher described Hoover as a "NASA employee" who works in a solar physics branch of a NASA lab in the southeastern state of Alabama.
"He clearly does some very interesting microscopy. The actual measurements on these meteorites are very nice measurements, but I am not aware of any other qualification that Mr Hoover has in analysis of meteorites or in astrobiology," Pilcher said.
A NASA-funded study in December suggested that a previously unknown form of bacterium, found deep in a California lake, could thrive on arsenic, adding a new element to what scientists have long considered the six building blocks of life.
That study drew hefty criticism, particularly after NASA touted the announcement as evidence of extraterrestrial life. Scientists are currently attempting to replicate those findings.
(http://news.yahoo.com/s/afp/20110307/ts_alt_afp/usspacebiologyastrobiologynasa_20110307213642)
Surgeon creates new kidney on TED stage
"It's like baking a cake," Anthony Atala of the Wake Forest Institute of Regenerative Medicine said as he cooked up a fresh kidney on stage at a TED Conference in the California city of Long Beach.
Scanners are used to take a 3-D image of a kidney that needs replacing, then a tissue sample about half the size of a postage stamp is used to seed the computerized process, Atala explained.
The organ "printer" then works layer-by-layer to build a replacement kidney replicating the patient's tissue.
College student Luke Massella was among the first people to receive a printed kidney during experimental research a decade ago when he was just 10 years old.
He said he was born with spina bifida and his kidneys were not working.
"Now, I'm in college and basically trying to live life like a normal kid," said Massella, who was reunited with Atala at TED.
"This surgery saved my life and made me who I am today."
About 90 percent of people waiting for transplants are in need of kidneys, and the need far outweighs the supply of donated organs, according to Atala.
"There is a major health crisis today in terms of the shortage of organs," Atala said. "Medicine has done a much better job of making us live longer, and as we age our organs don't last."
(http://www.physorg.com/news/2011-03-surgeon-kidney-ted-stage.html)
Monday, March 7, 2011
A virus so large it gets viruses
Last year, researchers uncovered the largest virus yet discovered. With a genome that is over 700,000 base pairs long, the CroV virus has more DNA than some bacteria. Fortunately, it infects a small, unicellular organism that's very distantly related to humans. Now, the same research team is back, this time announcing that they've discovered a virus that attacks CroV, and may just have given rise to all transposable elements, sometimes known as jumping genes.
While studying CroV, the researchers discovered a much smaller virus that frequently accompanied it. The new virus, which they term Mavirus (for "Maverick virus") is still a healthy size, as far as most viruses are concerned, weighing in at just over 19,000 DNA bases, and encoding 20 genes. But Mavirus never appeared on its own; instead, it was only active in cells when the larger CroV was around, even though it could enter cells on its own. The authors conclude that it probably steals CroV's copying machinery for making more Maviruses; this is consistent with the fact that CroV infections slow down when Mavirus is around.
This isn't the first giant virus to be victimized by a smaller peer—there's even a term for this: virophage. But, when the authors looked at the 20 genes carried by Mavirus, they didn't look like the ones from another virophage; instead, they looked something like genes from a specific type of transposable element.
Transposable elements, or transposons, are stretches of DNA that can move around the genome, hopping from place to place. They're so effective at this that about a third of the human genome is composed of various forms of transposons, which don't appear to do anything very helpful, but require energy to copy.
The authors suggest, however, that transposons got their start by doing something useful. The Mavirus helps protect cells from CroV, so cells that permanently incorporate a copy into their genomes could be at a significant advantage. Once in the genome, however, the viral DNA would be free to evolve into something closer to a parasite. The authors predict that, if we look in the right places, we'll find virophages that correspond to most of the major families of transposons.
Sunday, March 6, 2011
Eastern Cougar Is Extinct, U.S. Fish and Wildlife Service Concludes
Although the eastern cougar has been on the endangered species list since 1973, its existence has long been questioned. The U.S. Fish and Wildlife Service (Service) conducted a formal review of the available information and, in a report issued March 2, 2011, concludes the eastern cougar is extinct and recommends the subspecies be removed from the endangered species list.
"We recognize that many people have seen cougars in the wild within the historical range of the eastern cougar," said the Service's Northeast Region Chief of Endangered Species Martin Miller. "However, we believe those cougars are not the eastern cougar subspecies. We found no information to support the existence of the eastern cougar."
Reports of cougars observed in the wild examined during the review process described cougars of other subspecies, often South American subspecies, that had been held in captivity and had escaped or been released to the wild, as well as wild cougars of the western United States subspecies that had migrated eastward to the Midwest.
During the review, the Service received 573 responses to a request for scientific information about the possible existence of the eastern cougar subspecies; conducted an extensive review of U.S. and Canadian scientific literature; and requested information from the 21 States within the historical range of the subspecies. No States expressed a belief in the existence of an eastern cougar population. According to Dr. Mark McCollough, the Service's lead scientist for the eastern cougar, the subspecies of eastern cougar has likely been extinct since the 1930s.
The Service initiated the review as part of its obligations under the Endangered Species Act. The Service will prepare a proposal to remove the eastern cougar from the endangered species list, since extinct animals are not eligible for protection under the Endangered Species Act. The proposal will be made available for public comment.
The Service's decision to declare the eastern cougar extinct does not affect the status of the Florida panther, another wild cat subspecies listed as endangered. Though the Florida panther once ranged throughout the Southeast, it now exists in less than five percent of its historic habitat and in only one breeding population of 120 to 160 animals in southwestern Florida.
Additional information about eastern cougars, including frequently asked questions and cougar sightings, is at: http://www.fws.gov/northeast/ecougar.
Find information about endangered species at http://www.fws.gov/endangered.
(http://www.sciencedaily.com/releases/2011/03/110302190717.htm?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+sciencedaily+%28ScienceDaily:+Latest+Science+News%29&utm_content=Google+Feedfetcher)
"We recognize that many people have seen cougars in the wild within the historical range of the eastern cougar," said the Service's Northeast Region Chief of Endangered Species Martin Miller. "However, we believe those cougars are not the eastern cougar subspecies. We found no information to support the existence of the eastern cougar."
Reports of cougars observed in the wild examined during the review process described cougars of other subspecies, often South American subspecies, that had been held in captivity and had escaped or been released to the wild, as well as wild cougars of the western United States subspecies that had migrated eastward to the Midwest.
During the review, the Service received 573 responses to a request for scientific information about the possible existence of the eastern cougar subspecies; conducted an extensive review of U.S. and Canadian scientific literature; and requested information from the 21 States within the historical range of the subspecies. No States expressed a belief in the existence of an eastern cougar population. According to Dr. Mark McCollough, the Service's lead scientist for the eastern cougar, the subspecies of eastern cougar has likely been extinct since the 1930s.
The Service initiated the review as part of its obligations under the Endangered Species Act. The Service will prepare a proposal to remove the eastern cougar from the endangered species list, since extinct animals are not eligible for protection under the Endangered Species Act. The proposal will be made available for public comment.
The Service's decision to declare the eastern cougar extinct does not affect the status of the Florida panther, another wild cat subspecies listed as endangered. Though the Florida panther once ranged throughout the Southeast, it now exists in less than five percent of its historic habitat and in only one breeding population of 120 to 160 animals in southwestern Florida.
Additional information about eastern cougars, including frequently asked questions and cougar sightings, is at: http://www.fws.gov/northeast/ecougar.
Find information about endangered species at http://www.fws.gov/endangered.
(http://www.sciencedaily.com/releases/2011/03/110302190717.htm?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+sciencedaily+%28ScienceDaily:+Latest+Science+News%29&utm_content=Google+Feedfetcher)
Saturday, March 5, 2011
Supercritical carbon dioxide Brayton Cycle turbines promise giant leap
ALBUQUERQUE, N.M. — Sandia National Laboratories researchers are moving into the demonstration phase of a novel gas turbine system for power generation, with the promise that thermal-to-electric conversion efficiency will be increased to as much as 50 percent — an improvement of 50 percent for nuclear power stations equipped with steam turbines, or a 40 percent improvement for simple gas turbines. The system is also very compact, meaning that capital costs would be relatively low.
Research focuses on supercritical carbon dioxide (S-CO2) Brayton-cycle turbines, which typically would be used for bulk thermal and nuclear generation of electricity, including next-generation power reactors. The goal is eventually to replace steam-driven Rankine cycle turbines, which have lower efficiency, are corrosive at high temperature and occupy 30 times as much space because of the need for very large turbines and condensers to dispose of excess steam. The Brayton cycle could yield 20 megawatts of electricity from a package with a volume as small as four cubic meters.
The Brayton cycle, named after George Brayton, originally functioned by heating air in a confined space and then releasing it in a particular direction. The same principle is used to power jet engines today.
"This machine is basically a jet engine running on a hot liquid," said principal investigator Steve Wright of Sandia's Advanced Nuclear Concepts group. "There is a tremendous amount of industrial and scientific interest in supercritical CO2 systems for power generation using all potential heat sources including solar, geothermal, fossil fuel, biofuel and nuclear."
Sandia currently has two supercritical CO2 test loops. (The term "loop" derives from the shape taken by the working fluid as it completes each circuit.) A power production loop is located at the Arvada, Colo., site of contractor Barber Nichols Inc., where it has been running and producing approximately 240 kilowatts of electricity during the developmental phase that began in March 2010. It is now being upgraded and is expected to be shipped to Sandia this summer.
A second loop, located at Sandia in Albuquerque, is used to research the unusual issues of compression, bearings, seals, and friction that exist near the critical point, where the carbon dioxide has the density of liquid but otherwise has many of the properties of a gas.
Immediate plans call for Sandia to continue to develop and operate the small test loops to identify key features and technologies. Test results will illustrate the capability of the concept, particularly its compactness, efficiency and scalability to larger systems. Future plans call for commercialization of the technology and development of an industrial demonstration plant at 10 MW of electricity.
A competing system, also at Sandia and using Brayton cycles with helium as the working fluid, is designed to operate at about 925 degrees C and is expected to produce electrical power at 43 percent to 46 percent efficiency. By contrast, the supercritical CO2 Brayton cycle provides the same efficiency as helium Brayton systems but at a considerably lower temperature (250-300 C). The S-CO2 equipment is also more compact than that of the helium cycle, which in turn is more compact than the conventional steam cycle.
Under normal conditions, materials behave in a predictable, classical, "ideal" way as they change phase, as when water turns to steam. But this model tends to break down at temperatures and pressures near and beyond a material's critical point. Carbon dioxide, held at the boundary between its gas and liquid phases, becomes an unusually dense "supercritical" fluid. The supercritical properties of carbon dioxide at temperatures above 500 C and pressures above 7.6 megapascals enable the system to operate with very high thermal efficiency — exceeding even that of a large coal-fired power plant and nearly double that of a gasoline engine (about 25 percent).
In other words, as compared with other gas turbines the S-CO2 Brayton system could increase the electrical power produced per unit of fuel by 40 percent or more. The combination of low temperatures, high efficiency and high power density allows for the development of very compact, transportable systems that are more affordable because only standard engineering materials (stainless steel) are required, less material is needed, and the small size allows for advanced-modular manufacturing processes.
"Sandia is not alone in this field, but we are in the lead," Wright said. "We're past the point of wondering if these power systems are going to be developed; the question remains of who will be first to market. Sandia and DOE have a wonderful opportunity in the commercialization effort."
Friday, March 4, 2011
Self-Doubting Monkeys Know What They Don’t Know
by Patrick Morgan
The number of traits chalked up as “distinctly human” seems to dwindle each year. And now, we can’t even say that we’re uniquely aware of the limits of our knowledge: It seems that some monkeys understand uncertainty too.
A team of researchers taught macaques to use a joystick to indicate whether the pixel density on a screen was sparse or dense. Shown a pixel display, the monkeys would steer the joystick to a letter S (for sparse) or D (for dense). They were given a treat when they selected the correct answer, but when they were wrong, the game paused for a couple of seconds. A third option, though, allowed the monkeys to select a question mark and thereby forgo the pause (and potentially get more treats).
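The logic of the task is easy to see in a toy simulation: an agent with noisy perception earns more treats per session by opting out whenever its percept falls too close to the sparse/dense boundary. The noise level, opt-out band and trial count below are invented for illustration; they are not parameters from the actual study.

```python
import random

# Toy simulation of the sparse/dense task with an opt-out ("?") option.
# All numbers here are invented for illustration only.
random.seed(42)
THRESHOLD = 0.5        # true boundary between "sparse" and "dense"
NOISE = 0.15           # perceptual noise in the agent's estimate
UNCERTAIN_BAND = 0.08  # opt out when the estimate is this close to the boundary

def trial():
    density = random.random()                    # true pixel density
    estimate = density + random.gauss(0, NOISE)  # noisy percept
    if abs(estimate - THRESHOLD) < UNCERTAIN_BAND:
        return "opt_out"                         # skip the trial, no pause
    guess = "dense" if estimate > THRESHOLD else "sparse"
    truth = "dense" if density > THRESHOLD else "sparse"
    return "treat" if guess == truth else "timeout"

results = [trial() for _ in range(10_000)]
for outcome in ("treat", "timeout", "opt_out"):
    print(outcome, results.count(outcome) / len(results))
```

An agent that uses the opt-out key converts many would-be timeouts into fresh chances at a treat, which is exactly why selecting the question mark on hard trials is read as a marker of uncertainty monitoring.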
And as John David Smith, a researcher at SUNY Buffalo, and Michael Beran, a researcher at Georgia State University, announced at the AAAS meeting this weekend, the macaques selected the question mark just as humans do when they encounter a mind-stumping question. As Smith told the BBC, “Monkeys apparently appreciate when they are likely to make an error…. They seem to know when they don’t know.”
These findings aren’t applicable to all species of monkeys: The researchers also trained capuchin monkeys, and this species never selected the uncertainty button. These findings may have important evolutionary implications because macaques and capuchins have different lineages: macaques are Old World monkeys, and capuchins are New World monkeys.
(http://blogs.discovermagazine.com/discoblog/2011/02/22/self-doubting-monkeys-know-what-they-dont-know/)
Thursday, March 3, 2011
Walking cactus discovered in China
By Wynne Parry
The so-called walking cactus belongs to a group of extinct worm-like creatures called lobopodians that are thought to have given rise to arthropods. Spiders and other arthropods have segmented bodies and jointed limbs covered in a hardened shell.
Before the discovery of the walking cactus, Diania cactiformis, all lobopodian remains had soft bodies and soft limbs, said Jianni Liu, the lead researcher who is affiliated with Northwest University in China and Freie University in Germany.
"Walking cactus is very important because it is sort of a missing link from lobopodians to arthropods," Liu told LiveScience. "Scientists have always suspected that arthropods evolved from somewhere amongst lobopodians, but until now we didn't have a single fossil you could point at and say that is the first one with jointed legs. And this is what walking cactus shows." [Image of walking cactus fossil]
Liu and other researchers described the extinct creature based on three complete fossils and 30 partial ones discovered in Yunnan Province in southern China. The walking cactus had a body divided into nine segments with 10 pairs of hardened, jointed legs, and it measured about 2.4 inches (6 centimeters) long.
It's not clear how the leggy worm made its living. It could have used its tube-like mouth called a proboscis to suck tiny things from the mud, or it may have used its spiny front legs to grab prey, Liu said.
Clues to arthropod evolution are preserved in modern-day velvet worms, which are considered the closest living relatives of arthropods. Once mistaken for slugs, these land-dwelling worms are almost entirely soft-bodied except for hardened claws and jaws.
The discovery of the walking cactus helps fill in the evolutionary history between the velvet worms and modern arthropods, which, in terms of numbers and diversity, are the most dominant group of animals on the planet, according to Graham Budd, a professor of paleobiology at Uppsala University in Sweden, who was not involved in the current study.
The walking cactus is the first and only case of hardened, jointed limbs built for walking appearing in a creature that is not recognizable as an arthropod, Budd said.
But Budd is not convinced that, as the researchers argue, the walking cactus's hardened legs were passed directly down to modern arthropods.
"I am not persuaded that it is a direct ancestor or as closely related to living arthropods as they suggest," he told LiveScience. "I would like to see more evidence; the great thing is a lot more material keeps coming up."
For instance, it is possible that the walking cactus is less closely related to modern arthropods, and that hardened legs evolved multiple times. It is also possible that the bodies of primitive arthropods hardened before their legs did, Budd said.
New fossils, particularly from China, have helped clarify the evolutionary history of arthropods, and in the last decade or so scientists have come closer to a consensus on that history, he added.
(http://www.csmonitor.com/Science/2011/0225/Walking-cactus-discovered-in-China)
Wednesday, March 2, 2011
Taming the Wild -- Why Can't All Animals Be Tamed?
By Evan Ratliff
"Hello! How are you doing?" Lyudmila Trut says, reaching down to unlatch the door of a wire cage labeled "Mavrik." We're standing between two long rows of similar crates on a farm just outside the city of Novosibirsk, in southern Siberia, and the 76-year-old biologist's greeting is addressed not to me but to the cage's furry occupant. Although I don't speak Russian, I recognize in her voice the tone of maternal adoration that dog owners adopt when addressing their pets.
Mavrik, the object of Trut's attention, is about the size of a Shetland sheepdog, with chestnut orange fur and a white bib down his front. He plays his designated role in turn: wagging his tail, rolling on his back, panting eagerly in anticipation of attention. In adjacent cages lining either side of the narrow, open-sided shed, dozens of canids do the same, yelping and clamoring in an explosion of fur and unbridled excitement. "As you can see," Trut says above the din, "all of them want human contact." Today, however, Mavrik is the lucky recipient. Trut reaches in and scoops him up, then hands him over to me. Cradled in my arms, gently jawing my hand in his mouth, he's as docile as any lapdog.
Except that Mavrik, as it happens, is not a dog at all. He's a fox. Hidden away on this overgrown property, flanked by birch forests and barred by a rusty metal gate, he and several hundred of his relatives are the only population of domesticated silver foxes in the world. (Most of them are, indeed, silver or dark gray; Mavrik is rare in his chestnut fur.) And by "domesticated" I don't mean captured and tamed, or raised by humans and conditioned by food to tolerate the occasional petting. I mean bred for domestication, as tame as your tabby cat or your Labrador. In fact, says Anna Kukekova, a Cornell researcher who studies the foxes, "they remind me a lot of golden retrievers, who are basically not aware that there are good people, bad people, people that they have met before, and those they haven't." These foxes treat any human as a potential companion, a behavior that is the product of arguably the most extraordinary breeding experiment ever conducted.
It started more than a half century ago, when Trut was still a graduate student. Led by a biologist named Dmitry Belyaev, researchers at the nearby Institute of Cytology and Genetics gathered up 130 foxes from fur farms. They then began breeding them with the goal of re-creating the evolution of wolves into dogs, a transformation that began more than 15,000 years ago.
With each generation of fox kits, Belyaev and his colleagues tested their reactions to human contact, selecting those most approachable to breed for the next generation. By the mid-1960s the experiment was working beyond what he could've imagined. They were producing foxes like Mavrik, not just unafraid of humans but actively seeking to bond with them. His team even repeated the experiment in two other species, mink and rats. "One huge thing that Belyaev showed was the timescale," says Gordon Lark, a University of Utah biologist who studies dog genetics. "If you told me the animal would now come sniff you at the front of the cage, I would say it's what I expect. But that they would become that friendly toward humans that quickly… wow."
Miraculously, Belyaev had compressed thousands of years of domestication into a few years. But he wasn't just looking to prove he could create friendly foxes. He had a hunch that he could use them to unlock domestication's molecular mysteries. Domesticated animals are known to share a common set of characteristics, a fact documented by Darwin in The Variation of Animals and Plants Under Domestication. They tend to be smaller, with floppier ears and curlier tails than their untamed progenitors. Such traits tend to make animals appear appealingly juvenile to humans. Their coats are sometimes spotted—piebald, in scientific terminology—while their wild ancestors' coats are solid. These and other traits, sometimes referred to as the domestication phenotype, exist in varying degrees across a remarkably wide range of species, from dogs, pigs, and cows to some nonmammalians like chickens, and even a few fish.
Belyaev suspected that as the foxes became domesticated, they too might begin to show aspects of a domestication phenotype. He was right again: Selecting which foxes to breed based solely on how well they got along with humans seemed to alter their physical appearance along with their dispositions. After only nine generations, the researchers recorded fox kits born with floppier ears. Piebald patterns appeared on their coats. By this time the foxes were already whining and wagging their tails in response to a human presence, behaviors never seen in wild foxes.
Driving those changes, Belyaev postulated, was a collection of genes that conferred a propensity to tameness—a genotype that the foxes perhaps shared with any species that could be domesticated. Here on the fox farm, Kukekova and Trut are searching for precisely those genes today. Elsewhere, researchers are delving into the DNA of pigs, chickens, horses, and other domesticated species, looking to pinpoint the genetic differences that came to distinguish them from their ancestors. The research, accelerated by the recent advances in rapid genome sequencing, aims to answer a fundamental biological question: "How is it possible to make this huge transformation from wild animals into domestic animals?" says Leif Andersson, a professor of genome biology at Uppsala University, in Sweden. The answer has implications for understanding not just how we domesticated animals, but how we tamed the wild in ourselves as well.
The exercise of dominion over plants and animals is arguably the most consequential event in human history. Along with cultivated agriculture, the ability to raise and manage domesticated fauna—of which wolves were likely the first, but chickens, cattle, and other food species the most important—altered the human diet, paving the way for settlements and eventually nation-states to flourish. By putting humans in close contact with animals, domestication also created vectors for the diseases that shaped society.
Yet the process by which it all happened has remained stubbornly impenetrable. Animal bones and stone carvings can sometimes shed light on when and where each species came to live side by side with humans. More difficult to untangle is the how. Did a few curious boar creep closer to human populations, feeding off their garbage and with each successive generation becoming a little more a part of our diet? Did humans capture red jungle fowl, the ancestor of the modern chicken, straight from the wild—or did the fowl make the first approach? Out of 148 large mammal species on Earth, why have no more than 15 ever been domesticated? Why have we been able to tame and breed horses for thousands of years, but never their close relative the zebra, despite numerous attempts?
In fact, scientists have even struggled to define domestication precisely. We all know that individual animals can be trained to exist in close contact with humans. A tiger cub fed by hand, imprinting on its captors, may grow up to treat them like family. But that tiger's offspring, at birth, will be just as wild as its ancestors. Domestication, by contrast, is not a quality trained into an individual, but one bred into an entire population through generations of living in proximity to humans. Many if not most of the species' wild instincts have long since been lost. Domestication, in other words, is mostly in the genes.
Yet the borders between domesticated and wild are often fluid. A growing body of evidence shows that historically, domesticated animals likely played a large part in their own taming, habituating themselves to humans before we took an active role in the process. "My working hypothesis," says Greger Larson, an expert on genetics and domestication at Durham University in the United Kingdom, "is that with most of the early animals—dogs first, then pigs, sheep, and goats—there was probably a long period of time of unintentional management by humans." The word domestication "implies something top down, something that humans did intentionally," he says. "But the complex story is so much more interesting."
(http://ngm.nationalgeographic.com/2011/03/taming-wild-animals/ratliff-text)
"Hello! How are you doing?" Lyudmila Trut says, reaching down to unlatch the door of a wire cage labeled "Mavrik." We're standing between two long rows of similar crates on a farm just outside the city of Novosibirsk, in southern Siberia, and the 76-year-old biologist's greeting is addressed not to me but to the cage's furry occupant. Although I don't speak Russian, I recognize in her voice the tone of maternal adoration that dog owners adopt when addressing their pets.
Mavrik, the object of Trut's attention, is about the size of a Shetland sheepdog, with chestnut orange fur and a white bib down his front. He plays his designated role in turn: wagging his tail, rolling on his back, panting eagerly in anticipation of attention. In adjacent cages lining either side of the narrow, open-sided shed, dozens of canids do the same, yelping and clamoring in an explosion of fur and unbridled excitement. "As you can see," Trut says above the din, "all of them want human contact." Today, however, Mavrik is the lucky recipient. Trut reaches in and scoops him up, then hands him over to me. Cradled in my arms, gently jawing my hand in his mouth, he's as docile as any lapdog.
Except that Mavrik, as it happens, is not a dog at all. He's a fox. Hidden away on this overgrown property, flanked by birch forests and barred by a rusty metal gate, he and several hundred of his relatives are the only population of domesticated silver foxes in the world. (Most of them are, indeed, silver or dark gray; Mavrik is rare in his chestnut fur.) And by "domesticated" I don't mean captured and tamed, or raised by humans and conditioned by food to tolerate the occasional petting. I mean bred for domestication, as tame as your tabby cat or your Labrador. In fact, says Anna Kukekova, a Cornell researcher who studies the foxes, "they remind me a lot of golden retrievers, who are basically not aware that there are good people, bad people, people that they have met before, and those they haven't." These foxes treat any human as a potential companion, a behavior that is the product of arguably the most extraordinary breeding experiment ever conducted.
It started more than a half century ago, when Trut was still a graduate student. Led by a biologist named Dmitry Belyaev, researchers at the nearby Institute of Cytology and Genetics gathered up 130 foxes from fur farms. They then began breeding them with the goal of re-creating the evolution of wolves into dogs, a transformation that began more than 15,000 years ago.
With each generation of fox kits, Belyaev and his colleagues tested their reactions to human contact, selecting those most approachable to breed for the next generation. By the mid-1960s the experiment was working beyond what he could've imagined. They were producing foxes like Mavrik, not just unafraid of humans but actively seeking to bond with them. His team even repeated the experiment in two other species, mink and rats. "One huge thing that Belyaev showed was the timescale," says Gordon Lark, a University of Utah biologist who studies dog genetics. "If you told me the animal would now come sniff you at the front of the cage, I would say it's what I expect. But that they would become that friendly toward humans that quickly… wow."
Miraculously, Belyaev had compressed thousands of years of domestication into a few years. But he wasn't just looking to prove he could create friendly foxes. He had a hunch that he could use them to unlock domestication's molecular mysteries. Domesticated animals are known to share a common set of characteristics, a fact documented by Darwin in The Variation of Animals and Plants Under Domestication. They tend to be smaller, with floppier ears and curlier tails than their untamed progenitors. Such traits tend to make animals appear appealingly juvenile to humans. Their coats are sometimes spotted—piebald, in scientific terminology—while their wild ancestors' coats are solid. These and other traits, sometimes referred to as the domestication phenotype, exist in varying degrees across a remarkably wide range of species, from dogs, pigs, and cows to some nonmammalians like chickens, and even a few fish.
Belyaev suspected that as the foxes became domesticated, they too might begin to show aspects of a domestication phenotype. He was right again: Selecting which foxes to breed based solely on how well they got along with humans seemed to alter their physical appearance along with their dispositions. After only nine generations, the researchers recorded fox kits born with floppier ears. Piebald patterns appeared on their coats. By this time the foxes were already whining and wagging their tails in response to a human presence, behaviors never seen in wild foxes.
Driving those changes, Belyaev postulated, was a collection of genes that conferred a propensity to tameness—a genotype that the foxes perhaps shared with any species that could be domesticated. Here on the fox farm, Kukekova and Trut are searching for precisely those genes today. Elsewhere, researchers are delving into the DNA of pigs, chickens, horses, and other domesticated species, looking to pinpoint the genetic differences that came to distinguish them from their ancestors. The research, accelerated by the recent advances in rapid genome sequencing, aims to answer a fundamental biological question: "How is it possible to make this huge transformation from wild animals into domestic animals?" says Leif Andersson, a professor of genome biology at Uppsala University, in Sweden. The answer has implications for understanding not just how we domesticated animals, but how we tamed the wild in ourselves as well.
The exercise of dominion over plants and animals is arguably the most consequential event in human history. Along with cultivated agriculture, the ability to raise and manage domesticated fauna—of which wolves were likely the first, but chickens, cattle, and other food species the most important—altered the human diet, paving the way for settlements and eventually nation-states to flourish. By putting humans in close contact with animals, domestication also created vectors for the diseases that shaped society.
Yet the process by which it all happened has remained stubbornly impenetrable. Animal bones and stone carvings can sometimes shed light on the when and where each species came to live side by side with humans. More difficult to untangle is the how. Did a few curious boar creep closer to human populations, feeding off their garbage and with each successive generation becoming a little more a part of our diet? Did humans capture red jungle fowl, the ancestor of the modern chicken, straight from the wild—or did the fowl make the first approach? Out of 148 large mammal species on Earth, why have no more than 15 ever been domesticated? Why have we been able to tame and breed horses for thousands of years, but never their close relative the zebra, despite numerous attempts?
In fact, scientists have even struggled to define domestication precisely. We all know that individual animals can be trained to exist in close contact with humans. A tiger cub fed by hand, imprinting on its captors, may grow up to treat them like family. But that tiger's offspring, at birth, will be just as wild as its ancestors. Domestication, by contrast, is not a quality trained into an individual, but one bred into an entire population through generations of living in proximity to humans. Many if not most of the species' wild instincts have long since been lost. Domestication, in other words, is mostly in the genes.
Yet the borders between domesticated and wild are often fluid. A growing body of evidence shows that historically, domesticated animals likely played a large part in their own taming, habituating themselves to humans before we took an active role in the process. "My working hypothesis," says Greger Larson, an expert on genetics and domestication at Durham University in the United Kingdom, "is that with most of the early animals—dogs first, then pigs, sheep, and goats—there was probably a long period of time of unintentional management by humans." The word domestication "implies something top down, something that humans did intentionally," he says. "But the complex story is so much more interesting."
(http://ngm.nationalgeographic.com/2011/03/taming-wild-animals/ratliff-text)
Tuesday, March 1, 2011
Cats Adore, Manipulate Women
By Jennifer Viegas
The bond between cats and their owners turns out to be far more intense than imagined, especially for cat-aficionado women and their affection-reciprocating felines, suggests a new study.
Cats attach to humans, and particularly women, as social partners, and it's not just for the sake of obtaining food, according to the new research, which has been accepted for publication in the journal Behavioural Processes.
The study is the first to show in detail that the dynamics underlying cat-human relationships are nearly identical to human-only bonds, with cats sometimes even becoming a furry "child" in nurturing homes.
"Food is often used as a token of affection, and the ways that cats and humans relate to food are similar in nature to the interactions seen between the human caregiver and the pre-verbal infant," co-author Jon Day, a Waltham Centre for Pet Nutrition researcher, told Discovery News. "Both cat and human infant are, at least in part, in control of when and what they are fed!"
For the study, led by Kurt Kotrschal of the Konrad Lorenz Research Station and the University of Vienna, the researchers videotaped and later analyzed interactions between 41 cats and their owners over lengthy four-part periods. Each and every behavior of both the cat and owner was noted. Owner and cat personalities were also assessed in a separate test. For the cat assessment, the authors placed a stuffed owl toy with large glass eyes on a floor so the feline would encounter it by surprise.
The researchers determined that cats and their owners strongly influenced each other, such that they were each often controlling the other's behaviors. Extroverted women with young, active cats enjoyed the greatest synchronicity, with cats in these relationships only having to use subtle cues, such as a single upright tail move, to signal desire for friendly contact.
While cats have plenty of male admirers, and vice versa, this study and others reveal that women tend to interact with their cats -- be they male or female felines -- more than men do.
"In response, the cats approach female owners more frequently, and initiate contact more frequently (such as jumping on laps) than they do with male owners," co-author Manuela Wedl of the University of Vienna told Discovery News, adding that "female owners have more intense relationships with their cats than do male owners."
Cats also seem to remember kindness and return the favors later. If owners comply with their feline's wishes to interact, then the cat will often comply with the owner's wishes at other times. The cat may also "have an edge in this negotiation," since owners are usually already motivated to establish social contact.
Although there are isolated instances of non-human animals, such as gorillas, bonding with other species, engaging in social relationships with other animals seems to be largely unique to humans. In this case with cats, it's for very good reason. Cats could very well be man's -- and woman's -- best friend.
"A relationship between a cat and a human can involve mutual attraction, personality compatibility, ease of interaction, play, affection and social support," co-author Dorothy Gracey of the University of Vienna explained. "A human and a cat can mutually develop complex ritualized interactions that show substantial mutual understanding of each other's inclinations and preferences."
Dennis Turner, a University of Zurich-Irchel animal behaviorist, told Discovery News that he's "very impressed with this study on human-cat interactions, in that it has taken our earlier findings a step higher, using more modern analytical techniques to get at the interplay between cat and human personalities."
Turner, who is also senior editor of The Domestic Cat: The Biology of Its Behaviour (Cambridge University Press), added that he and his colleagues "now have a new dimension to help us understand how these relationships function."
Kotrschal's team is presently involved in a long-term study of man's other well-known animal best friend: dogs.
Monday, February 28, 2011
Is lack of sleep and water giving ecstasy a bad name?
ALL-NIGHT ravers who take ecstasy might not be harming their brains any more than drug-free party animals.
So say John Halpern and colleagues at Harvard Medical School in Boston, who argue that many studies apparently showing that ecstasy use can lead to memory loss and depression were flawed as they did not take account of the rave culture associated with ecstasy use. Lack of sleep and dehydration resulting from all-night dancing can cause cognitive problems on their own, they say.
Halpern's team compared ecstasy users with non-users who had a history of all-night dancing with limited exposure to alcohol and drugs. Both groups completed tests for verbal fluency, memory, depression and other factors.
The team found no significant differences in cognitive performance between the two groups, even when they compared non-users with heavy users of the drug.
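For readers curious what "no significant differences" means operationally, group comparisons like this are typically run as two-sample tests on each cognitive measure. The sketch below simulates data under the hypothesis of no true difference; the group sizes, score scale and test choice are assumptions for illustration, not details from Halpern's study.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration of the kind of group comparison reported:
# a memory-test score for ecstasy users vs. drug-free ravers. The data
# are simulated under "no true difference"; they are not study data.
rng = np.random.default_rng(0)
users = rng.normal(loc=50, scale=10, size=52)     # assumed group size
nonusers = rng.normal(loc=50, scale=10, size=59)  # assumed group size

t, p = stats.ttest_ind(users, nonusers, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # a large p is consistent with
                                    # "no significant difference"
```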
(http://www.newscientist.com/article/mg20928012.700-is-lack-of-sleep-and-water-giving-ecstasy-a-bad-name.html)
Sunday, February 27, 2011
Scientists create one-dimensional ferroelectric ice
By Lisa Zyga
The researchers, including Hai-Xia Zhao, Xiang-Jian Kong and La-Sheng Long, along with their coauthors from Xiamen University in Xiamen, China, and Hui Li and Xiao Cheng Zeng from the University of Nebraska in the US, have published their study in a recent issue of the Proceedings of the National Academy of Sciences.
Every water molecule carries a tiny electric field. But because water molecules usually freeze in a somewhat random arrangement, with their bonds pointing in different directions, the ice’s total electric field tends to cancel out. In contrast, the bonds in ferroelectric ice all point in the same direction at low enough temperatures, so that it has a net polarization in one direction that produces an electric field.
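The cancellation argument is easy to make concrete: the net moment of N randomly oriented dipoles grows only like the square root of N, so the polarization per molecule vanishes for macroscopic samples, while aligned dipoles add up linearly. The sketch below uses unit dipoles in two dimensions and arbitrary units; it is a numerical illustration, not molecular-scale physics.

```python
import numpy as np

# Why random dipole orientations cancel while aligned ones add up.
# Each molecule is modeled as a unit dipole in 2D (arbitrary units).
rng = np.random.default_rng(1)
N = 10_000

angles = rng.uniform(0, 2 * np.pi, N)  # random orientations (ordinary ice)
random_dipoles = np.column_stack([np.cos(angles), np.sin(angles)])
aligned_dipoles = np.tile([1.0, 0.0], (N, 1))  # ferroelectric order

print("random net moment per molecule :",
      np.linalg.norm(random_dipoles.sum(axis=0)) / N)   # ~1/sqrt(N), tiny
print("aligned net moment per molecule:",
      np.linalg.norm(aligned_dipoles.sum(axis=0)) / N)  # exactly 1
```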
Ferroelectric ice is thought to be extremely rare; in fact, scientists are still investigating whether or not pure three-dimensional ferroelectric ice exists in nature. Some researchers have proposed that ferroelectric ice may exist on Uranus, Neptune, or Pluto. Creating pure 3D ferroelectric ice in the laboratory seems next to impossible, since it would take an estimated 100,000 years to form without the assistance of catalysts. So far, all ferroelectric ices produced in the laboratory have had fewer than three dimensions and existed in mixed phases (heterogeneous).
In the new study, the scientists have synthesized a one-dimensional, single-phase (homogeneous) ferroelectric ice by freezing a one-dimensional water ‘wire.’ As far as the scientists know, this is the first single-phase ferroelectric ice synthesized in the laboratory.
To create the water wire, the researchers designed very thin nanochannels that can hold just 96 H2O molecules per crystalline unit cell. By lowering the temperature from a starting point of 350 K (77°C, 171°F), they found that the water wire undergoes a phase transition below 277 K (4°C, 39°F), transforming from 1D liquid to 1D ice. The ice also exhibits a large dielectric anomaly at this temperature and at 175 K (-98°C, -144°F).
(http://www.physorg.com/news/2011-02-scientists-one-dimensional-ferroelectric-ice.html)
Saturday, February 26, 2011
Mobiles 'increase brain activity'
“Mobile phones are a brain cell killer,” reported The Sun. The newspaper claimed that a study of hundreds of mobile users found that the signals emitted during calls can cause a 7% rise in chemical changes in the brain. It said that these may boost the chances of developing cancer. Other papers reported the study in a more balanced way.
The laboratory-based study recruited 47 healthy volunteers who had their brain activity measured while they had mobile phones fixed to both sides of their head. One of the handsets received a call on silent for 50 minutes. Brain scans showed there was a 7% increase in brain activity in the area closest to that phone’s antenna.
The Sun over-interpreted this study, putting an alarming spin on it that is not supported by the findings. The study did not show that mobile phones kill brain cells or cause cancer. The size of the effect was small, and the researchers themselves say that the findings are of “unknown clinical significance”. They state that it is not possible to tell from their findings whether or not these effects are harmful. Further research is needed.
(http://www.nhs.uk/news/2011/02February/Pages/mobiles-increase-brain-activity.aspx)
Friday, February 25, 2011
People with low self-esteem show more signs of prejudice
When people are feeling bad about themselves, they're more likely to show bias against people who are different. A new study published in Psychological Science, a journal of the Association for Psychological Science, examines how that works. "This is one of the oldest accounts of why people stereotype and have prejudice: It makes us feel better about ourselves," says Jeffrey Sherman of the University of California, Davis, who wrote the study with Thomas Allen. "When we feel bad about ourselves, we can denigrate other people, and that makes us feel better about ourselves."
Sherman and Allen used the Implicit Association Test (IAT)—a task designed to assess people's automatic reactions to words and/or images—to investigate this claim. In order to reveal people's implicit prejudice, participants are asked to watch a computer monitor while a series of positive words, negative words, and pictures of black or white faces appear. In the first part of the test, participants are asked to push the "E" key for either black faces or negative words and the "I" key for white faces or positive words. For the second task, the groupings are reversed—participants are now supposed to associate positive words with black faces and negative words with white faces.
Determining prejudice in the IAT is pretty straightforward: If participants have negative associations with black people, they should find the second task more difficult. This should be especially true when people feel bad about themselves.
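In practice the IAT is scored from response latencies rather than raw difficulty: the conventional summary is a D score, the difference between mean latencies in the two pairings divided by their pooled standard deviation. The sketch below is a deliberately simplified version with invented latencies; the standard scoring algorithm additionally trims outliers and penalizes error trials.

```python
import statistics

# Simplified IAT scoring: slower responses in the "incompatible" pairing
# relative to the "compatible" one yield a positive D score. Latencies
# (in ms) are invented for illustration.
compatible = [620, 580, 640, 600, 610, 590]    # first pairing
incompatible = [720, 760, 700, 740, 710, 730]  # reversed pairing

pooled_sd = statistics.stdev(compatible + incompatible)
d_score = (statistics.mean(incompatible) - statistics.mean(compatible)) / pooled_sd
print(f"D = {d_score:.2f}")  # larger D = stronger implicit association
```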
But what psychologists don't agree on is how this works. "People were using the exact same data to make completely different arguments about why," Sherman says. There are two possibilities: either feeling bad about yourself activates negative evaluations of others, or it makes you less likely to suppress those biases.
In their experiment, Sherman and Allen asked participants to take a very difficult 12-question test that requires creative thinking. No one got more than two items correct. About half of the participants were given their test results and told that the average score was nine, to make them feel bad about themselves. The other half were told that their tests would be graded later. All of the participants then completed the IAT and, as expected, those who were feeling bad about their test performance showed more evidence of implicit prejudice.
But Sherman and Allen took it a step further. They also applied a mathematical model that reveals the processes that contribute to this effect. By plugging in the data from the experiment, they were able to determine that people who feel bad about themselves show enhanced prejudice because negative associations are activated to a greater degree, but not because they are less likely to suppress those feelings.
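The article doesn't spell the model out, but Sherman's group works with multinomial processing-tree models that assign separate parameters to the activation of an association and to the control that overrides it. Below is a deliberately stripped-down two-parameter toy in that spirit — its structure and all accuracy numbers are assumptions for illustration, not the published model or data — showing how accuracies on compatible versus incompatible trials pin down an "activation" and a "control" parameter separately.

```python
# Toy activation-vs-control model (in the spirit of process dissociation;
# the published model has more parameters). Assumed structure: with
# probability C, control succeeds and the response is correct; otherwise,
# with probability A the automatic association drives the response
# (correct on compatible trials, wrong on incompatible ones); else guess.

def estimate_parameters(p_compatible: float, p_incompatible: float):
    """Solve the toy model's two accuracy equations for (A, C)."""
    C = p_compatible + p_incompatible - 1.0          # control estimate
    A = (p_compatible - p_incompatible) / (1.0 - C)  # activation estimate
    return A, C

# Invented accuracies chosen to mirror the reported pattern:
# activation rises while control stays put.
for label, pc, pi in [("neutral", 0.95, 0.85), ("feeling bad", 0.97, 0.83)]:
    A, C = estimate_parameters(pc, pi)
    print(f"{label:12s} activation A = {A:.2f}, control C = {C:.2f}")
```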
The difference is subtle, but important, Sherman says. "If the problem was that people were having trouble inhibiting bias, you might try to train people to exert better control," he says. But his results suggest that's not the issue. "The issue is that our mind wanders to more negative aspects of other groups. The way around that is to try and think differently about other people. When you feel bad about yourself and catch yourself thinking negatively about other groups, remind yourself, 'I may be feeling this way because I just failed a test or something.'"
(http://esciencenews.com/articles/2011/02/23/people.with.low.self.esteem.show.more.signs.prejudice)
Wednesday, February 23, 2011
UK’s Chief Scientific Adviser criticizes “journalists wilfully misusing science, distorting evidence by cherry-picking data that suits their view, giving bogus authority to people who misrepresent the absolute basics of science, and worse”
Government Chief Scientific Adviser John Beddington is stepping up the war on pseudoscience with a call to his fellow government scientists to be “grossly intolerant” if science is being misused by religious or political groups.
In closing remarks to an annual conference of around 300 scientific civil servants on 3 February, in London, Beddington said that selective use of science ought to be treated in the same way as racism and homophobia. “We are grossly intolerant, and properly so, of racism. We are grossly intolerant, and properly so, of people who [are] anti-homosexuality…. We are not—and I genuinely think we should think about how we do this—grossly intolerant of pseudo-science, the building up of what purports to be science by the cherry-picking of the facts and the failure to use scientific evidence and the failure to use scientific method,” he said.
Beddington said he intends to take this agenda forward with his fellow chief scientists and also with the research councils. “I really believe that… we need to recognise that this is a pernicious influence, it is an increasingly pernicious influence and we need to be thinking about how we can actually deal with it.”
I first reported on Beddington back in 2009 when he warned that by 2030, “A ‘perfect storm’ of food shortages, scarce water and insufficient energy resources threaten to unleash public unrest, cross-border conflicts and mass migration as people flee from the worst-affected regions.” See “When the global Ponzi scheme collapses (circa 2030), the only jobs left will be green” for an amazing speech explaining why.
No doubt Beddington is thinking of UK journalists like David Rose and Richard North (see links below) — and James Delingpole, who recently melted down on the BBC and said, “It is not my job to sit down and read peer-reviewed papers because I simply haven’t got the time…. I am an interpreter of interpretations.”
Here’s more from the UK’s Chief Scientific Adviser:
“We should not tolerate what is potentially something that can seriously undermine our ability to address important problems. There are enough difficult and important problems out there without having to… deal with what is politically or morally or religiously motivated nonsense.”
Beddington also had harsh words for journalists who treat the opinions of non-scientist commentators as being equivalent to the opinions of what he called “properly trained, properly assessed” scientists. “The media see the discussions about really important scientific events as if it’s a bloody football match. It is ridiculous.”
His call has been welcomed by science groups, including the Campaign for Science and Engineering.
Edzard Ernst, professor of the study of complementary medicine at Exeter University, whose department is being closed down, said he was “delighted that somebody in [Beddington’s] position speaks out”. In an interview with Research Fortnight Ernst said that the analogy with racism was a good one and that he, like Beddington, questioned why journalists have what he called “a pathological need” to balance a scientific opinion with one from outside of science.
“You don’t have that balance in racism,” he said. “You’re not finishing [an article] by quoting the Ku Klux Klan when it is an article about racist ideas,” Ernst said.
“This is strong language because the frustration is so huge and because scientists are being misunderstood. For far too long we have been tolerant of these post-modern ideas that more than one truth is valid. All this sort of nonsense does make you very frustrated in the end.”
Ben Goldacre, a science journalist and medical doctor, agrees. “Society has been far too tolerant of politicians, lobbyists, and journalists wilfully misusing science, distorting evidence by cherry-picking data that suits their view, giving bogus authority to people who misrepresent the absolute basics of science, and worse,” he told Research Fortnight. “This distorted evidence has real world implications, because people need good evidence to make informed decisions on policy, health, and more. Beddington is frustrated, and rightly so: for years I’ve had journalists and politicians repeatedly try to brush my concerns on these issues under the carpet.” Scientists need to fight back, he says.
(http://climateprogress.org/2011/02/21/uks-chief-scientific-adviser-criticizes-journalists-wilfully-misusing-science/)
Tuesday, February 22, 2011
More Intelligent People Are More Likely to Binge Drink and Get Drunk
by Satoshi Kanazawa
Not only are more intelligent individuals more likely to consume more alcohol more frequently, they are more likely to engage in binge drinking and to get drunk.
In an earlier post, I show that, consistent with the prediction of the Hypothesis, more intelligent individuals consume larger quantities of alcohol more frequently than less intelligent individuals. The data presented in the post come from the National Child Development Study in the United Kingdom. The NCDS measures the respondents’ general intelligence before the age of 16, and then tracks the quantity and frequency of alcohol consumption throughout their adulthood in their 20s, 30s, and 40s. The graphs presented in the post show a clear monotonic association between childhood general intelligence and both the frequency and the quantity of adult alcohol consumption. The more intelligent they are in childhood, the more and the more frequently they consume alcohol in their adulthood.
There are occasional medical reports and scientific studies which tout the health benefits of mild alcohol consumption, such as drinking a glass of red wine with dinner every night. So it may be tempting to conclude that more intelligent individuals are more likely to engage in such mild alcohol consumption than less intelligent individuals, and the positive association between childhood general intelligence and adult alcohol consumption reflects such mild, and thus healthy and beneficial, alcohol consumption.
Unfortunately for the intelligent individuals, this is not the case. More intelligent children are more likely to grow up to engage in binge drinking (consuming five or more units of alcohol in one sitting) and getting drunk.
The National Longitudinal Study of Adolescent Health (Add Health) asks its respondents specific questions about binge drinking and getting drunk. For binge drinking, Add Health asks: “During the past 12 months, on how many days did you drink five or more drinks in a row?” For getting drunk, it asks: “During the past 12 months, on how many days have you been drunk or very high on alcohol?” For both questions, the respondents can answer on a seven-point ordinal scale: 0 = none, 1 = 1 or 2 days in the past 12 months, 2 = once a month or less (3 to 12 times in the past 12 months), 3 = 2 or 3 days a month, 4 = 1 or 2 days a week, 5 = 3 to 5 days a week, 6 = every day or almost every day.
As you can see in the following graph, there is a clear monotonic positive association between childhood intelligence and adult frequency of binge drinking. “Very dull” Add Health respondents (with childhood IQ < 75) engage in binge drinking less than once a year. In sharp contrast, “very bright” Add Health respondents (with childhood IQ > 125) engage in binge drinking roughly once every other month.
The association between childhood intelligence and adult frequency of getting drunk is equally clear and monotonic, as you can see in the following graph. “Very dull” Add Health respondents almost never get drunk, whereas “very bright” Add Health respondents get drunk once every other month or so.
In a multiple ordinal regression, childhood intelligence has a significant (ps < .00001) effect on adult frequency of both binge drinking and getting drunk, controlling for age, sex, race, ethnicity, religion, marital status, parental status, education, earnings, political attitudes, religiosity, general satisfaction with life, taking medication for stress, experience of stress without taking medication, frequency of socialization with friends, number of sex partners in the last 12 months, childhood family income, mother’s education, and father’s education. I honestly cannot think of any other variable that might be correlated with childhood intelligence beyond those already controlled for in the multiple regression analyses. It is very likely that it is childhood intelligence itself, and not anything else that is confounded with it, which increases the adult frequency of binge drinking and getting drunk.
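As a concrete illustration of the kind of analysis described above, here is a minimal ordinal logistic regression sketch in Python using statsmodels. The file name, column names, and the abbreviated control list are placeholders for illustration; they are not the actual Add Health variable names.

```python
# Sketch of an ordinal logistic regression of binge-drinking frequency
# (the 0-6 scale described above) on childhood IQ plus controls.
# File and column names are hypothetical placeholders.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("addhealth_wave3.csv")  # hypothetical extract

controls = ["age", "sex", "education", "earnings",
            "childhood_family_income", "mothers_education"]
model = OrderedModel(
    df["binge_freq"],                 # ordinal outcome, coded 0-6
    df[["childhood_iq"] + controls],  # focal predictor plus controls
    distr="logit",
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())               # inspect the childhood_iq coefficient
```

A positive and significant coefficient on childhood_iq net of the controls would correspond to the partial effect the post describes.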
Note that education is controlled for in the ordinal multiple regression analysis. Given that Add Health respondents in Wave III (when the dependent measures are taken) are in their early 20s, it may be tempting to conclude that the association between childhood intelligence and adult frequency of binge drinking and getting drunk is mediated by college attendance. More intelligent children are more likely to go to college, and college students are more likely to engage in binge drinking and get drunk. The significant partial effect of childhood intelligence on the adult frequency of binge drinking and getting drunk, net of education, shows that this indeed is not the case. It is childhood intelligence itself, not education, which increases the adult frequency of binge drinking and getting drunk.
In fact, in both equations, education does not have a significant effect on binge drinking and getting drunk. Net of all the other variables included in the ordinal multiple regression equations, education is not significantly correlated with the frequency of binge drinking and getting drunk. Among other things, it means that college students are more likely to engage in binge drinking, not because they are in college, but because they are more intelligent.
(http://www.psychologytoday.com/blog/the-scientific-fundamentalist/201102/more-intelligent-people-are-more-likely-binge-drink-and-ge)
Thursday, February 17, 2011
Humans Living in East Africa 200,000 Years Ago Were as Complex in their Behavior as Humans Living Today
John Shea, Ph.D., Refutes Long-Standing Myth About Human Origins
"Stone points dating to at least 104,000 years ago from Omo Kibish, Ethiopia. These points, shaped by pressure-flaking and likely used as projectile points are more than 65,000 years older than the oldest similar artifacts from the European Upper Paleolithic Period. The Omo Kibish toolmakers showed equal skill at making similar points out of very different kinds of stone.
STONY BROOK, N.Y., February 17, 2011— In a paper recently published in Current Anthropology, SBU Professor John Shea disproves the myth that the earliest humans were significantly different from us. The idea that human evolution follows a progressive trajectory is one of the most deeply entrenched assumptions about Homo sapiens evolution. In fact, archaeologists have long believed that modern human behaviors emerged tens of thousands of years after our species first evolved. And while scientists disagreed over whether the process was gradual or quick, they agreed that there once lived Homo sapiens who were very different from us.
“Archaeologists have been focusing on the wrong measurement of early human behavior,” says John Shea, Ph.D., professor of Anthropology at SBU and a Research Associate with the Turkana Basin Institute in Kenya. “The search has been for evidence of ‘behavioral modernity,’ a quality supposedly unique to Homo sapiens, when archaeologists ought to have been investigating ‘behavioral variability,’ a quantitative dimension to the behavior of all living things.”
Early humans were not “behaviorally modern,” meaning they did not collect difficult-to-procure foods, nor did they use complex technologies like traps and nets. But, according to Shea, there is now evidence that some of the behaviors associated with modern humans—specifically our capacity for wide behavioral variability—did occur among early humans.
The European Upper Paleolithic archaeological record has long been the standard against which the behavior of earlier and non-European humans is compared. During the Upper Paleolithic (45,000-12,000 years ago), Homo sapiens fossils first appeared, together with complex tool technology, carved bone tools, complex projectile weapons, advanced techniques for using fire, cave art, beads and other personal adornments. Similar behaviors are either universal or very nearly so among recent humans, and, thus, archaeologists cite evidence for these behaviors as evidence of human behavioral modernity.
Yet, the oldest Homo sapiens fossils occur between 100,000-200,000 years ago in Africa and southern Asia and in contexts lacking clear and consistent evidence for such behavioral modernity. For decades anthropologists contrasted these earlier “archaic” African and Asian humans with their “behaviorally-modern” Upper Paleolithic counterparts, explaining the differences between them in terms of a single “Human Revolution” that fundamentally changed human biology and behavior. Archaeologists disagree about the causes, timing, pace, and characteristics of this revolution, but there is a consensus that the behavior of the earliest Homo sapiens was significantly different than that of more-recent “modern” humans.
Professor Shea tested the hypothesis that there were differences in behavioral variability between earlier and later Homo sapiens using stone tool evidence dating to between 250,000 and 6,000 years ago in eastern Africa, which features the longest continuous archaeological record of Homo sapiens behavior. “A systematic comparison of variability in stone tool making strategies over the last quarter-million years shows no single behavioral revolution in our species’ evolutionary history,” notes Professor Shea. “Instead, the evidence shows wide variability in stone tool making strategies over the last quarter-million years and no single behavioral revolution. Particular changes in stone tool technology are explicable in terms of principles of behavioral ecology and the costs and benefits of different tool making strategies.”
The study, entitled “Homo sapiens Is as Homo sapiens Was: Behavioral Variability vs. ‘Behavioral Modernity’ in Paleolithic Archaeology,” has important implications for archaeological research on human origins. “Comparing the behavior of our most ancient ancestors to Upper Paleolithic Europeans holistically and ranking them in terms of their ‘behavioral modernity’ is a waste of time,” argues Shea. “There are no such things as modern humans, just Homo sapiens populations with the capacity for a wide range of behavioral variability. Whether this range is significantly different from that of earlier and other hominin species remains to be discovered, but the best way to advance our understanding of human behavior is by researching the sources of behavioral variability.”
About Stony Brook University
Part of the State University of New York system, Stony Brook University encompasses 200 buildings on 1,450 acres. In the 53 years since its founding, the University has grown tremendously, now with nearly 25,000 students and 2,200 faculty, and is recognized as one of the nation’s important centers of learning and scholarship. It is a member of the prestigious Association of American Universities, and ranks among the top 100 national universities in America and among the top 50 public national universities in the country according to the 2010 U.S. News & World Report survey. Stony Brook University co-manages Brookhaven National Laboratory, as one of an elite group of universities, including Berkeley, University of Chicago, Cornell, MIT, and Princeton that run federal research and development laboratories. SBU is a driving force of the Long Island economy, with an annual economic impact of $4.65 billion, generating nearly 60,000 jobs, and accounts for nearly 4% of all economic activity in Nassau and Suffolk counties, and roughly 7.5 percent of total jobs in Suffolk County.
"Stone points dating to at least 104,000 years ago from Omo Kibish, Ethiopia. These points, shaped by pressure-flaking and likely used as projectile points are more than 65,000 years older than the oldest similar artifacts from the European Upper Paleolithic Period. The Omo Kibish toolmakers showed equal skill at making similar points out of very different kinds of stone.
STONY BROOK, N.Y., February 17, 2011— In a paper recently published in Current Anthropology, SBU Professor John Shea disproves the myth that the earliest humans were significantly different from us. The idea that human evolution follows a progressive trajectory is one of the most deeply-entrenched assumptions about Homo sapiens evolution. In fact, archaeologists have long believed that modern human behaviors emerged tens of thousands of years after our species first evolved. And while scientists disagreed over whether the process was gradual or quick, they have agreed that Homo sapiens once lived who were very different from us.
“Archaeologists have been focusing on the wrong measurement of early human behavior,” says John Shea, Ph.D, professor of Anthropology at SBU and a Research Associate with the Turkana Basin Institute in Kenya. “The search has been for evidence of ‘behavioral modernity,’ a quality supposedly unique to Homo sapiens, when archaeologists ought to have been investigating ‘behavioral variability,’ a quantitative dimension to the behavior of all living things.”
Early humans were not “behaviorally modern,” meaning they did not collect difficult-to-procure foods, nor did they use complex technologies like traps and nets. But, according to Shea, there is now evidence that some of the behaviors associated with modern humans—specifically our capacity for wide behavioral variability, —did occur among early humans.
The European Upper Paleolithic archaeological record has long been the standard against which the behavior of earlier and non-European humans is compared. During the Upper Paleolithic (45,000-12,000 years ago), Homo sapiens fossils first appeared, together with complex tool technology, carved bone tools, complex projectile weapons, advanced techniques for using fire, cave art, beads and other personal adornments. Similar behaviors are either universal or very nearly so among recent humans, and, thus, archaeologists cite evidence for these behaviors as evidence of human behavioral modernity.
Yet, the oldest Homo sapiens fossils occur between 100,000-200,000 years ago in Africa and southern Asia and in contexts lacking clear and consistent evidence for such behavioral modernity. For decades anthropologists contrasted these earlier “archaic” African and Asian humans with their “behaviorally-modern” Upper Paleolithic counterparts, explaining the differences between them in terms of a single “Human Revolution” that fundamentally changed human biology and behavior. Archaeologists disagree about the causes, timing, pace, and characteristics of this revolution, but there is a consensus that the behavior of the earliest Homo sapiens was significantly different than that of more-recent “modern” humans.
Professor Shea tested the hypothesis that there were differences in behavioral variability between earlier and later Homo sapiens using stone tool evidence dating to between 250,000-6,000 years ago in eastern Africa, which features the longest continuous archaeological record of Homo sapiens behavior. “A systematic comparison of variability in stone tool making strategies over the last quarter-million years shows no single behavioral revolution in our species’ evolutionary history,” notes Professor Shea. “Instead, the evidence shows wide variability in stone tool making strategies over the last quarter-million years and no single behavioral revolution. Particular changes in stone tool technology are explicable in terms of principles of behavioral ecology and the costs and benefits of different tool making strategies.”
The study, entitled “Homo sapiens Is as Homo sapiens Was: Behavioral Variability vs. ‘Behavioral Modernity’ in Paleolithic Archaeology,” has important implications for archaeological research on human origins. “Comparing the behavior of our most ancient ancestors to Upper Paleolithic Europeans holistically and ranking them in terms of their “behavioral modernity” is a waste of time,” argues Shea. “There are no such things as modern humans, just Homo sapiens populations with the capacity for a wide range of behavioral variability. Whether this range is significantly different from that of earlier and other hominin species remains to be discovered, but the best way to advance our understanding of human behavior is by researching the sources of behavioral variability.”
About Stony Brook University
Part of the State University of New York system, Stony Brook University encompasses 200 buildings on 1,450 acres. In the 53 years since its founding, the University has grown tremendously, now with nearly 25,000 students and 2,200 faculty, and is recognized as one of the nation’s important centers of learning and scholarship. It is a member of the prestigious Association of American Universities, and ranks among the top 100 national universities in America and among the top 50 public national universities in the country according to the 2010 U.S. News & World Report survey. Stony Brook University co-manages Brookhaven National Laboratory, as one of an elite group of universities, including Berkeley, University of Chicago, Cornell, MIT, and Princeton that run federal research and development laboratories. SBU is a driving force of the Long Island economy, with an annual economic impact of $4.65 billion, generating nearly 60,000 jobs, and accounts for nearly 4% of all economic activity in Nassau and Suffolk counties, and roughly 7.5 percent of total jobs in Suffolk County.
Baby gorilla takes its first steps
Remember the tiny baby gorilla born at London Zoo last October? Well, he's back and this time zookeepers were on hand with a video camera when Tiny - as he's now called - took his first steps this week.
The infant has been clinging to his mother since his birth, so it's no surprise she gave him an encouraging shove in the right direction when he tried to return for a cuddle. The keepers are now trying to come up with a permanent name, as he is fast outgrowing his nickname.
Wednesday, February 16, 2011
Supermassive black holes not so big after all
BRISTOL: Supermassive black holes are a factor of 2 to 10 less massive than previously thought, according to new calculations published by German astrophysicists.
At the centre of most galaxies, including our own, sit supermassive black holes, believed to be between 100,000 and several billion times more massive than the Sun. Previous estimates of black hole masses had contradicted theory, particularly for far away or young black holes. But new research shows that these estimates were wrong.
“It caused problems for the theory of galactic evolution that young galaxies should have these massive black holes,” said lead researcher Wolfram Kollatschny of the University of Göttingen in Germany. “Knowing the rotational velocity of surrounding material we could calculate the central black hole masses unambiguously.”
Probing the black holes
Supermassive black holes are thought to grow from massive star supernovae, sucking in so much surrounding gas that they eventually gravitate to the centre of their galaxy. They are surrounded by bright hot discs of material - called accretion discs - waiting to fall into the abyss.
Emission spectra (which identify the elements in matter) emanating from these discs contain important information about the black holes they surround. Scientists use one line in these spectra to estimate young and distant black hole masses and another for closer black holes.
What the latest research published in the journal Nature shows, however, is that one line “is always broader” than the other. “If we don’t correct for this effect we overestimate the masses of distant and young black holes,” said Kollatschny.
All previous calculations overestimated
Kollatschny and Matthias Zetzl, also from the University of Göttingen, dissected spectra from 37 active galactic nuclei and found that the line widths of broad emission lines are caused by a combination of turbulence and rotational speed.
“We could separate their shares in individual emission lines,” explained Kollatschny. “Only the rotational velocity should be used to derive the central black hole masses.”
When they did this they found that previously calculated masses had all been overestimated. Furthermore, they found that “the ratio of the turbulence with respect to the rotational speed gives detailed information on the accretion disk geometrical structure.”
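To see why this matters, recall the standard virial estimator M = f R (Δv)^2 / G, in which the inferred mass scales with the square of the velocity read off the line width. A minimal Python sketch with made-up numbers (none are from the Nature paper) shows how attributing the full width to rotation inflates the mass:

```python
# Minimal sketch of a virial black-hole mass estimate, M = f * R * v^2 / G,
# illustrating why using the full observed line width (rotation + turbulence)
# instead of the rotational component alone inflates the mass.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
LIGHT_DAY = 2.59e13     # one light day, m

def virial_mass(v_kms, r_light_days, f=1.0):
    """Black-hole mass in solar masses from line-of-sight velocity (km/s)
    and broad-line-region radius (light days); f is the virial factor."""
    v = v_kms * 1e3
    r = r_light_days * LIGHT_DAY
    return f * v**2 * r / G / M_SUN

# Illustrative numbers (assumptions, not measurements from the paper):
m_full = virial_mass(4000.0, 20.0)  # full line width treated as rotation
m_rot  = virial_mass(2500.0, 20.0)  # rotational component alone
print(f"full line width:     {m_full:.2e} solar masses")
print(f"rotation only:       {m_rot:.2e} solar masses")
print(f"overestimate factor: {m_full / m_rot:.1f}")
```

With these assumed numbers the mass drops by a factor of (4000/2500)^2, about 2.6, comfortably inside the 2-to-10 range reported above.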
A clue to galaxy formation
According to Emanuele Berti from the University of Mississippi it is important to have accurate estimates of black hole masses: “It is widely believed that the black hole mass is intimately related to other properties of their galactic environment.”
He continued, “If we can measure this at different times during cosmic history, we may learn something about how the Universe became what we see today.”
Since an astrophysical black hole is characterised by just two properties, mass and angular momentum (its electric charge is negligible in practice), an accurate estimate of the mass is essential if astrophysicists are to have any hope of understanding what is going on now and what went on during galaxy formation.
Gas can potentially corrupt results
Scientists are confident that they know the mass of the supermassive black hole at the centre of the Milky Way, as Berti explained, “Observing the orbits of stars at the centre of our own galaxy yielded the most precise supermassive black hole mass measurement to date.”
However, David Ballantyne of Georgia Institute of Technology in the U.S. urged caution when using other methods.
“For active galactic nuclei it is tricky to estimate the black hole mass because they are relatively rare (and farther away), and they emit a lot of light which blocks our view of the nucleus,” he said.
Hence, said Ballantyne, scientists are forced to use gas tracers, but gas can be compressed, heated and/or shocked, which can corrupt its velocity signature. “The accuracy of these methods is not fully known.”
(http://www.cosmosmagazine.com/news/4060/supermassive-black-holes-not-so-massive?page=0%2C1)