Hot flashes may be welcome sign in women with breast cancer, study says
CHICAGO -- Women on tamoxifen therapy who reported having hot flashes were less likely to develop recurrent breast
cancer than those who did not report hot flashes, according to a study from the Moores Cancer Center at the University of California, San Diego (UCSD). Moreover, hot flashes were a stronger predictor of outcome than age, hormone receptor status or even how advanced the breast cancer was at diagnosis.
The study results were presented at the American Society of Clinical Oncology (ASCO) annual meeting in Chicago
today.
"Hot flashes are a very common and disruptive problem in breast cancer survivors," said the study's first author
Joanne Mortimer, M.D., medical director of the Moores Cancer Center and professor of medicine with the UCSD School
of Medicine. "About two-thirds of women with breast cancer say hot flashes compromise their quality of life. The most
common request for additional treatment we get is for relief from these symptoms."
The study was based upon data from the comparison group of the Women's Healthy Eating and Living (WHEL)
study – a multi-site randomized trial of the impact of a diet high in vegetables, fruits and fiber, and low in fat on the
recurrence of breast cancer. The WHEL participating institutions are University of California, San Diego and Davis,
Stanford University, Kaiser Permanente in Oakland and Portland, University of Arizona at Tucson, and the University of
Texas MD Anderson Cancer Center in Houston.
Of the 1,551 women with early-stage breast cancer who were randomized to the comparison group of the WHEL
study, more than half (864, or 56 percent) were taking tamoxifen, and more than three-quarters of those (674, or 78
percent) reported hot flashes.
Cancer recurrence among women who reported hot flashes was 12.9 percent, compared with 21 percent for
women not reporting hot flashes. These data were consistent across all years of follow-up, regardless of age or
menopausal status.
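To see how the reported rates relate to the group sizes, here is a minimal illustrative sketch in Python; the group counts are taken from the article, but the recurrence counts are back-calculated from the percentages, so treat them as approximations rather than the study's raw data.

```python
# Illustrative back-calculation from the figures reported above.
# Group sizes from the article: 864 tamoxifen users, 674 of whom
# reported hot flashes (so 190 did not).
hot_flash_n = 674
no_hot_flash_n = 864 - 674  # 190

hot_flash_rate = 0.129    # 12.9 percent recurrence, as reported
no_hot_flash_rate = 0.21  # 21 percent recurrence, as reported

# Approximate recurrence counts implied by those rates
hot_flash_recurrences = round(hot_flash_n * hot_flash_rate)        # ~87
no_hot_flash_recurrences = round(no_hot_flash_n * no_hot_flash_rate)  # ~40

# Relative risk of recurrence for women reporting hot flashes
relative_risk = hot_flash_rate / no_hot_flash_rate
print(f"Relative risk: {relative_risk:.2f}")  # ~0.61, i.e. ~39% lower risk
```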
"This study provides the first evidence that hot flashes may be an indicator of a better prognosis in women with
early-stage breast cancer," said the study's senior author, John P. Pierce, Ph.D., director of the Cancer Prevention and
Control Program at the Moores UCSD Cancer Center. "Our data support the possibility of a significant association
between hot flashes and disease outcome."
As a next step, the researchers plan to further study the relationship between hot flashes and breast cancer progression by measuring the tamoxifen metabolites in breast cancer survivors.
Stanford researchers track human stem cells transplanted into rat brain
STANFORD, Calif. -- Researchers at the Stanford University School of Medicine have illuminated the path taken by human
neural stem cells that were transplanted into the brains of rats and mice, and found that the cells successfully navigate
toward areas damaged by stroke.
The research group placed minuscule particles of iron inside stem cells to act as cellular beacons detected by
magnetic resonance imaging. With the ability to monitor where the human stem cells go in real time, researchers will
have an easier time learning the best way of using the cells to treat human neural disorders, such as stroke, traumatic
brain injury, Parkinson's disease or radiation damage.
The findings, to be published in the June 4 advance online version of the Proceedings of the National Academy of
Sciences, could eventually make it possible to track human stem cells that are transplanted into the brains of patients.
Gary Steinberg, MD, PhD, who led the research group, said the work also shows that the iron doesn't disrupt the
normal function of the stem cells. "This work is important because if a method of tracking the cells changes their biology, it will not be helpful," said Steinberg, senior author of the paper and the Bernard and Ronni Lacroute-William
Randolph Hearst Professor in Neurosurgery and Neurosciences.
In a 2006 study, Steinberg and his colleagues had shown that the same human stem cells used in this study were
able to migrate toward a brain region in rats that mimicked a human stroke. They also found that those cells matured
into the types of cells they would expect to find in that part of the brain.
The only problem was that in order to find out where the cells ended up, they had to kill the rats - not an approach
that can be used for human clinical trials. What the researchers needed was a way of tracking the stem cells in real
time to find out whether cells migrated appropriately and survived.
Steinberg said that the iron particles, called superparamagnetic iron oxide or SPIO, have been used for more than a
decade to track cells in living animals, including in rat neural stem cells. If the point is to use the technique in humans,
he and postdoctoral scholar Raphael Guzman, MD, wanted to make sure that the particles worked in human cells as
well.
"I think it's critical that we are applying this technique in human stem cells that can be used in human clinical trials,"
said Guzman, who is lead author of the paper. He said that because they chose to work with those cells, their results
can be directly translated to human trials.
They were reassured that putting the iron particles in the cells didn't change the stem cells' biological properties.
Also, when the group placed those iron-filled human neural stem cells into the brains of rats - either healthy fetal or
adult rats, or rats that had experienced a stroke - the cells behaved as expected in each case.
In fetal mice with brains still developing, the group injected stem cells into the fluid-filled brain regions called
ventricles. From there, the iron-filled cells migrated along the path that stem cells normally take to populate the
developing brain. Those stem cells also matured into the proper types of brain cells.
In adult rats that had a simulated stroke, the human stem cells migrated into the damaged region, matured into the
appropriate type of neuron and support cells and appeared to integrate into the surrounding tissue. The research
group is currently testing whether those transplanted cells repaired stroke-induced damage to the rats' ability to move
or learn.
The only situation that rendered the neural stem cells immobile was the healthy adult rat brain. As with Steinberg's
previous work, the group found that in the absence of any signals to beckon the stem cells, they stayed close to where
the researchers implanted them.
All of this adds up to encouraging news for researchers hoping to use stem cells to treat human disease. For now,
nobody knows the best way of inserting the cells, the conditions that are best for cell survival, or the optimal timing
after an injury for when transplanting the cells is most effective. With the ability to watch the cells in real time, researchers can compare different techniques to learn what works best.
The cells used in this study were similar to those that are part of a clinical trial for a childhood disorder called
Batten's disease. Steinberg said he and others are interested in testing these or other stem cells as a way of treating a
wide range of diseases.
Geoengineering -- A quick fix with big risks
Stanford, CA -- Radical steps to engineer Earth’s climate by blocking sunlight could drastically cool the planet, but could
just as easily worsen the situation if these projects fail or are suddenly halted, according to a new computer modeling
study.
The experiments, described in the June 4 early online edition of The Proceedings of the National Academy of
Sciences, look at what might happen if we attempt to slow climate change by “geoengineering” a solar filter instead of
reducing carbon dioxide emissions. The researchers used a computer model to simulate a decrease in solar radiation
across the entire planet, but assumed that the current trend of increasing global carbon dioxide emissions would
continue for the rest of this century.
“Given current political and economic trends, it is easy to become pessimistic about the prospect that needed cuts
in carbon dioxide emissions will come soon enough or be deep enough to avoid irreversibly damaging our climate,”
said co-author Ken Caldeira of the Carnegie Institution’s Department of Global Ecology. “If we want to consider more
dramatic options, such as deliberately altering the Earth’s climate, it’s important to understand how these strategies
might play out.”
Although the term “geoengineering” describes any measure intended to modify the Earth at the planetary scale, the
current study focuses on changes that reduce the amount of solar radiation that reaches the planet’s surface. Several
methods to accomplish this have been suggested, from filling the upper atmosphere with light-reflecting sulfate particles to installing mirrors in orbit around the planet.
According to the model, even after greenhouse gases warm the planet, geoengineering schemes could cool off the
Earth within a few decades to temperatures not seen since the dawn of the industrial revolution. This is good news,
according to Caldeira and lead author Damon Matthews of Concordia University in Montreal, Canada, because it
suggests there is no need to rush into building a geoengineering system before it is absolutely necessary.
However, the study also offers some bad news. If any hypothetical geoengineering program were to fail or be
cancelled for any reason, a catastrophic, decade-long spike in global temperatures could result, along with rates of
warming 20 times greater than we are experiencing today.
“If we become addicted to a planetary sunshade, we could experience a painful withdrawal if our fix was suddenly
cut off,” Caldeira explained. “This needs to be taken into consideration if we ever think seriously about implementing a
geoengineering strategy.”
Caldeira and Matthews believe that lower temperatures in a geoengineered world would result in more efficient
storage of carbon in plants and soils. However, if the geoengineering system failed and temperatures suddenly increased, much of that stored carbon would be released back into the atmosphere. This, in turn, could lead to accelerated greenhouse warming.
Reduced solar radiation not only affects temperatures in the simulations, but also global rainfall patterns. In a
model run with no simulated geoengineering, warmer temperatures resulted in more rainfall over the oceans, while
increased carbon dioxide levels caused a decrease in evaporation from plants’ leaves, and consequently a decrease in
rainfall over tropical forests. In contrast, the geoengineering scenario—which had lower temperatures but the same
high levels of carbon dioxide—resulted only in a decrease in tropical forest rainfall.
“Many people argue that we need to prevent climate change. Others argue that we need to keep emitting
greenhouse gases,” Caldeira said. “Geoengineering schemes have been proposed as a cheap fix that could let us have
our cake and eat it, too. But geoengineering schemes are not well understood. Our study shows that planet-sized
geoengineering means planet-sized risks.”
Caldeira feels it is important to develop a scientific understanding of proposed geoengineering schemes. “I hope I
never need a parachute, but if my plane is going down in flames, I sure hope I have a parachute handy,” Caldeira
said. ”I hope we’ll never need geoengineering schemes, but if a climate catastrophe occurs, I sure hope we will have
thought through our options carefully.”
Building our new view of Titan
Today, two and a half years after the historic landing of ESA’s Huygens probe on Titan, a new set of results on Saturn’s
largest moon is ready to be presented. Titan, as seen through the eyes of Huygens, still holds exciting surprises,
scientists say.
Latest Titan results
On 14 January 2005, after a seven-year voyage on board the
NASA/ESA/ASI Cassini spacecraft, ESA’s Huygens probe spent 2 hours and
28 minutes descending by parachute to land on Titan. It then sent transmissions from the surface for another 70 minutes before Cassini
moved out of range.
On 8 December that year, a combined force of scientists published their
preliminary findings in Nature. Now, after another year and a half of patient work, they are ready to add fresh details to their picture of Titan. This
time, the papers are published in a special issue of the journal Planetary and Space
Science.
Huygens' view of Titan
“The added value comes from computer modelling,” says Jonathan
Lunine, Huygens Interdisciplinary Scientist from the Lunar and Planetary
Laboratory, University of Arizona.
By driving their computer models of Titan to match the data returned
from the probe, planetary scientists can now visualise Titan as a working
world. “Even though we have only four hours of data, it is so rich that after
two years of work we have yet to retrieve all the information it contains,”
says François Raulin, Huygens Interdisciplinary Scientist, at the Laboratoire
de Physique et Chimie de l'Environnement, Paris.
The new details add greatly to the picture of Saturn’s largest moon.
“Titan is a world very similar to the Earth in many respects,” says
Jean-Pierre Lebreton, ESA Huygens Project Scientist.
Huygens found that the atmosphere was hazier than expected because
of the presence of dust particles – called ‘aerosols’. Now, scientists are
learning how to interpret their analysis of these aerosols, thanks to a special chamber that simulates Titan’s atmosphere.
Drainage, flow and erosion on the Huygens landing site
When the probe dropped below 40 kilometres in altitude, the haze cleared and the cameras were able to take their
first distinct images of the surface. They revealed an extraordinary landscape showing strong evidence that a liquid,
possibly methane, has flowed on the surface, causing erosion. Now, images from Cassini are being coupled with the
‘ground truth’ from Huygens to investigate how conditions on Titan carved
out this landscape.
As the probe descended, Titan’s winds carried it over the surface. A new
model of the atmosphere, based on the winds, reveals that Titan’s atmosphere is a giant conveyor belt, circulating its gas from the south pole to the
north pole and back again.
Also, the tentative detection of an extremely low frequency (ELF) radio
wave has planetary scientists equally excited. If they confirm that it is a
natural phenomenon, it will give them a way to probe into the moon’s subsurface, perhaps revealing an underground ocean.
Tectonic and fluid-flow patterns on Titan
The journey Huygens took to the surface is the subject of the most intense scrutiny, with many papers devoted to
it. When an anomaly onboard Cassini robbed scientists of data from the Doppler Wind Experiment (DWE), engineers and
scientists undertook a painstaking analysis of data collected by radio telescopes on Earth that were tracking Huygens.
They succeeded in recovering the movement of the probe, providing an accurate wind profile and helping to place some
of the images and data from Huygens into their correct context.
Now corroborating evidence, resulting from a thorough analysis of many instruments and engineering sensors on
Huygens, is adding unprecedented detail to the movement of the probe during its descent.
And there is still more science to come. “There are so many papers dealing with the results from Huygens that we
could not prepare all of them in time for this issue. So a second special issue is already in preparation,” says Raulin.
Research shows survival benefit for leukemia patients treated with arsenic trioxide
WINSTON-SALEM, N.C. -- Through participation in a government-sponsored multi-year study, researchers at the Comprehensive Cancer Center at Wake Forest University have helped confirm that arsenic trioxide – marketed as
Trisenox® – significantly improves patient survival when coupled with standard chemotherapy treatment in newly
diagnosed patients with acute promyelocytic leukemia, or APL.
Bayard Powell, M.D., principal investigator of the study and professor and section head of Hematology and Oncology
at Wake Forest University Baptist Medical Center, presented the findings today at the annual meeting of the
American Society of Clinical Oncology (ASCO).
Nearly 600 patients in the U.S. and Canada participated in the phase III study over a six-year period – from June
1999 through March 2005. The study was sponsored by the National Cancer Institute (NCI) and led by one of its
cooperative groups, the Cancer and Leukemia Group B (CALGB). The Comprehensive Cancer Center at Wake Forest
University is a member of CALGB.
“Patients receiving the arsenic trioxide had a significantly higher likelihood of remaining disease-free, with longer
survival than those receiving standard chemotherapy alone,” said Powell. “The results are so compelling that we
recommend use of arsenic trioxide in first-line treatment of APL.”
APL is a cancer of the bone marrow in which cancerous cells eventually crowd out the healthy blood cells needed for
the body to function normally.
Karen Shelton, a 58-year-old nurse from Kannapolis, was diagnosed with APL in 2001. Ms. Shelton was offered the
opportunity to participate in the arsenic trioxide trial. “I was eager to take part in the study. Even if it didn’t help me, it
might help others,” she said. She was one of 26 patients enrolled through Wake Forest Baptist.
In remission for nearly six years, Shelton is an advocate of the therapy. “I truly believe chemotherapy put me in
remission, but the arsenic therapy sealed the deal.”
Arsenic trioxide was approved by the Food and Drug Administration nearly seven years ago for use in patients with
APL who had not responded to, or had stopped responding to, standard first-line therapy.
Approximately 81 percent of patients with APL who received arsenic trioxide were alive and remained in remission
-- free of leukemia -- three years after diagnosis compared to 66 percent of patients treated with the standard regimen
of chemotherapy. The experimental combination also improved overall survival: after three years, 86 percent of the
patients who received the arsenic trioxide regimen were alive, compared with 79 percent of patients on the standard
treatment.
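As a rough way to read these figures, here is a minimal sketch (not from the study itself) converting the reported three-year rates into absolute gains and a number-needed-to-treat estimate:

```python
# Illustrative arithmetic on the three-year outcomes reported above.
arsenic_dfs, standard_dfs = 0.81, 0.66  # disease-free survival at 3 years
arsenic_os, standard_os = 0.86, 0.79    # overall survival at 3 years

# Absolute improvements with the arsenic trioxide regimen
print(f"Disease-free survival gain: {arsenic_dfs - standard_dfs:.0%}")  # 15%
print(f"Overall survival gain: {arsenic_os - standard_os:.0%}")         # 7%

# Number needed to treat: patients treated per extra disease-free survivor
nnt = 1 / (arsenic_dfs - standard_dfs)
print(f"NNT for disease-free survival: {nnt:.1f}")  # ~6.7
```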
“Up to 30 percent of patients with APL will relapse from current first-line therapy, so finding more effective therapies that enhance overall survival and keep patients in remission is critical,” said Powell. “This study represents a major
step forward in the treatment of patients with this type of leukemia and reinforces the important role clinical trials play
in unlocking new or different combinations of therapies to fight cancer, not to mention the hope it gives patients and
their families.”
Powell said the willingness of patients with leukemia and their physicians to participate in the clinical trial has
markedly improved the outcome for participants and future patients with APL.
Study participants underwent extensive follow-up (32 months on average), including weekly blood tests and regular
bone marrow tests. Arsenic trioxide appeared to be well tolerated and did not increase toxicities compared to the
standard chemotherapy regimen. Patients participated through one of five NCI-sponsored North American Cooperative
Oncology Groups.
Aluminum foil lamps outshine incandescent lights
CHAMPAIGN, Ill. -- Researchers at the University of Illinois are developing panels of microcavity plasma lamps that may
soon brighten people’s lives. The thin, lightweight panels could be used for residential and commercial lighting, and for
certain types of biomedical applications.
“Built of aluminum foil, sapphire and small amounts of gas, the panels are less than 1 millimeter thick, and can hang
on a wall like picture frames,” said Gary Eden, a professor of electrical and computer engineering at the U. of I., and
corresponding author of a paper describing the microcavity plasma lamps in the June issue of the Journal of Physics D:
Applied Physics.
Like conventional fluorescent lights, microcavity plasma lamps are glow-discharges in which atoms of a gas are
excited by electrons and radiate light. Unlike fluorescent lights, however, microcavity plasma lamps produce the
plasma in microscopic pockets and require no ballast, reflector or heavy metal housing. The panels are lighter, brighter
and more efficient than incandescent lights and are expected, with further engineering, to approach or surpass the
efficiency of fluorescent lighting.
The plasma panels are also six times thinner than panels composed of light-emitting diodes, said Eden, who also is
a researcher at the university’s Coordinated Science Laboratory and the Micro and Nanotechnology Laboratory.
A plasma panel consists of a sandwich of two sheets of aluminum foil separated by a thin dielectric layer of clear
aluminum oxide (sapphire). At the heart of each lamp is a small cavity, which penetrates the upper sheet of aluminum
foil and the sapphire.
“Each lamp is approximately the diameter of a human hair,” said visiting research scientist Sung-Jin Park, lead
author of the paper. “We can pack an array of more than 250,000 lamps into a single panel.”
Completing the panel assembly is a glass window 500 microns (0.5 millimeters) thick. The window’s inner surface is
coated with a phosphor film 10 microns thick, bringing the overall thickness of the lamp structure to 800 microns.
Flat panels with radiating areas of more than 200 square centimeters have been fabricated, Park said. Depending
upon the type of gas and phosphor used, uniform emissions of any color can be produced.
In the researchers’ preliminary plasma lamp experiments, values of the efficiency – known as luminous efficacy – of
15 lumens per watt were recorded. Values exceeding 30 lumens per watt are expected when the array design and
microcavity phosphor geometry are optimized, Eden said. A typical incandescent light has an efficacy of 10 to 17
lumens per watt.
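To put those efficacy figures side by side, here is a small illustrative sketch; the 40-watt input is an assumed example, not a figure from the paper.

```python
# Compare light output at an assumed input power using the efficacies above.
watts = 40  # hypothetical input power, for illustration only

efficacies = {
    "microcavity plasma (measured)": 15,   # lumens per watt, reported
    "microcavity plasma (projected)": 30,  # expected after optimization
    "incandescent (low end)": 10,
    "incandescent (high end)": 17,
}

for lamp, lm_per_w in efficacies.items():
    print(f"{lamp}: {watts * lm_per_w} lumens at {watts} W")
```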
The researchers also demonstrated flexible plasma arrays sealed in polymeric packaging. These devices offer new
opportunities in lighting, in which lightweight arrays can be mounted onto curved surfaces – on the insides of windshields, for example.
The flexible arrays also could be used as photo-therapeutic bandages to treat certain diseases – such as psoriasis –
that can be driven into remission by narrow-spectrum ultraviolet light, Eden said.
Postoperative complications of living right liver donors
System for classifying complications helps pinpoint incidence rate
More than 78 percent of living right liver donors experienced post-operative complications, according to a new
study that uses a replicable complication classification system. Most of the complications were minor, though some
were more serious. The full findings are published in Liver Transplantation, a journal published by John Wiley & Sons. The article
is also available online via Wiley Interscience at http://www.interscience.wiley.com/journal/livertransplantation.
The demand for donor livers far outstrips the supply from deceased donors, so living donor liver transplantation has
become increasingly common ever since it was first reported successful in 1989. While a donor left hepatectomy is
associated with fewer complications than a right hepatectomy, often the right liver is needed to meet the metabolic
demands of large recipients. Complication rates from right hepatectomy have been reported to range between 0
percent and 67 percent, depending on the definition of morbidity.
In an effort to pinpoint the true complication rate for living right liver donors, researchers led by Kyung-Suk Suh of
Seoul National University College of Medicine, prospectively analyzed the outcomes of 83 consecutive living donor
right hepatectomies using a standardized classification of the severity of complications. They used a modified Clavien
system: Grade I=minor complications; Grade II=potentially life-threatening complications requiring pharmacologic
treatment; Grade III=complications requiring invasive treatment; Grade IV=complications causing organ dysfunction
requiring ICU management; Grade V=complications resulting in death. (Please note that the original classification of
postoperative complication was suggested by Clavien et al and Broering et al.)
The study took place between January 2002 and July 2004 at the Seoul National University Hospital, and the donors,
usually offspring of the recipients, underwent either right hepatectomy or a modified extended right hepatectomy. The
researchers monitored them for complications for 12 months after their surgery.
There were no significant differences in the types and incidences of complications between the donors who underwent right hepatectomy and those who underwent modified extended right hepatectomy. Overall, 65 of the 83
donors (78.3 percent) experienced complications. “Most were minor and self-limited or were silent in that they were
only noted in laboratory and protocol imaging studies,” the authors report. “However, several patients experienced
potentially life-threatening complications requiring additional treatment.”
Sixty-four patients (77.1 percent) had Grade I complications, most commonly hyperbilirubinemia and pleural effusion. Eleven donors had Grade II complications, mostly bile leakage. One donor had a Grade III complication. And
no donors had Grade IV or V complications. At the one-year follow-up, 93 percent of donors had normal bilirubin and
ALT levels.
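A quick check of the reported percentages against the raw counts given in the article (an illustrative sketch; note that a donor can have complications of more than one grade, so the grade counts exceed the 65 affected donors):

```python
# Verify the complication percentages reported above against the raw counts.
total_donors = 83
grade_counts = {"I": 64, "II": 11, "III": 1, "IV": 0, "V": 0}
any_complication = 65  # donors with at least one complication

for grade, n in grade_counts.items():
    print(f"Grade {grade}: {n}/{total_donors} = {n / total_donors:.1%}")

print(f"Any complication: {any_complication}/{total_donors} = "
      f"{any_complication / total_donors:.1%}")  # 78.3%, as reported
```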
“In conclusion, although most of these adverse events were minor and self-limited, 78 percent of right liver donors
still experienced morbidity,” the authors report. “Therefore, continuous standardized reporting of the donor morbidity
as well as meticulous surgery and intensive care is essential for the success of donor right hepatectomy implementation.”
An accompanying editorial by Yasuhiko Sugawara et al. of the University of Tokyo praised the effort to introduce a
standardized assessment system to evaluate the rate of complications for living liver donors. “The modified Clavien’s
classification system introduced in 2004 is simple and informative. We believe its use will greatly enhance the comparison of living donor liver transplantation outcomes,” the authors write. “From now on, the modified Clavien’s classification system should be used whenever surgical complications of live liver donors are discussed.”
Older men may not live as long if they have low testosterone
Low levels of testosterone may increase the long-term risk of death in men over 50 years old, according to researchers with the Department of Family and Preventive Medicine at the University of California, San Diego School of
Medicine.
"The new study is only the second report linking deficiency of this sex hormone with increased death from all
causes, over time, and the first to do so in relatively healthy men who are living in the community," said Gail Laughlin,
Ph.D., assistant professor and study author.
Laughlin will present the findings to The Endocrine Society on Tuesday, June 5, 2007. The findings will be among
selected articles published in The Endocrine Society's ENDO 07 Research Summaries Book.
"We have followed these men for an average of 18 years and our study strongly suggests that the association
between testosterone levels and death is not simply due to some acute illness," said Laughlin.
In the study, Laughlin and co-workers looked at death, no matter the cause, in nearly 800 men, ages 50 to 91 years,
who were living in Rancho Bernardo, California. The participants have been members of the Rancho Bernardo Heart
and Chronic Disease Study since the 1970s. At the beginning of the 1980s, almost one-third of these men had
suboptimal blood testosterone levels for men their age.
The group with low testosterone levels had a 33 percent greater risk of death during the next 18 years than the
men with higher testosterone. This difference was not explained by smoking, drinking, physical activity level or
pre-existing diseases (such as diabetes or heart disease).
In this study, "low testosterone" levels were set at the lower limit of the normal range for young adult men. Testosterone declines slowly with aging in men and levels vary widely, with many older men still having testosterone levels
in the range of young men. Twenty-nine percent of Rancho Bernardo men had low testosterone.
Distinguishing Factors
Men with low testosterone were more likely to have elevated markers of inflammation, called inflammatory cytokines, which contribute to many diseases. Another characteristic that distinguished the men with low testosterone was
a larger waist girth along with a cluster of cardiovascular and diabetes risk factors related to this type of fat accumulation. Men with low testosterone are three times more likely to have the metabolic syndrome than men with higher
testosterone levels; metabolic syndrome is the name for the presence of three or more of these risk factors:
* waist measurement more than 40 inches in men (more than 35 inches in women)
* low HDL (good) cholesterol
* high triglycerides (levels of fat in the blood)
* high blood pressure
* high blood glucose (blood sugar)
While the study lends support to the belief that supplemental hormone therapy may help older men with low
testosterone levels, those who practice weight control and increase their physical activity may also live longer.
"It’s very possible that lifestyle determines what level of testosterone a patient has," commented principal investigator, Elizabeth Barrett-Connor, M.D., UCSD Distinguished Professor of Family and Preventive Medicine and chief of
the Division of Epidemiology. "It may be possible to alter the testosterone level by lowering obesity."
Barrett-Connor and Laughlin were also careful to clarify what the study did not show.
"The study did show there may be an association between low testosterone levels and higher mortality. It did not
show that higher levels of testosterone are associated with decreased mortality," explained Laughlin. Researchers
agree only randomized placebo-controlled clinical trials can determine whether testosterone supplements can safely
promote longer life. Such a trial is in the planning stages at UCSD.
Barrett-Connor cautioned, "We are very excited about these findings, which have important implications, but we
are not ready to say that men should go out and get testosterone to prolong their lives. We’re not ready to take this to
the prescribing pharmacist."
"Conventional wisdom is that women live longer because estrogen is good and testosterone is bad," said Barrett-Connor. "We don’t know. Maybe the decline in testosterone is healthy and comes with older age. Maybe the decline is bad and is associated with chronic diseases of aging."
The National Institute on Aging and the American Heart Association funded the study.
The Rancho Bernardo Study
Dr. Elizabeth Barrett-Connor is founder and director of the Rancho Bernardo Heart and Chronic Disease Study, now in its
35th year. Since 1972, the RB study has greatly increased knowledge of cardiovascular disease, diabetes, cancer, osteoporosis, exogenous and endogenous hormones, and the connections between lifestyle, behavior and health.
Six thousand residents, 82 percent of the original population of adults in Rancho Bernardo, agreed to participate in this
history-making study and significant collection of data. Barrett-Connor and a team of UCSD researchers have conducted
clinical research visits with the participants every four years for more than three decades.
The clinical research visits last three to four hours, thoroughly examining the participant’s physical condition by gathering
information through bone and heart scans, blood samples, heart disease risk factor measurements such as lipid levels, and
cognitive function assessment.
The rate of follow-up with those who have moved or died (through cooperation of family and friends) has been exceptionally high.
A Living Legacy
The RB study has just been re-funded for what Barrett-Connor predicts will be the final clinic visit. Another newly funded
grant from the National Institutes of Health (NIH) will allow analysis of information gathered over the course of the study to
provide new insights on the link between heart disease risk and cognitive function.
"Although more than 400 scientific papers based on RB data have already been published, it’s a wonderful legacy for
participants to realize that the knowledge gathered has not come close to being exhausted. It’s an enormous bank of data,"
said Laughlin. "Though we are beginning the final visits with the RB group, analyzing the data from the RB study, including
the new data being acquired at this final visit, will continue to contribute to our knowledge about healthy aging for years to
come."
Barrett-Connor added, "We must note that this would not be possible without the remarkable contributions of our loyal
participants. They are really very special people."
Origins of nervous system found in genes of sea sponge, report scientists at UC Santa Barbara
(Santa Barbara, Calif.) -- Scientists at the University of California, Santa Barbara have discovered significant clues to the
evolutionary origins of the nervous system by studying the genome of a sea sponge, a member of a group considered
to be among the most ancient of all animals.
The findings are published in the June 6 issue of the journal PLoS ONE, a Public Library of Science journal. The article can be
found at http://www.plosone.org/doi/pone.0000506.
“It turns out that sponges, which lack nervous systems, have most of the genetic components of synapses,” said
Todd Oakley, co-author and assistant professor in the Department of Ecology, Evolution and Marine Biology at UC
Santa Barbara.
“Even more surprising is that the sponge proteins have ‘signatures’ indicating they probably interact with each other
in a similar way to the proteins in synapses of humans and mice,” said Oakley. “This pushes back the origins of these
genetic components of the nervous system to at or before the first animals –– much earlier than scientists had previously suspected.”
When analyzing something as complex as the nervous system, it is difficult to know where to begin, explained Ken
Kosik, senior author and co-director of UCSB’s Neuroscience Research Institute, who holds the Harriman Chair in
Neuroscience Research.
The first neurons and synapses appeared over 600 million years ago in “cnidarians,” creatures known today as
hydra, sea anemones, and jellyfish. By contrast, sponges, the oldest known animal group with living representatives,
have no neurons or synapses. They are very simple animals with no internal organs.
“We look at the evolutionary period between sponges and cnidarians as the period when the nervous system came
into existence, about 600 million years ago,” said Kosik.
He explained that the research group made a list of all the genes expressed in a synapse in humans, since synapses
epitomize the nervous system. Synapses are involved in cell communication, learning, and memory. Next, the researchers looked to see if any of the synapse genes were present in the sponge.
“That was when the surprise hit,” said Kosik. “We found a lot of genes to make a nervous system present in the
sponge.” The research team also found evidence to show that these genes were working together in the sponge. The
way two of the proteins interact, and their atomic structure, resemble those found in the human nervous system.
“We found this mysterious unknown structure in the sponge, and it is clear that evolution was able to take this
entire structure, and, with small modifications, direct its use toward a new function,” said Kosik. “Evolution can take
these ‘off the shelf’ components and put them together in new and interesting ways.”
The roots of grammar: New study shows children innately prepared to learn language
To learn a language, a child must learn a set of all-purpose rules, such as “a sentence can be formed by combining
a subject, a verb and an object” that can be used in an infinite number of ways. A new study shows that by the age of
seven months, human infants are on the lookout for abstract rules – and that they know the best place to look for such
abstractions is in human speech.
In a series of experiments appearing in the May issue of Psychological Science, a journal of the Association for
Psychological Science, Gary Marcus and co-authors Keith Fernandes and Scott Johnson at New York University exposed infants to abstractly structured sequences that consisted of either speech syllables or nonspeech sounds.
Once infants became familiar with these sequences, Marcus and his colleagues presented the infants with four new
sequences: Two of these new sequences were consistent with the familiarization “grammar,” while two were
inconsistent. For example, given familiarization with la ta ta, ge lai lai, consistent test sentences would include wo fe fe
and de ko ko (ABB), while inconsistent sentences would include wo wo fe and de de ko (AAB). They then measured
how long infants attended to each sequence in order to determine whether they recognized the previously learned
grammar.
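To make the ABB/AAB distinction concrete, here is a minimal sketch; the syllable strings are the article's examples, while the classifier itself is purely illustrative:

```python
# Classify a three-syllable sequence by its abstract repetition pattern,
# as in the familiarization "grammars" described above.
def pattern(sequence: str) -> str:
    a, b, c = sequence.split()
    if a != b and b == c:
        return "ABB"
    if a == b and b != c:
        return "AAB"
    return "other"

# Examples from the article: consistent vs inconsistent test items
print(pattern("la ta ta"))  # ABB (familiarization grammar)
print(pattern("wo fe fe"))  # ABB (consistent test sentence)
print(pattern("wo wo fe"))  # AAB (inconsistent test sentence)
```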
In the first two experiments, the researchers examined infants’ rule learning using sequences of tones, sung syllables, musical instruments of varying timbres and animal noises.
Across both experiments, infants were able to identify rules only when exposed to speech sequences (versus
nonspeech sequences). These findings are significant, says Marcus, because “the essence of language is learning
rules, and these results suggest that young infants are specifically prepared to learn these rules from speech.”
In a third experiment, the researchers discovered another intriguing result: Infants were able to generalize rules
learned from speech to the sequences of nonspeech sounds, even though they couldn’t directly learn rules from the
nonspeech stimuli. Infants were again familiarized with structured sequences of speech and then tested on their ability
to recognize those same structures in sequences of tones, timbres, and animal sounds. Infants who received this
pre-exposure to structured sequences of speech were able to recognize these same structures in the nonlinguistic
stimuli. This shows, according to Marcus, that “infants’ drive to understand the abstract patterns underlying speech
must be much stronger than their pull towards understanding abstraction in other domains.”
“Infants may analyze speech more deeply than other signals because it is highly familiar or highly salient, because
it is produced by humans, because it is inherently capable of bearing meaning, or because it bears some
not-yet-identified acoustic property that draws the attention of the rule-induction system,” writes Marcus.
“Regardless, from birth, infants prefer listening to speech,” he continues, “and the intriguing patterns we have
observed in rule learning and transfer could in some way be an extension of that initial, profound interest in speech.”
Patient bleeds dark green blood
A team of Canadian surgeons got a shock when the patient they were operating on began shedding dark greenish-black blood, the Lancet reports.
The man emulated Star Trek's Mr Spock - the Enterprise's science officer who supposedly had green Vulcan blood.
In this case, the unusual colour of the 42-year-old's blood was down to the migraine medication he was taking.
The man's leg surgery went ahead successfully and his blood returned to normal once he eased off the drug.
Dark green
The patient had been taking large doses of sumatriptan - 200 milligrams a day.
This had caused a rare condition called sulfhaemoglobinaemia, where sulphur is
incorporated into the oxygen-carrying compound haemoglobin in red blood cells.
Describing the case in The Lancet, the doctors led by Dr Alana Flexman from St
Paul's Hospital in Vancouver wrote: "The patient recovered uneventfully, and stopped
taking sumatriptan after discharge.
"When seen five weeks after his last dose, he was found to have no sulfhaemoglobin in his blood."
The man had needed urgent surgery because he had developed a dangerous condition in his legs after falling
asleep in a sitting position.
The surgeons performed urgent fasciotomies, limb-saving procedures which involve making surgical incisions to
relieve pressure and swelling caused by the man's condition - compartment syndrome.
In compartment syndrome, the swelling and pressure in a restricted space limits blood flow and causes localised
tissue and nerve damage.
It is commonly caused by trauma, internal bleeding, or a wound dressing or cast being too tight.
According to the science fantasy television series Star Trek, Mr Spock had green blood because the oxidizing agent
in Vulcan blood is copper, not iron, as it is in humans.
Mr Spock had a human mother and a Vulcan father, from whom he inherited his inability to make sense of human
emotion, as well as his green blood.
82,000-year-old jewellery found
Archaeologists from Oxford have discovered what are thought to be the oldest
examples of human decorations in the world.
The international team of archaeologists, led by Oxford University's Institute of
Archaeology, have found shell beads believed to be 82,000 years old from a limestone cave in Morocco.
Institute director Prof Nick Barton said: "Bead-making in Africa was a widespread
practice at the time, which was spread between cultures with different stone technology by exchange or by long-distance social networks.
"A major question in evolutionary studies today is 'how early did humans begin to think and behave in ways we would see
as fundamentally modern?' "The appearance of ornaments such as these may be linked to a growing sense of
self-awareness and identity among humans and cultural innovations must have played a large role in human development."
The handmade beads were found at the Grotte des Pigeons, Taforalt, in eastern Morocco during a four- to five-year excavation in the region.
Prof Barton said the finds suggest that humans were making purely symbolic objects
40,000 years before they did so in Europe.
The beads themselves comprise 12 Nassarius shells - Nassarius are molluscs found in
warm seas and coral reefs in America, Asia and the Pacific - which had holes in them and
appeared to have been suspended or hung. They were covered in red ochre.
Similar beads have been found at sites in Algeria, Israel and South Africa which are
thought to date back to around the same time or slightly after the finds from Taforalt.
The team, which includes archaeologists from Morocco, France and Germany as well as
the UK, believe that similar shells are present in other sites in Morocco.
Dating results from the shells are still awaited, but the team believe some may be even
older than those found in Taforalt.
The team has recently secured funding for a further four to five years of research in the area from the Natural
Environment Research Council. Further research will look at early humans in Africa and how they spread around the
world.
A paper on the team's findings is featured in this month's edition of Proceedings of the National Academy of Sciences, published today.
Russian nuclear store 'a powder keg'
* 17:44 04 June 2007
* NewScientist.com news service
* Rob Edwards
Scientists have identified a risk of an "uncontrolled chain reaction" at one of the world's largest radioactive waste
stores in northern Russia.
According to environmentalists, this could trigger a disaster worse than the accident at the Chernobyl nuclear plant
in 1986. But the probability of such a disaster is regarded as very small by regulatory authorities.
"We are sitting on a powder keg with a burning fuse," claims Alexander Nikitin, from the St Petersburg office of the
Norwegian environmental group, Bellona. "And we can only guess about the length of the fuse."
Andreeva Bay, on the Kola Peninsula in northwest Russia, is home to 21,000 spent uranium fuel assemblies from
nuclear submarines and ice-breakers. But the three huge concrete tanks in which the radioactive waste is stored have
begun to corrode and let in seawater.
Critical mass
A study by scientists from three Russian research institutes suggests that salt water could accelerate disintegration
of the fuel, splitting it into tiny particles. If the particles reach concentrations of 5-10% in water, it could be dangerous,
they say.
"Calculations show that the creation of a homogeneous mixture of these particles with water could lead to an
uncontrolled chain reaction," they warn. This kind of accidental critical mass, leading to bursts of radiation and heat, is
a well-recognised risk in the nuclear industry, but is not the same as a nuclear explosion.
The Russian study has been translated and highlighted by Bellona, which has long monitored safety at Andreeva
Bay, less than 50 kilometres from Norway. In the worst case, the group says, such a reaction could ignite a hydrogen
explosion, which could shower Europe with radioactivity.
"Real risk"
Per Strand, a director of the Norwegian Radiation Protection Authority, accepts that there is a risk of an accidental
criticality. "The probability is low but you can't exclude it," he told New Scientist.
The highest risk would come when the fuel is moved to put it in safer storage, planned over the next few years. "We
will have to be very careful," he says. "Many countries are working together to try and solve this unique problem."
John Large, a British nuclear consultant who has visited Andreeva Bay, points out that some forms of aluminium-based fuel stored there are particularly prone to saltwater corrosion. Hydrogen could be released from the fuel,
resulting in an explosion and fire, he says.
"The risk is real and serious enough to warrant a study of the potential radiological consequences as these could
potentially apply to Scandinavia and north-west Europe," he adds. "But I am doubtful that the keeper of these stores,
the Russian Federation Navy, has the resources and desire to undertake such an assessment."
Journal reference: Atomic Energy (vol 101, p 49)
Polynesians beat Columbus to the Americas
* 22:00 04 June 2007
* NewScientist.com news service
* Emma Young
Prehistoric Polynesians beat Europeans to the Americas, according to a new analysis of chicken bones.
The work provides the first firm evidence that ancient Polynesians voyaged as far as South America, and also
strongly suggests that they were responsible for the introduction of chickens to the continent - a question that has
been hotly debated for more than 30 years.
Chilean archaeologists working at the site of El Arenal-1, on the Arauco Peninsula in south-central Chile, discovered
what they thought might be the first prehistoric chicken bones unearthed in the Americas. They asked Elizabeth
Matisoo-Smith at the University of Auckland, New Zealand, and colleagues to investigate.
The group carbon-dated the bones and analysed their DNA. The 50 chicken bones from at least five individual
birds date from between 1321 and 1407 - 100 years or more before the arrival of Europeans.
Two-week journey
However, this date range does coincide with dates for the colonization of the easternmost islands of Polynesia,
including Pitcairn and Easter Island.
And when the El Arenal chicken DNA was compared with chicken DNA from archaeological sites in Polynesia, the
researchers found an identical match with prehistoric samples from Tonga and American Samoa, and a near identical
match from Easter Island.
Easter Island is in eastern Polynesia, and so is a more likely launch spot for a voyage to South America, the researchers say. The journey would have taken less than two weeks, falling within the known range of Polynesian
voyages around this time, says Matisoo-Smith.
First real evidence
Other researchers have found indirect evidence that Polynesians might have made it to the Americas before Europeans. "But this is the first concrete evidence - not something based on a similarity in the styles of artefacts or a
linguistic similarity," says Matisoo-Smith.
It is also the first clear evidence that the chicken was introduced before the Europeans arrived.
Genetic studies of modern South Americans have not uncovered any signs of Polynesian ancestry. But this is not
surprising, says Matisoo-Smith. Ancient Polynesians were great explorers, but tended to settle only in uninhabited
islands.
It seems that if they found other people, they would usually turn around and go home, she says.
Journal reference: Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.0703993104)
The Curious Cook
Extra Virgin Anti-Inflammatories
By HAROLD McGEE
FOR a newcomer to the world of olive oil connoisseurship, the sound effects from the 20 or so tasters at the Los
Angeles County Fairgrounds in Pomona, Calif., were startling. The low murmurs of discussion were punctuated by loud,
sharp slurps, and loud, sharp coughs. Slurps and coughs, hour after hour. On the second day I made two notes to
myself: reread “The Magic Mountain”; check in with Dr. Beauchamp.
I was observing the annual Los Angeles international extra virgin olive oil competition, where nearly 400 oils from
15 countries were evaluated by expert judges last month. Through the three days of competition I learned what a
wonderful variety of aromas you can discover in olive oils when you sip and slurp. (Vigorous slurping aerates the
viscous oil and helps release its flavors.)
There were many different green notes pressed from the green fruit: of grass, celery, raw and cooked artichoke,
green tea, seaweed. An oil from the Spanish picual variety smelled startlingly of tomato leaf, then green herbs: sage
and rosemary and basil and mint and eucalyptus. From riper olives there were fruity and nutty aromas: citrus and
almond and even banana.
Many of these aromas were delicate and elusive; they would be swamped by most of the foods that we anoint with
olive oil. Since the competition, I’ve been starting supper with aperitif-like sips of straight oil, just to enjoy it for itself.
I also learned a lot about the not-so-delicate side of olive oil: the bitterness, the drying astringency and especially
that peppery pungency that hits the back of the throat and provokes a cough. Some oils were so strong that they
seemed more medicinal than delicious. But the Italian and Spanish judges consistently rated the most peppery,
throat-catching oils at the top, nodding in admiration even as they gasped for breath.
The sensations of bitterness, astringency and pungency are caused by members of the phenolic family of chemicals.
Phenols also have antioxidant properties and so help to protect the oil from going rancid. Whenever you taste an
especially peppery oil, it’s an indication that the oil is rich in olive extracts and relatively fresh.
Pondering the line between delicious and medicinal reminded me that some years ago a very peppery oil had inspired a brilliant biomedical hunch. That’s why I made a note to call Dr. Gary Beauchamp, the director of the Monell
Chemical Senses Center in Philadelphia: to get an update on the chemistry of the olive oil cough.
At the 1999 international workshop on molecular and physical gastronomy, in the mist-shrouded mountain town of
Erice, Sicily, the physicists Ugo and Beatrice Palma brought along oil freshly pressed from their own trees. Dr. Beauchamp tasted the oil and felt his throat burn, as did I and all the other attendees. But he was the only one who immediately thought of ibuprofen.
Dr. Beauchamp happened to be an ibuprofen connoisseur. He and a Monell colleague, Dr. Paul Breslin, had been
trying to help a manufacturer replace acetaminophen with ibuprofen in its liquid cold and flu medicine. The medicine
tasted fine until it was swallowed. Consumer panels described the unpleasant sensation as bitterness, but Dr. Beauchamp recognized it as an irritation akin to the pungency of black pepper and chilies, strangely localized to the back of
the throat. And he recognized it again in Sicily.
“The moment I felt that burn from Ugo and Beatrice’s oil, I saw the whole picture in my head,” Dr. Beauchamp recalled
last week. “There’s a natural analogue of ibuprofen in olive oil, and it could have anti-inflammatory properties, too.”
He, Dr. Breslin and several collaborators confirmed that the pungent substance in olive oil is a phenolic chemical,
which they named oleocanthal. And they showed that oleocanthal is even more effective than ibuprofen at inhibiting
enzymes in the body that create inflammation. “It took five years of spare-time unfunded research to prove it, but that
was some of the best fun I’ve had doing science,” he said.
In their 2005 report to the journal Nature, the team noted that anti-inflammatory drugs like aspirin and ibuprofen
appear to have long-term health benefits, including reduction in the risk of some forms of heart disease and cancer.
They suggested that the oleocanthal in pungent olive oils might be one of the things that make traditional Mediterranean diets so healthful.
The scientists are engaged in further oleocanthal research that may identify new sensory receptors in the throat. It
may even spin off new medications that are more potent and more palatable than ibuprofen.
In the meantime, the medalists of the 2007 Los Angeles County competition will be announced on June 16. If you
like olive oil, shop for a couple of them and give the sip-and-slurp method a try. And be ready to enjoy a good healthy
cough.
Iceman 'bled to death on glacier'
Massive blood loss from a ruptured artery killed the 5,300-year-old Alpine
"Iceman" known as Oetzi, tests confirm.
A Swiss-Italian team says the arrow that struck him in the left shoulder slit the
artery under his collar bone.
Oetzi probably died as the result of a fight: he may either have fled his attacker - who then shot him in the back - or been ambushed.
The remains of the Neolithic man were discovered in 1991 emerging from a melting
glacier.
They have since been subjected to a long series of investigations, with the latest results being published in the
Journal of Archaeological Science.
Examination of food - and perhaps more importantly - tree pollen in his stomach has established that Oetzi started
his day with a meal in a wooded valley below the Alps.
But later the same day, he was involved in a fight. This assessment is based on the presence of a flint arrowhead
lodged in his back and extensive cuts to his hands.
No one can be sure whether this attack took place in the valley below, prompting Oetzi to flee up the mountain; or
whether he was involved in a violent scrap at the 3,210m (10,500ft) altitude where his body was discovered on the
border between Austria and Italy 16 years ago.
Cold case
Recent advances in computerised tomography (CT), a sophisticated X-ray scan that allows multidimensional imaging, have given researchers an unprecedented view of Oetzi's internal anatomy.
The pictures reveal a 13mm-long rip in Oetzi's left subclavian artery which lies just under the collar bone.
Blood poured out into the surrounding tissue, forming a haematoma that can be seen in the breast cavity.
"We can conclude that this was really a deadly hit from the arrowhead," Dr Ruhli
told the BBC News website. "He would not have walked around for days. It was a quick
death."
Even today, people with this type of injury have only a 40% chance of survival.
"Theoretically, you could have been hit by an arrow and survive. If it doesn't hit an
artery or the lung, and you don't get an infection it shouldn't be a problem," said Dr
Ruhli.
Clotted blood also entered the hole caused by the arrow's wooden shaft, showing
that it was broken off while Oetzi was still alive and therefore still bleeding.
Oetzi climbed up to the Schnalstal Glacier and died from cardiac arrest, brought on
by shock, after sustaining massive blood loss, the science team says.
Scientists have modelled the arrowhead embedded in his back
Cover up?
Dr Ruhli speculated that it was possible the Iceman removed the shaft himself.
Alternatively, it could have been removed by an ally who tried in vain to help him, or perhaps by the attacker - if his
arrows had a characteristic shaft - to try to cover up evidence linking him to the killing.
The University of Zurich researcher said the speed with which Oetzi would have died following his injury made it
seem more likely he was shot on the glacier, rather than in the valley below where he started his journey.
But Dr Ruhli added: "This is speculation, because someone might have helped him up there. I'd rather stick to the
facts."
It is impossible to tell whether Oetzi was hit while he was walking, running, or stationary. But it seems the arrow
was shot from below Oetzi, suggesting the killer was either kneeling or further down a hill.
The arrow hit with some considerable power, penetrating the Iceman's shoulder blade.
Oetzi represents one of the great archaeological finds of recent years. He takes his name from the Oetz Valley
where he was found - still wearing goatskin leggings and a grass cape.
His copper-headed axe and a quiver full of arrows were lying nearby.
At first, it was thought he died from cold and hunger, but researchers were eventually able to establish that he died
from injuries sustained in a conflict.
Oetzi was about 159cm tall (5ft, 2.5in), 46 years old, arthritic, and infested with whipworm.
Female beetles have a thirst for sex
Buying a lady a drink to win her favour is a trick not confined to men. Some beetle females will mate simply to
quench their thirst.
The bean weevil Callosobruchus maculatus feeds on dry pulses. With a diet like this, the male's
ejaculate is a valuable water source for females. Martin Edvardsson at Uppsala University, Sweden,
tested the idea that females tap into this by keeping them on dry beans with or without access to
water. Females living on beans alone accepted more matings, presumably to secure the water in the
seminal fluid (Animal Behaviour, DOI: 10.1016/j.anbehav.2006.07.018).
Edvardsson says that the energy used to produce the ejaculate, which makes up a whopping 10 per cent of a male's
weight, is well spent. Once impregnated, females lose interest in sex - probably to avoid further injury from the male's
spiny penis. They are more likely to mate again if they are thirsty. "This is a massive investment for the male,"
Edvardsson says. "It buys them time before the females remate and their sperm have to compete with that of other
males."
Females with access to water lived on average for a day and a half longer than those without water. Since average
lifespan is only around nine days, this makes quite a difference to the total number of eggs they can lay.
From issue 2606 of New Scientist magazine, 06 June 2007, page 20
Brain injuries unleash Alzheimer's threat
* 17:00 06 June 2007
* NewScientist.com news service
* Roxanne Khamsi
People who suffer severe blows to the head, or a stroke, appear much more likely to develop Alzheimer's disease, and a new study suggests why.
The research reveals that injury causes cells to overproduce a protein known as beta-secretase (BACE). The
overabundance of this protein leads to the formation of the brain plaques associated with Alzheimer's, the researchers
believe.
The findings may help drug developers design medications to protect against this form of dementia.
Previous studies have found that people who have lost consciousness following a traumatic head injury are about
10 times as likely as their spouses to have Alzheimer's disease. Those who have suffered a blow to the head (such as
a heavy punch in the face) without blacking out are roughly three times as likely to have the illness.
Another study suggested that individuals who have had a stroke face a tripled risk of Alzheimer's compared with the
general population.
Plaque former
Inspired by this epidemiological evidence, Rudolph Tanzi at the Massachusetts General Hospital in Boston, US, and
colleagues set out to discover how these injuries trigger Alzheimer's disease.
In the first part of their experiment they simulated the type of cell death that occurs in stroke by adding a chemical
to brain cells in a dish.
Within 12 hours about half the cells had "committed suicide", explains Giuseppina Tesco, who helped carry out the
research. She also found that levels of BACE had rocketed. Specifically, the dying cells produced seven times as much
of the protein as normal brain cells did.
Another part of the experiment offered evidence that the cellular compound that initiates cell death also protects
BACE from being destroyed.
All this is important, says Tesco, because BACE protein prompts cells to produce notorious amyloid beta molecules.
Amyloid beta compounds clump together and form the brain plaques that characterize Alzheimer's disease.
Alzheimer's prevention?
The team also simulated a stroke in rats by blocking blood flow to half of each animal's brain. Again, they saw a rise
in BACE levels in the injured half.
"This is the first study showing a molecular mechanism linking stroke and Alzheimer's disease," Tesco says. She
plans to continue observing the rodents to see if they also develop more plaques in the wounded brain hemisphere.
Tesco speculates that other types of brain injury, including blows to the head, similarly raise levels of BACE. Even
falling on your face when you have had too much to drink might boost the amounts of this protein, leading to brain
plaques, she says.
Richard Mayeux at Columbia University in New York, US, says that the new findings make him more confident that
brain injury can lead to Alzheimer's: "Until you know the mechanism, you always have to worry that there is no real
direct link between the two diseases."
If true, he adds, it opens up the possibility of aggressively preventing Alzheimer's from developing.
Tesco hopes that a drug that breaks down BACE might block the formation of brain plaques.
Journal reference: Neuron (DOI: 10.1016/j.neuron.2007.05.012)
Battlefield 'Bear' robot to rescue fallen soldiers
* 06 June 2007
* NewScientist.com news service
* Dawn Stover
"I WILL never leave a fallen comrade." So states the US Soldier's Creed, and true
to that vow, 22-year-old Sergeant Justin Wisniewski died in Iraq last month while
searching for soldiers abducted during an ambush on 12 May.
A remote-controlled robot that will rescue injured or abducted soldiers, without
putting the lives of their comrades at risk, is being developed for the US army. The
1.8-metre-tall Battlefield Extraction-Assist Robot (Bear) will be able to travel over
bumpy terrain and squeeze through doorways while carrying an injured soldier in its
arms.
The prototype Bear torso can lift more than 135 kilograms with one arm, and its
developer, Vecna Technologies of College Park, Maryland, is now focusing on improving its two-legged lower body. The robot recently showed how it can climb up
and down stairs with a human-size dummy in its arms.
"We saw a need for a robot that can essentially go where a human can," says
Daniel Theobald, Vecna's president. But Bear can also do things no human can, such as carrying heavy loads over
considerable distances without tiring. The robot can also carry an injured soldier while kneeling or lying down, enabling it to move through tall grass or behind a wall without being spotted.
The robot's hydraulic arms are designed to pick up loads in a single smooth movement, to avoid causing pain to
wounded soldiers. While the existing prototype slides its arms under its burden like a forklift, future versions will be
fitted with manoeuvrable hands to gently scoop up casualties.
Tracks on both the thighs and shins allow the robot to climb easily over rough terrain or up and down stairs while
crouching or kneeling. It also has wheels at its hips, knees and feet, so it can switch to two wheels to travel efficiently
over smooth surfaces while adopting a variety of positions. To keep it steady no matter what position it adopts, Bear is
fitted with accelerometers to monitor the movement of its torso, and gyroscopes to detect any rotation of its body that
might indicate it is about to lose its balance. Computer-controlled motors adjust the position of its lower body
accordingly to prevent it toppling over.
BEAR FACTS
1. Teddy bear face designed to be reassuring
2. Hydraulic upper body carries up to 227 kg (500 lb)
3. When kneeling, tracked "legs" travel over rubble; switches to wheels on smooth surfaces
4. Dynamic Balance Behaviour (DBB) technology allows the robot to stand and carry loads upright on its ankles, knees or hips for nearly an hour
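The article describes Bear's balance system only at a high level. As a rough illustration of the general pattern, and emphatically not Vecna's actual Dynamic Balance Behaviour software, a loop of this kind fuses the two sensor types into a tilt estimate and commands a correcting motion; every gain and value below is invented:

    import math

    # Sketch only: an accelerometer supplies an absolute (but noisy) tilt
    # reference, a gyroscope supplies a smooth (but drifting) tilt rate, and a
    # complementary filter blends them before a proportional-derivative law
    # computes the corrective command. All constants are hypothetical.
    ALPHA = 0.98          # short-term trust in the gyro
    KP, KD = 40.0, 6.0    # proportional and derivative gains (invented)

    def balance_step(tilt_rad, gyro_rate, accel_x, accel_z, dt):
        """Return (updated tilt estimate, corrective torque command)."""
        accel_tilt = math.atan2(accel_x, accel_z)        # tilt from gravity vector
        tilt_rad = ALPHA * (tilt_rad + gyro_rate * dt) + (1 - ALPHA) * accel_tilt
        torque = -(KP * tilt_rad + KD * gyro_rate)       # push back toward upright
        return tilt_rad, torque

A real load-carrying biped needs far more than this single loop, but the division of labour, gyros catching incipient rotation while motors reposition the lower body, is the one the article describes.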
The robot's humanoid body and teddy bear-style head give it a friendly appearance. "A really important thing when
you're dealing with casualties is trying to maintain that human touch," says Gary Gilbert of the US army's Telemedicine
and Advanced Technology Research Center in Frederick, Maryland, which provided the initial funding for Bear's development. Congress has since added a further $1.1 million.
Although rescuing injured soldiers will be its most important role, Bear's work will also include mundane tasks such
as loading trucks and carrying equipment for soldiers. "The robot will be an integral part of a military team," says
Theobald. Bear is expected to be ready for field testing in less than five years.
From issue 2607 of New Scientist magazine, 06 June 2007, page 32
Bigger horns equal better genes
Size matters. At least, it does to an alpine ibex.
According to a team of international researchers, mature, male alpine ibex demonstrate a correlation between horn
growth and genetic diversity. Past research studies have shown that greater genetic diversity correlates with a greater
chance of survival.
"The size of the horns reliably advertises the genetic quality of the ibex—and the bigger, the better," said Dr. David
Coltman, an evolutionary geneticist at the University of Alberta and co-author of the study, which was published this
month in the journal Molecular Ecology.
The researchers found that horn sizes among younger ibex (one to six years old) are relatively similar regardless
of their genetic diversity. However, once the ibex mature to the age when they begin competing for mates (seven to
12 years old), horn length varies according to genetic diversity: the greater the diversity, the longer the
horns.
The researchers believe the horn length discrepancies are evidence to support the mutation accumulation theory of
ageing, which is the idea that, because natural selection weakens with age, genetic mutations have effects that accumulate over time. Therefore, differences in genetic quality become more apparent as an organism ages.
Coltman noted that his study, which incorporated genetic samples from more than 150 ibex, took into account the
fact that environmental factors also play a role in determining ibex horn size.
The ibex's horns are considered a "secondary sexual trait". Researchers believe the horns help males successfully
mate because they display genetic quality to females and also help to "win" physical battles and achieve high social
rank among their competitors.
"We've learned from other species, such as deer and sheep, that horn or antler size can be a good indicator of an
individual's quality and reproductive success," Coltman said. "We wanted to see if the same could be said for alpine
ibex, and we found that it can."
The researchers were particularly intrigued about the ibex's horns because they are costly for an ibex to produce
and maintain.
"[The horns] require a lot of energy to build and then carry around. They can be a meter long and are quite heavy,
and the ibex carries them for their lifetime, unlike antlers which are shed every year. They also cause the ibex to lose
heat in winter, because their core is heavily vascularized."
Found exclusively in the Alps mountain range in Europe, alpine ibex were hunted almost to extinction about a
century ago for sport and the purported pharmacological properties of their horns. The last survivors were protected in
an Italian national park, and the species has slowly repopulated and today is no longer considered endangered.
Largest ever study of genetics of common diseases published today
The Wellcome Trust Case Control Consortium, the largest ever study of the genetics behind common diseases such
as diabetes, rheumatoid arthritis and coronary heart disease, today publishes its results in the journals Nature and
Nature Genetics.
The £9 million study is one of the UK's largest and most successful academic collaborations to date. It has examined
DNA samples from 17,000 people across the UK, bringing together 50 leading research groups and 200 scientists in the
field of human genetics from dozens of UK institutions. Over two years, they have analysed almost 10 billion pieces of
genetic information.
"Many of the most common diseases are very complex, part 'nature' and 'nurture', with genes interacting with our
environment and lifestyles," says Professor Peter Donnelly, Chair of the Consortium, who is based at the University of
Oxford. "By identifying the genes underlying these conditions, our study should enable scientists to understand better
how disease occurs, which people are most at risk and, in time, to produce more effective, more personalised
treatments."
The study has substantially increased the number of genes known to play a role in the development of some of our
most common diseases. Many of these genes that have been found are in areas of the genome not previously thought
to have been related to the diseases.
"Just a few years ago it would have been thought wildly optimistic that it would be possible in the near future to
study a thousand genetic variants in each of a thousand people," says Dr Mark Walport, Director of the Wellcome Trust,
the UK's largest medical research charity, which funded the study. "What has been achieved in this research is the
analysis of half a million genetic variants in each of seventeen thousand individuals, with the discovery of more than
ten genes that predispose to common diseases.
"This research shows that it is possible to analyse human variation in health and disease on an enormous scale. It
shows the importance of studies such as the UK Biobank, which is seeking half a million volunteers aged between 40
and 69, with the aim of understanding the links between health, the environment and genetic variation. New preventive strategies and new treatments depend on a detailed understanding of the genetic, behavioural and environmental factors that conspire to cause disease."
Amongst the most significant new findings are four chromosome regions containing genes that can predispose to
type 1 diabetes and three new genes for Crohn's disease (a type of inflammatory bowel disease). For the first time, the
researchers have found a gene linking these two autoimmune diseases, known as PTPN2.
The study has also confirmed the importance of a process known as autophagy in the development of Crohn's
disease. Autophagy, or "self eating", is responsible for clearing unwanted material, such as bacteria, from within cells.
This process may be key to how gut bacteria interact with the body in health and in inflammatory bowel disease, and
could have clinical significance in the future.
"The link between type 1 diabetes and Crohn's disease is one of the most exciting findings to come out of the
Consortium," says Professor John Todd from the University of Cambridge, who led the study into type 1 diabetes. "It is
a promising avenue for us to understand how the two diseases occur. The pathways that lead to Crohn's disease are
increasingly well understood and we hope that progress in treating Crohn's disease may give us clues on how to treat
type 1 diabetes in the future."
Research from the Consortium has already played a major part in identifying the clearest genetic link yet to obesity
and three new genes linked to type 2 diabetes, published in April in advance of the main study. It has also independently confirmed a major gene region on chromosome 9 that separate studies had linked to coronary heart disease.
Researchers analysed DNA samples taken from people in the UK – 2,000 patients for each disease and 3,000
control samples – to identify common genetic variations for seven major diseases. These are bipolar disorder, Crohn's
disease, coronary heart disease, hypertension, rheumatoid arthritis and type 1 and type 2 diabetes. For each disease,
the researchers will study larger population samples to confirm their results.
Although the human genome is made up of more than three billion sub-units of DNA, called nucleotides (or bases),
most of these show little in the way of differences between individuals. A substantial part of the variation in DNA
sequence between individuals is due to single-nucleotide polymorphisms (differences), also known as SNPs. There are
approximately 8 million common SNPs in European populations. Fortunately, because SNPs that lie close together on
chromosomes often tell quite similar stories, researchers in the Consortium were able to explore this variation through
analysing a subset of these SNPs (in fact approximately 500,000).
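The Consortium's actual statistics are far more elaborate, but the core of a case-control scan can be sketched as a per-SNP allele-count test. The counts below are invented for illustration; only the overall shape of the calculation reflects the study.

    from scipy.stats import chi2_contingency

    # Hypothetical allele counts for a single SNP. Rows: cases, controls;
    # columns: copies of allele A versus allele a. The real study compared
    # ~2,000 patients per disease against ~3,000 shared controls, at each
    # of roughly 500,000 SNPs.
    table = [[1100, 2900],   # cases
             [1300, 4700]]   # controls

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
    # Repeated across half a million SNPs, only p-values many orders of
    # magnitude below 0.05 count as evidence of genuine association.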
"Human genetics has a chequered history of irreproducible results, but this landmark collaboration of scientists in
Britain has shown conclusively that the new approach of analysing a large subset of genetic variants in large samples
of patients and healthy individuals works," says Professor Donnelly. "We are now able to effectively scan most of the
common variation in the human genome to look for variants associated with diseases. This approach will undoubtedly
herald major advances in how we understand and tackle disease in the future."
Further analysis as part of the Consortium will be looking at tuberculosis (TB), breast cancer, autoimmune thyroid
disease, multiple sclerosis and ankylosing spondylitis. The results are expected later this year.
New interview technique could help police spot deception
Shifting uncomfortably in your seat? Stumbling over your words? Can’t hold your questioner’s gaze? Police interviewing strategies place great emphasis on such visual and speech-related cues, although new research funded by the
Economic and Social Research Council (ESRC) and undertaken by academics at the University of Portsmouth casts
doubt on their effectiveness. However, the discovery that placing additional mental stress on interviewees could help
police identify deception has attracted interest from investigators in the UK and abroad.
Police manuals recommend several approaches to help investigators decide whether they are being told the truth.
The principal strategy focuses on visual cues such as eye contact and body movement, whilst the Baseline Method
strategy sees investigators compare a suspect’s verbal and non-verbal responses during ‘small talk’ at the beginning of
interview with those in the interview proper. A third, the Behavioural Analysis Interview (BAI) strategy, comprises a list
of questions to which it is suggested liars and those telling the truth will give different verbal and non-verbal responses.
However, research has consistently found that cues offered in each of these scenarios are unreliable – a view
confirmed by the ESRC-funded ‘Interviewing to Detect Deception’ study. A series of experiments involving over 250
student ‘interviewees’ and 290 police officers, the study saw interviewees either lie or tell the truth about staged
events. Police officers were then asked to tell the liars from the truth tellers using the recommended strategies.
Those paying attention to visual cues proved significantly worse at distinguishing liars from those telling the truth
than those looking for speech-related cues. In another experiment, liars appeared less nervous and more helpful than
those telling the truth – contrary to the advice of the BAI strategy.
Professor Aldert Vrij explained: “Certain visual behaviours are associated with lying, but this doesn’t always work.
Nor is comparing a suspect’s responses during small talk, and then in a formal interview, likely to be much help.
Whether lying or telling the truth, people are likely to behave quite differently in these two situations.”
He continued: “Evidence also suggests that liars are concerned about not being believed, and so are unlikely to
come across as less helpful than truthful people during interview. If anything, guilty people are probably even keener
to make a positive impression. All of this makes the investigator’s job very difficult.”
However, the picture changed when researchers raised the ‘cognitive load’ on interviewees by asking them to tell
their stories in reverse order. Professor Vrij explained:
“Lying takes a lot of mental effort in some situations, and we wanted to test the idea that introducing an extra demand
would induce additional cues in liars. Analysis showed significantly more non-verbal cues occurring in the stories told
in this way and, tellingly, police officers shown the interviews were better able to discriminate between truthful and
false accounts.”
High self-esteem may be culturally universal, international study shows
The notion that East Asians, Japanese in particular, are self-effacing and have low self-esteem compared to
Americans may well describe the surface view of East Asian personality, but misses the picture revealed by recently
developed measures of self-esteem, according to a new study by a team of researchers from the United States, China
and Japan.
For the first time psychologists used those new measures in exactly parallel fashion to compare samples of university students from the three countries. Surveying more than 500 students, they found that implicit, or automatic,
self-esteem was strongly positive among students from each of the nations. The consistency of the findings across
cultures was so clearly apparent that the researchers conclude in this month’s issue of the journal Psychological
Science that high implicit self-esteem may be culturally universal.
The researchers used the Implicit Association Test (IAT), created by University of Washington psychologist Anthony
Greenwald, a co-author of the study, to probe the students’ positive associations with themselves. Different versions of the test have been widely used to investigate automatic attitudes and evaluations such as racial bias and
gender and age stereotypes. In this study it was used to provide an index of self-esteem. Psychologists previously
equated self-esteem with the extent to which people describe themselves as having positive characteristics. These
self-descriptions are called explicit self-esteem and are measured by asking for agreement with statements such as “I
feel that I have a number of good qualities.” No questions are asked to measure implicit self-esteem. Instead the test
measures how rapidly a person can give the same response to words that are pleasant and words that refer to one’s
self.
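As a concrete illustration of that idea, here is a minimal sketch of an IAT-style score computed from response times. The scoring is a simplification of the published procedures, and every reaction time below is invented.

    from statistics import mean, stdev

    def iat_d_score(compatible_ms, incompatible_ms):
        """Simplified IAT effect: the slower the responses when 'self' and
        'pleasant' do NOT share a response key, relative to when they do,
        the larger (more positive) the score."""
        pooled = stdev(compatible_ms + incompatible_ms)
        return (mean(incompatible_ms) - mean(compatible_ms)) / pooled

    # Hypothetical reaction times (milliseconds) for one participant:
    self_plus_pleasant   = [620, 580, 640, 600, 610]   # "compatible" block
    self_plus_unpleasant = [790, 820, 760, 840, 800]   # "incompatible" block
    print(round(iat_d_score(self_plus_pleasant, self_plus_unpleasant), 2))
    # A clearly positive score indicates positive implicit self-esteem.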
To ensure that their sample was geographically diverse, the researchers recruited students from seven universities
– the University of Tokyo, Osaka University and Shinshu University in Japan; East China Normal University and
Northwest Normal University in China and the UW and Harvard University – to take the test, which was administered
by computer.
Although East Asians are perceived by both others and themselves to be modest and self-effacing, the test results
painted a different picture. Students from all three countries had highly positive implicit self-esteem, with the Japanese
students showing even higher implicit self-esteem than their Chinese and American counterparts. “Ordinary East Asians
are aware that they hold strongly positive self-views. But the prevalent modesty norm prevents them from expressing
it publicly,” said Susumu Yamaguchi of Tokyo University and lead author of the study. “The IAT successfully unraveled
East Asians’ unexpressed self-esteem in our study.” The authors speculate that cross-cultural similarities in positive
implicit self-esteem may arise from cross-cultural similarities in child-rearing.
“It may be that parents in all societies, especially mothers, adore their children and put them on a pedestal so that
children worldwide absorb a highly positive self-concept,” Greenwald said. “In Japan the culture explicitly tells you that
you are not better than others. But this culturally approved explicit self-concept doesn’t remove the base of adoration
created by parents and other relatives since childhood. In China, where there is pressure for having smaller families,
children are perhaps more precious than they were years ago.”
Mahzarin Banaji, a Harvard psychologist, co-developer of the test and co-author of the study said: “When we see
cultural variation in human behavior, we understand that societies and cultures mold their members in different ways.
When we see cultural invariance, as we do here in East-West self esteem, we understand that we are also all the
same.”
---------------------------
The Japanese Society for the Promotion of Science and the National Institute of Mental Health funded the research. Other authors
of the papers are Fumio Murakami of the University of Tokyo, Kimihiro Shiomura of Shinshu University and Chihiro Kobayashi of
Osaka University in Japan; Huajian Cai, a former UW post-doctoral researcher now at Sun Yat-sen University in China; Daniel Chen,
a UW doctoral student; and Anne Krendl of Dartmouth College.
Essay
The Universe, Expanding Beyond All Understanding
By DENNIS OVERBYE
When Albert Einstein was starting out on his cosmological quest 100 years ago, the
universe was apparently a pretty simple and static place. Common wisdom had it that
all creation consisted of an island of stars and nebulae known as the Milky Way surrounded by infinite darkness.
We like to think we’re smarter than that now. We know space is sprinkled from now
to forever with galaxies rushing away from one another under the impetus of the Big
Bang.
Bask in your knowledge while you can. Our successors, whoever and wherever they
are, may have no way of finding out about the Big Bang and the expanding universe,
according to one of the more depressing scientific papers I have ever read.
If things keep going the way they are, Lawrence Krauss of Case Western Reserve University and Robert J. Scherrer
of Vanderbilt University calculate, in 100 billion years the only galaxies left visible in the sky will be the half-dozen or so
bound together gravitationally into what is known as the Local Group, which is not expanding and in fact will probably
merge into one starry ball.
Unable to see any galaxies flying away, those astronomers will not know the universe is expanding and will think
instead that they are back in the static island universe of Einstein. As the authors, who are physicists, write in a paper
to be published in the journal General Relativity and Gravitation, “observers in our ‘island universe’ will be fundamentally
incapable of determining the true nature of the universe.”
It is hard to count all the ways in which this is sad. Forget the implied mortality of our species and everything it has
or has not accomplished. If you are of a certain science fiction age, like me, you might have grown up with a vague
notion of the evolution of the universe as a form of growing self-awareness: the universe coming to know itself, getting
smarter and smarter, culminating in some grand understanding, commanding the power to engineer galaxies and
redesign local spacetime.
Instead, we have the prospect of a million separate Sisyphean efforts with one species after another pushing the
rock up the hill only to have it roll back down and be forgotten.
Worse, it makes you wonder just how smug we should feel about our own knowledge.
“There may be fundamentally important things that determine the universe that we can’t see,” Dr. Krauss said in an
interview. “You can have the right physics, but the evidence at hand could lead to the wrong conclusion. The same thing
could be happening today.”
The proximate culprit here is dark energy, which has been responsible for much of the bad news in physics over the
last 10 years. This is the mysterious force, discovered in 1998, that is accelerating the cosmic expansion that is causing
the galaxies to rush away faster and faster. The leading candidate to explain that acceleration is a repulsion embedded
in space itself, known as the cosmological constant. Einstein postulated the existence of such a force back in 1917 to
explain why the universe didn’t collapse into a black hole, and then dropped it when Edwin Hubble discovered that
distant galaxies were flying away — the universe was expanding.
If this is Einstein’s constant at work — and some astronomers despair of ever being able to say definitively whether
it is or is not — the future is clear and dark. In their paper, Dr. Krauss and Dr. Scherrer extrapolated forward in time
what has become a sort of standard model of the universe, 14 billion years old, and composed of a trace of ordinary
matter, a lot of dark matter and Einstein’s cosmological constant.
As this universe expands and there is more space, there is more force pushing the galaxies outward faster and
faster. As they approach the speed of light, the galaxies will approach a sort of horizon and simply vanish from view, as
if they were falling into a black hole, their light shifted to infinitely long wavelengths and dimmed by their great speed.
The most distant galaxies disappear first as the horizon slowly shrinks around us like a noose.
A similar cloak of invisibility will befall the afterglow of the Big Bang, an already faint bath of cosmic microwaves,
whose wavelengths will be shifted so that they are buried by radio noise in our own galaxy. Another vital clue, the
abundance in deep space of deuterium, a heavy form of hydrogen manufactured in the Big Bang, will become unobservable, because to be seen it needs to be backlit by distant quasars, and those quasars, of course, will have
disappeared.
Eventually, in the far far future, this runaway dark energy will suck all the energy and life out of the universe. A few
years ago, Edward Witten, a prominent theorist at the Institute for Advanced Study, called a universe that is accelerating forever “not very appealing.” Dr. Krauss has called it simply “the worst possible universe.”
But our future cosmologists will be spared this vision, according to the calculations. Instead they will puzzle about
why the visible universe seems to consist of six galaxies, Dr. Krauss said. “What is the significance of six? Hundreds of
papers will be written on that,” he said.
Those cosmologists may worry instead that their galaxy cloud will collapse into a black hole one day and, like
Einstein, propose a cosmic repulsion to prevent it. But they will have no way of knowing if they were right.
Although by then the universe will be mostly dark energy, Dr. Krauss said, it will be undetectable unless astronomers want to follow the course of the occasional star that gets thrown out of the galaxy and is caught up in the dark
cosmic current. But it would have to be followed for 10 billion years, he said — an experiment the National Science
Foundation would be unlikely to finance.
“This is even weirder,” Dr. Krauss said. “Five billion years ago dark energy was unobservable; 100 billion years from
now it will become invisible again.”
It turns out that you don’t actually need dark energy to be this pessimistic about the future, as Dr. Krauss and Dr.
Scherrer point out. In 1987, George Ellis, a mathematician and astronomer at the University of Cape Town, in South
Africa, and Tony Rothman, currently lecturing at Princeton, wrote a paper showing how even ordinary expansion would
gradually carry most galaxies too far away to be seen, setting the stage for cosmic ignorance.
Dark energy speeds up the picture, Dr. Ellis said in an e-mail message, adding that he was glad to see the new
paper, which adds many astrophysical details. “It’s an interesting gloss on the far future,” he said.
James Peebles, a Princeton cosmologist, said there were more pressing worries. We might be headed toward a
universe that is “asymptotically empty,” he said, “But I have the uneasy feeling that the U.S.A. is headed into asymptotic futility well before that.”
You might object that the inhabitants of the far future will be far more advanced than we are. Maybe they will be
able to detect dark energy — or the extra dimensions of string theory, for that matter — in the laboratory. Maybe they
will even be us, in some form or other, if the human race manages to get out of the solar system before the Sun blows
up in five billion years. But if relativity is right, they won’t be able to build telescopes that can see past the edge of the
universe.
It’s not too late to start thinking about sending out the robot probes that could drift down through alien skies eons
from now with, if not us or our DNA, at least a few nuggets of wisdom — that the world is made of atoms and that it
started with a bang.
The lesson in the meantime is that we don’t know what we don’t know, and we never will — a lesson that extends
beyond astronomy.
Einstein once said, “The Lord God is subtle, but malicious he is not.”
I wondered in light of this new report whether it might be time to revise that quotation. Max Tegmark, a cosmologist at the Massachusetts Institute of Technology, told me the problem was not malice but human arrogance — a
necessary but unfortunate condition for scientific progress.
“We have a tendency to put ourselves at the center of the universe,” he said. “We assume all we see is all there is.”
But, as Dr. Tegmark noted, Big Bang theorists already suppose that basic aspects of the universe are out of sight.
The reason we believe we live in a smooth, orderly universe instead of the chaotic one that is more likely, they say,
is that the chaos has been hidden. According to the dominant theory of the Big Bang, known as inflation, an extremely
violent version of dark energy blew it up a fraction of a second after time began, stretching and smoothing space and
pushing all the wildness and chaos and even perhaps other universes out of the sky, where they will never be seen.
“Inflation tells us we live in a messy universe,” Dr. Tegmark said. Luckily we never have to confront it.
Ignorance is us, or is it bliss?
Elephants only heed warning calls from local herds
For such big animals, elephants have a surprisingly sensitive touch. Not only do they use their feet to "hear" calls
from other herds, it now appears that how they respond depends on who's calling.
Elephants use low-frequency vocalisations to communicate between herds up to several kilometres apart. These
rumblings also generate seismic waves in the ground, and Caitlin O'Connell-Rodwell of Stanford University Medical
Center in California and her colleagues suspected these might play a role in communication.
The researchers recorded alarm calls in Namibia and Kenya made by elephants when lions were lurking nearby.
When herds at Namibian watering holes were played just the seismic portion of alarm calls from a neighbouring herd
through the ground, they reacted dramatically, first freezing and then clumping in tight groups, with babies in the
middle. However, the further they lived from the herd whose calls were being played, the less they reacted. The elephants reacted least
to calls from herds in Kenya. The work will appear in the Journal of the Acoustical Society of America.
"You'd think that danger would mean danger and they should run," says O'Connell-Rodwell. She says it is puzzling
that elephants ignore an alarm call when it comes from an unfamiliar herd.
Chimp culture is passed between groups
* 17:00 07 June 2007
* NewScientist.com news service
* Nora Schultz
Chimp populations, like humans, have local customs, and these cultural practices can spread to other troops, researchers say.
The spread of such traditions and innovations to different groups is an important hallmark of culture, and a necessary part of development through social learning, they say.
Andrew Whiten at the University of St Andrews, UK, and colleagues
taught individual chimpanzees one of two ways to solve complex foraging
tasks, and observed how the different techniques spread across two sets of
three groups. The chimps had to manipulate a combination of buttons,
levers or discs to extract treats from cubes.
Although no chimps cracked the puzzles without instruction during an
initial encounter with the cubes, animals in the two groups learned quickly
how to work the devices when watching a peer who had been trained in
one of the two possible sets of solutions.
Within a few days, most chimps mastered the techniques that had been
"seeded" this way in their group.
A chimp observes a peer opening a cube for food (Image: Lewis Haughton)
Distance learning
The cubes were then moved into the view of a second set of chimp groups, so they could observe their respective
neighbours solving the tasks. The new groups learned the same techniques as demonstrated in the adjacent enclosure,
and then passed their set of tricks on to a third group in another round of experiments.
"This is the first time we can show such transmission of socially learned behaviour patterns between groups of
animals", says Antoine Spiteri, who was involved in the study.
The team had previously found social learning of similarly complex tasks within groups, but to spread widely,
cultural traditions must catch on with new groups, too, the researchers say.
Carel van Schaik at the University of Zurich in Switzerland, who studies orang-utan culture, says that the new
results show "beyond a doubt that apes are capable of transmitting pretty complex traditions. The question is now to
what extent this reflects what's going on in the wild."
Group dynamics
Van Schaik's group hopes to find out more about this by measuring "peering" behaviour in wild orang-utans: highly focused watching of another animal from a short distance, which may be a potential mechanism for social
learning. "The whole picture is coming together," he says.
Next Spiteri wants to unravel exactly how chimp culture spreads: "We need to see how status and prestige of
different animals affect who learns from whom."
An analysis of Whiten's group's studies already shows that the order in which individuals in each group picked up
new traditions was similar for foraging tasks, but not for unrelated tasks, giving first insights into the dynamics of
cultural transmission. Journal reference: Current Biology (DOI: 10.1016/j.cub.2007.05.031)
Wireless power could have cellphone users beaming
* 19:00 07 June 2007
* NewScientist.com news service
* Robert Adler
Your cellphone or laptop computer may soon recharge itself the same way it transfers information - wirelessly.
Researchers at the Massachusetts Institute of Technology (MIT), in the US, report that they can now send substantial amounts of power - enough to light a 60-watt bulb - across a room by magnetic induction between two devices
tuned to resonate with each other.
They hope to use this phenomenon of "strong coupling" to recharge or even run mobile devices wirelessly.
Induction - the ability of a changing magnetic field to produce an electric current - was discovered by Michael
Faraday in 1831. It is what makes electric generators, transformers, and motors work. Until now, induction has only
been practical at close range, for example between the charger and handset of an electric toothbrush. At longer
distances, the power losses are too great to make it worthwhile.
Power-storing coil
Inspired by a mobile phone with a rundown battery, Marin Soljačić (pronounced soul-ya-cheech), a theoretical
physicist at MIT, wondered if he could improve the efficiency of induction over longer distances.
From his experience with lasers, he knew that objects that resonate at the same frequency readily exchange energy.
He set out to see if he could use electromagnetic resonance to transmit electrical power.
Soljačić and his research group have now built a coil with just the right properties. Powered by mains current, the
coil naturally oscillates at 10 MHz. Unlike an antenna - which radiates the energy it receives - their device stores energy
internally, in the form of oscillating currents and charges.
The coil generates a strong electromagnetic field, but most of the electric component of that field is trapped inside
the coil, while an oscillating magnetic field surrounds it. The oscillating magnetic field efficiently transmits power
across the lab to a receiver tuned to the same frequency.
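To see why both ends must be "tuned", note that each coil is an electrical resonator whose natural frequency follows the standard LC relation f0 = 1/(2π√(LC)). A quick sketch, with component values invented rather than taken from MIT's device:

    import math

    def resonant_freq_hz(inductance_h: float, capacitance_f: float) -> float:
        """Natural frequency of an LC resonator: f0 = 1 / (2*pi*sqrt(L*C))."""
        return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

    # Hypothetical values chosen to land near the 10 MHz the MIT coil uses.
    L, C = 25e-6, 10e-12   # 25 microhenries, 10 picofarads (invented)
    print(f"{resonant_freq_hz(L, C) / 1e6:.1f} MHz")

Two resonators sharing this frequency exchange energy efficiently, while off-resonance objects nearby, including people, couple only weakly.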
Minimising the external electric field is crucial for safety. "We wanted to use the magnetic field for coupling, and
have the electric field confined," says theoretician André Kurs, a member of the MIT group, "because a magnetic field
does not interact with most objects, including biological tissues."
Real world applications
"I think it’s brilliant," says Douglas Stone, a theoretical physicist at Yale University, not affiliated with the MIT group.
"This is something anybody could have thought about for a century."
Stone agrees with the MIT researchers that while there is much work to be done before your gadgets recharge
themselves wirelessly, this technology will move from the lab to the real world. "There’s no fundamental problem,"
says Stone. "It’s going to work." Journal reference: Science Express (7 June 2007, p 1)
Scientists propose the kind of chemistry that led to life
Before life emerged on earth, either a primitive kind of metabolism or an RNA-like duplicating machinery must have
set the stage – so experts believe. But what preceded these pre-life steps?
A pair of UCSF scientists has developed a model explaining how simple chemical and physical processes may have
laid the foundation for life. Like all useful models, theirs can be tested, and they describe how this can be done. Their
model is based on simple, well-known chemical and physical laws.
The work appears online this week in “The Proceedings of the National Academy of Sciences.”
( http://www.pnas.org/cgi/content/abstract/0703522104v1)
The basic idea is that simple principles of chemical interactions allow for a kind of natural selection on a micro scale:
enzymes can cooperate and compete with each other in simple ways, leading to arrangements that can become stable,
or “locked in,” says Ken Dill, PhD, senior author of the paper and professor of pharmaceutical chemistry at UCSF.
The scientists compare this chemical process of “search, selection, and memory” to another well-studied process:
different rates of neuron firing in the brain lead to new connections between neurons and ultimately to the mature
wiring pattern of the brain. Similarly, social ants first search randomly, then discover food, and finally build a
short-term memory for the entire colony using chemical trails.
They also compare the chemical steps to Darwin’s principles of evolution: random selection of traits in different
organisms, selection of the most adaptive traits, and then the inheritance of the traits best suited to the environment
(and presumably the disappearance of those with less adaptive traits).
Like these more obvious processes, the chemical interactions in the model involve competition, cooperation, innovation and a preference for consistency, they say.
The model focuses on enzymes that function as catalysts – compounds that greatly speed up a reaction without
themselves being changed in the process. Catalysts are very common in living systems as well as industrial processes.
Many researchers believe the first primitive catalysts on earth were nothing more complicated than the surfaces of
clays or other minerals.
In its simplest form, the model shows how two catalysts in a solution, A and B, each acting to catalyze a different
reaction, could end up forming what the scientists call a complex, AB. The deciding factor is the relative concentration
of their desired partners. The process could go like this: Catalyst A produces a chemical that catalyst B uses. Now,
since B normally seeks out this chemical, sometimes B will be attracted to A -- if its desired chemical is not otherwise
available nearby. As a result, A and B will come into proximity, forming a complex.
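A toy Monte Carlo makes that logic concrete. This is not the authors' actual model, whose formal treatment is in the PNAS paper, only a sketch of the substrate-scarcity argument, with every rate invented:

    import random

    def simulate(steps=2000, x_supply=0.5, k_bind=0.02, seed=1):
        """Count AB complexes formed when free substrate X is available to a
        B catalyst with probability x_supply each step. All rates invented."""
        random.seed(seed)
        free_pairs, complexes = 100, 0
        for _ in range(steps):
            if free_pairs and random.random() > x_supply:
                # B failed to find free X, so it may dock onto an A instead,
                # since A is a local source of the chemical B needs.
                if random.random() < k_bind:
                    free_pairs -= 1
                    complexes += 1
        return complexes

    # Scarce substrate (left) drives far more A-B association than plenty (right).
    print(simulate(x_supply=0.1), simulate(x_supply=0.9))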
The word “complex” is key because it shows how simple chemical interactions, with few players, and following basic
chemical laws, can lead to a novel combination of molecules of greater complexity. The emergence of complexity –
whether in neuronal systems, social systems, or the evolution of life, or of the entire universe -- has long been a major
puzzle, particularly in efforts to determine how life emerged.
Dill calls the chemical interactions “stochastic innovation” – suggesting that it involves both random (stochastic)
interactions and the emergence of novel arrangements.
“A major question about life’s origins is how chemicals, which have no self-interest, became ‘biological’ -- driven to
evolve by natural selection,” he says. “This simple model shows a plausible route to this type of complexity.” Dill is also
a professor of biophysics and associate dean of research in the UCSF School of Pharmacy. He is a faculty affiliate at
QB3, the California Institute for Quantitative Biomedical Research, headquartered at UCSF.
Lead author of the paper is Justin Bradford, a UCSF graduate student working with Dill.
The research was supported by the National Science Foundation and the National Institutes of Health.
UCSF is a leading university that advances health worldwide by conducting advanced biomedical research, educating
graduate students in the life sciences and health professions, and providing complex patient care.
Far side could be ideal for radio observatory
* 12:06 08 June 2007
* NewScientist.com news service
* David Shiga
The idea of building a radio observatory on the Moon has been bolstered by a high-level report on lunar science.
Situated on the far side, it would be able to look back to the "dark ages" of the early universe, and map the magnetic
fields of planets around other stars.
NASA asked the US National Research Council (NRC) in 2006 to advise
the agency on what kinds of science could be tackled from the Moon, and
what projects should be given top priority.
On Tuesday, an NRC committee headed by George Paulikas, a former
scientist with The Aerospace Corporation in El Segundo, California, US,
reported its findings. The report is entitled The Scientific Context for the
Exploration of the Moon.
One of the astronomical ideas considered most promising by the
committee was a radio observatory on the Moon's surface. A paper on
the idea was submitted by a group of scientists led by Joseph Lazio of the
Naval Research Laboratory in Washington, DC, US.
A Y-shaped array of radio antennas on the moon would investigate the mysterious sources of cosmic rays
(Illustration: J Lazio et al/Naval Research Laboratory)
Instant array
They envisage plating metallic antennas onto a flexible plastic film that can be rolled up for transport to the Moon,
then unrolled on the lunar surface like a carpet, creating an instant array of radio receivers.
Such an observatory could investigate very low frequency sources in the 1 to 10 megahertz range, which would
normally be drowned out by the constant din of radio traffic from Earth. The Moon’s bulk prevents these Earthly radio
signals from reaching its far side.
"The far side of the Moon is one of the most radio quiet places in the solar system, because it's the one place that
doesn't really have any time in view of the Earth", says David Lawrence of the Los Alamos National Laboratory in New
Mexico, US, a member of the NRC committee that prepared the report.
By observing at low frequencies, the observatory could map out structures prevalent during the period of reionisation, an era when radiation from the first stars and galaxies dramatically transformed the universe, altering the
omnipresent hydrogen clouds to make them transparent to radiation.
It has so far been impossible to directly observe anything but the very end of this period, but scientists are keen to
know what was happening in the universe at that time, as the formation pattern of galaxies could cast light on the
nature of dark matter and dark energy.
Planet probing
It would even be possible for such an observatory to probe planets orbiting other stars. The interaction of charged
particles such as electrons with the magnetic fields of extrasolar planets should produce low-frequency radio waves.
They could provide information on the interiors of extrasolar planets, as the internal structure and composition
governs the strength of the magnetic field.
Another aim would be to shed light on the mysterious sources of cosmic rays – speeding charged particles that
appear to come from all directions in the sky. Wherever these particles are accelerated, they should also give off
low-frequency radio waves that a suitable observatory could detect.
Lazio's team has proposed an initial radio observatory on the near side of the Moon consisting of three
500-metre-long antenna-bearing strips that would meet in a Y shape. It would be mostly limited to investigating
outbursts of the Sun, but it would also serve as a testing ground for a more ambitious detector, spread over a few
kilometres on the quiet, far side of the Moon.
Bigger is better
The more modest radio observatory design is called Radio Observatory for Lunar Sortie Science (ROLSS). Each strip
could be rolled up into a cylinder just a metre long and 25 centimetres across. They could be deployed by astronauts
on a return to the Moon, but the design is simple enough that robots could also do the job.
Even ROLSS might see some distant objects, but only with blurry vision. "For the best astrophysics you want
something that's much bigger, let's say 10 kilometres or 100 kilometres across," says team member Robert MacDowall
of NASA's Goddard Space Flight Center in Greenbelt, Maryland, US.
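The reason size matters is ordinary diffraction: an array's angular resolution scales roughly as wavelength over baseline, and at 10 MHz the wavelength is a full 30 metres. A quick calculation (mine, not the report's) shows the gap between a ROLSS-scale strip and the arrays MacDowall describes:

    import math

    C_M_PER_S = 3.0e8  # speed of light

    def resolution_deg(freq_hz: float, baseline_m: float) -> float:
        """Diffraction-limited beam width, theta ~ lambda / D, in degrees."""
        return math.degrees((C_M_PER_S / freq_hz) / baseline_m)

    # 500 m ROLSS strips versus 10 km and 100 km far-side arrays:
    for d in (500.0, 10e3, 100e3):
        print(f"{d / 1000:6.1f} km baseline -> {resolution_deg(10e6, d):6.3f} deg at 10 MHz")

A 500 m strip resolves only about 3.4 degrees at 10 MHz, which is why it suits solar outbursts rather than sharp astrophysical imaging.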
The committee's report casts doubt on the suitability of the moon for telescopes working in visible light, infrared
and ultraviolet wavelengths, partly because the accumulation of lunar dust on their optics could interfere with their
vision. Dust does not affect low frequency radio waves, however.
Other promising ideas highlighted in the report include taking cores of lunar soil to look for evidence of ancient solar
flares in the isotopes laid down; looking for hardy bits of Earth rock, which could have been blasted off our planet as a
result of meteorite impacts and could contain information about the planet's early history; and a wide variety of geological investigations that could shed light on the formation and evolution of the Moon itself.
Tycoon seeks patent for 'minimal genome'
* 13:20 08 June 2007
* NewScientist.com news service
* Peter Aldhous
The man who led the private-sector effort to sequence the human genome is seeking exclusive commercial rights to
the bare essentials for life in the hope of one day creating a living organism from scratch.
It is not Craig Venter's first brush with controversy. His company Celera raced publicly-funded researchers to sequence the human genome. Now his research institute is trying to patent a "minimal genome", which could be used to
make synthetic life forms.
Some activists fear that Venter will create a "microbesoft" monopoly in the burgeoning area of synthetic biology – a
supercharged form of biotechnology that aims to create living "machines". The patent has also annoyed biologists who
are trying to foster an open-source movement. But the claim that Venter is about to become the Bill Gates of synthetic
biology is wide of the mark, say his scientific rivals.
"It’s the philosophical stake in the ground that will really tick people off," says Tom Knight, a synthetic biologist at
the Massachusetts Institute of Technology, US. "The good news is that what they’re claiming is a lot more limited than
people realise."
Monopoly on life?
The US patent application, filed by Hamilton Smith and colleagues at the J. Craig Venter Institute in Rockville,
Maryland, US, claims ownership of a set of fewer than 400 genes required to sustain a free-living microbe.
The patent states that a synthetic genome bearing the genes could be inserted into a bacterium stripped of its own
DNA. The idea is that this bacterium will become a "chassis" for synthetic biology, used to carry genetic circuits with
novel functions. The patent also claims a specific application: producing ethanol or hydrogen for fuel.
The ETC Group, which is concerned about the societal and environmental implications of new technologies, is up in
arms. "We believe these monopoly claims signal the start of a high-stakes commercial race to synthesise and privatise
synthetic life forms," says Jim Thomas, a researcher with the group based in Montreal, Canada.
Side-stepping the patent
But George Church, a synthetic biologist at Harvard University, predicts that many in the field will prefer to build
their machines using standard bacteria such as Escherichia coli. Even if they do use a stripped-down synthetic bacterium, they could sidestep the patent, claims Knight.
Because Venter’s own group published a paper on its minimal genome project in 1999, placing much of the information in the public domain, the patent had to be tightly defined. Indeed, it should be possible to evade it simply by
packing a synthetic genome with extra genes to bring the total over 450.
The patent also gives no details of how to create a synthetic organism. "I would be perfectly happy filing a patent
on mechanisms of creating an organism of this kind," says Knight. "That is not what this is."
Venter could not be reached for comment. But rumours are circulating that his institute will soon unveil the first
synthetic bacterium.
Mars rover finds "puddles" on the planet's surface
* 15:33 08 June 2007
* NewScientist.com news service
* David Chandler
A new analysis of pictures taken by the exploration rover Opportunity reveals what appear to be small ponds of
liquid water on the surface of Mars.
The report identifies specific spots that appear to have contained liquid water two years ago, when Opportunity
was exploring a crater called Endurance. It is a highly controversial claim, as many scientists believe that liquid water
cannot exist on the surface of Mars today because of the planet’s thin
atmosphere.
If confirmed, the existence of such ponds would significantly boost
the odds that living organisms could survive on or near the surface of
Mars, says physicist Ron Levin, the report's lead author, who works in
advanced image processing at the aerospace company Lockheed Martin
in Arizona.
Along with fellow Lockheed engineer Daniel Lyddy, Levin used images
from the Jet Propulsion Laboratory's website. The resulting stereoscopic
reconstructions, made from paired images from the Opportunity rover's
twin cameras, show bluish features that look perfectly flat. The surfaces
are so smooth that the computer could not find any surface details within
those areas to match up between the two images.
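That failure to match is itself informative: stereo reconstruction works by locating the same small patch in both images, and a patch with essentially no texture, such as a glassy surface, returns correlation scores too weak to trust. A minimal sketch of such a check follows; it is illustrative only, not the authors' processing pipeline:

    import numpy as np

    def match_confidence(left, right, y, x, win=5, max_disp=20):
        """Best normalised cross-correlation of a left-image patch against
        horizontally shifted right-image patches; a value near zero means
        the patch is featureless and no reliable depth can be recovered."""
        patch = left[y-win:y+win+1, x-win:x+win+1].astype(float)
        patch -= patch.mean()
        pnorm = np.linalg.norm(patch)
        if pnorm < 1e-6:                       # no texture at all in the patch
            return 0.0
        best = 0.0
        for d in range(min(max_disp, x - win) + 1):   # stay inside the image
            cand = right[y-win:y+win+1, x-d-win:x-d+win+1].astype(float)
            cand -= cand.mean()
            cnorm = np.linalg.norm(cand)
            if cnorm > 1e-6:
                best = max(best, float((patch * cand).sum() / (pnorm * cnorm)))
        return best

    # Smooth, transparent areas like the candidate ponds would return
    # uniformly low confidences, the behaviour Levin's team reports.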
The imaging shows that the areas occupy the lowest parts of the
terrain. They also appear transparent: some features, which Levin says
may be submerged rocks or pebbles, can be seen below the plane of the
smooth surface.
Smooth bluish areas on a Martian crater floor could be ponds, according to two scientists. The area is approximately 1 square metre (Image: Ron Levin)
Smooth surface
The smoothness and transparency of the features could suggest either water or very clear ice, Levin says.
"The surface is incredibly smooth, and the edges are in a plane and all at the same altitude," he says. "If they were
ice or some other material, they'd show wear and tear over the surface, there would be rubble or sand or something."
His report was presented at a conference of the Institute of Electrical and Electronics Engineers, and will be published later this year in the institute's proceedings.
No signs of liquid water have been observed directly from cameras on the surface before. Reports last year pointed
to the existence of gullies on crater walls where water appears to have flowed in the last few years, as shown in
images taken from orbit, but those are short-lived flows, which are thought to have frozen over almost immediately.
Speedy evaporation?
Levin and other researchers, including JPL's Michael Hecht, have published calculations showing the possibility of
"micro-environments" where water could linger, but the idea remains controversial.
“The temperatures get plenty warm enough, but the Mars atmosphere is essentially a vacuum," says Phil Christensen of Arizona State University, developer of the Mars rovers' mini-Thermal Emission Spectrometers. That means
any water or ice exposed on the surface evaporates or sublimes away almost instantly, he says.
But, he adds, "it is theoretically possible to get liquid water within soil, or under other very special conditions". The
question is just how special those conditions need to be, and whether they ever really are found on Mars today.
If there were absolutely no wind, says Christensen, you might build up a stagnant layer of vapour above a liquid
surface, preventing it from evaporating too fast. “The problem is, there are winds on Mars… In the real world, I think
it's virtually impossible," he told New Scientist.
Simple test
Levin disagrees. He says his analysis shows that there can be wind-free environments at certain times of day in
certain protected locations. He thinks that could apply to these small depressions inside the sheltered bowl of Endurance crater, at midday in the Martian summer.
He adds that highly briny water, as is probably found on Mars, could be stable even at much lower temperatures.
Although the rover is now miles away from this site, Levin proposes a simple test that would prove the presence of
liquid if similar features are found: use the rover's drill on the surface of the flat area. If it is ice, or any solid material,
the drill will leave unmistakable markings, but if it is liquid there should be no trace of the drill's activity.
Levin’s father Gilbert was principal investigator of an experiment on the Viking Mars lander, which found evidence
for life on the planet, although negative results from a separate test for organic materials led most scientists to doubt
the evidence for biology.
Journal reference: R. L. Levin and Daniel Lyddy, Investigation of possible liquid water ponds on the Martian surface (2007 IEEE
Aerospace Applications Conference Proceedings, paper #1376, to be published in IEEE Xplore)
Caribbean frog populations started with single, ancient voyage on South American raft
Nearly all of the 162 land-breeding frog species on Caribbean islands, including the coqui frogs of Puerto Rico,
originated from a single frog species that arrived on a sea voyage from South America. They came 30 to 50 million
years ago, according to DNA-sequence analyses by scientists at Penn State.
Similarly, the scientists found that the Central American relatives of these Caribbean amphibians also arose from a
single species that arrived by raft from South America.
"This discovery is surprising because no previous theories of how the frogs arrived had predicted a single origin for
Caribbean terrestrial frogs, and because groups of close relatives rarely dominate the fauna of an entire continent or
major geographic region," said Blair Hedges of Penn State, an evolutionary biologist who directed the research.
"Because connections among continents have allowed land-dwelling animals to disperse freely over millions of years,
the fauna of any one continent is usually a composite of many types of animals."
The results will be published in the June 12, 2007 issue of the Proceedings of the National Academy of Sciences and
posted in the journal's online early edition this week.
The field work for the study required nearly three decades to complete because many of the species are restricted
to remote and isolated mountain tops or other inaccessible areas. Some species included in the study are believed to
be extinct because of habitat degradation and other causes.
"This study is a valuable contribution to our understanding of the evolutionary and biogeographic history of one of
the world's most complicated places geographically, the Caribbean," said Patrick Herendeen, program director in the
National Science Foundation's division of environmental biology, which funded the research. "The research presents
strong evidence that this group of frogs reached the Caribbean islands by dispersal."
A recent global assessment of amphibians found that the Caribbean Islands have the highest proportion of amphibian species threatened with extinction.
The anatomy of Caribbean frogs previously had led scientists to conclude that species in Cuba and other western-Caribbean islands were related to different mainland species than were the species on Puerto Rico and other
eastern-Caribbean islands, regardless of how they got there.
"Discovering a single origin for all of these species throughout the Caribbean islands was completely unexpected,"
Hedges said.
To make their discovery, the researchers sequenced the DNA of nearly 300 species of Caribbean, Central American,
and South American frogs.
The study's DNA research revealed that, while many ocean dispersals may have occurred over time, only two led to
the current faunas: one for the Caribbean islands and another for Central America.
The original frogs that successfully colonized the Caribbean islands likely hitched a ride on floating mats of vegetation, or flotsam, the method land animals typically use to cross salt water. It is not likely that the frog
species dispersed simply by swimming, because frogs dry out easily and are not tolerant of salt water.
In addition to the study's discoveries about Caribbean and Central American frogs, the research also revealed an
unusually large and unpredicted number of frog species in South America.
"The South American frog group may have more than 400 species, and is mostly associated with the large Andes
mountains of South America," Hedges said.
Study finds alcohol injections for common cause of foot pain highly successful
Sonographically guided alcohol injections have a high success rate and are well tolerated by patients with Morton’s
neuroma, a common cause of foot pain, according to a recent study conducted by researchers at the Royal National
Orthopaedic Hospital and Kingston Hospital NHS Trust in Middlesex, United Kingdom.
Morton’s neuroma is a growth of nerve tissue that occurs in a nerve in the foot, often between the third and
fourth toes, and usually causes a sharp, burning pain in the ball of the foot. For this study, researchers assessed the
efficacy of a series of alcohol injections into the lesion.
“I felt many patients with Morton’s neuroma were undergoing an operation that was unnecessary and that the
neuroma could be successfully treated in a less invasive manner,” said David Connell, MD, lead author of the study.
The study consisted of 101 patients with Morton’s neuroma. An average of 4.1 treatments per person was administered, and follow-up images were obtained at a mean of 21.1 months after the last treatment.
According to the study, there was a technical success rate of 100%. In 94% of the patients, partial or total symptom
improvement was reported, with 84% becoming totally pain free. Thirty patients underwent sonography at six months
after the last injection and showed a 30% decrease in the size of the neuroma.
“Surprisingly, most patients maintain innervation to the toes despite the alcohol ablation,” said Dr. Connell. “This
means that they don’t have the permanent numbness and loss of sensation that accompanies resection of the nerve at
surgery,” he said.
Agonized death throes probable cause of open-mouthed, head-back pose of many dino fossils
'Dead-bird' posture likely resulted from brain damage due to trauma or asphyxiation
Berkeley -- The peculiar pose of many fossilized dinosaurs, with wide-open mouth, head thrown back and recurved
tail, likely results from the agonized death throes typical of brain damage and asphyxiation, according to two paleontologists.
A classic example of the posture, which has puzzled paleontologists for ages, is the 150 million-year-old Archaeopteryx, the first-known example of a feathered dinosaur and the proposed link between dinosaurs and present-day
birds.
"Virtually all articulated specimens of Archaeopteryx are in this posture, exhibiting a classic pose of head thrown
back, jaws open, back and tail reflexed backward and limbs contracted," said Kevin Padian, professor of integrative
biology and curator in the Museum of Paleontology at the University of California, Berkeley. He and Cynthia Marshall
Faux of the Museum of the Rockies published their findings in the March issue of the quarterly journal Paleobiology,
which appeared this week.
Dinosaurs and their relatives, ranging from the flying pterosaurs to Tyrannosaurus rex, as well as many early
mammals, have been found exhibiting this posture. The explanation usually given by paleontologists is that the dinosaurs died in water and the currents drifted the bones into that position, or that rigor mortis or drying muscles,
tendons and ligaments contorted the limbs.
"I'm reading this in the literature and thinking, "This doesn't make any sense to me as a veterinarian,'" said lead
author Faux (pronounced fox), a veterinarian-turned-paleontologist who also is a curatorial affiliate with Yale University's Peabody Museum. "Paleontologists aren't around sick and dying animals the way a veterinarian is, where you see
this posture all the time in disease processes, in strychnine cases, in animals hit by a car or in some sort of extremis."
Faux and Padian argue in Paleobiology that the dinosaurs died in this posture as a result of damage to the central
nervous system. In fact, the posture is well known to neurologists as opisthotonus and is due to damage to the brain's
cerebellum. In humans and animals, cerebellar damage can result from suffocation, meningitis, tetanus or poisoning,
and typically accompanies a long, slow death.
Some animals found in this posture may have suffocated in an ash fall during a volcanic eruption, consistent with
the fact that many fossils are found in ash deposits, Faux and Padian said. But many other possibilities exist, including
disease, brain trauma, severe bleeding, thiamine deficiency or poisoning.
"This puts a whole new light on the mode of death of these animals, and interpretation of the places they died in,"
Padian said. "This explanation gives us clues to interpreting a great many fossil horizons we didn't understand before
and tells us something dinosaurs experienced while dying, not after dying."
Also, because the posture has been seen only in dinosaurs, pterosaurs and mammals, which are known or suspected to have had high metabolic rates, it appears to be a good indicator that the animal was warm blooded. Animals
with lower metabolic rates, such as crocodiles and lizards, use less oxygen and so might have been less traumatically
affected by hypoxia during death throes, Padian said.
Padian acknowledged that many dinosaur fossils show signs that the animal died in water and the current tugged
the body into an arched position, but currents cannot explain all the characteristics of an opisthotonic pose. By
studying a large number of fully articulated fossil skeletons, he and Faux were able to distinguish animals that underwent post-mortem water transport, a non-biological or abiotic process, from those with the classic "dead-bird"
posture, which they interpret to be the result of biological processes.
Faux, who also works as a disaster veterinarian from her home in Lewiston, Idaho, set out to test other
post-mortem processes that some paleontologists credit with creating the opisthotonic posture: rigor mortis, the
temporary stiffening of muscles after death, and the drying of muscles, tendons and ligaments. She obtained badly
injured birds (owls, falcons and red-tailed hawks) that had been euthanized at a raptor recovery center and watched
them for 8-10 hours, checking periodically to see if they moved during the process.
"In horses and smaller animals, rigor mortis sets in within a couple of hours, so I just looked to see if they were
moving or not," Faux said. "And they weren't moving. They were staying in whatever position I'd left them in. I
thought, 'If birds aren't doing it, and I'd never observed a horse doing it, then why would dinosaurs be doing it?'"
The idea that drying causes muscles or tendons to contract asymmetrically also didn't make sense, she said, based
on her veterinary experience and an experiment she conducted with two euthanized red-tailed hawks, which she set
in Styrofoam peanuts and dried for two months. Most joints have counterbalancing muscles that dry the same way,
she said, so there was no reason to expect that the muscles would turn a joint during drying. She found no
post-mortem movement. She also pinned beef tendons as they dried, and though they shrank a bit, they did not shrink
enough even to dislodge the pins. Given these observations, it is hard to imagine how shrinking tendons or muscles
could drag a heavy creature into a different position, the researchers noted.
Padian pointed out, too, that all opisthotonic dinosaurs are very well preserved, meaning they evidently did not sit
out in the open for long, or scavengers would have quickly scattered the bones. So, he wondered, how could they have
been exposed long enough to dry out?
The only explanation that makes sense, they concluded, is central nervous system damage. The cerebellum is
responsible for fine muscle movement, controlling, for example, the body's antigravity muscles that keep the head
upright. Once the cerebellum ceases to modulate the behavior of the antigravity muscles, Faux said, the muscles pull
at full force, tipping the head and tail back, contracting the limbs and opening the mouth.
Padian and Faux urge reanalysis of many fossil finds, referring, for example, to a mass death uncovered in Nebraska in the early 20th century. They argue that cerebellar dysfunction explains the opisthotonic posture of the
numerous camel-like fossils better than does the common explanation - that the animals died in a stream and were
washed into an eddy or backwater.
The authors also point to a fossil of Allosaurus, a T. rex-like animal, that displayed bone lesions suggestive of a
bacterial infection that also can lead to meningitis, a disease that can produce opisthotonus. The authors point out that
their explanation of the opisthotonic posture in dinosaurs and other animals provides a way to assess the role played
by microbes in evolution, whether through disease or through other processes such as algal blooms - so-called "red
tides" - that can suffocate aquatic animals.
This example and others "suggest that reevaluation may be in order for an untold number of paleoenvironments
whose story has been at least partly explained on the basis of the death positions of many of their fossil vertebrates,"
the authors write in their Paleobiology paper.
Alzheimer's disease to quadruple worldwide by 2050
More than 26 million now estimated to have the disease
More than 26 million people worldwide were estimated to be living with Alzheimer’s disease in 2006, according to a
study led by researchers at the Johns Hopkins Bloomberg School of Public Health. The researchers also concluded the
global prevalence of Alzheimer’s disease will grow to more than 106 million by 2050. By that time, 43 percent of those
with Alzheimer’s disease will need high-level care, equivalent to that of a nursing home. The findings were presented
June 10 at the Second Alzheimer’s Association International Conference on Prevention of Dementia held in Washington,
D.C. and are published in the Association’s journal, Alzheimer’s & Dementia.
“We face a looming global epidemic of Alzheimer’s disease as the world’s population ages,” said the study’s lead
author, Ron Brookmeyer, PhD, professor in Biostatistics and chair of the Master of Public Health Program at the
Bloomberg School of Public Health. “By 2050, 1 in 85 persons worldwide will have Alzheimer’s disease. However, if we
can make even modest advances in preventing Alzheimer’s disease or delay its progression, we could have a huge
global public health impact.”
According to Brookmeyer and his co-authors, interventions that could delay the onset of Alzheimer’s disease by as
little as one year would reduce the prevalence of the disease by 12 million cases in 2050. A similar delay in both the
onset and progression of Alzheimer’s disease would result in a smaller overall reduction of 9.2 million cases by 2050,
because slower disease progression would mean more people surviving with early-stage disease symptoms. However,
nearly all of that decline would be attributable to decreases in those needing costly late-stage disease treatment in
2050.
The largest increase in the prevalence of Alzheimer’s will occur in Asia, where 48 percent of the world’s Alzheimer’s
cases currently reside. The number of Alzheimer’s cases is expected to grow in Asia from 12.65 million in 2006 to
62.85 million in 2050; at that time, 59 percent of the world’s Alzheimer’s cases will live in Asia.
To forecast the worldwide prevalence of Alzheimer’s disease, the researchers created a multi-state mathematical
computer model using United Nations population projections and other data on the incidence and mortality of Alzheimer’s.
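As a rough illustration of how a multi-state forecast of this kind works (this is not the study's model), the sketch
below steps a hypothetical population through no-dementia, early-stage and late-stage states one year at a time.
Every rate and population figure in it is an invented placeholder; a real forecast would use age-specific UN
population projections and published incidence and mortality data.

```python
# Hedged sketch of a multi-state (Markov-style) prevalence forecast.
# All numbers are invented placeholders, not the study's inputs.
import numpy as np

# States: 0 = no dementia, 1 = early-stage AD, 2 = late-stage AD.
# Rows are annual transition probabilities out of each state; the
# shortfall from 1.0 is death/exit from the modelled population.
P = np.array([
    [0.985, 0.010, 0.000],   # no dementia -> stays / develops early AD
    [0.000, 0.900, 0.070],   # early stage -> stays / progresses
    [0.000, 0.000, 0.850],   # late stage  -> stays (remainder die)
])

pop = np.array([1000.0, 10.0, 5.0])   # millions per state (hypothetical)
for year in range(2006, 2051):
    pop = pop @ P                     # one year of transitions
    pop[0] += 15.0                    # hypothetical inflow of new older adults

print(f"2050 cases (millions): early={pop[1]:.1f}, late={pop[2]:.1f}")
```

In a model of this shape, an intervention that delays onset corresponds to lowering the no-dementia-to-early rate,
which shrinks total 2050 prevalence, while one that also slows progression lowers the early-to-late rate and leaves
more survivors in the early-stage state, which is the trade-off the authors describe.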
The research was funded by Elan Pharmaceuticals and Wyeth Pharmaceuticals.
Additional authors of the article “Forecasting the global burden of Alzheimer’s disease” include Elizabeth Johnson of the Johns
Hopkins Bloomberg School of Public Health, Kathryn Ziegler-Graham with St. Olaf College and H. Michael Arrighi with Elan
Pharmaceuticals.
Drug slows and may halt Parkinson's disease
CHICAGO ---
Northwestern University researchers have discovered a drug that slows – and may even halt – the progression of Parkinson’s disease. The drug rejuvenates aging dopamine cells, whose death in the brain causes the
symptoms of this devastating and widespread disease.
D. James Surmeier, the Nathan Smith Davis Professor and chair of physiology at Northwestern University’s Feinberg
School of Medicine, and his team of researchers have found that isradipine, a drug widely used for hypertension and
stroke, restores stressed-out dopamine neurons to their vigorous younger selves. The study is described in a feature
article in the international journal Nature, which will be published on-line June 10.
Dopamine is a critical chemical messenger in the brain that affects a person’s ability to direct his movements. In
Parkinson’s disease, the neurons that release dopamine die, causing movement to become more and more difficult.
Ultimately, a person loses the ability to walk, talk or pick up a glass of water. The illness is the second most common
neurodegenerative disease in the country, affecting about 1 million people. The incidence of Parkinson’s disease
increases with age, soaring after age 60.
“Our hope is that this drug will protect dopamine neurons, so that if you began taking it early enough, you won’t get
Parkinson’s disease, even if you were at risk,” said Surmeier, who heads the Morris K. Udall Center of Excellence for
Parkinson’s Disease Research at Northwestern. “It would be like taking a baby aspirin every day to protect your heart.”
Isradipine may also significantly benefit people who already have Parkinson’s disease. In animal models of the
disease, Surmeier’s team found the drug protected dopamine neurons from toxins that would normally kill them by
restoring the neurons to a younger state in which they are less vulnerable.
The principal therapy for Parkinson’s disease patients currently is L-DOPA, which is converted in the brain to dopamine. Although L-DOPA relieves many symptoms of the disease in its early stages, the drug becomes less effective
over time. As the disease progresses, higher doses of L-DOPA are required to help patients, leading to unwanted
side-effects that include involuntary movements. The hope is that by slowing the death of dopamine neurons, isradipine
could significantly extend the time in which L-DOPA works effectively.
“If we could double or triple the therapeutic window for L-DOPA, it would be a huge advance,” Surmeier said.
The work by Surmeier’s group is particularly exciting because nothing is known to prevent or slow the progression
of Parkinson’s disease.
“There has not been a major advance in the pharmacological management of Parkinson’s disease for 30 years,”
Surmeier said.
Surmeier, who has researched Parkinson’s disease for 20 years, had long been frustrated because it wasn’t known
how or why dopamine cells die in the disease. “It didn’t seem like we were making much progress in spite of intense
study on several fronts,” he said.
Because he’s a physiologist, Surmeier decided to investigate whether the electrical activity of dopamine neurons
might provide a clue to their vulnerability. All neurons in the brain use electrical signals to do their job, much like digital
computers.
First, Surmeier observed that dopamine neurons are non-stop workers called pacemakers. They generate regular
electrical signals seven days a week, 24 hours a day, just like pacemaker cells in the heart. This was already known.
But then he probed more deeply and discovered something very strange about these dopamine neurons.
Most pacemaking neurons use sodium ions (like those found in table salt) to produce electrical signals. But Surmeier found that adult dopamine neurons use calcium instead.
Sodium is a mild-mannered ion that does its job without causing a whit of trouble to the cell. Calcium ions, however,
are wild and rambunctious. Remember when Marlon Brando rode into town with his motorcycle gang in “The Wild
One”? Those guys were like calcium ions.
“The reliance upon calcium was a red flag to us,” Surmeier said. Calcium ions need to be chaperoned by the cell
almost as soon as they enter to keep them from causing trouble, he noted. The cell has to sequester them or keep
pumping them out. This takes a lot of energy.
“It’s a little like having a room full of two year olds you have to watch like a hawk so they don’t get into trouble,”
Surmeier said. “That’s really going to stress you.” With three boys under age eleven, he can relate to the stressed
dopamine neuron.
Surmeier theorized that the non-stop stress on the dopamine neurons explains why they are more vulnerable to
toxins and die at a more rapid rate as we age.
But these findings still didn’t offer him a new therapy.
Then, serendipity struck when he was working on a different problem. He discovered that young dopamine neurons
and adult ones have an entirely different way of operating.
When the neurons are young, Surmeier found they actually use sodium ions to do their work. But as the neurons
age, they become more and more dependent on the troublesome calcium and stop using sodium. This calcium dependence – and the stress it causes the neurons – is what makes them more vulnerable to death.
What would happen, Surmeier wondered, if he simply blocked the calcium’s route into the adult neuron cells?
Would the neurons revert to their youthful behavior and start using sodium again?
“The cells had put away their old childhood tools in the closet. The question was if we stopped them from behaving
like adults, would they go into the closet and get them out again?” Surmeier asked. “Sure enough, they did.”
When he gave the mice isradipine, it blocked the calcium from entering the dopamine neuron. At first, the dopamine neurons became silent. But within a few hours, they had reverted to their childhood ways, once again using
sodium to get their work done.
“This lowers the cells’ stress level and makes them much more resistant to any other insult that’s going to come
along down the road. They start acting like they’re youngsters again,” Surmeier said.
The next step will be launching a clinical study.
"This animal study suggests that calcium channel blockers, drugs currently used to reduce blood pressure, might
someday be used to slow the steady progression of Parkinson's disease," said Walter J. Koroshetz, M.D., deputy director of the NINDS.
Simple test predicts 6-year risk of dementia
Risk factors: Age of 70 or older, poor scores on two simple cognitive tests, slow physical function on everyday tasks, history of coronary artery bypass surgery, body mass index of less than 18, current
non-consumption of alcohol
A simple test that can be given by any physician predicts a person’s risk for developing dementia within six years
with 87 percent accuracy, according to a study led by researchers at San Francisco VA Medical Center (SFVAMC).
The test, developed in the study by the researchers, is a 14-point index combining medical history, cognitive testing,
and physical examination. It requires no special equipment and can be given in a clinical setting such as a doctor’s
office or at a patient’s bedside.
The new index is the “bedside” version of a longer, more technically comprehensive “best” test, also developed
during the study, that is 88 percent accurate.
These are the first tools to accurately predict dementia, according to lead author Deborah E. Barnes, PhD, a mental
health researcher at SFVAMC. Barnes described the tests in a presentation at the 2007 International Conference on
Prevention of Dementia, in Washington, DC, sponsored by the Alzheimer’s Association.
“There are tests that accurately predict an individual’s chances of developing cardiovascular disease and other
maladies, but, until now, no one has developed similar scales for dementia,” says Barnes, who also is an assistant
professor of psychiatry at the University of California, San Francisco (UCSF).
As measured by the “bedside” index, the risk factors for developing dementia are an age of 70 or older, poor scores
on two simple cognitive tests, slow physical functioning on everyday tasks such as buttoning a shirt or walking 15 feet,
a history of coronary artery bypass surgery, a body mass index of less than 18, and current non-consumption of alcohol.
People who score 0 to 3 on the “bedside” test have a 6 percent chance of developing dementia within six years. A
score of 4 to 6 indicates a 25 percent chance.
People with a score of 7 or higher have a 54 percent chance of developing dementia within six years.
The 18-point comprehensive, or “best,” test covers all “bedside” risk factors plus factors that would be more
difficult to measure as part of a routine clinical visit. These include brain magnetic resonance imaging (MRI) findings of
enlarged ventricles (the fluid-filled cavities within the brain) or diseased white matter (the nerve fibers that
transmit signals between regions of grey matter); thickening of the internal carotid artery, which brings blood to the
head and neck; and the presence of one or two copies of the e4 allele, or subtype, of APO-E, the gene that codes for
the protein known as apolipoprotein E. The presence of APO-E e4 alleles is a known risk factor for Alzheimer’s disease.
A “best” test score of 0 to 4 indicates a 4 percent chance of developing dementia within six years. A score of 5 to 8
indicates a 25 percent chance.
A score of 9 or higher indicates a 52 percent chance of developing dementia within six years.
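The article gives the risk bands for both indices but not the point values assigned to individual risk factors, so
the sketch below (an illustration, not part of the published scales) only maps an already-computed total score to
the reported six-year risk.

```python
# Hedged sketch: maps a precomputed index score to the six-year dementia
# risk percentages quoted above. Function name and structure are
# illustrative; the per-factor point values are not given in the article.
def six_year_risk(score: int, test: str = "bedside") -> int:
    """Return the reported six-year dementia risk (percent)."""
    if test == "bedside":               # 14-point "bedside" index
        bands = [(3, 6), (6, 25)]       # (highest score in band, risk %)
        top_risk = 54                   # score of 7 or higher
    elif test == "best":                # 18-point "best" index
        bands = [(4, 4), (8, 25)]
        top_risk = 52                   # score of 9 or higher
    else:
        raise ValueError("test must be 'bedside' or 'best'")
    for highest_score, risk in bands:
        if score <= highest_score:
            return risk
    return top_risk

print(six_year_risk(5))           # bedside score 4-6  -> 25
print(six_year_risk(9, "best"))   # best score 9+      -> 52
```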
To develop the tests, the study authors tracked a broad range of physical, mental, demographic and other variables
for six years among 3,375 participants in the Cardiovascular Health Cognition Study, a national prospective study
sponsored by the National Heart, Lung, and Blood Institute (NHLBI). At the beginning of the study, none of the
subjects were demented.
Their mean age was 76. Fifty-nine percent were women and 15 percent were African-American. By the end of the
study, 14 percent of the subjects had developed dementia. The variables that were predictive of dementia in a statistically significant way became the basis of the tests.
The authors caution that there were no Hispanics or Asian-Americans included in the study population, and that the
new scales need validation in other study groups before they can become standard clinical tools.
“We certainly plan to look at other groups to see if these results are valid across a variety of populations,” says
Barnes.