While NASA Administrator Michael Griffin and National Public Radio debate semantics on whether NASA is doing enough to support climate studies, the agency and Columbia University are about to release more data on global warming, supported by Goddard Space Flight Center spacecraft data.
The data involve the Arctic and Antarctic, viewed here from the Galileo spacecraft (Click on picture to enlarge). The lead author of a new report on the issue is James Hansen, of NASA’s Goddard Institute for Space Studies in New York. The institute makes heavy use of data generated by spacecraft managed by the Goddard Space Flight Center in Greenbelt, Md., near Washington.
Hansen’s scientific work was earlier at the center of controversy when a Bush administration political appointee in public affairs at NASA Headquarters allegedly changed NASA releases to reinforce administration policy.
The new NASA and Columbia Earth Institute data find that human-made greenhouse gases have brought the Earth's climate close to critical tipping points, with potentially dangerous consequences for the planet, NASA managers say. "If global emissions of carbon dioxide continue to rise at the rate of the past decade, this research shows that there will be disastrous effects, including increasingly rapid sea level rise, increased frequency of droughts and floods, and increased stress on wildlife and plants due to rapidly shifting climate zones," says Hansen.
From a combination of climate models, satellite data, and data from ancient polar ice, scientists conclude that the West Antarctic ice sheet, Arctic ice cover, and regions providing fresh water sources and species habitat are under threat from continued global warming, NASA says.
Hansen’s research appears in the current issue of the scientific journal Atmospheric Chemistry and Physics.
Tipping points can occur during climate change when the climate reaches a state such that strong amplifying feedbacks are activated by only moderate additional warming. This study finds that global warming of 0.6 deg C in the past 30 years has been driven mainly by increasing greenhouse gases, and only moderate additional climate forcing is likely to set in motion disintegration of the West Antarctic ice sheet and Arctic sea ice.
Amplifying feedbacks include increased absorption of sunlight as melting exposes darker surfaces and speedup of iceberg discharge as the warming ocean melts ice shelves that otherwise inhibit ice flow. Goddard and other researchers used data on earlier warm periods in Earth's history to estimate climate impacts as a function of global temperature, climate models to simulate global warming, and satellite data to verify ongoing changes.
After 10 years of development and billions of dollars in cost overruns, the Pentagon is beginning to gain confidence in progress on its $11 billion next-generation space-based missile warning system. Testing on the first geosynchronous Space-Based Infrared System (Sbirs) satellite at Lockheed Martin's Sunnyvale, Calif., manufacturing facility is continuing as officials there await delivery of its payload this summer. Northrop Grumman, which builds the Sbirs sensors, is also continuing tests on the GEO-1 payload at its plant in Azusa, Calif. Pictured here is the GEO-1 bus in the thermal vacuum chamber in Sunnyvale. It completed the test cycle earlier this year. Below the jump (click on "Continue reading...") is the Northrop Grumman sensor. This photo is taken from the front of the sensor, showing the two lenses -- one each for an infrared scanning sensor and a staring sensor. For details about progress with Sbirs and new infrared imagery of a satellite boosting into orbit captured from the first Sbirs sensor in space, read the article on page 24 of the June 4 edition of Aviation Week & Space Technology.
Using observations by NASA's Mars Odyssey orbiter, (Click on image to enlarge) scientists have discovered that water ice lies at variable depths over small-scale patches on Mars, an indication that Mars has a currently active water cycle -- a major discovery.
The findings draw a much more detailed picture of underground ice on Mars than was previously available. They suggest that when NASA's next Mars mission, the Phoenix Mars Lander, starts digging to icy soil on an arctic plain in 2008, it might find the depth to the ice differs in trenches just a few feet apart. Phoenix is completing launch processing at the Kennedy Space Center in preparation for its scheduled Aug. 3 launch.
The new results appear in the May 3, 2007, issue of the journal Nature.
"We find the top layer of soil has a huge effect on the water ice in the ground," said Joshua Bandfield, a research specialist at Arizona State University, Tempe, and author of the paper. His findings come from data sent back to Earth by the Thermal Emission Imaging System camera on Mars Odyssey. The instrument takes images in five visual bands and 10 heat-sensing (infrared) ones.
The new results were made using infrared images of sites on far-northern and far-southern Mars, where buried water ice within an arm's length of the surface was found five years ago by the Gamma Ray Spectrometer suite of instruments on Mars Odyssey.
Color coding in this map of a far-northern site on Mars indicates the change in nighttime ground-surface temperature between summer and fall. This site, like most of high-latitude Mars, has water ice mixed with soil near the surface. The ice is probably in a rock-hard frozen layer beneath a few centimeters (an inch or two) of looser, dry soil. The amount of temperature change at the surface likely corresponds to how close to the surface the icy material lies. Phoenix will land in the north polar region at about 70 degrees north latitude.
The dense, icy layer retains heat better than the looser soil above it, so where the icy layer is closer to the surface, the surface temperature changes more slowly than where the icy layer is buried deeper. On the map, areas of the surface that cooled more slowly between summer and autumn (interpreted as having the ice closer to the surface) are coded blue and green. Areas that cooled more quickly (interpreted as having more distance to the ice) are coded red and yellow.
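The mapping logic described above can be sketched in a few lines. This is purely illustrative: the function name and the threshold values are invented for the example, not taken from the actual THEMIS data processing.

```python
# Illustrative sketch of the color-coding logic: pixels whose nighttime
# surface temperature dropped less between summer and fall are interpreted
# as having the icy layer closer to the surface. Thresholds (in kelvins)
# are hypothetical, chosen only to show the binning.

def classify_pixel(delta_t_kelvin):
    """Map a summer-to-fall temperature drop (K) to a notional color code."""
    if delta_t_kelvin < 20:
        return "blue"    # cooled slowly: icy layer likely near the surface
    elif delta_t_kelvin < 30:
        return "green"
    elif delta_t_kelvin < 40:
        return "yellow"
    else:
        return "red"     # cooled quickly: icy layer likely buried deeper

# A handful of sample pixels spanning the four bins:
temperature_drops = [15.0, 25.0, 35.0, 45.0]
print([classify_pixel(dt) for dt in temperature_drops])
```

In the real analysis the depth inference rests on a two-layer thermal model, but the qualitative rule is the same: slower cooling means shallower ice.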
The depth to the top of the icy layer estimated from these observations, as little as 5 centimeters (2 inches), matches modeling of where it would be if Mars has an active cycle of water being exchanged by diffusion between atmospheric water vapor and subsurface water ice.
NASA Administrator Michael Griffin responded Wednesday to inquiries related to National Public Radio excerpts from an interview that included comments on global climate change.
"NASA is the world's preeminent organization in the study of Earth and the conditions that contribute to climate change and global warming," Griffin said.
"The agency is responsible for collecting data that is used by the science community and policy makers as part of an ongoing discussion regarding our planet's evolving systems.
"It is NASA's responsibility to collect, analyze and release information. It is not NASA's mission to make policy regarding possible climate change mitigation strategies. As I stated in the NPR interview, we are proud of our role and I believe we do it well," Griffin said.
NASA Administrator Michael Griffin tells National Public Radio that he is not sure whether climate change caused by humans should be a long-term concern. He also tells NPR, in an interview to be broadcast May 31, that NASA is not an agency chartered to battle climate change. The Washington-based Web site NASAWatch.com obtained a transcript of the NPR interview conducted by broadcaster Steve Inskeep.
Griffin’s comments on climate change after the jump.
-- Craig Covault
The European Space Agency Mars Express orbiter continues to return extraordinary imagery, like this two-year-old view of the Deuteronilus Mensae region imaged at 95-ft. (29-meter) resolution by the High-Resolution Stereo Camera. The valleys in the image likely originated from intense flooding by water from melting ice. (Click on the image to enlarge) Further erosion occurred when the water froze rather quickly into glaciers that then moved down the slopes, reshaping the terrain.
(photo credit: ESA/DLR/University of Berlin)
-- Craig Covault
(Noted cosmologist Stephen Hawking of Cambridge University in England presented a recent talk on his life in physics prior to experiencing about 5 min. of zero-gravity at the NASA Kennedy Space Center on board a Zero-G Corp. Boeing 727 flying parabolic maneuvers. Hawking made the presentation at a dinner in his honor hosted by the Florida Space Authority and Zero Gravity Corp. Craig Covault, senior editor of Aviation Week & Space Technology, attended the dinner and transcribed Hawking’s remarks. Hawking, famed for his singularity theorems on the origin of the universe and his work on black holes, is confined to a wheelchair and unable to speak, except through a voice-synthesizing computer, which is how he delivered this presentation. Hawking is also about to visit Iran (See Big Bang Diplomacy, an earlier blog on this page).)
I will skip over the first 20 of my 60 years, and pick up the story in October 1962, when I arrived in Cambridge as a graduate student. I had applied to work with Fred Hoyle, the principal defender of the steady state theory, and the most famous British astronomer of the time. I say astronomer, because cosmology was at that time hardly recognized as a legitimate field. Yet that was where I wanted to do my research, inspired by having been on a summer course with Hoyle's student, Jayant Narlikar.
However, Hoyle had enough students already, so to my great disappointment, I was assigned to Dennis Sciama, of whom I had not heard. But it was probably for the best. Hoyle was away a lot, seldom in the department, and I wouldn't have had much of his attention. Sciama, on the other hand, was usually around and ready to talk. I didn't agree with many of his ideas, particularly on Mach's principle, but that stimulated me to develop my own picture.
When I began research, the two areas that seemed exciting were cosmology and elementary particle physics. Elementary particles was the active, rapidly changing field that attracted most of the best minds, while cosmology and general relativity were stuck where they had been in the 1930s. The late physicist Richard Feynman gave an amusing account of attending the conference on general relativity and gravitation in Warsaw in 1962.
In a letter to his wife, he said, “I am not getting anything out of the meeting. I am learning nothing. Because there are no experiments, this field is not an active one, so few of the best men are doing work in it. The result is that there are hosts of dopes here (126) and it is not good for my blood pressure. Remind me not to come to any more gravity conferences!”
Of course, I wasn't aware of all this when I began my research. But I felt that elementary particles at that time was too like botany. Quantum electro-dynamics, the theory of light and electrons that governs chemistry and the structure of atoms, had been worked out completely in the ‘40s and ‘50s.
Attention had now shifted to the weak and strong nuclear forces, between particles in the nucleus of an atom, but similar field theories didn't seem to work. Indeed the Cambridge school, in particular, held that there was no underlying field theory. Instead, everything would be determined by unitarity, that is, probability conservation, and certain characteristic patterns in the scattering. With hindsight, it now seems amazing that it was thought this approach would work, but I remember the scorn that was poured on the first attempts at unified field theories of the weak nuclear forces. Yet it is these field theories that are remembered, and the analytic S matrix work is forgotten.
I'm very glad I didn't start my research in elementary particles. None of my work from that period would have survived. Cosmology and gravitation, on the other hand, were neglected fields, that were ripe for development at that time.
Unlike elementary particles, there was a well-defined theory, the general theory of relativity, but this was thought to be impossibly difficult. People were so pleased to find any solution of the field equations that they didn't ask what physical significance, if any, it had. This was the old school of general relativity that Feynman encountered in Warsaw. But the Warsaw conference also marked the beginning of the renaissance of general relativity, though Feynman could be forgiven for not recognizing it at the time.
A new generation entered the field, and new centers of general relativity appeared. Two of these were of particular importance to me. One was in Hamburg under Pascual Jordan. I never visited it, but I admired their elegant papers, which were such a contrast to the previous messy work on general relativity. The other center was at King's College, London, under Hermann Bondi, another proponent of the steady state theory, but not ideologically committed to it like Hoyle.
I hadn't done much mathematics at school, or in the very easy physics course at Oxford, so Sciama suggested I work on astrophysics. But having been cheated out of working with Hoyle, I wasn't going to do something boring like Faraday rotation. I had come to Cambridge to do cosmology, and cosmology I was determined to do. So I read old textbooks on general relativity, and traveled up to lectures at King's College, London, each week with three other students of Sciama. I followed the words and equations, but I didn't really get a feel for the subject.
Also, I had been diagnosed with motor neurone disease, or ALS, and given to expect I didn't have long enough to finish my PhD.
Then suddenly, towards the end of my second year of research, things picked up. My disease wasn't progressing much, and my work all fell into place, and I began to get somewhere.
Sciama was very keen on Mach’s principle, the idea that objects owe their inertia to the influence of all the other matter in the universe. He tried to get me to work on this, but I felt his formulations of Mach's principle were not well defined. However, he introduced me to something a bit similar with regard to light, the so-called Wheeler-Feynman electrodynamics. This said that electricity and magnetism were time symmetric. However, when one switched on a lamp, it was the influence of all the other matter in the universe that caused light waves to travel outward from the lamp, rather than come in from infinity and end on the lamp.
For Wheeler-Feynman electrodynamics to work, it was necessary that all the light traveling out from the lamp should be absorbed by other matter in the universe. This would happen in a steady state universe, in which the density of matter would remain constant, but not in a Big Bang universe, where the density would go down as the universe expanded. It was claimed that this was another proof, if proof were needed, that we live in a steady state universe. There was a conference on Wheeler-Feynman electrodynamics and the arrow of time in 1963.
Feynman was so disgusted by the nonsense that was talked about the arrow of time that he refused to let his name appear in the proceedings. He was referred to as Mr. X, but everyone knew who X was.
I found that Hoyle and Narlikar had already worked out Wheeler-Feynman electrodynamics in expanding universes, and had then gone on to formulate a time-symmetric new theory of gravity. Hoyle unveiled the theory at a meeting of the Royal Society in 1964. I was at the lecture, and in the question period, I said that the influence of all the matter in a steady state universe would make his masses infinite. Hoyle asked why I said that, and I replied that I had calculated it. Everyone thought I had done it in my head during the lecture, but in fact, I was sharing an office with Narlikar, and had seen a draft of the paper.
Hoyle was furious. He was trying to set up his own institute, and threatening to join the brain drain to America if he didn't get the money. He thought I had been put up to it to sabotage his plans. However, he got his institute, and later gave me a job, so he didn't harbor a grudge against me.
The big question in cosmology in the early ‘60s was, did the Universe have a beginning? Many scientists were instinctively opposed to the idea, because they felt that a point of creation would be a place where science broke down. One would have to appeal to religion and the hand of God, to determine how the universe would start off.
Two alternative scenarios were therefore put forward. One was the steady state theory, in which as the universe expanded, new matter was continually created to keep the density constant on average. The steady state theory was never on a very strong theoretical basis, because it required a negative energy field to create the matter. This would have made it unstable to runaway production of matter and negative energy. But it had the great merit as a scientific theory of making definite predictions that could be tested by observations.
By 1963 the steady state theory was already in trouble. Martin Ryle's Radio Astronomy group at the Cavendish did a survey of faint radio sources. They found the sources were distributed fairly uniformly across the sky. This indicated that they were probably outside our galaxy, because otherwise they would be concentrated along the Milky Way. But the graph of the number of sources against source strength did not agree with the prediction of the steady state theory. There were too many faint sources, indicating that the density of sources was higher in the distant past.
Hoyle and his supporters put forward increasingly contrived explanations of the observations, but the final nail in the coffin of the steady state theory came in 1965 with the discovery of a faint background of microwave radiation.
This could not be accounted for in the steady state theory, though Hoyle and Narlikar tried desperately. It was just as well I hadn't been a student of Hoyle, because I would have had to have defended the steady state.
The microwave background indicated that the universe had had a hot dense stage in the past. But it didn't prove that was the beginning of the universe. One might imagine that the universe had had a previous contracting phase, and that it had bounced from contraction to expansion, at a high, but finite density. This was clearly a fundamental question, and it was just what I needed to complete my PhD thesis.
Gravity pulls matter together, but rotation throws it apart. So my first question was, could rotation cause the universe to bounce? Together with George Ellis, I was able to show that the answer was no - if the universe was spatially homogeneous, that is, if it was the same at each point of space.
However, two Russians, Lifshitz and Khalatnikov, had claimed to have proved that a general contraction without exact symmetry would always lead to a bounce, with the density remaining finite. This result was very convenient for Marxist-Leninist dialectical materialism, because it avoided awkward questions about the creation of the universe. It therefore became an article of faith for Soviet scientists.
Lifshitz and Khalatnikov were members of the old school in general relativity. That is, they wrote down a massive system of equations, and tried to guess a solution. But it wasn't clear that the solution they found was the most general one. However, Roger Penrose introduced a new approach which didn't require solving the field equations explicitly, just certain general properties, such as that energy is positive and gravity is attractive. Penrose gave a seminar at King's College, London, in January 1965.
I wasn't at the seminar, but I heard about it from Brandon Carter, with whom I shared an office on Silver Street. At first, I couldn't understand what the point was. Penrose had shown that once a dying star had contracted to a certain radius, there would inevitably be a singularity, a point where space and time came to an end. Surely, I thought, we already knew that nothing could prevent a massive cold star collapsing under its own gravity until it reached a singularity of infinite density.
But, in fact, the equations had been solved only for the collapse of a perfectly spherical star. Of course, a real star won't be exactly spherical. If Lifshitz and Khalatnikov were right, the departures from spherical symmetry would grow as the star collapsed, and would cause different parts of the star to miss each other and avoid a singularity of infinite density. But Penrose showed they were wrong. Small departures from spherical symmetry will not prevent a singularity. I realised that similar arguments could be applied to the expansion of the universe. In this case, I could prove there were singularities where space-time had a beginning.
So again, Lifshitz and Khalatnikov were wrong. General relativity predicted that the universe should have a beginning, a result that did not pass unnoticed by the church.
The original singularity theorems of both Penrose and myself required the assumption that the universe had a Cauchy surface, that is, a surface that intersects every timelike curve once, and only once. It was therefore possible that our first singularity theorems just proved that the universe didn't have a Cauchy surface. While interesting, this didn't compare in importance with time having a beginning or end. I therefore set about proving singularity theorems that didn't require the assumption of a Cauchy surface.
In the next five years Roger Penrose, Bob Geroch and I developed the theory of causal structure in general relativity. It was a glorious feeling, having a whole field virtually to ourselves. How unlike particle physics, where people were falling over themselves to latch onto the latest idea. They still are.
Up to 1970, my main interest was in the big bang singularity of cosmology, rather than the singularities that Penrose had shown would occur in collapsing stars. However, in 1967, Werner Israel produced an important result. He showed that unless the remnant from a collapsing star was exactly spherical, the singularity it contained would be naked, that is, it would be visible to outside observers. This would have meant that the breakdown of general relativity at the singularity of a collapsing star would destroy our ability to predict the future of the rest of the universe.
At first most people, including Israel himself, thought that this implied that because real stars aren't spherical, their collapse would give rise to naked singularities and breakdown of predictability.
However, a different interpretation was put forward by Roger Penrose and John Wheeler. It was that there is cosmic censorship. This says that nature is a prude, and hides singularities in black holes where they can't be seen.
I used to have a bumper sticker “black holes are out of sight” on the door of my office. This so irritated the head of department that he engineered my election to the Lucasian professorship, moved me to a better office on the strength of it, and personally tore off the offending notice from the old office.
My work on black holes began with a Eureka moment in 1970, a few days after the birth of my daughter, Lucy. While getting into bed, I realized that I could apply to black holes the causal structure theory I had developed for singularity theorems. In particular, the area of the horizon, the boundary of the black hole, would always increase.
When two black holes collide and merge, the area of the final black hole is greater than the sum of the areas of the original holes. This, and other properties that Jim Bardeen, Brandon Carter and I discovered, suggested that the area was like the entropy of a black hole. This would be a measure of how many states a black hole could have on the inside, for the same appearance on the outside. But the area couldn't actually be the entropy, because as everyone knew, black holes were completely black, and couldn't be in equilibrium with thermal radiation.
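For the simplest, non-rotating (Schwarzschild) case, the arithmetic behind that merger statement can be made explicit. This standard textbook relation is added here for illustration and is not part of the talk itself:

```latex
% Horizon radius and area of a Schwarzschild black hole of mass M:
r_s = \frac{2GM}{c^2}, \qquad A = 4\pi r_s^2 = \frac{16\pi G^2 M^2}{c^4}
% The area grows as M^2, so even if the merged hole retained the full
% combined mass (in reality some mass-energy is radiated away), the
% final area exceeds the sum of the initial areas:
(M_1 + M_2)^2 = M_1^2 + 2 M_1 M_2 + M_2^2 \;>\; M_1^2 + M_2^2
```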
There was an exciting period culminating in the Les Houches summer school in 1972, in which we solved most of the major problems in black hole theory. This was before there was any observational evidence for black holes, which shows Feynman was wrong when he said an active field has to be experimentally driven. Just as well for M-theory. The one problem that was never solved was to prove the cosmic censorship hypothesis, though a number of attempts to disprove it failed. It is fundamental to all work on black holes, so I have a strong vested interest in it being true. I therefore have a bet with Kip Thorne and John Preskill.
It is difficult for me to win this bet, but quite possible to lose, by finding a counter-example with a naked singularity. In fact, I have already lost an earlier version of the bet, by not being careful enough about the wording. They were not amused by the T-shirt I offered in settlement.
We were so successful with the classical general theory of relativity that I was at a bit of a loose end in 1973, after the publication with George Ellis, of The Large Scale Structure Of Space-time.
My work with Penrose had shown that general relativity broke down at singularities. So the obvious next step would be to combine general relativity, the theory of the very large, with quantum theory, the theory of the very small. I had no background in quantum theory, and the singularity problem seemed too difficult for a frontal assault at that time. So as a warm-up exercise, I considered how particles and fields governed by quantum theory would behave near a black hole. In particular, I wondered, can one have atoms in which the nucleus is a tiny primordial black hole, formed in the early universe?
To answer this, I studied how quantum fields would scatter off a black hole. I was expecting that part of an incident wave would be absorbed, and the remainder scattered. But to my great surprise, I found there seemed to be emission from the black hole. At first, I thought this must be a mistake in my calculation. But what persuaded me that it was real was that the emission was exactly what was required to identify the area of the horizon with the entropy of a black hole. I would like this simple formula to be on my tombstone.
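The formula Hawking alludes to is the Bekenstein-Hawking entropy, which identifies a black hole's entropy S with a quarter of its horizon area A in Planck units. It is quoted here for reference; it does not appear in the transcribed remarks:

```latex
S = \frac{k c^{3} A}{4 \hbar G}
```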
Work with Jim Hartle, Gary Gibbons, and Malcolm Perry, uncovered the deep reason for this formula. General relativity can be combined with quantum theory in an elegant manner, if one replaces ordinary time by imaginary time.
I have tried to explain imaginary time on other occasions, with varying degrees of success. I think it is the name, imaginary, that makes it so confusing. It is much easier if you accept the positivist view that a theory is just a mathematical model. In this case, the mathematical model has a minus sign, whenever time appears twice.
The Euclidean approach to quantum gravity, based on imaginary time, was pioneered in Cambridge. It met a lot of resistance, but is now generally accepted. Between 1970 and 1980, I worked mainly on black holes and the Euclidean approach to quantum gravity. But the suggestion that the early universe had gone through a period of inflationary expansion renewed my interest in cosmology. Euclidean methods were the obvious way to describe fluctuations and phase transitions in an inflationary universe.
We held a Nuffield workshop in Cambridge in 1982, attended by all the major players in the field. At this meeting, we established most of our present picture of inflation, including the all-important density fluctuations, which give rise to galaxy formation, and so to our existence. This was ten years before fluctuations in the microwave background were observed, so again in gravity, theory was ahead of experiment.
The scenario for inflation in 1982 was that the universe began with a big bang singularity. As the universe expanded, it was supposed somehow to get into an inflationary state. I thought this was unsatisfactory, because all equations would break down at a singularity. But unless one knew what came out of the initial singularity, one could not calculate how the universe would develop. Cosmology would not have any predictive power.
After the workshop in Cambridge, I spent the summer at the Institute for Theoretical Physics in Santa Barbara, which had just been set up. We stayed in student houses, and I drove in to the institute in a rented electric wheelchair. I remember my younger son, Tim, aged three, watching the Sun set on the mountains, and saying, “it's a big country.”
While in Santa Barbara, I talked to Jim Hartle about how to apply the Euclidean approach to cosmology. According to DeWitt and others, the universe should be described by a wave function that obeyed the Wheeler-DeWitt equation. But what picked out the particular solution of the equation that represents our universe? According to the Euclidean approach, the wave function of the universe is given by a Feynman sum over a certain class of histories in imaginary time. Because imaginary time behaves like another direction in space, histories in imaginary time can be closed surfaces, like the surface of the Earth, with no beginning or end.
Jim and I decided that this was the most natural choice of class, indeed the only natural choice. We had sidestepped the scientific and philosophical difficulty of time beginning, by turning it into a direction in space.
Most people in theoretical physics have been trained in particle physics, rather than general relativity. They have therefore been more interested in calculations of what they observe in particle accelerators, than in questions about the beginning and end of time. The feeling was that if they could find a theory that in principle allowed them to calculate particle scattering to arbitrary accuracy, everything else would somehow follow.
In 1985 it was claimed that string theory was this ultimate theory. But in the years that followed, it emerged that the situation was more complicated, and more interesting. It seems that there's a network of theories called M-theory.
All the theories in the M-theory network can be regarded as approximations to the same underlying theory, in different limits. None of the theories allow calculation of scattering to arbitrary accuracy, and none can be regarded as the fundamental theory, of which others are reflections. Instead, they should all be regarded as effective theories, valid in different limits. String theorists have long used the term “effective theory” as a pejorative description of general relativity, but string theory is equally an effective theory, valid in the limit that the M-theory membrane is rolled into a cylinder of small radius. Saying that string theory is only an effective theory isn't very popular, but it's true.
The dream of a theory that would allow calculation of scattering to arbitrary accuracy led people to reject quantum general relativity and supergravity, on the grounds that they were “non-renormalizable.” This means that one needs undetermined subtractions at each order to get finite answers. In fact, it is not surprising that naive perturbation theory breaks down in quantum gravity. One cannot regard a black hole as a perturbation of flat space.
I have done some work recently on making supergravity renormalizable, by adding higher derivative terms to the action. This apparently introduces ghosts - states with negative probability. However, I have found this is an illusion. One can never prepare a system in a state of negative probability. But the presence of ghosts means that one cannot predict with arbitrary accuracy. If one can accept that, one can live quite happily with ghosts.
This approach to higher derivatives and ghosts allows one to revive the original inflation model of Starobinsky and other Russians. In this, the inflationary expansion of the universe is driven by the quantum effects of a large number of matter fields. Based on the no-boundary proposal, I picture the origin of the universe as like the formation of bubbles of steam in boiling water. Quantum fluctuations lead to the spontaneous creation of tiny universes out of nothing. Most of the universes collapse to nothing, but a few that reach a critical size will expand in an inflationary manner, and will form galaxies and stars and maybe beings like us.
It has been a glorious time to be alive, and doing research in theoretical physics. Our picture of the universe has changed a great deal in the last 40 years, and I'm happy if I have made a small contribution. I want to share my excitement and enthusiasm. There's nothing like the Eureka moment, of discovering something that no one knew before. I won't compare it to sex, but it lasts longer.
Mars Reconnaissance Orbiter images of channels and gullies cut in the sides and floor of this 7-mi.-diameter crater on the northern plains of Mars are indicative of water erosion and subsurface ice at this location, known as Acidalia Planitia. The image from the HiRISE high-resolution imaging system has a 24-in. resolution, meaning boulders as small as 2 ft. in diameter are visible, along with linear features much smaller than that. Click image to enlarge. (Photo credit: JPL/University of Arizona)
The gullies and alcoves cut in the crater rim originate at the same height, suggesting that the carving agent -- most likely water -- emanated from one single layer exposed in the crater's wall.
The muted topography of the crater and its surroundings, the relatively shallow floor, the convex slope of its walls -- all are consistent with water ice being present under the surface. Ice would have acted as a lubricant, facilitating the flow of rocks and soils and hence smoothing landscape features such as ridges and crater rims. The concentric and radial fissures in the crater's floor may indicate a decrease in volume due to loss of underground ice. Piles of rocks aligned along these fissures and arranged to form polygons are similar to features observed in Antarctica. Watch this space for continuing coverage of Mars.
-- Craig Covault
For every spacecraft there comes a moment when its keepers must ask “is it time to shut this thing down and move on?” Sometimes the question answers itself, as when the hardware fails in space, the data stream drops out and there’s no way to recover it. Sometimes the answer hangs on a cost/benefit calculation – “are we still getting our money’s worth here?”
Colleen Hartman, deputy associate NASA administrator for science, compares it to deciding when it’s time to get a new car or keep the old one running. “You are going to come to the point where getting a new car is the right thing to do,” she says. But Hartman is referring to the Hubble Space Telescope, and the possibility that it can be kept running even after the space shuttle fleet is grounded for good in 2010.
Next year’s “final” Hubble servicing mission will leave behind a docking device that can be used either to attach a deorbit motor, or to dock some future human spacecraft for yet more maintenance and upgrades. NASA has no plans for the latter approach. Instead, says Matt Mountain, director of the Space Telescope Science Institute that manages use of the Hubble, “it’s done an amazing job, but there are other great things we need to do.”
Ultimately, the decision will be made by the scientists who put together the decadal survey for astrophysics due out in 2010, setting research priorities for the decade of the 20-teens. Just this week scientists have released new Hubble data with evidence that mysterious “dark matter” exists in the Universe. But can a 17-year-old instrument keep delivering new discoveries, or should it be scrapped for something newer? What do you think?
--Frank Morring, Jr.