Global Positioning System (GPS)

Saturday 19 March 2011

 
Originally designated the NAVSTAR
(Navigation System with Timing And Ranging) Global Positioning System, GPS was
developed by the US Department of Defense to provide all-weather, round-the-clock
navigation capabilities for military ground, sea, and air forces. Since its
implementation, GPS has also become an integral asset in numerous
civilian applications and industries around the globe, including recreational
uses (e.g., boating, aviation, hiking), corporate vehicle fleet tracking, and
surveying.
The system employs 24 spacecraft in 20,200 km circular orbits
inclined at 55 degrees. These spacecraft are placed in six orbital planes with four
operational satellites in each plane. All launches have been successful except
for one launch failure in 1981. The full 24-satellite constellation was
completed on March 9, 1994.
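The 20,200 km circular orbit quoted above implies an orbital period of roughly half a sidereal day, which can be checked with Kepler's third law. A quick sketch (Earth's gravitational parameter and mean radius are standard values, not from the article):

```python
import math

# Assumed constants (not from the article): Earth's standard
# gravitational parameter and mean radius.
MU_EARTH = 3.986004418e14   # m^3/s^2
R_EARTH = 6_371_000         # m

altitude = 20_200_000       # m, circular orbit altitude from the article
a = R_EARTH + altitude      # orbital radius (semi-major axis)

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
period_h = period_s / 3600
print(f"Orbital period: {period_h:.2f} hours")  # about 12 hours
```

The result, just under 12 hours, means each satellite completes two orbits per sidereal day, which is how the constellation repeats its ground track.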
The first eleven spacecraft (GPS Block 1) were
used to demonstrate the feasibility of the system. The orbital inclination
used for these satellites was 63 degrees, differing from the 55 degrees used for
the operational system. The Block 2 spacecraft formed the beginning of the
operational system, and the Block 2A spacecraft (A = Advanced) were a slight
improvement over Block 2.

The Global Positioning System (GPS) was designed as a dual-use system with
the primary purpose of enhancing the effectiveness of U.S. and allied military
forces. GPS is rapidly becoming an integral component of the emerging Global
Information Infrastructure, with applications ranging from mapping and surveying
to international air traffic management and global change research. The growing
demand from military, civil, commercial, and scientific users has generated a
U.S. commercial GPS navigation systems equipment and service industry that leads
the world. Augmentations to enhance basic GPS services could further expand
these civil and commercial markets.
GPS receivers use trilateration
(commonly, if loosely, called triangulation) of the GPS satellites' navigation
signals to determine their location. The satellites broadcast two different
signals that provide different accuracies. Coarse-acquisition (C/A) code is
intended for civilian use and was deliberately degraded under Selective
Availability (discontinued in 2000); with that degradation active, the accuracy
of a typical civilian GPS receiver using C/A code was about 100 meters. The
military's Precision (P) code is not degraded, and provides positional accuracy
to within approximately 20 meters. Numerous on-line tutorials on how GPS works
and its applications are available, including those at the University of Texas
and Rentec International. GPS satellites are controlled from the GPS Master
Control Station (MCS) located at Falcon Air Force Base outside Colorado
Springs, Colorado. The ground segment also includes four active-tracking ground
antennas and five passive-tracking monitor stations.
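The position fix works by measuring ranges to satellites at known positions and solving for the point consistent with all of them. A minimal 2-D sketch of the idea (the anchor coordinates and ranges below are made-up illustration values; a real receiver solves in 3-D and also estimates its own clock bias, which is why at least four satellites are needed):

```python
import numpy as np

# Three known "satellite" positions and the measured ranges from an
# unknown receiver location (illustration values, not real GPS data).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)

# Subtracting the first range equation |x - a_i|^2 = r_i^2 from the
# others cancels the quadratic term, leaving a linear system A x = b.
A = 2 * (anchors[1:] - anchors[0])
b = (ranges[0]**2 - ranges[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
pos = np.linalg.lstsq(A, b, rcond=None)[0]
print(pos)  # recovers approximately [3. 4.]
```

With noisy measurements and more satellites, the same least-squares step simply becomes overdetermined, which is how real receivers average out errors.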
GPS receiver technology has developed by leaps and bounds
over the last few years. GPS receivers were initially the size of a suitcase,
with an antenna the size of a kid's blow-up swimming pool. Over time, the
system has developed into a civilian-friendly program, and GPS receiver
technology has miniaturized as well. Automobile GPS receivers are the size of a
deck of cards, and the GPS receiver used in handheld devices is not much larger
than a small cell phone. Many newer cell phones have a GPS receiver integrated
into the handset. As manufacturers continue to shrink the GPS receiver, they
will have to work through display, power-use, and dexterity limitations. Users
need a screen that can be viewed from any angle and at a reasonable distance.
The GPS receiver is generally always on while in use, so managing power will
continue to be an ongoing problem. The ability to push the small buttons limits
just how small a GPS receiver can be. As touch screens develop and other input
systems are introduced, the GPS receiver will continue to change in appearance
and use.

Author: John B. Whitsell

Making Tracks GPS
http://www.makingtracksgps.com

Information
referenced from NASA and USCG data


Fine Grinder for Mine Mineral Processing

Deswik Mining Consultants has launched in Australia a highly innovative fine
grinding mill that it claims is efficient, green, and smart.
A fine
grinding mill is a machine that grinds mineral ore into standard particle sizes
for easier mineral extraction.
There are many ways to extract
minerals from ore. For example, when tiny grains of gold are spread through a
gravel deposit the gravel is poured onto a table that is coated with mercury.
When the table is vibrated the gold grains work their way to the table surface
and combine with the mercury. This method works because gold has a strong
tendency to combine with mercury. The gold of course then has to be separated
from the mercury.
A different method is used to separate sulphide
(sulfide) minerals from others in order to obtain their sulphur. In this case
the conglomeration of minerals might be finely ground up and then vigorously
mixed in a tank of water through which air is being bubbled.
In this
process, which is called flotation, the sulphide minerals cling to the bubbles
and are collected from the froth that spills over the top of the
tank.
As mineral prices soar due to the boom in economies like China
and India, it becomes more and more economically viable to reprocess the
tailings from old mines using new technology.
As South African Mines
have discovered, the Deswik mill has many distinct advantages over conventional
mills.
The Deswik product has a highly variable speed hydraulic
drive, patented vertical turbo disc impeller, inert bead media, small footprint,
high availability, shorter delivery times, and high energy
efficiency.
The simple and robust design of the vertical mill means
ease of maintenance and higher availability.
The mill can offer
greater mineral recovery at reduced capital and running cost.
The
first test mill is currently being commissioned for Australian Metallurgical and
Mineral Testing Consultants (AMMTEC) in Western Australia, and will be available
for testing in July of this year.
The Deswik mill is of vertical
construction, manufactured in mild steel/stainless steel, with all wetted parts
lined with hard wearing and contamination-free polyurethane resin. The chamber
is water jacketed for cooling. A separate heat exchanger system is also
available.
The impeller is a patented turbo-disc design unique to the
Deswik mill. It is designed in such a manner that its assembly is modular and
the configuration may be easily changed to achieve different end product size
and distribution characteristics.
The Deswik mill is available in a
range of sizes, from 250 litre mills delivering four (4) to eight (8) tonnes per
hour (t/hr), to 5,000 litre mills delivering 75 – 150 t/hr dependent upon the
required end product size.
The Deswik mill can produce particle sizes
from a D-50 of 200 microns down to 0.1 micron. This is achieved over a range of
some 200 different products, at energy consumption levels as low as 3 kWh/ton.
Recent production and test work for various clients has indicated that in certain
process circuits a Deswik fine grinding mill can assist in doubling the
concentrate value and increasing recovery by over 50%.
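The throughput and energy figures quoted above imply the mill's continuous power draw, since power is simply specific energy multiplied by throughput. A back-of-envelope sketch using the article's own numbers:

```python
# Rough power estimate from the article's figures: specific energy as
# low as 3 kWh/ton, with throughputs of 4-8 t/hr for the 250 L mill
# and 75-150 t/hr for the 5,000 L mill.
SPECIFIC_ENERGY = 3.0  # kWh/ton, the article's best-case figure

def mill_power_kw(throughput_tph, specific_energy=SPECIFIC_ENERGY):
    """Continuous power draw (kW) implied by a given throughput (t/hr)."""
    return throughput_tph * specific_energy

for size_l, tph in [(250, 8), (5000, 150)]:
    print(f"{size_l} L mill at {tph} t/hr -> {mill_power_kw(tph):.0f} kW")
```

At the best-case 3 kWh/ton, even the largest mill at full throughput draws on the order of 450 kW, which is what underlies the claimed energy efficiency.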
Already the
mill has generated interest in Australia due to its potential to improve
recovery whilst significantly reducing power consumption.
As cleaner,
greener technological processes are being demanded from Australian mining
companies, the Deswik mill’s introduction could not come at a better time. In
certain applications the use of the mill will more than halve the grinding and
recovery costs.

How Are Fossils Formed

How are fossils formed? For fossil formation to take place a series of fortunate
events must occur. If any part of the series is missing, we will never see the
fossil! In fact, fossilization is a rare occurrence. Nature tends toward
recycling. That includes just about everything from plants and animals to rocks
and minerals.

Let's narrow it down to just animals for a minute.
Animals, dead or alive, are food for other animals. From insects to dinosaurs,
an animal could be someone's lunch! Any part of the animal's body that isn't
consumed is usually scattered about; leftovers! Just like those leftovers in
your fridge, these leftovers make great food for bacteria. In addition, these
leftovers are exposed to the elements: sun, rain, and even the soil itself all
help to break down and decompose the sturdiest of bones, shells, and wood.


If we are ever going to see a fossil, some very specialized events must
intervene to ward off the natural process of decomposition. The following is the
most common scenario for fossil formation:

How Are Fossils Formed? Death
Is The First Step
To start with, an animal or plant must die in water or
near enough to fall in shortly after death. The water insulates the remains from
many of the elements that contribute to decomposition. An example may be
helpful. Let's say that a trilobite has died of old age on the bottom of the
sea. Bacteria consume the soft body parts but leave the hard exoskeleton intact.


How are fossils formed? Step two is Sedimentation
As time passes,
sediments bury the exoskeleton. The faster this happens the more likely
fossilization will occur. Land and mudslides definitely help. River deltas are
also good for quick accumulation of sediments. This further insulates our
trilobite from decomposition.

The sediments themselves have a huge
influence on how well our trilobite fossil turns out. Very fine-grained
particles, like clays, allow more detail in the future fossil. Coarse sediments,
like sand, allow less detail to show. The chemical makeup of the sediments also
contributes to the future fossil. If iron is present, it may give the rock a
reddish color. Phosphates may darken the rock to gray or black. The
possibilities are truly endless.

Permineralization
As the sediments
continue to pile on, the lower layers become compacted by the weight of the
layers on top. Over time, this pressure turns the sediments into rock. If
mineral-rich water percolates down through the sediments, the fossilization
process has an even better chance of preserving our ancient animal. Some of the
minerals stick to the particles of sediment, effectively gluing them together
into a solid mass. These minerals make an impact on our original trilobite as
well. Over the course of millions of years, they dissolve away the outer shell,
sometimes replacing the molecules of exoskeleton with molecules of calcite or
other minerals. In time, the entire shell is replaced leaving rock in the exact
shape of the trilobite.

Uplift
As the continental plates move around
the earth, crashing into each other, mountains are formed. Former sea floors are
lifted up and become dry land. This is exactly what has happened to our
trilobite. Now a fully formed fossil, our trilobite is buried under hundreds or
even thousands of feet of rock! Thanks to the movement of the plates, our
trilobite will come closer to the surface and nearer to discovery by some
fortunate fossil hunter. Luckily, nothing stays the same.

Erosion at
work
Rain, wind, earthquakes, freeze and thaw all work toward erosion. The
mountains that were built up are worn away over time. Our fossil trilobite once
again sees the light of day! With a little wisdom about where to look and some
luck, you may be the first one to find him!

This is the fossilization
process known as Permineralization. It is not the only answer to the question:
"How Are Fossils Formed?" There are many other ways that fossils can be formed.
You can read about them using the links below.

Source: http://www.ArticlePros.com/author.php?Claudia Mann

Biofuel Renewable Energy Resource

Author: Fabricio Guerrero, approved at 02.04.2008

Biofuels are transportation fuels, such as ethanol and biodiesel, that are
produced from biomass resources. These fuels are generally blended with the
petroleum fuels gasoline and diesel, although they can also be used on their
own. Using ethanol or biodiesel results in cleaner burning than fossil fuel
alone. However, ethanol and biodiesel are unfortunately more expensive than
fossil fuels. Nevertheless, they are cleaner fuels, producing smaller
quantities of air pollutants, and are safer and greener for the environment.

Ethanol is an alcohol fuel
prepared from the sugars found in grains such as corn, sorghum, and wheat,
along with potato skins, rice, sugarcane, sugar beets, and yard clippings.
Biodiesel is prepared from vegetable oils, fats, or greases, and can be used in
diesel engines without any modification. It is among the most promising
petroleum substitutes in countries such as the United States. Biodiesel is a
renewable source of energy; it is safe and recyclable, and it decreases the
release of most air toxins. It is without doubt an eco-friendly version of
diesel.

It is frequently asserted that biofuels are carbon-neutral because
the CO2 they release when burnt was previously drawn from the atmosphere. There
is, however, a considerable CO2 discharge from the refinery and distillery
processes required to make biodiesel or bioethanol, as well as from transport,
the use of farm machinery, and fertilizer production. Biodiesel in particular is
connected to high releases of the powerful and long-lived greenhouse gas
nitrous oxide, emitted by microbes when nitrogen fertilizers are applied to
soils and also during the manufacture of nitrogen fertilizers.

There are two main
types of biofuels for transport:

Bioethanol, an alcohol derived from sugar or starch, for example from sugar
beet, cane, or corn; and

Biodiesel, derived from vegetable oils, for example from rapeseed oil,
jatropha, soy, or palm oil.

The United States is the world's biggest bioethanol producer, and bioethanol
accounts for 99% of its biofuel for road transport. Europe, by contrast, is
currently the world's chief biodiesel producer and favors biodiesel over
bioethanol. Ethanol has been found to have considerably lower greenhouse gas
emissions than petrol.

Among the biofuel crops grown
in Europe and the US, biodiesel is usually considered to be more energy
efficient than bioethanol. However, some biodiesel crops, such as oilseed rape,
are grown with large quantities of fertilizer, which offsets much of the
greenhouse gas savings.

To learn much more about the different types of renewable energy sources, visit
http://renewable-energy-sources-info.blogspot.com/ where you'll find this and
much more, including biodiesel, biofuel, bioethanol, biomass, geothermal, and
many other renewable energy sources.

Geochemistry

Geochemistry is the study of the chemical processes that form and shape the earth.
Earth is essentially a large mass of crystalline solids that are constantly subject to physical and chemical interaction with a variety of solutions (e.g., water) and substances. These interactions allow a multitude of chemical reactions.
It is through geochemical analysis that estimates of the age of Earth are formed. Because radioactive isotopes decay at measurable and constant rates (e.g., half-life) that are proportional to the number of radioactive atoms remaining in the sample, analysis of rocks and minerals can also provide reasonably accurate determinations of the age of the formations in which they are found. The best measurements obtained via radiometric dating (based on the principles of nuclear reactions) estimate the age of Earth to be four and one half billion years old.
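The age determination described here follows from the exponential decay law N = N0·e^(−λt), where the decay constant λ = ln 2 / half-life. A minimal sketch (the uranium-238 half-life is an assumed example; the passage names no specific isotope):

```python
import math

# Half-life of U-238, a common isotope for dating old rocks
# (an assumed example; the text does not specify an isotope).
HALF_LIFE_U238 = 4.468e9  # years

def age_from_fraction(remaining_fraction, half_life):
    """Age implied by the fraction of the parent isotope still present,
    from N = N0 * exp(-lambda * t) with lambda = ln(2) / half_life."""
    decay_const = math.log(2) / half_life
    return -math.log(remaining_fraction) / decay_const

# If half the original U-238 remains, the sample is one half-life old.
print(f"{age_from_fraction(0.5, HALF_LIFE_U238):.3e} years")
```

Measuring the parent-to-daughter ratio in a mineral therefore gives the remaining fraction, and the formula converts that directly into an age.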
Dating techniques combined with spectroscopic analysis provide clues to unravel Earth's history. Using neutron activation analysis, Nobel laureate Luis Alvarez discovered the presence of the element iridium when studying samples from the K-T boundary layer (i.e., the layer of sediment laid down at the end of the Cretaceous and the beginning of the Tertiary Period). Fossil evidence shows a mass extinction at the end of the Cretaceous Period, including the extinction of the dinosaurs. The uniform iridium layer, together with the presence of quartz crystals bearing shock damage usually associated only with large asteroid impacts or nuclear explosions, advanced the hypothesis that a large asteroid impact caused catastrophic climatic damage that spelled doom for the dinosaurs.
Although hydrogen and helium comprise 99.9% of the atoms in the universe, Earth's gravity is such that these elements readily escape Earth's atmosphere. As a result, the hydrogen found on Earth is bound to other atoms in molecules.
Geochemistry generally concerns the study of the distribution and cycling of elements in the crust of the earth. Just as the biochemistry of life is centered on the properties and reactions of carbon, the geochemistry of Earth's crust is centered upon silicon. Also important to geochemistry is oxygen, the most abundant element on Earth. Together, oxygen and silicon account for 74% of Earth's crust.
The type of magma (basaltic, andesitic, or rhyolitic) extruded by volcanoes and fissures (magma is termed lava when at Earth's surface) depends on the percentage of silicon and oxygen present. As the percentage increases, the magma becomes thicker, traps more gas, and is associated with more explosive eruptions.
The eight most common elements found on Earth, by weight, are oxygen (O), silicon (Si), aluminum (Al), iron (Fe), calcium (Ca), sodium (Na), potassium (K), and magnesium (Mg).
Unlike carbon and biochemical processes, where the covalent bond is most common, the ionic bond is the most common bond in geology. Accordingly, silicon generally becomes a cation and will donate four electrons to achieve a noble gas configuration. In quartz, each silicon atom is coordinated to four oxygen atoms. Quartz crystals are silicon atoms surrounded by a tetrahedron of oxygen atoms linked at shared corners.
Rocks are aggregates of minerals, and minerals are composed of elements. A mineral has a definite (not unique) formula or composition. Diamond and graphite are minerals that are polymorphs (many forms) of carbon. Although they are both composed only of carbon, diamond and graphite have very different structures and properties. The types of bonds in minerals can affect the properties and characteristics of minerals.
Pressure and temperature affect the structure of minerals. Temperature can determine which ions can form or remain stable enough to enter into chemical reactions. Olivine ((Fe,Mg)2SiO4), for example, is the only solid that will form at 1,800°C. According to olivine's formula, it must contain two atoms of either Fe or Mg. Olivine is built by the ionic substitution of Fe and Mg (the atoms are interchangeable because they have the same electrical charge and are of similar size), and thus olivine exists as a range of elemental compositions termed a solid solution series. Olivine can thus be said to be rich in iron or rich in magnesium. As magma cools, larger atoms such as potassium ions enter into reactions and additional minerals form.
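The solid-solution idea can be made concrete: because Fe and Mg substitute freely in the olivine lattice, a bulk property such as molar mass varies smoothly between the magnesium end-member (forsterite) and the iron end-member (fayalite). A small sketch (atomic masses are standard reference values):

```python
# Molar mass across the olivine (Fe,Mg)2SiO4 solid solution series.
# Standard atomic masses in g/mol.
ATOMIC_MASS = {"Mg": 24.305, "Fe": 55.845, "Si": 28.086, "O": 15.999}

def olivine_molar_mass(fe_fraction):
    """Molar mass (g/mol) of (Fe_x Mg_(1-x))2 SiO4 for x = fe_fraction."""
    m_cation = (fe_fraction * ATOMIC_MASS["Fe"]
                + (1 - fe_fraction) * ATOMIC_MASS["Mg"])
    return 2 * m_cation + ATOMIC_MASS["Si"] + 4 * ATOMIC_MASS["O"]

print(olivine_molar_mass(0.0))  # forsterite (Mg2SiO4), ~140.7 g/mol
print(olivine_molar_mass(1.0))  # fayalite (Fe2SiO4), ~203.8 g/mol
```

Any intermediate composition falls linearly between these two values, which is exactly what "a range of elemental compositions" in a solid solution series means.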
The determination of the chemical composition of rocks involves the crushing and breakdown of rocks until they are in small enough pieces that decomposition by hot acids (hydrofluoric, nitric, hydrochloric, and perchloric acids) allows the elements present to enter into solution for analysis. Other techniques involve the high-temperature fusion of powdered inorganic reagent (flux) and the rock. After melting the sample, techniques such as X-ray fluorescence spectrometry may be used to determine which elements are present.
Chemical and mechanical weathering break down rock through natural processes. Chemical weathering of rock requires water and air. The basic chemical reactions in the weathering process include solution (disrupted ionic bonds), hydration, hydrolysis, and oxidation.
The geochemistry involved in many environmental issues has become an increasingly important aspect of scientific and political debate. The effects of acid rain are of great concern to geologists, not only for the potential damage to the biosphere but also because acid rain accelerates the weathering process. Rainwater is made acidic as it passes through the atmosphere. Although rain becomes naturally acidic as it contacts nitrogen, oxygen, and carbon dioxide in the atmosphere, many industrial pollutants bring about reactions that raise the acidity of rainwater to dangerous levels. Increased levels of carbon dioxide from industrial pollution can increase the formation of carbonic acid, making the rain still more acidic. Precipitation of this "acid rain" adversely affects both geological and biological systems.
According to plate tectonic theory, the crust (lithosphere  ) of Earth is divided into shifting plates. Geochemical analysis of Earth's tectonic plates reveals a continental crust that is older,



Source: http://www.encyclopedia.com/topic/geochemistry.aspx

Global Warming











Understanding the causes of and responses to global warming requires interdisciplinary cooperation between social and natural scientists. The theory behind global warming has been understood by climatologists since at least the 1980s, but only in the new millennium, with an apparent tipping point in 2005, has the mounting empirical evidence convinced most doubters, politicians, and the general public as well as growing sections of business that global warming caused by human action is occurring.

DEFINITION OF GLOBAL WARMING

Global warming is understood to result from an overall, long-term increase in the retention of the sun's heat around Earth due to blanketing by greenhouse gases, especially CO2 and methane. Emissions of CO2 have been rising at a speed unprecedented in human history, due to accelerating fossil fuel burning that began in the Industrial Revolution.
The effects of the resulting climate change are uneven and can even produce localized cooling (if warm currents change direction). The climate change may also initiate positive feedback in which the initial impact is further enhanced by its own effects, for example if melting ice reduces the reflective properties of white surfaces (the albedo effect) or if melting tundra releases frozen methane, leading to further warming. Debate continues about which manifestations are due to long-term climate change and which to normal climate variability.

SPEEDING UP THE PROCESS

Global warming involves an unprecedented speeding up of the rate of change in natural processes, which now converges with the (previously much faster) rate of change in human societies, leading to a crisis of adaptation. Most authoritative scientific bodies predict that on present trends a point of no return could come within ten years, and that the world needs to cut emissions by 50 percent by the mid twenty-first century.
It was natural scientists who first discovered and raised global warming as a political problem. This makes many of the global warming concerns unique. "Science becomes the author of issues that dominate the political agenda and become the sources of political conflict" (Stehr 2001, p. 85). Perhaps for this reason, many social scientists, particularly sociologists, wary of trusting the truth claims of natural science but knowing themselves lacking the expertise to judge their validity, have avoided saying much about global warming and its possible consequences. Even sociologists such as Ulrich Beck and Anthony Giddens, who see risk as a key attribute of advanced modernity, have said little about climate change.
For practical purposes, it can no longer be assumed that nature is a stable, well understood, background constant and thus social scientists do not need direct knowledge about its changes. Any discussion of likely social, economic, and political futures will have to heed what natural scientists say about the likely impacts of climate change.

GROWING EVIDENCE OF GLOBAL WARMING

While originally eccentric, global warming was placed firmly on the agenda in 1985, at a conference in Austria of eighty-nine climate researchers participating as individuals from twenty-three countries. The researchers forecast substantial warming, unambiguously attributable to human activities.
Since that conference the researchers' position has guided targeted empirical research, leading to supporting (and increasingly dire) evidence, resolving anomalies, and winning near-unanimous peer endorsement. Skeptics have been confounded and reduced to a handful, some discredited by revelations of dubious funding from fossil fuel industries.
Just before the end of the twentieth century, American researchers released ice-thickness data, gathered by nuclear submarines. The data showed that over the previous forty years the ice depth in all regions of the Arctic Ocean had declined by approximately 40 percent.
Five-yearly aerial photographs show the ice cover on the Arctic Ocean at a record low, with a loss of 50 cubic kilometers annually and glacier retreat doubling to 12 kilometers a year. In September 2005 the National Aeronautics and Space Administration (NASA) doubled its estimates of the volume of melted fresh water flowing into the North Atlantic, reducing salinity and thus potentially threatening the conveyor that drives the Gulf Stream. Temperate mussels have been found in Arctic waters, and news broadcasts in 2005 and 2006 have repeatedly shown scenes of Inuit and polar bears (recently listed as endangered) cut off from their hunting grounds as the ice bridges melt.
In 2001 the Intergovernmental Panel on Climate Change (IPCC), the United Nations scientific panel on climate change, had predicted that Antarctica would not contribute significantly to sea level rise this century. The massive west Antarctic ice sheet was assumed to be stable. However, in June 2005 a British Antarctic survey reported measurements of the glaciers on this ice sheet shrinking. In October 2005 glaciologists reported that the edges of the Antarctic ice sheets were crumbling at an unprecedented rate and, in one area, glaciers were discharging ice three times faster than a decade earlier.
In 2005 an eight-year European study drilling Antarctic ice cores to measure the past composition of the atmosphere reported that CO2 levels were at least 30 percent higher than at any time in the last 65,000 years. The speed of the rise in CO2 was unprecedented, from 280 parts per million (ppm) before the Industrial Revolution to 388 ppm in 2006. Early in 2007 the Norwegian Polar Institute reported acceleration to a new level of 390 ppm. In January 2006 a British Antarctic survey, analyzing CO2 in crevasse ice in the Antarctic Peninsula, found levels of CO2 higher than at any time in the previous 800,000 years.
In April 2005 a NASA Goddard Institute oceanic study reported that the earth was holding on to more solar energy than it was emitting into space. The Institute's director said: "This energy imbalance is the smoking gun that we have been looking for" (Columbia 2005).
The second IPCC report in 1996 had predicted a maximum temperature rise of 3.5 degrees Fahrenheit by the end of the twenty-first century. The third report, in 2001, predicted a maximum rise of 5.8 degrees Fahrenheit by the end of the twenty-first century. In October 2006 Austrian glaciologists reported in Geophysical Research Letters (Kaser et al.) that almost all the world's glaciers had been shrinking since the 1940s, and the shrinking rate had increased since 2001. None of the glaciers (contrary to skeptics) was growing. Melting glaciers could pose threats to the water supply of major South American cities, a change already manifest in the appearance of many new lakes in Bhutan.
In January 2007 global average land and sea temperatures were the highest ever recorded for this month; in February 2007 the IPCC Fourth Report, expressing greater certainty and worse fears than the previous one, made headlines around the world. In 1995 few scientists believed the effects of global warming were already manifest, but by 2005 few scientists doubted it and in 2007 few politicians were willing to appear skeptical.
Although rising temperatures; melting tundra, ice and glaciers; droughts; extreme storms; stressed coral reefs; changing geographical range of plants, animals, and diseases; and sinking atolls may conceivably all be results of many temporary climate variations, their cumulative impact is hard to refute.

ANOMALIES AND REFUTATIONS

The science of global warming has progressed through tackling anomalies cited by skeptics. Critics of global warming made attempts to discredit the methodology of climatologist Michael Mann's famous "hockey stick" graph (first published in Nature in 1998). Mann's graph showed average global temperatures over the last 1,000 years, with little variation for the first 900 and a sharp rise in the last century. After more than a dozen replication studies, some using different statistical techniques and different combinations of proxy records (indirect measures of past temperatures such as ice cores or tree rings), Mann's results were vindicated. A report in 2006 by the U.S. National Academy of Sciences, National Research Council, supported much of Mann's image of global warming history. There is sufficient evidence from the tree rings, boreholes, retreating glaciers, and other proxies of past surface temperatures to say with a high level of confidence that the last few decades of the twentieth century were warmer than any comparable period for the last 400 years. For periods before 1600, the 2006 report found there was not enough reliable data to be sure, but the committee found the Mann team's conclusion that warming in the last few decades of the twentieth century was unprecedented over the last 1,000 years to be plausible (National Academy of Sciences press release 2006).
Measurements from satellites and balloons in the lower troposphere had until recently indicated cooling, which contradicted measurements from the surface and the upper troposphere. In August 2005 a publication in Science of the findings of three independent studies described their measurements as "nails in the coffin" of the skeptics' case. These showed that faulty data, which failed to allow for satellite drift, lay behind the apparent anomaly.
Another anomaly was that observed temperature rises were in fact less than the modelling of CO2 impacts predicted. This is now explained by evidence on the temporary masking properties of aerosols, from rising pollution and a cyclical upward swing of volcanic eruptions since 1960.
Critics of global warming have been disarmed and discredited. Media investigations and social research have increasingly highlighted the industry funding of skeptics and their think tanks, and the political pressures on government scientists to keep silent. Estimates of the catastrophic costs of action on emissions have also been contradicted most dramatically by the British Stern Report in October 2006. Many companies have been abandoning the skeptical business coalitions. The Australian Business Round Table on Climate Change estimated in 2005 that the cost to gross domestic product of strong early action would be minimal and would create jobs.

SCIENTIFIC CONSENSUS

In May 2001 sixteen of the world's national academies of science issued a statement confirming that the IPCC should be seen as the world's most reliable source of scientific information on climate change, endorsing its conclusions and stating that doubts about the conclusions were not justified.
In July 2005 the heads of eleven influential national science academies (from Brazil, Canada, China, France, Germany, India, Italy, Japan, Russia, the United Kingdom, and the United States) wrote to the G8 leaders warning that global climate change was "a clear and increasing threat" and that they must act immediately. They outlined strong and long-term evidence "from direct measurements of rising surface air temperatures and subsurface ocean temperatures and from phenomena such as increases in average global sea levels, retreating glaciers and changes to many physical and biological systems" (Joint Science Academies Statement 2005).
There are many unknowns regarding global warming, particularly those dependent on human choices; yet the consequences for society of either inadequate action or of any effective response (through reduced consumption or enforced and subsidized technological change) will be huge. It is likely, for example, that the practices and values of free markets, individualism, diversity, and choice will be significantly modified, either by economic and political breakdowns or, alternatively, by the radical measures needed to preempt them.

INADEQUATE ACTION AND NEEDED TRANSFORMATIONS

Kyoto targets are at best a useful first step. However, even these targets, which seek to peg back emissions to 1990 levels by 2010, are unlikely to be met. World CO2 emissions in 2004 continued to rise in all regions of the world, by another 4.5 percent, to a level 26 percent higher than in 1990. A rise of over 2 degrees is considered inevitable if CO2 concentrations pass 400 ppm. At current growing emission rates, the concentration would reach 700 ppm by the end of the twenty-first century. The continuing industrialization of China, recently joined by India, points to the possibility of even faster rises than these projections indicate.
If unpredictable, amplifying feedback loops are triggered, improbable catastrophes become more likely. The Gulf Stream flow could be halted, freezing Britain and Northern Europe. Droughts could wipe out the agriculture of Africa and Australia, as well as Asia, where millions depend on Himalayan meltwater and monsoon rains. If the ice caps melt completely over the next centuries, seas could rise by 7 meters, devastating all coastal cities. Will the human response to widespread ecological disasters give rise to solidarity and collective action, such as the aid that came after the 2004 Asian tsunami, or to social breakdowns, as seen in New Orleans after 2005's Hurricane Katrina and in the Rwandan genocide?
Social and technical changes of the scale and speed required are not unprecedented. The displacement of horsepower by automobiles, for example, was meteoric: production of vehicles in the United States increased from 8,000 in 1900 to nearly a million by 1912. Substantial regulation or differential taxation and subsidies would be indispensable to overcome short-term profit motives and free-riding dilemmas (where some evade their share of the cost of collective goods from which they benefit). Gains in auto efficiency in the 1980s, for example, were rapidly reversed by a new fashion for sport utility vehicles.
The debates that have emerged in the early twenty-first century have been related to responses, with different winners and losers, costs, benefits, dangers, and time scales for each response. Advocates of reduced energy consumption or increased efficiency, or energy generation by solar, wind, tidal, hydro, biomass, geothermal, nuclear, or clean coal and geo-sequestration, argue often cacophonously. Yet it seems probable that all these options are needed.
It will be essential for social and natural scientists to learn to cooperate in understanding and preempting the potentially catastrophic collision of nature and society. In order to accomplish this, market mechanisms; technological innovation; international, national, and local regulations; and cultural change will all be needed. Agents of change include governments, nongovernmental organizations, and public opinion, but the most likely front-runner might be sectors of capital seeking profit by retooling the energy and transport systems, while able to mobilize political enforcement.

BIBLIOGRAPHY

Columbia University Earth Institute. 2005. Press release, April 28. http://www.earthinstitute.columbia.edu/news/2005/story04-28-05.html.
Cooper, Richard N., and Richard Layard. 2002. What the Future Holds: Insights from Social Science. Cambridge, MA: MIT Press.
Diamond, Jared. 2005. Collapse: How Societies Choose to Fail or Survive. Camberwell, U.K.: Penguin, Allen Lane.
Dunlap, Riley H., Frederick H. Buttel, Peter H. Dickens, and August Gijswijt, eds. 2002. Sociological Theory and the Environment: Classical Foundations, Contemporary Insights. Lanham, MD: Rowman and Littlefield.
Flannery, Tim. 2006. The Weather Makers. Berkeley, CA: Grove Atlantic.
Kaser, G., et al. 2006. Mass Balance of Glaciers and Ice Caps: Consensus Estimates for 1961–2004. Geophysical Research Letters 33.
Leggett, Jeremy. 2000. The Carbon War: Global Warming and the End of the Oil Era. New York: Routledge.
Leggett, Jeremy. 2005. Half Gone: Oil, Gas, Hot Air and the Global Energy Crisis. London: Portobello.
Monbiot, George. 2006. Heat: How to Stop the Planet Burning. London: Allen Lane.
National Academy of Sciences. 2006. Press release, June 22. http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=11676.
Stehr, Nico. 2001. Economy and Ecology in an Era of Knowledge-Based Economies. Current Sociology 49 (1): 67–90.
Zillman, John W. 2005. Uncertainty in the Science of Climate Change. In Uncertainty and Climate Change: The Challenge for Policy, Policy Paper 3. Canberra: Academy of the Social Sciences in Australia. http://www.assa.edu.au/publications/op/op22005.pdf.