Risk Assessment and Benefits (Nanotechnology)

INTRODUCTION

Risk assessment is the foundation of our capacity to evaluate economic, health, and ecological impacts. Yet a wave of scientific discovery is transforming our understanding not only of risk but also of the environments that sustain us. This will force basic changes in how we evaluate risk and decide whether to restrict new technologies.

Until we acknowledge such changes and incorporate them into our assessment of risk, it will not be possible for detractors or supporters of nanotechnologies to accurately evaluate their risks and benefits.

OVERVIEW

Some critics perceive the risks of nanomaterials as too great to proceed without further study.[1] The most often-cited example is the risk that out-of-control nanomachines, using carbon-based life forms as an energy source, will turn everything alive into "gray goo" as they multiply. Technology luminaries such as Eric Drexler and Robert Freitas,[a] Ray Kurzweil and Bill Joy,[b] along with science fiction writers such as Michael Crichton,[c] have explored such potential threats. Related debates have often spilled over into the legislative arena as governments have limited the development of biotechnologies such as stem cell research,[6] while initiating investigations into ways of restricting nanotechnology.[7]

These debates resemble those of the early 1950s, when it was postulated that nuclear proliferation would spiral out of control and contaminate the whole world. As with the nuclear argument, critics say that this new generation of technologies may be too volatile to control. Similar arguments were made about the machine gun at the turn of the 20th century. As each major new technology arrives, fears are expressed that it may annihilate humanity.


Many scientists disagree with these arguments, saying that the potential benefits outweigh the potential risks.[d] They point to near-term benefits such as nanoscale methods that can detect and target diseases far more precisely and less invasively than we can today. Vast improvements in energy efficiency created by a new generation of solar cells may also solve our energy supply problems, they argue. They note further that while the destructive power of technologies has grown enormously, this has not so far hindered the expansion of civilization on Earth.

So we are faced with a familiar quandary: accept the risks to gain the benefits, or eliminate the risks and forgo the benefits?

Principles for controlling powerful technologies, so that we can manage their risks while reaping their benefits, have been around for some time. As early as 1950, the well-known science fiction writer Isaac Asimov put forward his "Laws of Robotics."[9,10] Organizations that specialize in nanotechnology, such as the Foresight Institute, have developed principles for managing nanotechnology risks.[11] Still others, such as the Center for Responsible Nanotechnology, have drafted ethical guidelines to help cope with disruptive economic and social impacts.[12]

These works are each helpful, and together they may constitute the beginnings of a regulatory framework for administering technology risks. Yet even taken together they remain incomplete.

This article briefly describes three considerations that could transform the debate: technologies that merge with the ecology, enhanced intelligence, and punctuated equilibrium. It then shows how these discoveries may completely change the regulatory paradigm. This entry is necessarily limited in space; I encourage the interested reader to consult my book-length work Our Molecular Future[13] and the reference list for further details and information.

BACKGROUND

Some scientists have begun to describe a point known as the "Singularity," where the rate of technology convergence makes it impossible for human beings to accurately forecast the near future. If this point is approaching, then all present discussions among environmentalists, scientists, and government regulators over how to regulate nanotechnologies may soon become moot, because the process will be out of our hands regardless of what we do, unless we impose a draconian ban on all new technology development. Given our history, such a ban seems improbable; therefore we must consider the implications of exponentially accelerating technologies.

The ideas that Homo sapiens—as we are presently constituted—won't be able to control our own destiny, and that some form of development we can't yet comprehend may take over from present paradigms, seem so fatalistic that many people dare not consider them. They confound the basic human tenet of belief in the future. Yet with technology progressing rapidly, and with the particularly accelerating advances in nanotechnologies described throughout this topic, we must consider such a possibility as a starting point in the discussion over how to regulate the risks posed by new technologies. Nor need we be fatalistic about it, because there are avenues by which we can participate in the accelerated evolution that has begun to occur.

Just as the atomic bomb transformed the concepts of security and war, so advanced technologies are already upending conventional notions of evolution. For example, it has become clear that computers have started to solve problems in ways that their human designers do not comprehend. Chess grandmaster Garry Kasparov acknowledged this reality years ago when, after being defeated by a computer at his own game, he said that he had lost to "an alien."[14] This was rapidly followed by the development of genetic computing, in which software using genetic algorithms designed circuits in ways that human designers couldn't fully comprehend.[15]

Why is this so relevant right now to the discussion over regulation of nanotechnology?

The extrapolation to be drawn from this is that because some computers already exceed human intelligence in limited areas, nanotechnology-enabled artificial intelligence will increasingly supersede our own. Given that it would be extremely difficult to regulate something smarter than a human, we are again confronted with the possibility that the regulatory discussion is moot.

However, such a viewpoint—and the fears expressed about "runaway" technology—overlooks one central development. Human intelligence and machine intelligence are beginning to merge. As they do, the possibilities for anticipating and regulating the development of further technologies take on a new light.

The perceptual problem we face today is that most discussions over regulation of nanotechnologies assume that while technology continues to evolve rapidly, Homo sapiens will somehow continue to evolve as we have for millennia: slowly and biologically. If that is true, then we can stop the discussion over regulation, because the rate of technological evolution already exceeds the rate of biological evolution and will render Homo sapiens intelligence obsolete, or at least inferior.

However, this article looks at another possibility: that evolution of human intelligence is about to accelerate past our biological limitations into another realm. If so, this will transform the regulatory landscape.

Furthermore, our growing understanding of the natural environment is showing us that we may have no choice but to proceed, because history demonstrates that sooner or later nature will create conditions that make our existence on Earth difficult or untenable unless we take measures to protect ourselves.

NEW FACTORS THAT MAY TRANSFORM RISK ASSESSMENT

Using life cycle assessment methodologies,[16-18] combined with an examination of new technological developments and of new discoveries about the natural environment, the following emerging theories and technologies can be identified as having the potential to profoundly transform the present paradigm of technology regulation.

Technologies That Merge with the Natural Environment

Ray Kurzweil, who pioneered technologies such as the flatbed scanner, argues that technology is a continuation of evolution by other means.[19] This implies that our technologies are becoming an integral part of the ecology. What are the physical manifestations of this?

Smart Dust[20] comprises a massive array of micromachines made of nanoscale components that ride on air or water currents, undetectable to the human eye. Each expendable machine can carry a camera, a communications device, and various sensors for chemicals, temperature, and sound, along with its own rechargeable energy source. It can serve as the eyes, ears, nose, and guidance mechanism for everyone from soldiers to hurricane watchers. It may soon cost a fraction of a penny to manufacture, and prototypes exist today. Deployed as part of a massive array, it delivers information to one or hundreds of computers in one or many locations. Such devices may soon number in the trillions in our environment, delivering information about everything from troop movements to sewage flows.

This nanoscale level of incursion into—and integration with—the ecology suggests the emergence of an intelligent environment. Just as the natural environment exercises its own type of intelligence by passing information from generation to generation via DNA, so we are creating an intelligent human-built environment not just alongside it, but as part of it. An intelligent environment has elements able to sense virtually every part of the ecology, from the epicenter of an earthquake to the heart of a hurricane and the heartbeat of every species, then interpret what this means and how to react. Right now we are only at the very first primitive stages of this, but our sensing capacities are accelerating.

Such intelligent particles are also gaining the capacity to self-assemble. Several universities have pioneered self-assembling photovoltaic materials that generate and conduct an electric current (see the entry on "Photovoltaics for the Next Generation: Organic-Based Solar Cells").[21] These materials can be painted onto surfaces, eliminating the need for solar panels. Such chemical self-assembly is only a primitive precursor to the molecular assembly described in other entries in this topic.

When we combine self-assembly with intelligent sensing at the nanometer scale, then multiply it a trillion-fold, we see that our technology is becoming an integral part of the ecology instead of just impacting it, and that human technologies may soon be indistinguishable from the natural environment. This is a profound transition.

Furthermore, such pervasive intelligence is developing not only outside the human brain but also in deep contact with it.

Enhanced Intelligence Changes the Ground Rules

Hans Moravec, of Carnegie Mellon's Robotics Institute, has shown convincingly—as have others—that the rate of acceleration in information processing is exponential.[22] Not only is the capacity to process ones and zeros multiplying, but the rate at which it multiplies is also increasing.

For millennia, this exponential rate was barely perceptible, because it took thousands, then hundreds, then tens of years for such capacity to multiply, from the abacus to the microprocessor, and now the nanoprocessor.
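The difference between steady and accelerating doubling can be illustrated with a toy calculation (the numbers below are hypothetical, not Moravec's data): plain exponential growth doubles on a fixed schedule, whereas accelerating growth doubles on a schedule that itself keeps shrinking, so the same thousand-fold gain arrives years sooner.

```python
# Toy comparison: fixed doubling time vs. a doubling time that itself
# shrinks by 10% per doubling (illustrative figures only).
fixed_capacity = 1.0
accel_capacity = 1.0
fixed_time = 0.0
accel_time = 0.0
doubling = 1.0          # years per doubling, initially

for _ in range(10):     # ten doublings for each scenario
    fixed_capacity *= 2
    fixed_time += 1.0   # fixed schedule: one year per doubling
    accel_capacity *= 2
    accel_time += doubling
    doubling *= 0.9     # the doubling interval keeps shrinking

print(f"fixed:       x{fixed_capacity:.0f} after {fixed_time:.1f} years")
print(f"accelerated: x{accel_capacity:.0f} after {accel_time:.1f} years")
```

Both scenarios reach a 1024-fold increase, but the accelerating one gets there in roughly 6.5 years instead of 10; extend the loop and the gap widens without bound.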

Today, this exponential acceleration enables superfast manufacturing by machines and software. One example is desktop manufacturing, which is transforming desktop printing into three-dimensional fabrication of products.[23]

Such hyperchange is upending the ground rules for intelligence, and by extension for environmental risk management.

Most risk assessment today implicitly assumes that the evolution of human intelligence will proceed as it has over the past few thousand years—that is, gradually.

Here are examples of why this assumption may be wrong.

In 2001, a computer used "genetic computing"[24] to design a thermostat and actuator superior to the counterparts designed by a human. The computer's programmers were unable to trace how it reached its conclusion, because genetic algorithms allow computers to solve problems in their own way, without human intervention.
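The mechanism can be sketched in a few lines. The following is a minimal, hypothetical genetic-algorithm example (not the design system cited above): it evolves a population of bit strings toward a simple all-ones target, and at no point does a human specify the solution—only the fitness measure and the variation operators.

```python
# Minimal genetic-algorithm sketch (illustrative only). Solutions emerge
# from selection, crossover, and mutation rather than explicit design.
import random

random.seed(42)

GENOME_LEN = 32
POP_SIZE = 60
GENERATIONS = 80
MUTATION_RATE = 0.01

def fitness(genome):
    # Fitness is simply the number of 1 bits ("OneMax").
    return sum(genome)

def tournament(pop):
    # Selection: pick the fitter of two random individuals.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover combines two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Each bit flips independently with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population),
                                   tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", GENOME_LEN)
```

The population converges on near-perfect genomes without any line of code describing what a good genome looks like, which is precisely why the evolved result can be hard for its programmers to explain.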

Machines with enhanced intelligence do certain things far faster and better than we do—not everything, but many things. Stockbrokers now use algorithms that forecast commodity markets more accurately than humans do.[25] Satellites that repair themselves and make unilateral data transmission decisions are already in orbit.[26]

Moreover, massive networks are enhancing our own intelligence. It is now possible for the layperson to perform Internet searches in real time to get answers to complex questions. This acceleration in data retrieval by the general population constitutes a mass enhancement of our own memories.

At a more specialized level, remote robotic surgery is creating a networked medical "mind" that can perform operations in and from many locations at once.[27]

The extraordinary development is that human intelligence and primitive forms of machine "intelligence" are already merging. This is apparent in artificial retinas for the blind, in which computer microchips implanted in the eye are connected to the optic nerve to interpret and relay visual information to the brain. The merging of human intelligence with genetic algorithms and massive networks is being applied to modeling of, for example, climate change, but it has only just begun to be applied to the evaluation of phenomena such as those described below.

Understanding Punctuated Equilibrium

The theory of punctuated equilibrium[28] was first proposed in 1972 by Niles Eldredge and Harvard evolutionary biologist Stephen Jay Gould.[41] It holds that evolutionary change occurs relatively rapidly during comparatively brief periods of environmental stress, separated by longer periods of evolutionary stability. After many years of skepticism, the theory is gaining acceptance as supporting evidence emerges.

Intelligent tools, such as those described earlier, are helping us to discover that the natural ecology experiences periodic instability that threatens our society—not only on the extended time frames we once assumed.

In 1994, Comet Shoemaker-Levy 9 (SL9) hit Jupiter,[29] blasting holes the size of Earth in its atmosphere. Had it hit Earth, human life would have been virtually extinguished. Only recently have we developed the tools to see such distant impacts; until such technologies were invented, we could only theorize about how often catastrophic collisions occur.

Before that, it was thought that such upheavals happened only every few million years and that we would have plenty of time to see them coming. SL9 demolished this idea, demonstrating that we live in a galaxy where life can be snuffed out on a planetary scale without warning—in this era, not just the distant past.

Furthermore, scientists have found that smaller events have upset the ecology here on Earth. Ice core and tree ring records show that around the year A.D. 536 an unknown event triggered a catastrophic cooling of the Northern Hemisphere, resulting in years without summers that led to wholesale crop failures and starvation.[30]

Thousands of samples taken from ice cores and tree rings around the world show that naturally induced climate flips occur more frequently than we once thought, and that they don't only unfold over centuries but can also erupt within a few years.[31,32]

At the regional scale, in 1700 a rupture in the Cascadia subduction zone produced a gigantic tsunami that scoured much of the Pacific coast for miles inland, where many of our cities now stand.[33] In 1958, a 1,500-foot wave swept away a forest after a mountain collapsed into Lituya Bay, Alaska.[34]

At the nanometer scale we are also in for a surprise. Researchers have discovered vast numbers of nanoscale organisms a hundred times smaller than most bacteria. In geology they are named nanobacteria,[35] nanobes, and nanoarchaea.[36] In human ecology, a similar-sized entity has been labeled Nanobacterium sanguineum, or blood nanobacteria.[37] Despite the name, it may not be a bacterium at all; instead it seems to be a newly discovered infectious agent with the unusual ability to form a tough shell of the same type of calcium found in many diseased tissues. For decades, researchers have seen evidence that epidemic illnesses such as heart disease are triggered by infection.[38] This was proven for stomach ulcers decades ago, but for other illnesses no one could find a culprit. Now it seems that one has been discovered,[39] as chronicled in the book Has Heart Disease Been Cured?[40]

If the existence of such organisms is confirmed once the heated debate over them is resolved, it may fundamentally alter our understanding of how ecology works, what constitutes an ecosystem, and how epidemics decimate populations.

The reality that has been overlooked by environmental agencies and theorists is that many of these nano- and macro-scale phenomena pose deep threats to our society. Agencies such as the United States Federal Emergency Management Agency (FEMA) and the United States Environmental Protection Agency (EPA) have few defenses against them. Such agencies usually do not consider how to adapt to climate-altering supervolcanoes or epidemics of strange nano-organisms, because these are perceived as indefensible or have not yet entered the organizations' awareness.

Thus, punctuated equilibrium is not part of the risk assessment framework, and a chunk of the equation is missing. This is especially true when considering the relative risks and benefits posed by nanotechnologies. Such technologies may be driving the next "punctuation" in evolution by upending longstanding paradigms. At the same time, they may give us the tools to protect ourselves from newly discovered natural threats. Such is the contradictory reality of the double-edged sword.

MATCH NATURE’S COMPLEXITY

The convergence of these discoveries may let us achieve something that until now we have only dreamed of: matching nature's complexity.

Right now, most of our technologies are unable to match the complexity of natural environments. For example, we use antibiotics to cure bacterial infections, but they lose their potency as the environments in which they work adapt to them. We build power lines to survive ice storms, but miscalculating worst-case scenarios leads to collapses that paralyze our high-technology infrastructures.

Most of our agricultural, medical, energy, transportation, and housing systems are in a constant struggle to respond to the complexity of the natural environment.

Yet this imbalance may shift. Molecular technologies are empowering us to find solutions that replicate natural processes at the molecular level (for examples, see the entries on "Biomedical Applications: Tissue Engineering, Therapeutic Devices, and Diagnostic Systems" and "Nanomaterials: New Trends"). We may see energy grids based on solar "paint" that slash the political and economic risks associated with fossil fuel infrastructures. Our drugs may become so precise that they backfire only occasionally, instead of generating widespread immune responses as they do now.

This nascent capacity to match nature's complexity constitutes the next environmental revolution. Over the centuries, efforts to replicate natural processes have been criticized as arrogant and unachievable, and today they are sparking political and religious furor. Nonetheless, they may soon force us to redefine the boundaries of risk assessment.

WHAT TO DO

These new realities—enhanced intelligence, technologies that merge with the environment, and newly understood evolutionary paradigms—are the elephants in the room of risk assessment. To cope with them, we must initiate a new regulatory discussion. We must first acknowledge that the yardsticks for measuring risk are being moved dramatically by our own rapidly expanding knowledge.

Just a few small examples: Although nanoscale organisms have been identified in geological formations and the human body since the early 1990s, few projects have examined the implications for human or natural ecology or for environmental chemistry. NASA is studying them, as are the universities of Texas, McGill (Canada), Regensburg (Germany), Kuopio (Finland), Melbourne (Australia), and others. However, no major government initiative is considering the implications. At the opposite end of the scale, few if any governmental, environmental, or disaster preparedness agencies are examining newly discovered mega-scale anomalies such as the naturally induced climate flip of circa A.D. 536 or the giant west coast tsunami of 1700. These would certainly disrupt natural ecosystems and civilized society if they recurred today, and evidence suggests that they may.

No disaster preparedness or environmental agency yet examines how nanotechnologies might be used to adapt to such phenomena, and no initiatives to do so are apparent at this time. To rectify this, the author has suggested that a forum be held under the auspices of one or more of the nanotechnology nongovernmental organizations, such as the Foresight Institute or the Center for Responsible Nanotechnology, to examine these issues.

Examples of technologies that might help us to adapt to "nature's time bombs," and to explosive risks that may be posed by nanotechnologies themselves, include:

• Artificially intelligent software that is transforming the way we make products and conduct business. The role that artificial and enhanced intelligence will play in risk assessment is so far understudied and overlooked, yet it takes us to the heart of the issue of human intelligence evolving relative to thousands of years of biological evolution. Likewise, technologies that are merging with the human body and mind, such as artificial retinas and other implants, merit far greater attention, as they are stepping stones to the development of Homo sapiens with enhanced evaluative capacities.

• Superstrong nanostructured materials, such as aerogels, that exist now and may let human settlements withstand mega-hurricanes, earthquakes, and tornadoes without causing more environmental damage than they prevent. Furthermore, the self-assembling and disassembling properties of newer materials may protect us from more serious near-earth object threats that are now considered impossible to defend against, and that have been badly misjudged as too infrequent to worry about.

• ”Desktop manufacturing” that may replace thousands of polluting factories while producing materials such as self-assembling solar materials that may stabilize our energy supplies.

• Nanomedicine that is opening the doors to new solutions for many prevalent diseases, and that may stop epidemics that have retarded human progress for millennia.

• Many other technologies cited throughout this topic, which constitute an excellent starting point.

Of equal importance is the process used to evaluate them. By expanding the interdisciplinary approach to technology, it is both possible and necessary to bring together experts from fields that are rarely combined. These include:

• Computer scientists who have applied artificially intelligent software to technologies that might be used for environmental adaptation and risk assessment.

• Biochemists, geologists, and physicians who discovered nanobacteria in the environment and human body, and also developed treatments that seem to reverse nanobacterial infections.

• Climatologists, geologists, and astronomers who uncovered evidence of recurring climate disruptions, giant tsunamis, and near-earth object collisions.

• Scientists who have developed adaptive technologies such as desktop manufacturing and self-assembling photovoltaic materials that may let us adapt rapidly to big ecological changes.

• Critics who have proposed moratoria on nano-manufacturing.

CONCLUSION

By focusing such wide-ranging expertise on the challenges depicted in this article, risk assessment could be made into a more effective tool for proponents, detractors, and users of advanced technologies.
