emergence: cross posts

greenspun.com : LUSENET : Human-Machine Assimilation : One Thread

You boys need to hop over and visit. The concept is emerging into the broad collective awareness:

OT: New Technologies Imperil Humanity - U.S. Scientist
greenspun.com : LUSENET : TB2K spinoff uncensored : One Thread

[Fair Use: For Education and Research Only] New Technologies Imperil Humanity - U.S. Scientist

Story Filed: Monday, March 13, 2000 4:36 PM EST

SAN FRANCISCO (Reuters) - The co-founder of one of Silicon Valley's top technology companies believes scientific advances may be ushering humanity into a nightmare world where supersmart machines force mankind into extinction.

In a heartfelt appeal published in the April issue of Wired magazine, Sun Microsystems Inc. chief scientist Bill Joy urges technologists to reconsider the ethics of the drive toward constant scientific innovation.

``We are being propelled into this new century with no plan, no control, no brakes,'' Joy writes. ``The last chance to assert control -- the fail-safe point -- is rapidly approaching.''

Joy's article comes as a rare cry of caution in an industry that thrives on relentless and often unplanned advances and is now riding the boom of a ``new economy'' expansion attributed to technological progress.

The warning is all the more disturbing because of the author's own impressive tech credentials. A leading computer researcher who developed an early version of the Unix operating system, Joy has more recently pioneered the development of software technologies like Java and was co-chairman of a presidential commission on the future of information technology.

Joy's fears focus on three areas of technology undergoing incredibly rapid change.


The first, robotics, involves the development of ``thinking'' computers that within three decades could be as much as a million times more powerful than those currently available. Joy sees this as laying the groundwork for a ``robot species'' of intelligent machines that create evolved copies of themselves.

The second, genetics, deals with scientific breakthroughs in manipulating the very structure of biological life. While Joy says this has led to benefits such as pest-resistant crops, it also has set the stage for new, man-made plagues that could literally wipe out the natural world.

The third, nanotechnology, involves the creation of objects on an atom-by-atom basis, which before long could be harnessed to create smart machines that are microscopically small.

All three of these technologies share one characteristic absent in earlier dangerous human inventions such as the atomic bomb: They could easily replicate themselves, creating a cascade effect that could sweep through the physical world in much the same way that a computer virus spreads through the cyberworld.

``It is no exaggeration to say we are on the cusp of the further perfection of extreme evil,'' Joy writes. ``An evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.''


Joy says his new, darker vision of the potential threat to humanity posed by technology -- one he notes is shared in part by convicted Unabomber Theodore Kaczynski -- has led him to reconsider his own contributions to the field.

``I have always believed that making software more reliable, given its many uses, will make the world a safer place,'' Joy writes. ``If I were to come to believe the opposite, then I would be morally obligated to stop this work. I can now imagine such a day may come.''

Joy does hold out some hope, saying humanity's effort to control the threat of nuclear and biological weapons was evidence of the strength of the species' self-preservation instinct.

But he urges a wider dialogue on the implications of new technological advances and specifically asks that they be incorporated into the program at the annual Pugwash Conferences, which began in 1957 as a forum for scientists to discuss the threat posed by nuclear weapons.

``The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which the process can take on a life of its own,'' Joy says.

``We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking upfront if we are not to be similarly surprised and shocked by the consequences of our inventions.''

Copyright © 2000 Reuters Limited. All rights reserved.


Portions of above Copyright © 1997-2000, Northern Light Technology Inc. All rights reserved.

-- (Dee360Degree@aol.com), March 13, 2000

Answers

Another article on the subject. Wonder if he's going into the freeze-dried food business?

-- Mikey2k (mikey2k@he.wont.eat.it), March 13, 2000.


Any computer wants to take me over and make my humanity obsolete is gonna have to kick my ass!

-- INever (inever@dot.com), March 13, 2000.


Here is an excerpt from an article entitled "Conflict and Defense in an Age of Nanotechnology," which seems to reiterate the dangers Mr. Joy refers to:

Introduction

Nanotechnology is an idea which is not yet a physical technology, but is likely to exist within a few decades. It is based on two premises: (1) that humans will learn to build things out of individually positioned atoms (as nature does when making proteins) and (2) that we will achieve vast economies of scale by building atomically precise structures that self-replicate (as nature does when making cells). Once humanity has nanotechnology in a mature form, the only costs for making almost any physical object will be energy, raw materials, software, and time. Vastly more powerful computers than exist today will be sold cheaply by the bushel, like potatoes or beans.

When it arrives, in addition to its many benefits, nanotechnology will provide grave new dangers for terrorism, warfare, and totalitarianism. It's important to start thinking about all this before the real hardware arrives.

The Disproportion of Cause and Effect

A terrorist using sophisticated nanotechnology could pose a greater danger than a more conventional terrorist. He could design a rapidly communicable airborne virus with a 100% human mortality rate. He could use nanotechnology to build a delivery system that could spread the virus over the entire Earth. Variations are possible: the virus could instead be a self-replicating machine that destroys metal or plastic parts, rendering almost all conventional technology useless. This would ensure that nobody else had the requisite infrastructure to develop a counter-measure.

The potential threat from nanotechnological weapons could be greater than the threat of nuclear weapons: either is capable of destroying all life on Earth. The difference is that with self-replicating nanotechnological weapons, this could be accomplished by a single individual. In a world with six billion individuals, the chance that one person would attempt this and succeed is too dangerous to ignore.

What sets nanotechnology apart is that one lunatic might be able to kill everybody (for reasons that almost every victim would consider random or inconsequential). Howie Goodell has called this the disproportion of cause and effect, and it is a direct consequence of the plausibility of building nanotechnological weapons that are self-replicating.
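The arithmetic behind that disproportion is stark, and easy to sketch. All the numbers below are illustrative assumptions for the sake of the exercise (a bacterium-scale replicator mass, an order-of-magnitude figure for Earth's biomass, a guessed replication cycle time), not engineering estimates:

```python
import math

def doublings_to_mass(unit_mass_kg, target_mass_kg):
    """How many doublings until one replicator's lineage totals target_mass_kg."""
    return math.ceil(math.log2(target_mass_kg / unit_mass_kg))

# Illustrative, assumed numbers -- not engineering estimates:
replicator_mass = 1e-15   # kg, roughly bacterium-sized
earth_biomass   = 1e15    # kg, order-of-magnitude total biomass
doubling_time_s = 1000    # seconds per replication cycle (assumed)

n = doublings_to_mass(replicator_mass, earth_biomass)
hours = n * doubling_time_s / 3600
print(n, "doublings,", round(hours, 1), "hours")  # 100 doublings, 27.8 hours
```

The point of the toy calculation is only that unchecked exponential replication crosses any fixed threshold in a number of steps that grows logarithmically with the threshold, which is why a single release could in principle scale to a global event.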

In the recent popular movie 12 Monkeys, a lone biotechnology researcher used a few grams of an engineered virus and some airline tickets to wipe out almost all of humanity. If there were no protections in place, a suitably motivated lunatic could do this now. Nanotechnology could be viewed as a generalization of present-day genetic engineering, and as such, it lowers the "entrance requirement" for acquiring the destructive capabilities of a world superpower.

Here's the link where many other papers on nanotechnology can be found as well.

-- Celia Thaxter (celiathaxter@yahoo.com), March 13, 2000.


This is sincerely serious stuff. A key property is self-replication, as noted just after the middle of the article. And I'd add an adjective: _rapid_. Rapid self-replication. Rapid mutation and adaptation. Rapid on the scale of microseconds.

We biological creatures developed over long, long spans of time, and our species' resilience and resistance to biological foes are unsuited to the potential speed of attack posed by foreseeable combinations of robotics, genetic engineering, and nanotechnology. It's one thing to develop immunity to a new strain of Asian flu every year. It will be quite another to ward off the accidentally (or deliberately) released inventions of someone who decides to add microsecond-scale directed mutation to a batch of self-sufficient, self-replicating nanorobots originally designed to repair certain human body parts but which have gone askew.

-- No Spam Please (nos_pam_please@hotmail.com), March 14, 2000.


Has anyone else out there read the sci-fi novel "The White Plague" (by Frank Herbert, I think) from a couple of decades ago? The premise is that an American genetic engineer visiting England sees his wife and daughter killed by an Irish Republican Army car bomb, and decides to get revenge by developing a virus that is infectious to all, but fatal only to women. He distributes the virus by placing it on paper currency that he donates to IRA fundraisers.

The virus does the job in Ireland, all right, but of course soon spreads elsewhere.

-- No Spam Please (nos_pam_please@hotmail.com), March 14, 2000.


After reading Bill Joy's lengthy and deeply troubling essay, Why the Future Doesn't Need Us, I am suddenly humbled by all the small humane actions of old-fashioned life we take for granted, such as making a cup of tea, visiting a friend, or reading a book. Slow, cumbersome, worn-out products and procedures seem suddenly precious. I feel like Jimmy Stewart when he comes back to his old house at the end of It's a Wonderful Life and kisses the loose knob at the base of the stair rail that had previously so vexed him. I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-state, on to a surprising and terrible empowerment of extreme individuals.

The issues raised in this essay dwarf those floated about Y2k, yet society as a whole seems to be paying small heed to them. Does anyone else see the irony in this essay being released on the same day that the news matter-of-factly reports the cloning of five pigs?

-- Celia Thaxter (celiathaxter@yahoo.com), March 14, 2000.


Celia, my thoughts exactly. This is one of those topics that disturbs my peace of mind. I think I'll go have a cup of tea and forget I ever read this post.

-- gilda (jess@listbot.com), March 14, 2000.


Gilda, Did you read the whole article? It's definitely worth a read. It's not an "infomagic" sort of read -- it's deep and many-layered. The implications for the future are astounding. I think the question is, how close are we to really developing artificial intelligence anywhere on a par with human intelligence? How plausible are the kinds of self-replicating products one can foresee with nanotechnology? I think we still have quite a way to go to perfect the technologies.

Mr. Joy believes that it's no longer a question of if, but when. His essay is a warning that we must pause to consider potential accidents and human frailty before we apply robotics, genetic engineering, and nanotechnology within commercial enterprises.

-- Celia Thaxter (celiathaxter@yahoo.com), March 14, 2000.


Thank you everyone for your good input on this article and for the additional links.

-- (Dee360Degree@aol.com), March 14, 2000.


Joy's article was interesting but not really surprising to those who have been following this subject for some time. The genie is out of the bottle, so to speak, and I'm not sure we can create any effective measures to control it - I think the next 20-30 years are going to be quite interesting to watch unfold... Here are a few more links that some of you might find of interest:

Foresight Institute

Extropy Institute

Vernor Vinge on the Singularity

The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil


-- Jim Morris (prism@bevcomm.net), March 14, 2000.


Celia,

>how close are we to really developing artificial intelligence anywhere on a par with human intelligence?

If you mean how close are we to duplicating the processes of human intelligence by artificial means, the answer is: probably a long way away, because we don't even understand our own human intelligence yet, let alone how to artificially duplicate it.

But if one asks how close we are to creating artificial intelligence that can match the achievements of human intelligence, _without necessarily using the same processes_ to make those achievements, then I think the answer is that we have already started, and will continue, to do that, in small increments.

Chess has long been considered a game that requires intelligence to play well. Remember a few years ago when world chess champion Garry Kasparov played a match against the world's best chess-playing computer? The computer won.

That match was played at standard tournament speed, wherein each player had a total of 2 1/2 hours for his/its first 40 moves. Computers are even better, relative to humans, at faster "speed chess" where each player gets only 5 minutes for the entire game. At speed chess, computers have been beating the best human players for a few years already.

So in the very specialized niche of chess-playing intelligence, computers have already matched or exceeded human achievement.

But here's an important point -- one of the amazing things we (folks who have been following computer chess news for the last 30-40 years) have learned during the development of chess-playing computers is that it has been unnecessary to program the computers to duplicate the thought processes of the best human chess players. The best chess computers don't try to duplicate the way humans plan moves -- they take a so-called "brute force" approach that simply evaluates millions of possibilities per second. It has turned out that this approach, once it reaches a certain speed, overcomes the greater subtlety and higher efficiency of the human players' approach by just plain outracing it.
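For anyone curious what "brute force" means concretely, the sketch below is a toy negamax search over a hand-made game tree with invented leaf scores. Real chess programs of that era added alpha-beta pruning, opening books, and (in Deep Blue's case) custom evaluation hardware; this only illustrates the exhaustive, perspective-flipping search principle described above:

```python
def negamax(node):
    """Brute-force game-tree search. A node is either a numeric leaf score
    (from the side-to-move's perspective) or a list of child positions."""
    if isinstance(node, (int, float)):
        return node
    # Exhaustively evaluate every child; negation flips perspective each ply.
    return max(-negamax(child) for child in node)

# A tiny two-ply tree with invented leaf evaluations:
tree = [[3, -2], [5, 1], [-4, 7]]
print(negamax(tree))  # prints 1: the best score the mover can guarantee
```

The subtlety-free part is visible in the single `max` over negated child values: there is no positional judgment at all, only exhaustive enumeration, which is exactly why raw speed is what makes the approach work.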

This has cast a cloud over the future of chess -- some chessplayers have said, "Why bother anymore? Computers will always win from now on."

Others point out that we still hold track-and-field competitions even though our cars and planes travel faster, higher, and farther than any human that ever lived -- we just separate the drag races from the footraces. Similarly, the official rules of chess now have special limitations and provisions for chess-playing computer participation in tournaments. However, neither computer chess nor the automobile presents the level of danger to which Bill Joy refers in his warnings.

-- No Spam Please (nos_pam_please@hotmail.com), March 14, 2000.


I'm willing to bet most of you are younger than I. I was raised on Asimov and Bradbury, so this posting is neither novel nor threatening, but it strikes a responsive chord. I've seen the predictions of the "Great Old Ones" of science fiction come to pass, and more. There was a little glitch between generations (and there is a twenty-year difference between my youngest siblings and myself, so I speak from first-hand knowledge), and you who are younger did not have the advantage of coming from the "old machine" age of "unconscious" servants to the time we are in, i.e., AI. Popular literature reflects the collective unconscious, in the Jungian sense, which is a valid perception of reality. My generation produced HAL. Yours produced the Terminator. I hope the ending of the story will be as your bards wrote it.

-- mike in houston (mmorris67@hotmail.com), March 14, 2000.



mike in houston wrote:

You boys need to hop over and visit. The concept is emerging into the broad collective awareness:

I've already been there, Mike... ;-)

It will be interesting to see how people react to this information as it becomes more readily available. I admit that even though I've been interested in this sort of thing for many years, I still look at it with as much trepidation as fascination.

It looks like we're on the upward swing toward the Singularity - I wonder if we'll make it?


-- Jim Morris (aka SuperLuminal) (prism@bevcomm.net), March 14, 2000.

Lately I've been active on oil-exhaustion dieoff discussion to the exclusion of this 'other' issue (takeover by non-human artificial species). Maybe the oil will run out before these robots 'n such have a chance to clamp onto us. However, maybe I've found the connection: the new artificial species will be more energy efficient or will run on other sources of energy. Maybe this is the link between human-machine assimilation and petroleum exhaustion dieoff... Hmmh, 'Matrix' anyone ??

-- scott (lynx5_5@hotmail.com), March 15, 2000.


Announcement from Kurzweil Technologies, Inc.

Ray Kurzweil will be a guest on National Public Radio (NPR) "Science Friday" this Friday (March 17) at 3 PM EST (for one hour). Host Ira Flatow will be interviewing Ray and Bill Joy (Co-founder and Chief Scientist of Sun Microsystems).

The topic will be Bill Joy's cover story for the April issue of WIRED titled "Why the Future Doesn't Need Us." Joy's story in WIRED has itself received wide coverage in other publications, including an article on the front page of the New York Times Business Section on Monday (March 13).

THIS WEEK ON SCIENCE FRIDAY: Hour Two: Perils of Technology

-- Jim Morris (aka SuperLuminal) (prism@bevcomm.net), March 16, 2000.
