Tuesday, December 25, 2007

Introduction to Technological Singularity

The technological singularity is a hypothesized point in the future at which the rate of technological growth appears to approach infinity. Moore's Law is often cited when predicting the date of the singularity. Theorists are increasingly of the opinion that the singularity will occur via the creation of artificial intelligence (AI) or brain-computer interfaces: smarter-than-human entities that rapidly accelerate technological progress beyond the capability of human beings to participate meaningfully in it.
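
To make the Moore's Law intuition concrete, here is a minimal extrapolation sketch. The doubling period and the 2007 baseline transistor count are illustrative assumptions, not figures from this post.

# Toy Moore's Law extrapolation: transistor count doubling at a fixed
# interval. The 2-year doubling period and the 2007 baseline figure are
# illustrative assumptions only.

def transistors(year, base_year=2007, base_count=8e8, doubling_years=2.0):
    """Project transistor count per chip assuming steady exponential growth."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (2007, 2017, 2027, 2047):
    print(f"{year}: ~{transistors(year):.2e} transistors per chip")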

Intelligence Explosion

I. J. Good speculated on the consequences of machines smarter than humans:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

Most proposed methods for creating smarter-than-human or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The speculated means of intelligence amplification are numerous, and include bio- and genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind transfer.

Potential Dangers

Some speculate that superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests that AIs may simply eliminate the human race, and that humans would be powerless to stop them. Other oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".

Nick Bostrom discusses human extinction scenarios and lists superintelligence as a possible cause:

“When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.”

Hans Moravec argues that although machine superintelligence may in some sense make humans obsolete as the top intelligence, there will still be room in the ecology for humans.

Isaac Asimov’s Three Laws of Robotics are among the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans.
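
Read as an engineering proposal, the Laws amount to a strict priority ordering over rules. The sketch below expresses that ordering as a simple check; the Action fields and predicate names are hypothetical simplifications, and the genuine difficulty, defining notions like "harm" precisely, is exactly what such a sketch glosses over.

# Minimal sketch of the Three Laws as a strictly ordered rule check.
# The Action fields are hypothetical simplifications of what a robot
# would actually need to evaluate.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False          # would carrying this out injure a human?
    allows_human_harm: bool = False    # would inaction here let a human come to harm?
    ordered_by_human: bool = False     # was this action commanded by a human?
    endangers_robot: bool = False      # does it threaten the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human or, through inaction,
    # allow a human to come to harm.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, as long as that does not
    # conflict with the First or Second Law.
    return not action.endangers_robot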
