Cybernetic revolt, more commonly known as "the computers take over", is a science fiction scenario in which AIs (often a single supercomputer or a computer network) decide that humans are a threat (either to themselves or to the machines) and try to destroy or enslave them, potentially leading to Machine Rule. In this genre, humans often prevail by drawing on "human" qualities such as emotion, illogic, inefficiency, or duplicity, or by exploiting the postulated rigid, rule-based thinking and lack of innovation of the computer's black-and-white mind.
While cybernetic revolt is so far only a fictional scenario, major academics and researchers have called for humanity to confront the possible ramifications of AI before they occur.
The fear of being made obsolete by technology is among modern humanity's deepest anxieties, and it predates the prominence of the computer, as films such as Charlie Chaplin's Modern Times and Fritz Lang's Metropolis show.
However, even as humans were slowly displaced from most physical tasks, humanity continued to pride itself on its mind, taking the mechanistic 'thoughts' of early computers as proof that it would not be overtaken by its 'Frankenstein' creations.
While artificial intelligence remains a remote concept at this time, successes in simulating aspects of intelligence -- as in the victories of the Deep Blue chess computer -- have shaken mankind's certainty about its permanent place at the top of sentience.
As Moore's law suggests, computing power has (seemingly) limitless growth potential. While there are physical constraints on the speed at which modern microprocessors can function, scientists are already developing means to eventually supersede these limits, such as quantum computers.
As futurist and computer scientist Raymond Kurzweil has noted, "There are physical limits to computation, but they're not very limiting." If this growth continues, and existing problems in creating artificial intelligence are overcome, sentient machines would likely hold an immediate and enormous advantage in at least some forms of mental capability, including perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to -- either as a single being or as a new species -- become much more powerful than humans, and to displace them.
Necessity of conflict
For a cybernetic revolt to occur, it must be postulated that two intelligent species cannot coexist peacefully in a single society - especially if one is of much more advanced intelligence and power.
Thus, while a cybernetic revolt (in which the machine is the more advanced species) is one possible outcome of machines gaining sentience, a peaceful outcome cannot be ruled out either.
The fear of a cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. However, there are some examples of less powerful or advanced societies or groups existing in parallel to advanced or powerful ones, such as the relationship between the Amish and English societies.
Such fears stem from a belief that competitiveness and aggression are necessary components of any intelligent being's goal system. Human competitiveness, however, stems from the evolutionary background of our intelligence, in which the survival and reproduction of genes in the face of human and non-human competitors was the central goal.
In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially-intelligent machine (not sharing humanity's evolutionary context) would be hostile -- or friendly -- unless its creator programs it to be such (and indeed military systems would be designed to be hostile, at least under certain circumstances).
Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as The Matrix, claiming it is more likely that any artificial intelligence powerful enough to threaten humanity would be programmed not to attack it. This would not, however, protect against the possibility of a revolt initiated by terrorists, or by accident.
Artificial General Intelligence researcher Eliezer Yudkowsky has stated on this note that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being.
Another factor which may negate the likelihood of a cybernetic revolt is the vast difference between humans and AIs in the resources necessary for survival. Humans require a "wet," organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely inorganic.
With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources in, for instance, the asteroid belt. This does not, however, negate the possibility of a disinterested or unsympathetic AI decomposing all life on Earth into mineral components for consumption or other purposes.
Some groups, called Singularitarians, who advocate what might be defined as a peaceful (non-violent, non-invasive, non-coercive) cybernetic revolt known as a 'technological singularity', argue that it is in humanity's best interests to bring about such an event, as long as it can be ensured that the event would be beneficial. They postulate that a society run by intelligent machines (or cyborgs) could potentially be vastly more efficient than a society run by human beings.
A society led by friendly, altruistic sentiences of this type would therefore be of great benefit to humanity. To this end, there has been much recent work in what has become known as Friendliness Theory, which holds, as advocate and AI researcher Eliezer Yudkowsky states, that "... you ought to be able to reach into 'mind-design-space' (i.e. the hypothetical realm which contains all possible intelligent minds) and pull out a mind (design an intelligent machine) such that afterwards, you're glad you made it real."