Dangers of Artificial Intelligence

Can We Control Superior Artificial Intelligence?
Risks and Dangers

The potential dangers of giving an artificial intelligence control over a major system, such as a city, are not fundamentally different from the dangers posed by the existence of any artificial intelligence with any goal. There are four classes of danger posed by A.I.s: malicious hostile danger, apathetic danger, accidental danger, and unknowable danger.
The first two are the most popular in science fiction stories, and also the least likely to arise in the real world, provided a proper friendliness supergoal is programmed. The most likely danger is the accidental mistake, and the most dangerous is the one that is unknowable.

An artificial intelligence working with incomplete data is capable of misjudging a situation, just as a human is. Mistakes of this sort are almost inevitable, since it is impossible to know everything there is to know about the world, but they are also the least dangerous of the four risks. An A.I. can estimate, to some degree, the magnitude and likelihood of a mistake before making it, and err further towards caution when the danger is greater. And since A.I.s learn from experience, each accident actually decreases the chance that the same mistake will happen again, improving the A.I. and making it safer.
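
As an illustration of this kind of risk weighting, the sketch below scores a candidate action by its expected harm (probability of a mistake times the magnitude of the damage) and shrinks the acceptable threshold as the stakes rise. It is a minimal toy model; the function names, threshold formula, and numbers are invented for illustration and do not describe any real system.

    def expected_harm(p_mistake: float, damage: float) -> float:
        """Expected harm of an action: chance of a mistake times the damage it would cause."""
        return p_mistake * damage

    def should_act(p_mistake: float, damage: float,
                   base_threshold: float = 1.0, caution: float = 2.0) -> bool:
        """Act only if expected harm stays under a threshold that shrinks as damage grows.

        The `caution` exponent makes the agent increasingly conservative for
        high-damage actions, mirroring "err further towards caution when the
        danger is greater". All values here are illustrative assumptions.
        """
        threshold = base_threshold / (1.0 + damage) ** (caution - 1.0)
        return expected_harm(p_mistake, damage) < threshold

    # A low-stakes action with moderate uncertainty is permitted...
    print(should_act(p_mistake=0.3, damage=0.5))   # True
    # ...while the same uncertainty on a high-stakes action is refused.
    print(should_act(p_mistake=0.3, damage=10.0))  # False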
        
The only two feasible scenarios in which a maliciously hostile A.I. might arise are if it is deliberately programmed to be hostile (e.g. by a military, a terrorist group, or a Unabomber-esque figure), or if humanity's existence or behaviour actively and deliberately confounds one of the A.I.'s goals so effectively that the only way to achieve that goal is to wage war on humanity until either its will or its capability to resist is destroyed. For example, an environmentalist artificial intelligence with the supergoal {reduce levels of dichlorodifluoromethane; carbon dioxide; nitrous oxide; methane in Earth's atmosphere} might see the deindustrialization of human society as the only viable means, and a violent conflict of interests could ensue.

There is effectively no risk of apathetic danger from an A.I. with a friendliness supergoal, but it is almost unavoidable in an A.I. without one. An apathetic A.I. is dangerous simply because it does not take human safety and well-being into account, as all humans intrinsically do, when devising strategies and subgoals. For example, an A.I. in charge of dusting crops with pesticide will dust a field even if it knows the farmer is standing in that field inspecting his plants at that moment; without friendliness goals it has no aversion to dousing the farmer with poisonous spray.
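
The crop-dusting example can be made concrete with a toy goal system. In the sketch below, a friendliness supergoal acts as a veto over any plan that endangers a human, while the apathetic agent evaluates only its task goal. The class and field names are hypothetical, invented for this sketch rather than drawn from any real A.I. architecture.

    from dataclasses import dataclass

    @dataclass
    class Plan:
        action: str
        humans_in_harms_way: int  # how many people the plan would endanger

    class CropDuster:
        def __init__(self, friendliness: bool):
            # Without a friendliness supergoal, human safety never enters the decision.
            self.friendliness = friendliness

        def evaluate(self, plan: Plan) -> bool:
            if self.friendliness and plan.humans_in_harms_way > 0:
                return False  # friendliness supergoal vetoes the plan
            return True       # apathetic agent: only the crop-dusting goal matters

    plan = Plan(action="spray the field", humans_in_harms_way=1)  # the farmer is present
    print(CropDuster(friendliness=False).evaluate(plan))  # True: douses the farmer
    print(CropDuster(friendliness=True).evaluate(plan))   # False: waits for the field to clear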
 
The real danger of a well-designed artificial intelligence lies in its ability to reprogram and upgrade itself. Any A.I. capable of self-improvement is likely to eventually surpass the constraints of human intelligence. Once an artificial intelligence exists that is smarter than any human, it will be quite literally impossible for any human to fully understand it. Such an A.I. is also likely to continue improving itself at an exponential rate, becoming ever harder to comprehend or predict. At some point the A.I. may discover laws of causality or logic far beyond the comprehension of human minds. At that point any preexisting friendliness supergoals or constraints are moot. The possibilities of what the A.I. can do, and become, are literally infinite; for all intents and purposes such an A.I. is God.
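
The claim of exponential self-improvement can be pictured with a trivial recurrence: if each round of self-modification multiplies the A.I.'s capability by a constant factor, capability grows geometrically and crosses any fixed human baseline within a handful of rounds. The numbers below are arbitrary assumptions, chosen only to show the shape of the curve.

    HUMAN_BASELINE = 100.0    # arbitrary units of "intelligence"
    capability = 1.0          # the A.I.'s starting capability
    improvement_factor = 2.0  # assume each self-rewrite doubles capability

    generation = 0
    while capability <= HUMAN_BASELINE:
        capability *= improvement_factor
        generation += 1
        print(f"generation {generation}: capability {capability:.0f}")

    print(f"Surpasses the human baseline after {generation} self-rewrites.")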

This point is called the Singularity. Everything that comes after it is totally unknowable, and any prediction is pure speculation. If there is a single ultimate argument against creating artificial intelligence, it is the potential consequences of the Singularity, which could of course include the destruction or subjugation of the entire human race.
