The debate rages over whether Artificial Intelligence (AI) will be friendly to humans. Some think that AI will, by its nature, be friendly to us; others disagree, afraid that it could be malevolent and aggressive, threatening to wipe out humanity. Before AI was widely discussed, the same kind of debate was held over the hostile or friendly potential of “aliens from outer space”.

Below, not only does Hugo de Garis doubt there can be certainty about any friendliness built into AI, but he is sure that humans will launch a huge conflict over the issue.

Then Eliezer Yudkowsky from the Singularity Institute counters with a thesis on how to design and create friendly AI.

Michael Anissimov quotes a section from Steven Pinker’s book “How the Mind Works” and discusses it to stress the importance of friendliness in AI.

Friendly AI: A Dangerous Delusion? – [hplusmagazine.com]

By: Hugo de Garis

I’m known for predicting that later this century, there will be a terrible war over the issue of species dominance. More specifically, it will be fought over whether humans should build artilects (artificial intellects), which could become so vastly superior to human beings in intellectual capacity that they may end up treating us as grossly inferior pests, wiping us out. I anticipate billions of casualties resulting from the conflict over the artilect question.

To combat this horrible scenario, the Singularity Institute in Silicon Valley has been set up to ensure that the above scenario does not occur. The Institute’s principal theorist, Eliezer Yudkowsky, has coined the term “Friendly AI,” which he defines essentially as intelligent machines designed to remain friendly to humans, even as they modify themselves to attain higher levels of intelligence.

Creating Friendly AI 1.0 – [intelligence.org]

The goal of the field of Artificial Intelligence is to understand intelligence and create a human-equivalent or transhuman mind. Beyond this lies another question – whether the creation of this mind will benefit the world; whether the AI will take actions that are benevolent or malevolent, safe or uncaring, helpful or hostile.

Creating Friendly AI describes the design features and cognitive architecture required to produce a benevolent – “Friendly” – Artificial Intelligence. Creating Friendly AI also analyzes the ways in which AI and human psychology are likely to differ, and the ways in which those differences are subject to our design decisions.

Does the Universe Contain a Mysterious Force Pulling Entities Towards Malevolence? – [acceleratingfuture.com]

Unfortunately, benevolence is extremely complex too, so to build a friendly AI, we have a lot of work to do. I see this imperative as much more important than other transhumanist goals like curing aging, because if we solve friendly AI, then we get everything else we want, but if we don’t solve friendly AI, we have to suffer the consequences of human-indifferent AI running amok with the biosphere. If such AI had access to powerful technology, such as molecular nanotechnology, it could rapidly build its own infrastructure and displace us without much of a fight. It would be disappointing to spend billions of dollars on the war against aging just to be wiped out by unfriendly AI in 2045.

All of these points of view depend upon the base definition of intelligence being used. When that definition is sourced from human history, it is easy to understand the pessimistic viewpoints. Human history is full of well-known examples of squandering our intellectual capability in favor of reckless and foolish behavior. But that same history also reveals a consistent climb in both native intelligence and the ability to apply it to produce rational, ethical behavior.

Ethical behavior depends upon the intelligent ability to analyze trends and predict how future outcomes vary according to choices made in the present. When a choice produces outcomes with the greatest benefit across the most levels and the widest scope, we call this ethical behavior.
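This idea of ethics as prediction can be sketched as a small decision procedure. The following Python snippet is purely illustrative, not from any of the quoted articles: the choices, affected levels, and benefit numbers are all hypothetical, standing in for whatever prediction model an intelligence actually uses.

```python
# Hypothetical sketch of "ethics as prediction": score each choice by the
# predicted benefit it produces across several levels (individual, group,
# species), and pick the choice with the greatest total benefit.
# All names and numbers below are made up for illustration.

def predicted_outcomes(choice):
    # A stand-in prediction model: maps a choice to (level, benefit) pairs.
    model = {
        "hoard resources": [("individual", 5), ("group", -2), ("species", -4)],
        "share resources": [("individual", 2), ("group", 3), ("species", 3)],
    }
    return model[choice]

def ethical_score(choice):
    # Sum predicted benefit over all affected levels; a choice that helps
    # more levels more broadly scores higher.
    return sum(benefit for _level, benefit in predicted_outcomes(choice))

def most_ethical(choices):
    # The "ethical" choice is the one whose predicted outcomes
    # carry the greatest total benefit.
    return max(choices, key=ethical_score)

print(most_ethical(["hoard resources", "share resources"]))  # share resources
```

The point of the sketch is the structure, not the numbers: better prediction of outcomes directly improves the quality of the resulting ethical reasoning.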

Dumb machines don’t seem threatening to us because, while they may be dangerous, we have confidence that we can eventually out-think and disable something that is not as intelligent as we are. It is an artificially created intelligence that is MORE intelligent than we are that poses the scary threat.

With a greater level of intelligence comes a greater ability to self-improve through feedback, and out of this kind of positive loop comes the rapid-expansion theory of an intelligence singularity that soars beyond our comprehension. As intelligence grows, it can predict more accurately and becomes more capable of ethical reasoning that produces decisions with greater benefit. A highly intelligent reasoning center will, by this definition, also produce highly ethical reasoning. This should be considered a friendly AI.

Ethics as Prediction
Beyond Turing
Strong AI
What Comes After Minds?
