Volume 1, Issue 1
1st Quarter, 2006


Implications of Adaptive Artificial General Intelligence for Legal Rights and Obligations

Peter Voss


Welcome or Fear?
There are two perspectives on A.G.I. Should we welcome it? Is it our savior? Or should we be afraid of it? Do we need A.G.I. to save us from ourselves?

Nick Bostrom[1] wrote a good article analyzing existential risks, such as runaway biotechnology in the hands of a common criminal or terrorist. That is scary stuff. Nanotechnology, gray goo -- there are a lot of dangers out there. Of course, there are many social risks that we face every day. There are more and more ways in which single individuals or small groups can inflict a lot of damage on society, and that is frightening.

A.G.I. could certainly help us in this area in a number of ways. It could provide tools to prevent disaster. It could protect us directly in some way. It could help by uplifting mankind generally, resulting in fewer people who have a grudge or a reason to be unhappy. It could make us more moral, which I know is a controversial statement; I really believe there is a lot of evidence and reason to think that A.G.I. will improve human morality in a very individual way.

Let’s address how much danger A.G.I. might pose. First, let’s ask whether we should be more afraid of an A.G.I. with a mind of its own or one without. This is an interesting perspective that is not often examined. If an A.G.I. has a mind of its own, that mind may well be benign, rational, and moral. If it does not have a mind of its own and is purely a tool in the hands of a human, then it is only as good or as moral as that human. Therefore, I think an A.G.I. without a mind of its own is much more frightening.

I believe there is little evidence that A.G.I., by itself, will be detrimental to humans, unless it is specifically designed to be. The original applications may have an impact of their own here. For example, there would be a big difference in the result between our company (A2I2) building the first A.G.I. and the military building it. Presumably, there is some difference in the psychology of an A.G.I. depending on whether it was designed with the sole purpose of killing the enemy or of helping humans in their day-to-day endeavors. Unlike what we see in the movies, I do not believe that A.G.I.’s have an inherent propensity to be evil. I think that’s just plain wrong. As I mentioned before, the power of A.G.I. in the wrong human hands is a much bigger concern. The mitigating factor is the positive moral influence that it could have.

Human Interaction with A.G.I.'s
I would like to touch on human interaction with A.G.I.’s: how we will treat them and how they might treat us. First of all, how should we treat A.G.I.’s from a moral point of view? This question leads us to ask whether they will actually even desire life, liberty, and the pursuit of happiness. It is very unlikely that they will desire these things; such desires are evolutionary. Nonetheless, how will we treat A.G.I.’s? This is an interesting question. Will they be moral amplifiers, as I like to call them? Basically, will they make bad people worse and good people better? Will they make us more of what we are, bring out our fears, or bring out the best in us?


Footnote
1. Nick Bostrom, "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards," Journal of Evolution and Technology, Vol. 9, March 2002 (first version 2001). http://www.nickbostrom.com/existential/risks.html (accessed February 22, 2005).
