Volume 1, Issue 1
1st Quarter, 2006


Implications of Adaptive Artificial General Intelligence, for Legal Rights and Obligations

Peter Voss

This article was adapted from a lecture given by Peter Voss at the
1st Annual Colloquium on the Law of Transhuman Persons, December 10, 2005, at the Space Coast Office of Terasem Movement, Inc., Melbourne Beach, Florida.

Editor's Note: Peter Voss insightfully expounds the differences between traditional A.I. (Artificial Intelligence) and adaptive A.G.I. (Artificial General Intelligence). Peter informatively describes traditional Artificial Intelligence as a domain-specific machine with the intelligence capacity of a human, but one that performs only what it was specifically programmed to do. Artificial General Intelligence, by contrast, is a machine that will learn adaptively and contextually, will be self-aware, and will possess a self-concept.

Through A.G.I., Peter envisions that within three to six years, humans will progress to a more benevolent existence and surpass their primitive instincts for survival.


I believe that the issues surrounding the legal and moral complexity of Artificial General Intelligence are not only extremely important, but also much more urgent and imminent than many people think. In this paper, I make a number of controversial statements that I do not have the room to support. I provide references at the end so that readers can find more information.

The subject of this article is Artificial General Intelligence, or A.G.I., and how it differs from traditional artificial intelligence, or A.I. I will also address some of the key uncertainties about A.G.I. For example, many wonder if A.G.I. will save us from various threats that face humanity. Others question whether A.G.I. is a danger to us. I will also explore the moral implications and legal issues surrounding A.G.I.

A.G.I. Versus A.I.
First of all, what exactly is A.G.I.? A.G.I. is a bit of a forgotten science or technology.

Originally, A.I. was all about human level intelligence. If you picture what the average person thinks of when they think of artificial intelligence, or you imagine the movie “AI”, it is basically a machine that has the intelligence of a human. In reality, only a very small subsection of A.I. research deals with that kind of A.I.

When we speak about intelligence, we must consider the ability to acquire knowledge and skills. True intelligence is dynamic. It is ongoing, as in the way children learn. It is not just having knowledge per se. Dictionaries contain a lot of knowledge, but they are not intelligent. Intelligence is not a database of knowledge; rather, it is the ongoing ability to acquire new knowledge.

With conventional A.I., the knowledge and skills are programmed. In A.G.I., they are acquired through learning rather than programming. The ability is general: it relies on abstraction (the ability to generalize) and on context. We learn our lessons once or twice, and then we generalize, applying our knowledge to different situations.

We also learn that things are contextual. If somebody sets a rule that you should never hurt another person, we understand that it applies within the context of not being attacked. Our intelligence figures out that there are exceptions to the rule.

Conventional A.I. is very poor at generalizing because it is usually written for a specific domain. A.I. tends to be domain-specific, rule-based and concrete. That is why the traditional computer systems we use now tend to be stupid and brittle. That is the difference between general ability (being able to learn any kind of task, as children can) and being programmed for a specific task and bound by rules.
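The contrast drawn above, between behavior fixed in advance by a programmer and behavior acquired from experience, can be made concrete with a toy sketch. The example below is purely illustrative and is not drawn from the lecture; the message-filtering task and all names are my own invention, and a real A.G.I. would of course involve far more than word counting:

```python
def rule_based_filter(message: str) -> bool:
    """Conventional A.I.: the rules are hand-written, domain-specific,
    and never change, no matter what messages the system encounters."""
    banned = {"lottery", "winner", "prize"}
    return any(word in message.lower() for word in banned)


class LearningFilter:
    """Toy learner: it starts knowing nothing and acquires its
    behavior from labeled examples rather than from fixed rules."""

    def __init__(self):
        self.spam_words = {}  # word -> count seen in unwanted messages
        self.ham_words = {}   # word -> count seen in wanted messages

    def learn(self, message: str, is_spam: bool) -> None:
        counts = self.spam_words if is_spam else self.ham_words
        for word in message.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def predict(self, message: str) -> bool:
        words = message.lower().split()
        spam_score = sum(self.spam_words.get(w, 0) for w in words)
        ham_score = sum(self.ham_words.get(w, 0) for w in words)
        return spam_score > ham_score
```

The rule-based version is brittle in exactly the sense described above: a message using words outside its fixed list slips through, and the only remedy is reprogramming. The learner, crude as it is, generalizes from its examples to messages it has never seen.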
