Volume 1, Issue 1
1st Quarter, 2006


Implications of Adaptive Artificial General Intelligence for Legal Rights and Obligations

Peter Voss


The other distinction is ongoing, cumulative, adaptive, grounded, and self-directed learning. This boils down to the way children, or even animals, learn to interact with their environment. We learn our lessons as we go along; we become smarter and more capable over time because we learn from experience. This is A.G.I. Image 1 provides a visual comparison of A.G.I. and A.I.

[Image 1: AGI vs. AI]

Very few people are actually working on A.G.I. There are many reasons for that. One reason is that the field of A.I. became overly ambitious about fifty years ago. Researchers thought they could crack the problem in five or ten years. They made that promise and were unable to live up to it. Consequently, A.I. has become basically a swear word, and very few people will touch the subject.

Self-Awareness
The implications of A.G.I. are that you have human-level learning and understanding. You have machines that learn adaptively and contextually. What follows from that, and this is a controversial point, is that they will be self-aware. They will have a self-concept. They will improve until they reach a stage that is called "ready to learn" in developmental psychology and education. At this level, they will have the competence and background knowledge to really go out, hit the books, and learn on their own. Once a system reaches that threshold, it will be able to improve itself.

Seed A.I.
The stronger version of that is seed A.I. This means that at some point the program will become smart enough to act as its own programmer, something like an A.I. psychologist, understanding its own workings and improving itself. This will be very similar to our own experience as humans: as we grow, we learn more about ourselves and how to improve ourselves, except we do not have the blueprints to our design. A.G.I. will have the blueprints to its design, and it will likely become a very good programmer very quickly.

Once A.G.I. reaches that threshold, it will improve dramatically. With that capability, it will also be able to augment our own abilities as humans, but it will be very difficult to actually integrate it with our wetware[1]. That is a difficult problem to surmount.


Footnote
1. The term wetware is generally used in fiction (and conversation, notably on USENET) to describe one of two concepts. The first, also known as liveware, meatware, or PEBKAC (Problem Exists Between Keyboard And Chair), refers to a person operating a computer: the human beings (programmers, operators, administrators) attached to a computer system, as opposed to the system's hardware or software. In this context the term is frequently humorous; in the often wry humor of technical support staff, a wetware-related problem is a (semi)polite euphemism for user error. The second definition, common in many contemporary science fiction novels (Peter F. Hamilton's neural nanonics and wetware, as well as Richard K. Morgan's wetwire), describes cybernetic augmentation of human beings. http://en.wikipedia.org/wiki/Wetware (February 22, 2006, 2:27 P.M. EST)
