superluminal said:
I'm thinking of a consciousness that begins with no values. It has the power of abstraction just as we do, but with no values whatsoever, it has no motivation to proceed with anything. As a sufferer of periodic depression, I think I can understand this state of existence: I am conscious (self-aware), I can acknowledge others, I eat, I sleep. That's it. I believe such would be the state of a valueless AI.
You don't see the danger? Though you may not "care" as you think of it, you really do to some extent, or you wouldn't stay on the road while driving, avoid the kid in the street, or step on the brake. You would be, in essence, a pending dead guy, to say nothing of a menace to society.
More importantly, however, value is quite possibly the perturbation in the state of awareness that allows identity. Identity builds itself from abstracted stimulus, as forged by its physical limitations and the abstractions that exist before it... except during the process of awakening. During the awakening, from a potential consciousness to a consciousness, value is realized: "hey, I get satisfaction when mommy hugs me or when I get food in my belly." Boom, you have a spark in abstract space. The blank mind is forged with the newfound value, which changes its focus regarding forthcoming abstractions from stimulus. In this manner, concepts are related, and emotions propagate onto the conceptual inter-relationships that ensue from the process.
Emotions take on an interesting and crucial role in the development of mind. I think of them as the strain or fortification between concepts as they exist in the unconscious. These feed into the "real-time experience" in a manner that interferes with it to express yet more value (via strain or fortification of the abstraction in the moment), and that feeds back into the entire process again. They are the "control signal" of a feedback loop, modulated into how ideas relate to one another in your mind.
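To make that feedback-loop picture concrete, here's a toy sketch (the class, names, and numbers are all my own construction, not anything from this thread): each experience carries a valence that strains (weakens) or fortifies (strengthens) the link between the concepts it co-activates, and those stored link weights then bias how the next experience is "felt."

```python
# Toy model: emotions as the "control signal" of a feedback loop
# over the relationships between concepts.

class ConceptNet:
    def __init__(self):
        self.links = {}  # (concept_a, concept_b) -> weight

    def _key(self, a, b):
        # Links are symmetric, so store each pair in a canonical order.
        return tuple(sorted((a, b)))

    def experience(self, a, b, valence, rate=0.1):
        """Positive valence fortifies the a-b link; negative valence strains it."""
        k = self._key(a, b)
        self.links[k] = self.links.get(k, 0.0) + rate * valence

    def appraisal(self, a, b):
        """The stored weight feeds back into how the pair is 'felt' next time."""
        return self.links.get(self._key(a, b), 0.0)


net = ConceptNet()
net.experience("mommy", "hug", valence=+1.0)   # "satisfaction when mommy hugs me"
net.experience("mommy", "hug", valence=+1.0)   # repeated experience fortifies
net.experience("street", "kid", valence=-1.0)  # a strained association

print(net.appraisal("mommy", "hug"))   # fortified: 0.2
print(net.appraisal("street", "kid"))  # strained: -0.1
```

The loop closes because `appraisal` is what the mind consults when the next related stimulus arrives, so value shapes interpretation, which shapes value.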
It's an interesting notion that consciousness might begin with no value... I might even agree, but experience must necessarily purge such a valueless state by overwriting it with concepts. As a consequence of whatever concepts already exist, this represents the impending development of value, because value is incurred in the moment by circumstance: "Is this the right action or the wrong one?" "Is this thought valid or invalid?"
If it's truly conscious, I'd say it must be able to ask those questions of itself or there is no means for development into more sophisticated and more useful abstracts.
If we give the beast a basic value, say pleasure, then the AI will seek pleasure.
You mean like humans do? You don't have to give a consciousness a value; by its very nature, it is compelled to find one.
We cause it to "feel" pleasure (however you want to define it for this AI) by gathering information we request of it.
A positive input loop from a monitor of the speed of information flow, okay.
What happens, for instance, if the information flow just stops? More feedback loops, I guess? The mind, through value assessments (in whatever form), creates its own feedback loops based on the process confusingly described above, I'm sure, and limited by its physical and abstract components.
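A minimal sketch of that "positive input loop from a monitor of the speed of information flow" (my construction, purely illustrative): pleasure is just a smoothed rate of incoming information, so when the flow stops, the signal decays toward zero, the neutral "depression" state, unless some further loop intervenes.

```python
# Pleasure as an exponential moving average of information-flow rate.

def pleasure_signal(bytes_per_tick, decay=0.5):
    """Return the per-tick pleasure signal for a sequence of flow rates."""
    signal = 0.0
    history = []
    for rate in bytes_per_tick:
        # Blend the old signal with the new flow rate.
        signal = decay * signal + (1 - decay) * rate
        history.append(signal)
    return history

# Steady flow, then the flow just stops:
trace = pleasure_signal([100, 100, 100, 0, 0, 0, 0])
print(trace)  # -> [50.0, 75.0, 87.5, 43.75, 21.875, 10.9375, 5.46875]
```

Note the signal never quite reaches zero on its own; deciding what the agent does on that long decay tail is exactly the "more feedback loops" question.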
If we do not give it anti-values (pain/displeasure) then it will be in one of two states - pleasure when gathering information we request (requests it will actively seek from us), and this neutral "depression".
I don't think you have to give it an anti-value. A conscious mind can see it in its absence; eventually, it may stumble across it regardless of your programming.
Oh, and I don't think depression is actually neutral, but that's another topic.
Its behavior will seem to be that of an autistic savant child: up and eager when we ask it to find information, neutral and uncaring regarding anything else.
Well, I guess if you're good enough to limit its physicality in the manner you describe, then sure maybe so. I dunno for sure. Given that the human brain is really the only model we have for how minds can physically work, I think it'll be some time before one can select this response so carefully. Dunno. Depends on how things develop I suppose.
I tried to blurt out too much with too few words up there, and instead I think I blurted too many words that don't necessarily mean anything except as partial representations of how I see things fitting together.
So, the more values we give it, the more complex its behavior will be. I suppose what I'm saying is that I can easily imagine a consciousness with any value set we wish to give it. As Russ723 pointed out, it has no evolved "instinctive" values such as survival, sex, or food. Just unbounded joy with information requests (in my example). Just as ours are "programmed" by evolution, our AI's values will be programmed by us. With a value set akin to ours, what follows will be creativity, fear, anger, hatred, joy, love, inspiration... I think we could make a very happy or a very sad AI.
Well, like I say, I don't think you'd really have to "give it" any values beyond the first; the rest develop based on the one you give. That's the whole thing with the development of comprehension: complex outcomes from a simple principle or a few. Much of that complexity can be described as value.
I keep coming back to this idea that, to a mind, value is "meaning." With no "meaning" you have no "mind," as far as I can tell, because meaning (value) is an expression of the relationships that exist within it.