First Robot able to Show Emotion & develop bonds (Humans)

by xeno6696 @, Sonoran Desert, Sunday, August 22, 2010, 22:31 (5014 days ago) @ dhw

> I am trying to find out just how "human" a robot can become, and how responsible it might be for its actions. I asked Matt to what extent its sentience would be controlled by a will of its own, and to what extent by the preparatory work of the designer.
> 
> MATT: AI programming shares one thing in common with imperative programming; the machine will only do what it's told to do. However, what an AI is told to do is learn, and make decisions based on what it has learned. A TRUE AI is tabula rasa. It is connected to some kind of sensory equipment and its programming is purely to make sense of something(s) in its environment. The designer of the machine would have to train the AI from the ground up--just as if it was a child.
> 
> If a true AI is a tabula rasa, I don't see how it can possibly be independent of its designer in the way humans are of their parents. A child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment will largely depend on these inborn elements, even if its decisions and the "sense" it makes of its environment may be influenced by training. Your robot is born with nothing except the programme its designer has given it. If the designer endows it with its own temperament, degree of intelligence, selective memory, it may appear to behave like a human, but its will (i.e. the degree of control it has over its own actions) and character will still be the product of the programme. In my book, that makes the designer 100% responsible. You wrote, however, that "this confusion comes down to the completely different paradigms that exist for machine programming" in which the machine "cannot display any behavior that we as humans haven't planned for." I can't see the difference, so what have I overlooked?

AI works by setting high-level policies and letting the machine do the rest. Current AI is as independent as it can get for the job(s) it is asked to do.

Okay, let's say we build Bob. Bob is the first ever General-AI; he can process emotions, he can feel pain. Bob started out in our lab knowing nothing; he was pure potential. Only by moving through our world and experiencing--both through what we taught him and what he taught himself--does his knowledge base grow. He learned language much as a child does: by making associations and inferences.

It has been ten years.

Now my point is that AI programming works only to give the machine tools to do its job; as a designer, I'm only culpable up until the moment I have decided on a final learning program. But what the machine chooses to learn on its own isn't anything I can be held accountable for, just as you aren't held accountable for anything your children choose to learn. If your kids learn how to hotwire cars, they won't send you to prison! What kind of "learning" boundaries can we set for ourselves or our children? What can we enforce? In some instances I think I've seen anecdotal evidence of parents getting into trouble when, say, they expose their kids to bad habits of their own, such as drugs or alcohol. But we still don't punish the parents if the kids act out.

All the things that make up an individual's personality are built from experience, and from the consequences of the actions taken to deal with those experiences. As to what extent the AI's personality is designed: to me, personalities would be like "general policies" that the machine follows in the world. You could make a case for designer culpability in circumstances where it could be demonstrated that the designer gave the machine policies that made it a Charles Manson or some other serial killer.

Furthermore, if the goal in building the machine was to create a sentient and independent entity--then by definition the designer loses culpability once the machine has fulfilled this requirement.
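Since "learning program" is doing a lot of work in that argument, here is a toy sketch of what I mean--not remotely how anyone would build Bob, and every name and number in it is made up purely for illustration. The point is only the division of labour: the designer writes a fixed update rule and a high-level explore/exploit policy; everything the agent comes to "know" lives in a table that starts out empty and is filled solely by what it experiences.

```python
import random
from collections import defaultdict

class TabulaRasaAgent:
    """Toy learner: born knowing nothing, shaped entirely by experience."""

    def __init__(self, actions, learning_rate=0.1, discount=0.9, exploration=0.1):
        self.actions = actions
        self.q = defaultdict(float)  # (state, action) -> learned value; empty at "birth"
        self.lr = learning_rate
        self.discount = discount
        self.exploration = exploration

    def choose(self, state):
        # The designer's high-level policy: mostly exploit what has been
        # learned so far, occasionally try something new.
        if random.random() < self.exploration:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # The designer's entire remaining contribution: one fixed update rule.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.discount * best_next
        self.q[(state, action)] += self.lr * (target - self.q[(state, action)])
```

Everything that ends up in self.q--and therefore the agent's eventual "character"--depends on the environment it was placed in and the rewards it happened to receive; none of it appears anywhere in the designer's source code.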

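And for the "learns language like a child" claim, the simplest possible version of association-and-inference looks something like this (again a deliberately crude toy with invented names; Bob would need vastly more, but the shape is the same): the learner starts with no vocabulary at all, and its "inferences" are just the associations it happens to have heard.

```python
from collections import defaultdict

class Associator:
    """Toy word-association learner: no built-in vocabulary whatsoever."""

    def __init__(self):
        # word -> {word that followed it -> how often}; empty to begin with
        self.assoc = defaultdict(lambda: defaultdict(int))

    def hear(self, sentence):
        words = sentence.lower().split()
        for first, second in zip(words, words[1:]):
            self.assoc[first][second] += 1

    def infer_next(self, word):
        followers = self.assoc.get(word.lower())
        if not followers:
            return None  # no experience, no opinion
        # Inference = the strongest association heard so far.
        return max(followers, key=followers.get)

bob = Associator()
bob.hear("the dog chased the cat")
bob.hear("the dog chased the ball")
bob.hear("the dog barked")
print(bob.infer_next("dog"))  # "chased" -- only because that is what Bob heard most
```

What it "says" is fixed by what it was exposed to, which is exactly the boundary I'm drawing between the learning program I wrote and the content the machine acquired.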
--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

