First Robot able to Show Emotion & develop bonds (Humans)

by dhw, Saturday, August 28, 2010, 08:47 @ xeno6696

Matt is looking into the robotic future.

MATT: We have what I would call an inborn "filter". But generally this filter doesn't change ... only our responses to it. [...] The program IS exactly that inborn characteristic (filter) that you're referring to. [...] A generalized AI starts with no knowledge; only with intuitions. [...] ...my argument is that a person who's experienced NOTHING isn't too likely to have a very robust personality.

Please forgive my cherry-picking the quotes, but together they form a pattern with which I largely agree. Only your argument seems to me to confirm the DEPENDENCE of the robot on its designer, so at the risk of repeating arguments, let me try to put the bits and pieces together in my own way.

There are inevitably areas of our own nature (as well as other people's) that we know nothing about. I don't, for instance, know how brave I am. I've never ... fortunately! ... been confronted by a situation that demands extremes of courage. But I know that I'm conscientious, because I worry about even minor problems and can't rest till they're put right. I've always been like that, and I take this to be what you mean when you say you have the same patterns now as when you were a kid. So I would like to modify your statement that a person with no experience at all isn't likely to have a "robust personality". I think the basic foundations of the personality are already there, but neither we nor anyone else can know what they are until they're brought out by experience. Admittedly, some experiences may be so dramatic or traumatic that they can change these foundations, but I think the inborn base is generally pretty determinate. In your words, "generally this filter doesn't change".

These basic foundations are designed by the robot's programmer ... as you say, the programme IS the filter. Only when the "intuitions" have been deliberately put in place can the choices follow accordingly, just as ours do. You ask: "What if it had read Machiavelli and liked it?" Of course it can't like or dislike M until it's read his book, but my question to you would be: WHY would it like (or dislike) M.? Why would it like (or dislike) anything? Where do its predilections come from? An example I gave earlier was of exposure to alcohol (= experience). Within the same family, child X may become an alcoholic, and child Y a teetotaller. For me, one of the prime aims of early education should be to expose the learner to as many different fields as possible, in order to find out what the child has an aptitude for. In other words, experience doesn't create aptitudes but reveals them. The Machiavellian tendencies are not created by reading Machiavelli, but reading M. brings out the innate tendencies. And so although the designer of your robot will not have programmed the experiences, he will have programmed the responses.
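To make concrete how I'm picturing "the programme IS the filter", here is a toy sketch in code. Everything in it ... the trait names, the weights, the threshold ... is my own invention, not anything you specified; the only point it makes is that the weights are fixed by the designer, while experience merely supplies inputs for those fixed weights to score.

```python
# A toy illustration, not Matt's design: every name and number here is
# my own invention. The innate "filter" is a fixed table of weights set
# by the designer; experience only supplies inputs for it to score.

INNATE_FILTER = {
    "ambition": 0.9,   # fixed at design time, never rewritten
    "empathy": 0.2,
    "caution": 0.4,
}

def respond_to_experience(appeal_by_trait):
    """Score a new experience against the unchanging innate filter.

    appeal_by_trait maps a trait name to how strongly the experience
    appeals to that trait (0.0 to 1.0). The experience never alters
    the filter; the filter determines the response to the experience.
    """
    score = sum(INNATE_FILTER.get(trait, 0.0) * appeal
                for trait, appeal in appeal_by_trait.items())
    return "like" if score > 0.5 else "dislike"

# Reading Machiavelli doesn't create the predilection; it reveals it.
machiavelli = {"ambition": 0.8, "empathy": 0.1}
print(respond_to_experience(machiavelli))  # -> like, given these weights
```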
However, perhaps we're getting way ahead of ourselves here. You specified the "general algorithm" as being the identification of a problem and its possible responses, and then "execution" [of the decision] (a toy sketch of how I read this follows at the end of this post). Robots, so far as I know, are currently created in order to perform specific tasks, or to solve specific problems. I have no trouble visualizing a machine solving problems and making decisions in accordance with the given data or with past experience. But a "sentient and independent entity" (i.e. one with self-awareness, willpower, imagination, fully developed emotions etc.) goes a great deal further. The first robot to show emotion and develop bonds ... albeit at the level of a one-year-old child ... is clearly a big leap in this direction, but regardless of my interpretation of the "filter" (which of course you may disagree with), do you think technology really can go all the way?
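And here is the promised sketch of how I'm reading your "general algorithm". Again, the toy problem, the function names and the numbers are all my own invention rather than your specification; the scoring function simply stands in for the inborn "filter".

```python
# A minimal sketch of the "general algorithm" as I read it: identify a
# problem, enumerate its possible responses, then execute the decision.
# The toy problem, names, and numbers are my own invention; the scoring
# function stands in for the designer's inborn "filter".

def perceive():
    # Identify the problem from (stubbed) sensor data.
    return "obstacle ahead"

def enumerate_responses(problem):
    # The problem's possible responses.
    return ["stop", "turn left", "turn right"]

def filter_score(response):
    # The designer's fixed preferences: the part that doesn't change.
    innate_preference = {"stop": 0.9, "turn left": 0.5, "turn right": 0.4}
    return innate_preference[response]

def agent_step():
    problem = perceive()
    options = enumerate_responses(problem)
    decision = max(options, key=filter_score)  # weigh via the innate filter
    print("Executing:", decision)              # "execution" of the decision

agent_step()  # -> Executing: stop
```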

