First Robot able to Show Emotion & develop bonds (Humans)

by xeno6696 @, Sonoran Desert, Sunday, August 29, 2010, 23:49 @ dhw

dhw,
> 
> Please forgive my cherry-picking the quotes, but together they form a pattern with which I largely agree. Only your argument seems to me to confirm the DEPENDENCE of the robot on its designer, so at the risk of repeating arguments, let me try to put the bits and pieces together in my own way. 
> 
> There are inevitably areas of our own nature (as well as other people's) that we know nothing about. I don't, for instance, know how brave I am. I've never ... fortunately! ... been confronted by a situation that demands extremes of courage. But I know that I'm conscientious, because I worry about even minor problems and can't rest till they're put right. I've always been like that, and I take this to be what you mean when you say you have the same patterns now as when you were a kid. So I would like to modify your statement that a person with no experience at all isn't likely to have a "robust personality". I think the basic foundations of the personality are already there, but neither we nor anyone else can know what they are until they're brought out by experience. Admittedly, some experiences may be so dramatic or traumatic that they can change these foundations, but I think the inborn base is generally pretty determinate. In your words, "generally this filter doesn't change".
Actually, so far it looks like you have a good grasp on my thinking. Eerily, heh.

> ...And so although the designer of your robot will not have programmed the experiences, he will have programmed the responses.

Our disagreement here is probably just due to my having more familiarity with the act of commanding machines; I translate your final sentence as this: "The designer will have to predict a variety of responses to an equally varied range of stimuli, and predetermine the outcome." To me, you might be thinking that the machine would have to constantly come back to the designer for help with some issue. Or you're thinking that if anything like a policy is built in, then it is completely owned by the designer? I don't think so, unless of course our designer makes a "policy" that every time the machine hears "God Save the Queen" it runs out into the street to dance. The goal of general AI is to get away from defining a specific problem and a specific solution, such as what I demonstrated here. The goal is a single algorithm that can solve many problems.

I'm thinking that there will be general... "policies," if you will, built into the machine that allow it to solve problems on its own. These would of course come from the designer, but the machine would have the power to override a policy if it deemed it necessary, or to adapt a solution method from one intelligence type to another. I'm approaching this from the perspective that the designer built as much free will into the machine as possible, though some futurists disagree with me and think that Asimov's rules should be built into machines. (It is a minor aside, but it's pertinent to any talk of AI.) The policies don't dictate responses, only tendencies. And if the machine has the same power to override tendencies as we do... at what point can we say that the designer is necessary after pressing "on"?

> However, perhaps we're getting way ahead of ourselves here. You specified the "general algorithm" as being the identification of a problem and its possible responses, and then "execution" [of the decision]. Robots, so far as I know, are currently created in order to perform specific tasks, or to solve specific problems. I have no trouble visualizing a machine solving problems and making decisions in accordance with the given data or with past experience. But a "sentient and independent entity" (i.e. one with self-awareness, willpower, imagination, fully developed emotions, etc.) goes a great deal further. The first robot to show emotion and develop bonds ... albeit at the level of a one-year-old child ... is clearly a big leap in this direction, but regardless of my interpretation of the "filter" (which of course you may disagree with), do you think technology really can go all the way?

I'm just ignorant enough about the AI field not to be able to say "yes" with certainty, but like Kurzweil, the gentleman who wrote the article I linked here certainly believes it is possible. Though if pressed, at present I don't see why we couldn't do it. Hard to commit. One thing that could prove the entire endeavor futile would be a valid proof that such a general algorithm is impossible to create.
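To make the "policies dictate tendencies, not responses" point concrete, here is a purely hypothetical toy sketch. The `Agent` class, its weights, and the action names are my own invention for illustration, not any real AI architecture: the designer supplies the initial tendencies, but the machine's own experience re-weights them, so its behaviour can end up overriding anything the designer pre-set.

```python
import random

class Agent:
    """Toy sketch: designer-supplied tendencies bias decisions but do not
    dictate them, and experience can override them entirely."""

    def __init__(self, tendencies):
        # Designer's "policies": action -> initial weight (a tendency).
        self.tendencies = dict(tendencies)

    def decide(self):
        # Weighted choice: tendencies bias, but never dictate, the outcome.
        actions = list(self.tendencies)
        weights = [self.tendencies[a] for a in actions]
        return random.choices(actions, weights=weights)[0]

    def learn(self, action, reward):
        # Experience reshapes the designer's weights; repeated bad outcomes
        # can drive a built-in tendency all the way to zero.
        self.tendencies[action] = max(0.0, self.tendencies[action] + reward)

# The designer biases the machine heavily toward dancing...
agent = Agent({"dance": 5.0, "flee": 1.0})

# ...but lived experience says otherwise.
for _ in range(100):
    agent.learn("dance", -0.1)  # dancing keeps going badly
    agent.learn("flee", +0.1)   # fleeing keeps working

print(agent.decide())  # prints "flee": experience has overridden the built-in bias
```

After the loop, the designer's "dance" tendency has been driven to zero and "flee" dominates, so at that point nothing about the agent's behaviour was directly chosen by the designer after pressing "on."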

--
"Why is it, Master, that ascetics fight with ascetics?"

"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics."
