First Robot able to Show Emotion & develop bonds (Humans)

by dhw, Thursday, September 02, 2010, 20:07 @ xeno6696

MATT: Okay, to see if I'm reading this correctly: Because tendencies (even with a degree or ten of freedom) had to be put down by a designer, the robot is forever... linked is the only word I can think of--to the notions and whims of its designer? I don't really know what to do with this; it seems like the argument boils down to "Bob was designed."

That is exactly my argument, except that I'm not happy with the word "tendencies", which seems too weak to me. Inborn characteristics are far more binding and restrictive. We don't even know the extent of our own free will (see the discussion on the Intelligence thread), but every bit of the robot has to be deliberately designed from scratch, including and especially its programme. But ... and it's a huge "but" ... my moral argument depends on how feasible it is to build a programme that gives the robot complete autonomy. I think that's what lies behind your statement that "if the intention of the program was for the robot to act in potentially unpredictable ways, I simply don't see how the designer would be held accountable". My turn to interpret: I read this as saying that if the programme allows for attitudes, character traits, preferences, modes of thought and behaviour not programmed by the designer, Bob and not the designer is culpable. Agreed. However, while my "but" was huge, your "if" is colossal. That's why I asked if you thought robotic technology could go all the way, and your response was a definite maybe!

On reflection, our discussion may have been at cross purposes (probably my fault). You're saying that if the robot is autonomous, the designer will not be culpable (correct), and I'm saying I don't see how a robot can be autonomous, and if it isn't, the designer will be culpable. It all hinges on the "ifs". As I said earlier, though, my main interest is not moral, but concerns the evidence such a robot would provide that consciousness, emotion, imagination etc. are all the product of materials ... in which case we can dismiss the notion of a soul. You have already done so, but I have not.
 
You go on to ask what exactly I think "the ramifications are that the robot's internal 'filter' or 'inherited personality traits' were built-in. It doesn't seem to change much to me... Maybe a good question would be, what if Bob builds a friend with a different personality? If we're talking generalized AI, this would be easy..." As I've tried to explain above, the ramifications are both moral (culpability) and ... for want of a better word ... spiritual, since a completely manufactured, totally independent, self-willed identity would preclude the soul. If Bob built a completely different robot, which had its own independent set of characteristics, I'd say that was the same as our designer building an independent Bob. But in both cases it's a bit like arguing that if we can prove there are other universes, then there are other universes.
 
Thank you very much for putting me onto the robot article. I did indeed find it a good read, and also thought it very pertinent to our discussion. Initially, I gasped at the achievements and the immediate plans, because these already sounded way beyond what I'd expected. But then came the anti-climax: "While Xpero advances machine learning, it is still far short of the capabilities of a baby," says Kahl. "Of course, the robot can now learn the concept of movability. But it does not understand in the human sense what movability means." It's early days but, like you, at this stage I find it "hard to commit" to the belief that we can ever create an autonomous, sentient machine with a human mind.
