First Robot able to Show Emotion & develop bonds (Humans)

by xeno6696 @, Sonoran Desert, Tuesday, August 10, 2010, 17:31 (5027 days ago) @ dhw

> Matt has given us a link to an article in yesterday's Guardian, reporting the unveiling of the emotional robot Nao. (I'd drafted a post about this before I saw the link, which actually goes into more detail than the newspaper article.)
> 
> If robots can learn from the environment, can form relationships, and can be individualized in their responses, obviously more advanced programmes will enable them to expand their skills. (Nao has the emotional level of a one-year-old child.) I'd be very interested to know, Matt, what light you think this sheds on the nature of consciousness and identity, and also whether you theoretically envisage any limits to the range of mental activity robots might eventually cover. If so, why, and what are they? Sorry to put you in the hot seat, but you are our "resident" expert on the subject!

I. Consciousness and Identity
Star Trek and Star Wars have always invited this question. Some people would say that machines ultimately rest on a man-made consciousness and are therefore, at base, only carrying out instructions. I think one of the abilities truly unique to consciousness is the "infectious" nature of ideas... if innovation on those ideas were then demonstrated, coupled with a distinct sense of self, that would build a strong case that such machines should be treated as fully sentient, on a par with humans.

What does that say about us? It would make ME think that our consciousness truly is more a collection of our experiences; if machines can do the same thing (even at a rudimentary level), then it would suggest that the mechanism for consciousness lies not in the mechanics of the brain (neurons, synapses, etc.) but in their collective ability to process information. (The whole is greater than the sum of its parts.) As for identity... I think it would relegate identity to a relative idea: you are only "self" when compared with things that are "not you." Experience then molds this simple concept over time into a distinct entity; not the machinery itself but an emergent property of the whole; you cannot break it down or separate it.

II. Limits on machines' ability to process "humanly"

This will be mere speculation on my part...

It depends heavily on how these early robots actually process emotion. A good argument could be that they learn little differently from animals, responding to stimuli rather than, say, "reading, as if from a book." But a counter-argument could be that being able to "read" a face is an even more important, and more "human," kind of abstraction. A year ago I probably would have said that machine intelligence would be limited to computational chores, but the explosion in robotics over the past year has produced so much that I now question whether emotion is a purely human thing.

Human intelligence is a combination of computational and emotional intelligence. We have competing drives, something that hasn't yet been attempted in the simplified world of machine intelligence (a rough sketch of what I mean follows at the end of this post). Nietzsche hypothesized that our consciousness is exactly the "entity" that sits on the very edge where the competing drives meet and battle. So there's that possibility.

Does this answer your question sufficiently?
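P.S. Since I brought up competing drives, here is a toy sketch in Python of what I have in mind. It is entirely my own invention for illustration; the drive names, the update rule, and the Agent class are made up and have nothing to do with how Nao is actually programmed.

import random

# Toy model (my own invention, not Nao's architecture): an agent with two
# competing drives, "curiosity" and "comfort". Behaviour at each step is
# whatever drive currently dominates, and the running record of those
# resolutions stands in for a crude "experience".

class Drive:
    def __init__(self, name, level=0.5):
        self.name = name
        self.level = level  # urgency, kept between 0 and 1

    def update(self, nudge):
        # the environment nudges the drive up or down, clamped to [0, 1]
        self.level = min(1.0, max(0.0, self.level + nudge))

class Agent:
    def __init__(self):
        self.drives = [Drive("curiosity"), Drive("comfort")]
        self.history = []  # accumulated resolutions = the agent's "experience"

    def step(self, stimuli):
        # stimuli: dict mapping drive name -> nudge from the environment
        for d in self.drives:
            d.update(stimuli.get(d.name, 0.0))
        # the interesting moment: where the competing drives get resolved
        winner = max(self.drives, key=lambda d: d.level)
        self.history.append(winner.name)
        return winner.name

agent = Agent()
for _ in range(5):
    print(agent.step({"curiosity": random.uniform(-0.2, 0.3),
                      "comfort": random.uniform(-0.3, 0.2)}))

The point is only that the "decision" lives in the arbitration between the drives, not in either drive by itself: the Nietzschean picture in miniature.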

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

