First Robot able to Show Emotion & develop bonds (Humans)

by xeno6696 @, Sonoran Desert, Thursday, August 26, 2010, 12:08 (5202 days ago) @ dhw

dhw,

I could have sworn I responded to this days back... obviously not.

> Yes indeed, but that is the whole point of our discussion. I'm questioning whether such a machine can possibly be independent of its designer. But please don't misunderstand me. This is one of the many subjects I know nothing about, so I'm picking your brains to find out just what is and what isn't feasible. That means questioning whatever seems unclear to me, so I hope you'll remain patient. (I should add, though, that my main interest is not in culpability but in the implications of robotics for the concept of the soul. However, your explanations shed light on both subjects.)
> 
> MATT: All the things that make up an individual's personality are built from experience, and as a set of consequences from the actions they take to deal with those experiences.
> 
> ALL the things? I can do no more than repeat what I wrote earlier: "a child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment [for which you can substitute experiences here] will largely depend on these inborn elements [...] Your robot is born with nothing except the programme its designer has given it." 
We have what I would call an inborn "filter." But generally this filter doesn't change--only our responses to it do. I still get knee-jerk reactions when I hear fire-and-brimstone street preachers. Many patterns I have as an adult I had when I was a kid... just with different items.

The program IS exactly that inborn characteristic (filter) you're referring to. The typical programming paradigm has the designer anticipating as many ways as possible that circumstances could break his code; that's what you seem to be referring to. The AI programmer solves that problem by making the machine figure out its own way. A generalized AI starts with no knowledge, only with intuitions. The general algorithm for AI (and humans, for that matter) is this:

1. Identify a problem.
2. Identify possible responses, including ignoring the problem. 
3. Execute.

The AI programmer only needs to come up with good, general algorithms that can take at least three sets of sensory input and perform these three high-level tasks.

> You're quite right when you say that "only by moving through our world and experiencing ... both through what we taught him and he taught himself ... does his knowledge base grow." But our innate capabilities and leanings help to determine how great that knowledge base becomes, and they determine how we use it. Of course experience changes people, but nobody on this earth can tell you the degree to which inborn characteristics and outside circumstances are responsible for the evolution of personality.
No; but my argument is that a person who has experienced nothing isn't too likely to have a very robust personality.

> Your robot has no inborn characteristics. You have said yourself that a true AI is a tabula rasa.

I was probably using the term incorrectly: I thought it just meant "no knowledge," or an "open book."

> Humans are not. You're again quite right when you say we do not punish the parents for the behaviour of the child, but no parent deliberately implants degrees of willpower, intelligence, memory, sensitivity etc. in the child. Its sentience might be called natural, whereas the robot's sentience has been designed. The parent may be culpable for the upbringing (external), but not for the response to the upbringing (internal), and so one child exposed to alcohol may turn into a drunkard, while another may become a teetotaller. If the designer starts hitting his robot with a hammer, will it just howl and let itself be hammered, or will it fight back? What will dictate its response?

I hope some of what I said above answers this, but to answer your last questions: a generalized AI will have to make a decision. It won't know in advance what to do. Its response will be dictated by any and all of the input it has received [EDIT]--its past experiences. The scary part about AI would be its ability to have word-for-word access to all of humanity's collected knowledge and wisdom. But what if it had read Machiavelli and liked it?

[EDITED]
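For the curious, the three-step loop above (identify a problem, identify possible responses including ignoring it, execute) can be sketched in a few lines of Python. Everything here--the Agent class, the fixed response list, the payoff bookkeeping--is my own illustrative assumption, not any real AI system; it only shows how the same "filter" can yield different behaviour once experience accumulates:

```python
import random

class Agent:
    """Toy sketch of the three-step loop: the agent starts with no
    knowledge, only a record of past outcomes (its "experiences")."""

    def __init__(self):
        # experience maps (problem, response) -> list of observed payoffs
        self.experience = {}

    def identify_responses(self, problem):
        # Step 2: the candidate responses always include ignoring the problem.
        # (A fixed list here; a real system would generate these.)
        return ["ignore", "confront", "avoid"]

    def choose(self, problem):
        # Prefer the response whose past payoffs were best; with no
        # experience at all, the choice is arbitrary (an "intuition").
        responses = self.identify_responses(problem)

        def avg_payoff(r):
            history = self.experience.get((problem, r), [])
            return sum(history) / len(history) if history else 0.0

        best = max(avg_payoff(r) for r in responses)
        return random.choice([r for r in responses if avg_payoff(r) == best])

    def act(self, problem, environment):
        # Step 3: execute, then fold the outcome back into experience,
        # so future choices are dictated by the input received so far.
        response = self.choose(problem)
        payoff = environment(problem, response)
        self.experience.setdefault((problem, response), []).append(payoff)
        return response
```

Hit this agent with a (simulated) hammer a few times and it learns to fight back--not because the designer told it to, but because its accumulated experience dictates the response.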

--
"Why is it, Master, that ascetics fight with ascetics?"

"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics."

