First Robot able to Show Emotion & develop bonds (Humans)
by xeno6696, Sonoran Desert, Tuesday, August 10, 2010, 14:17 (5218 days ago)
http://www.guardian.co.uk/technology/2010/aug/09/nao-robot-develop-display-emotions
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Tuesday, August 10, 2010, 17:00 (5218 days ago) @ xeno6696
Matt has given us a link to an article in yesterday's Guardian, reporting the unveiling of the emotional robot Nao. (I'd drafted a post about this before I saw the link, which actually goes into more detail than the newspaper article.)

If robots can learn from the environment, can form relationships, and can be individualized in their responses, obviously more advanced programmes will enable them to expand their skills. (Nao has the emotional level of a one-year-old child.) I'd be very interested to know, Matt, what light you think this sheds on the nature of consciousness and identity, and also if theoretically you envisage any limits to the range of mental activity robots might eventually cover. If so, why, and what are they? Sorry to put you in the hot seat, but you are our "resident" expert on the subject!
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Tuesday, August 10, 2010, 17:31 (5218 days ago) @ dhw
> Matt has given us a link to an article in yesterday's Guardian, reporting the unveiling of the emotional robot Nao. (I'd drafted a post about this before I saw the link, which actually goes into more detail than the newspaper article.)
>
> If robots can learn from the environment, can form relationships, and can be individualized in their responses, obviously more advanced programmes will enable them to expand their skills. (Nao has the emotional level of a one-year-old child.) I'd be very interested to know, Matt, what light you think this sheds on the nature of consciousness and identity, and also if theoretically you envisage any limits to the range of mental activity robots might eventually cover. If so, why, and what are they? Sorry to put you in the hot seat, but you are our "resident" expert on the subject!

I. Consciousness and Identity

Star Trek and Star Wars invariably allow this question to be asked. Some people would say that machines ultimately rest on a man-made consciousness and therefore are at base only carrying out instructions. I think that one of the true abilities unique to consciousness would be the "infectious" nature of ideas... if innovation on those ideas is then demonstrated, coupled with a distinct sense of self, this would provide a tremendous amount of evidence for the case that these machines should be treated as fully sentient beings.

What does that say about us? It would make ME think that our consciousness truly is more a collection of our experiences; if machines can do the same thing (even on a rudimentary level) then it would suggest that the mechanism for consciousness must lie not in the mechanics of the brain (neurons, synapses, etc.) but in their collective ability to process information. (The whole is greater than the sum of its parts.) As for identity... I think it would perhaps relegate identity to a relative idea; you are only "self" when compared to things that are "not you." Experience then molds this simple concept over time into a distinct entity; not the machinery itself but an emergent property of the whole; you cannot break it down or separate it.

II. Limits on the ability of machines to process "humanly"

This will be mere speculation on my part...

It depends heavily on how these early robots successfully process emotion. A good argument could be that they learn little differently than animals--responding to stimuli instead of, say, "reading, as if from a book." But a counter-argument could be that being able to "read" a face is an even more important and "human" type of abstraction. A year ago I probably would have said that machine intelligence would be limited to computational-type chores. But the explosion in robotics over the past year is culminating in many things that make me question whether emotions are purely a human thing.

Human intelligence is a combination of computational and emotional intelligence. We have competing drives, something that hasn't yet been attempted in the simplified world of machine intelligence. Nietzsche hypothesized that our consciousness was exactly the "entity" that sat on the very edge where the competing drives meet and battle. So there's that possibility.

Does this answer your question sufficiently?
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Wednesday, August 11, 2010, 14:07 (5217 days ago) @ xeno6696
A robot can learn from the environment, register emotions, form relationships, and be given an individualized set of responses. I asked Matt what light this shed on consciousness and identity, and whether he thought there was any limit to the range of mental activities such robots might eventually achieve.

Many thanks, Matt, for your reply. You think that the mechanism for consciousness lies "not in the mechanics of the brain (neurons, synapses, etc.) but in their collective ability to process information. (The whole is greater than the sum of its parts.)" This is probably the nub of the matter, and links up with both remaining elements of my question. If (a huge "if") a robot could produce all the mental activities ... emotional, intellectual, imaginative, behavioural ... of a human, that would in my view prove that there is no such thing as a "soul", the case for which depends on the neurons, synapses etc. being the receivers and not the producers of consciousness.

If consciousness is the product of our materials, then presumably so too is identity, as what you call "an emergent property of the whole". By "identity", though, I don't just mean what makes you you and me me, but the mechanism that governs the way each of us uses the neurons ... the individual self that both controls and is controlled by the body. You sort of answered my question about the limits when you envisaged a possible scenario in which machines might emulate humans by innovating ideas and developing a sense of self. Even if in the "simplified world of machine intelligence", the "combination of computational and emotional intelligence" has not yet been attempted, it seems to me that Nao is very much a step in that direction. The logical progression would indeed be for machines eventually to become fully sentient, and that would prove that identity is not only dictated by materials, but ceases to exist when they cease to function. (The alternative would be to believe that machines have souls, which I for one would find hard to swallow!)

Of course, this hypothetical scenario would not settle the chance v. design debate, since the robots have been designed, but it would have an enormous impact on the God issue. Without a "soul", there can be no afterlife, and we would be in the same situation as our robots: functioning while the power is on, and thrown on the scrapheap when our various parts are no longer repairable. The existence of a god in a psychic dimension beyond the material world would then become virtually irrelevant to us, except for those who believe that such a being is actively interested in our earthly lives.

With regard to machines being treated as sentient beings, the ethical ramifications are vast. Robot rights are inseparable from robot responsibilities, but is it possible to separate the programme from the programmer? (Current theologians may ponder the same question, and in any case we have never really established the parameters of human responsibility, given the impact of heredity and environment on our identity.) As I said, it's all a huge "if", and perhaps it will remain indefinitely in the realm of science fiction. I'm just trying to clarify the implications, but who knows ... the science and technology of robotics may yet provide the biggest philosophical revolution of them all.
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Thursday, August 12, 2010, 03:37 (5217 days ago) @ dhw
> Many thanks, Matt, for your reply. You think that the mechanism for consciousness lies "not in the mechanics of the brain (neurons, synapses, etc.) but in their collective ability to process information. (The whole is greater than the sum of its parts.)" This is probably the nub of the matter, and links up with both remaining elements of my question. If (a huge "if") a robot could produce all the mental activities ... emotional, intellectual, imaginative, behavioural ... of a human, that would in my view prove that there is no such thing as a "soul", the case for which depends on the neurons, synapses etc. being the receivers and not the producers of consciousness.

I guess I might disagree; to me Phineas Gage pretty much destroyed any hope of a soul in my book. One could perhaps argue that the damage to the brain "only disallowed the man's soul from interfacing properly with the body," but to me that seems no different than the discussion of "body thetans." (Google, if you don't know...)

> If consciousness is the product of our materials, then presumably so too is identity, as what you call "an emergent property of the whole". By "identity", though, I don't just mean what makes you you and me me, but the mechanism that governs the way each of us uses the neurons ... the individual self that both controls and is controlled by the body. You sort of answered my question about the limits when you envisaged a possible scenario in which machines might emulate humans by innovating ideas and developing a sense of self. Even if in the "simplified world of machine intelligence", the "combination of computational and emotional intelligence" has not yet been attempted, it seems to me that Nao is very much a step in that direction. The logical progression would indeed be for machines eventually to become fully sentient, and that would prove that identity is not only dictated by materials, but ceases to exist when they cease to function. (The alternative would be to believe that machines have souls, which I for one would find hard to swallow!)

I guess on part of this I should clarify: the emotional machine discussed in the Guardian article is essentially built to learn emotions; they didn't bother to teach it other things that one-year-olds might learn, such as language skills, nor does it have the innate capacity for intuitive physics. It's a one-trick pony. A truer test will be to integrate this piece with, say, the piece MIT physicists made last year that was able to deduce mathematical laws of physics simply by observing phenomena. (Newton's basic laws of motion.) The human mind seems to be an inference machine; it's what it does best, and it can do it with anything (with varying degrees of accuracy).

> Of course, this hypothetical scenario would not settle the chance v. design debate, since the robots have been designed, but it would have an enormous impact on the God issue. Without a "soul", there can be no afterlife, and we would be in the same situation as our robots: functioning while the power is on, and thrown on the scrapheap when our various parts are no longer repairable. The existence of a god in a psychic dimension beyond the material world would then become virtually irrelevant to us, except for those who believe that such a being is actively interested in our earthly lives.

No... design advocates would simply take the invention as proof that something as complex as human intelligence could only arise by intervention on behalf of an intelligent entity. Atheists would take it as proof that...

> With regard to machines being treated as sentient beings, the ethical ramifications are vast. Robot rights are inseparable from robot responsibilities,

Are they really? What's the responsibility of a human--or a dog? If robots become somehow sentient, to me rights would trump even their designed purpose.

> but is it possible to separate the programme from the programmer? (Current theologians may ponder the same question, and in any case we have never really established the parameters of human responsibility, given the impact of heredity and environment on our identity.) As I said, it's all a huge "if", and perhaps it will remain indefinitely in the realm of science fiction. I'm just trying to clarify the implications, but who knows ... the science and technology of robotics may yet provide the biggest philosophical revolution of them all.

It's what Ray Kurzweil spends his life studying.
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Thursday, August 12, 2010, 14:59 (5216 days ago) @ xeno6696
I wrote that if a robot could produce all the mental activities of a human, it would in my view prove that there is no such thing as a "soul", the case for which depends on the neurons, synapses etc. being the receivers and not the producers of consciousness.

MATT: I guess I might disagree; to me Phineas Gage pretty much destroyed any hope of a soul in my book.

I think the only disagreement here is that you have already decided there is no such thing as a soul, and I haven't. That doesn't affect the argument that a fully sentient robot would provide proof that the material brain is the producer, not the receiver, of consciousness (and by extension identity), in which case there is no soul.

You have pointed out that Nao is a "one-trick pony". Yes, it has the emotional capacity of a one-year-old. My post was based on the possibility of further developments.

I wrote that a fully sentient robot would not settle the chance v. design debate, since the robots have been designed, but it would mean there was no soul and hence no afterlife, so God would become largely irrelevant. You responded: "No...design advocates would simply take the invention as proof that something as complex as human intelligence could only arise by intervention on behalf of an intelligent entity." That is precisely the point of my saying it would NOT settle the chance v. design debate.

I wrote that robot rights would be inseparable from robot responsibilities, but asked if one could separate the programme from the programmer. You question this, and ask what is the responsibility of a human ... or a dog. Perhaps my argument was not clear. If a sentient robot ran amok and killed a dozen people, presumably it would have the same rights as a human to a fair trial, but to what extent would we blame the robot, and to what extent the person who designed its programme? (In the case of a dog, we would hold the owner responsible.) As I said, the ethical ramifications are vast, and also extend to the sphere of our own responsibility for our actions ... see my parenthesis in yesterday's post on heredity and environment.

This is a complex and exciting subject, and I appreciate your keeping us updated both with the new developments and with your own interpretation of them.
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Saturday, August 14, 2010, 04:41 (5214 days ago) @ dhw
dhw,

First part duly noted, and I apologize for... not reading carefully, I guess. >_<

> I wrote that robot rights would be inseparable from robot responsibilities, but asked if one could separate the programme from the programmer. You question this, and ask what is the responsibility of a human ... or a dog. Perhaps my argument was not clear. If a sentient robot ran amok and killed a dozen people, presumably it would have the same rights as a human to a fair trial, but to what extent would we blame the robot, and to what extent the person who designed its programme? (In the case of a dog, we would hold the owner responsible.) As I said, the ethical ramifications are vast, and also extend to the sphere of our own responsibility for our actions ... see my parenthesis in yesterday's post on heredity and environment.

There's a book that Adler cites in "The Difference of Man and the Difference it Makes" that talks about just such a court trial. I... would appeal to David for the name of the book, as I no longer have my copy. (Library.)

I would say that, for argument's sake... once a machine becomes sentient, the original designer loses any culpability. You can raise your child to be a mean, nasty, thieving S.O.B., but in our legal system, we do not hold the parents culpable for bad parenting. (Only physical neglect, sometimes mental.)

So... I would think that the legal precedent of designer-machine would be parent-child.

http://en.wikipedia.org/wiki/Ray_Kurzweil

I first learned about him by buying one of his synthesizers... the technology in the flagship line dates from 1996 and is still considered as good as you can get.

Especially for his concept of the Singularity... he's probably the most interesting techno-sopher of our time.

> This is a complex and exciting subject, and I appreciate your keeping us updated both with the new developments and with your own interpretation of them.
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Saturday, August 14, 2010, 12:42 (5214 days ago) @ xeno6696
MATT: First part duly noted. I apologize for...not reading carefully, I guess.

Thank you. This happens to all of us, and may also be the reason for George's occasional misreading of my posts. I always print out the text I'm responding to, as I find it far easier to check.

You say that "once a machine becomes sentient, the original designer loses any culpability. [...] I would think that the legal precedent of designer-machine would be parent-child."

That sounds like a fair analogy, and my thoughts are probably based on ignorance of how it all works, but you will set me right if that is so. Doesn't a robot have to be programmed? To what extent would its sentience be controlled by a will of its own, and to what extent by the preparatory work done by the designer? Even if it appears to have free will, how would we know if the designer had not deliberately built in, say, a propensity for charity or conversely a killer instinct? In relation to responsibility (and ignoring the "designer" element for obvious reasons), one can ask similar questions about human genes and, as you say, the influence of upbringing, but so long as robots are deliberately designed and manufactured (in contrast to human reproduction), perhaps we can say that these questions take on an even sharper profile.

Thank you for the three different website references (maybe we could keep future links on this thread, as they're all interconnected). I found the Kurzweil one particularly fascinating, as it gives a pretty clear answer to my earlier question of just how far robot technology might be developed. An amazing man! I'd be very interested to know what David, George and, of course, any other contributors think of his "technosophy" and its implications. In the context of science and religion, as I pointed out in my earlier post, the impact on the concept of "soul" would be massive ... even if at the moment the fully sentient robot remains a product of science fiction.
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Monday, August 16, 2010, 01:08 (5213 days ago) @ dhw
> That sounds like a fair analogy, and my thoughts are probably based on ignorance of how it all works, but you will set me right if that is so. Doesn't a robot have to be programmed? To what extent would its sentience be controlled by a will of its own, and to what extent by the preparatory work done by the designer? Even if it appears to have free will, how would we know if the designer had not deliberately built in, say, a propensity for charity or conversely a killer instinct? In relation to responsibility (and ignoring the "designer" element for obvious reasons), one can ask similar questions about human genes and, as you say, the influence of upbringing, but so long as robots are deliberately designed and manufactured (in contrast to human reproduction), perhaps we can say that these questions take on an even sharper profile.

Again, this confusion comes down to the completely different paradigms that exist for machine programming. Most people who work with computers (myself included) program a machine to do a very limited and confined set of tasks. The machine will do nothing we don't tell it to. Or rather, it cannot display any behavior that we as humans haven't planned for. However, AI programming starts from a completely different viewpoint. David grew up in the world of computers as punch-cards; it's very difficult to reason how such a machine could become sentient.

AI programming shares one thing in common with imperative programming: the machine will only do what it's told to do. However, what an AI is told to do is learn, and make decisions based on what it has learned. A TRUE AI is tabula rasa. It is connected to some kind of sensory equipment and its programming is purely to make sense of something(s) in its environment. The designer of the machine would have to train the AI from the ground up--just as if it was a child.

If a sentient robot killed someone, we would have the benefit of being able to access its program to see if it had been tampered with. This itself would be a whole new set of crimes--and legal systems would be in a shock. If you infected a sentient machine with an imperative virus to kill someone, how could we hold the machine accountable at all? Questions like this leave many people claiming that this is why AI can never be human--because we can't do something similar. There would likely have to be something like Asimov's imperatives of robotics built in, but then we would have sentience with limited free will. Would THAT be right, from a philosophical or humanistic standpoint?
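To make the contrast between the two paradigms concrete, here is a minimal Python sketch; the names and the crude learning rule are invented purely for illustration, not drawn from any real AI system:

```python
# Imperative paradigm: every behavior is enumerated by the designer.
def imperative_robot(stimulus):
    responses = {
        "greeting": "wave",
        "obstacle": "stop",
    }
    # Anything the designer didn't anticipate falls through to a default.
    return responses.get(stimulus, "do nothing")


# Learning paradigm: the designer writes HOW to learn, not WHAT to answer.
class LearningRobot:
    def __init__(self):
        self.experience = {}  # starts empty: the "tabula rasa"

    def observe(self, stimulus, outcome):
        # Associations accumulate from experience, not from the designer.
        self.experience.setdefault(stimulus, []).append(outcome)

    def respond(self, stimulus):
        past = self.experience.get(stimulus)
        if not past:
            return "explore"  # no precedent yet: try something new
        # Repeat whichever past outcome occurred most often.
        return max(set(past), key=past.count)
```

The point of the second sketch is that the designer wrote the learning rule, not the associations: everything in the experience store arrives only after the machine is switched on, so two copies trained on different histories will respond differently to the same stimulus.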
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Wednesday, August 18, 2010, 10:43 (5210 days ago) @ xeno6696
I am trying to find out just how "human" a robot can become, and how responsible it might be for its actions. I asked Matt to what extent its sentience would be controlled by a will of its own, and to what extent by the preparatory work of the designer.

MATT: AI programming shares one thing in common with imperative programming; the machine will only do what it's told to do. However, what an AI is told to do is learn, and make decisions based on what it has learned. A TRUE AI is tabula rasa. It is connected to some kind of sensory equipment and its programming is purely to make sense of something(s) in its environment. The designer of the machine would have to train the AI from the ground up--just as if it was a child.

If a true AI is a tabula rasa, I don't see how it can possibly be independent of its designer in the way humans are of their parents. A child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment will largely depend on these inborn elements, even if its decisions and the "sense" it makes of its environment may be influenced by training. Your robot is born with nothing except the programme its designer has given it. If the designer endows it with its own temperament, degree of intelligence, selective memory, it may appear to behave like a human, but its will (i.e. the degree of control it has over its own actions) and character will still be the product of the programme. In my book, that makes the designer 100% responsible. You wrote, however, that "this confusion comes down to the completely different paradigms that exist for machine programming" in which the machine "cannot display any behavior that we as humans haven't planned for." I can't see the difference, so what have I overlooked?
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Sunday, August 22, 2010, 22:31 (5206 days ago) @ dhw
> If a true AI is a tabula rasa, I don't see how it can possibly be independent of its designer in the way humans are of their parents. A child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment will largely depend on these inborn elements, even if its decisions and the "sense" it makes of its environment may be influenced by training. Your robot is born with nothing except the programme its designer has given it. If the designer endows it with its own temperament, degree of intelligence, selective memory, it may appear to behave like a human, but its will (i.e. the degree of control it has over its own actions) and character will still be the product of the programme. In my book, that makes the designer 100% responsible. You wrote, however, that "this confusion comes down to the completely different paradigms that exist for machine programming" in which the machine "cannot display any behavior that we as humans haven't planned for." I can't see the difference, so what have I overlooked?

AI works by setting high-level policies and letting the machine do the rest. Current AI is as independent as it can get for the job(s) it is asked to do.

Okay, let's say we build Bob. Bob is the first-ever General AI; he can process emotions, he can feel pain. Bob started out in our lab knowing nothing; he was only pure potential. Only by moving through our world and experiencing--both through what we taught him and what he taught himself--does his knowledge base grow. He learned language similar to how a child learns--by making associations and inferences.

It has been ten years.

Now my point is that AI programming works only to give the machine tools to do its job; as a designer, I'm only culpable up until the moment I have decided on a final learning program. But what the machine chooses to learn on his own isn't anything I can be held accountable for, just as you aren't held accountable for anything your children choose to learn. If your kids learn how to hotwire cars, they won't send you to prison! What kind of "learning" boundaries can we set for ourselves or our children? What can we enforce? In some instances I think I've seen anecdotal evidence of parents getting into trouble when, say, they expose their kids to bad habits they have, such as drugs, alcohol, etc. But we still don't punish the parents if the kids act out.

All the things that make up an individual's personality are built from experience, and as a set of consequences from the actions they take to deal with those experiences.

As to what extent the AI's personality is designed: to me, personalities would be like "general policies" that the machine follows in the world. You could make a case for designer culpability in circumstances where it could be demonstrated that you gave the machine policies to be followed that made it a Charles Manson or some serial killer.

Furthermore, if the goal of the machine was to create a sentient and independent entity, then by definition the designer loses culpability if the machine had fulfilled this requirement.
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Monday, August 23, 2010, 17:59 (5205 days ago) @ xeno6696
MATT: Furthermore, if the goal of the machine was to create a sentient and independent entity ... then by definition the designer loses culpability if the machine had fulfilled this requirement.

Yes indeed, but that is the whole point of our discussion. I'm questioning whether such a machine can possibly be independent of its designer. But please don't misunderstand me. This is one of the many subjects I know nothing about, so I'm picking your brains to find out just what is and what isn't feasible. That means questioning whatever seems unclear to me, so I hope you'll remain patient. (I should add, though, that my main interest is not in culpability but in the implications of robotics for the concept of the soul. However, your explanations shed light on both subjects.)

MATT: All the things that make up an individual's personality are built from experience, and as a set of consequences from the actions they take to deal with those experiences.

ALL the things? I can do no more than repeat what I wrote earlier: "a child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment [for which you can substitute experiences here] will largely depend on these inborn elements [...] Your robot is born with nothing except the programme its designer has given it."

You're quite right when you say that "only by moving through our world and experiencing ... both through what we taught him and what he taught himself ... does his knowledge base grow." But our innate capabilities and leanings help to determine how great that knowledge base becomes, and they determine how we use it. Of course experience changes people, but nobody on this earth can tell you the degree to which inborn characteristics and outside circumstances are responsible for the evolution of personality.

Your robot has no inborn characteristics. You have said yourself that a true AI is a tabula rasa. Humans are not. You're again quite right when you say we do not punish the parents for the behaviour of the child, but no parent deliberately implants degrees of willpower, intelligence, memory, sensitivity etc. in the child. Its sentience might be called natural, whereas the robot's sentience has been designed. The parent may be culpable for the upbringing (external), but not for the response to the upbringing (internal), and so one child exposed to alcohol may turn into a drunkard, while another may become a teetotaller. If the designer starts hitting his robot with a hammer, will it just howl and let itself be hammered, or will it fight back? What will dictate its response?
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Thursday, August 26, 2010, 12:08 (5202 days ago) @ dhw
dhw,

I could have sworn I responded to this days back... obviously not.

> ALL the things? I can do no more than repeat what I wrote earlier: "a child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment [for which you can substitute experiences here] will largely depend on these inborn elements [...] Your robot is born with nothing except the programme its designer has given it."

We have what I would call an inborn "filter." But generally this filter doesn't change--only our responses to it. I still get knee-jerks when I hear fire & brimstone street preachers. Many patterns I have as an adult I had when I was a kid... just with different items.

The program IS exactly that inborn characteristic (filter) that you're referring to. The typical programming paradigm has the designer thinking of as many ways that circumstances could break his code; that's what you seem to be referring to. The AI programmer solves that problem by making the machine figure out its own way. A generalized AI starts with no knowledge; only with intuitions. The general algorithm for AI (and humans, for that matter) is this:

1. Identify a problem.
2. Identify possible responses, including ignoring the problem.
3. Execute.

The AI programmer only needs to come up with good, general algorithms that can take at least three sets of sensory input and perform these three high-level tasks.

> You're quite right when you say that "only by moving through our world and experiencing ... both through what we taught him and what he taught himself ... does his knowledge base grow." But our innate capabilities and leanings help to determine how great that knowledge base becomes, and they determine how we use it. Of course experience changes people, but nobody on this earth can tell you the degree to which inborn characteristics and outside circumstances are responsible for the evolution of personality.

No; but my argument is that a person who's experienced nothing isn't too likely to have a very robust personality.

> Your robot has no inborn characteristics. You have said yourself that a true AI is a tabula rasa.

I was probably using the term incorrectly: I thought it just meant "no knowledge," or the "open book."

> Humans are not. You're again quite right when you say we do not punish the parents for the behaviour of the child, but no parent deliberately implants degrees of willpower, intelligence, memory, sensitivity etc. in the child. Its sentience might be called natural, whereas the robot's sentience has been designed. The parent may be culpable for the upbringing (external), but not for the response to the upbringing (internal), and so one child exposed to alcohol may turn into a drunkard, while another may become a teetotaller. If the designer starts hitting his robot with a hammer, will it just howl and let itself be hammered, or will it fight back? What will dictate its response?

I hope some of what I said above answers this, but to answer your last questions: a generalized AI will have to make a decision. It won't know in advance what to do. Its response will be dictated by any and all the input it has received--its past experiences. The scary part about AI would be its ability to have word-for-word access to all of humanity's collected knowledge and wisdom. But what if it had read Machiavelli and liked it?
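As a purely illustrative footnote, the three-step loop above is easy to sketch in Python; the sensing, acting, and scoring functions here are invented placeholders, not a real design (assume the acting function returns a numeric success score):

```python
import random

def propose_responses(problem):
    # Placeholder: in a real system, candidate responses would be
    # learned or generated, not hard-coded.
    return ["approach", "avoid", "ask for help"]

def agent_step(sense, act, experience):
    # 1. Identify a problem from sensory input.
    problem = sense()
    if problem is None:
        return

    # 2. Identify possible responses, including ignoring the problem.
    candidates = propose_responses(problem) + ["ignore"]

    # Rank each candidate by how well it worked on this problem before;
    # with no precedent, explore at random.
    def score(response):
        history = experience.get((problem, response), [])
        return sum(history) / len(history) if history else random.random()

    choice = max(candidates, key=score)

    # 3. Execute, then remember the outcome for next time.
    outcome = act(problem, choice)
    experience.setdefault((problem, choice), []).append(outcome)
```

Nothing in the loop names a specific problem or a specific solution; whatever the agent ends up doing flows from what accumulates in its experience.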
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Saturday, August 28, 2010, 08:47 (5200 days ago) @ xeno6696
Matt is looking into the robotic future.

MATT: We have what I would call an inborn "filter". But generally this filter doesn't change ... only our responses to it. [...] The program IS exactly that inborn characteristic (filter) that you're referring to. [...] A generalized AI starts with no knowledge; only with intuitions. [...] ...my argument is that a person who's experienced NOTHING isn't too likely to have a very robust personality.

Please forgive my cherry-picking the quotes, but together they form a pattern with which I largely agree. Only your argument seems to me to confirm the DEPENDENCE of the robot on its designer, so at the risk of repeating arguments, let me try to put the bits and pieces together in my own way.

There are inevitably areas of our own nature (as well as other people's) that we know nothing about. I don't, for instance, know how brave I am. I've never ... fortunately! ... been confronted by a situation that demands extremes of courage. But I know that I'm conscientious, because I worry about even minor problems and can't rest till they're put right. I've always been like that, and I take this to be what you mean when you say you have the same patterns now as when you were a kid. So I would like to modify your statement that a person with no experience at all isn't likely to have a "robust personality". I think the basic foundations of the personality are already there, but neither we nor anyone else can know what they are until they're brought out by experience. Admittedly, some experiences may be so dramatic or traumatic that they can change these foundations, but I think the inborn base is generally pretty determinate. In your words, "generally this filter doesn't change".

These basic foundations are designed by the robot's programmer ... as you say, the programme IS the filter. Only when the "intuitions" have been deliberately put in place can the choices follow accordingly, just as ours do. You ask: "What if it had read Machiavelli and liked it?" Of course it can't like or dislike M until it's read his book, but my question to you would be: WHY would it like (or dislike) M.? Why would it like (or dislike) anything? Where do its predilections come from? An example I gave earlier was of exposure to alcohol (= experience). Within the same family, child X may become an alcoholic, and child Y a teetotaller. For me, one of the prime aims of early education should be to expose the learner to as many different fields as possible, in order to find out what the child has an aptitude for. In other words, experience doesn't create aptitudes but reveals them. The Machiavellian tendencies are not created by reading Machiavelli, but reading M. brings out the innate tendencies. And so although the designer of your robot will not have programmed the experiences, he will have programmed the responses.

However, perhaps we're getting way ahead of ourselves here. You specified the "general algorithm" as being the identification of a problem and its possible responses, and then "execution" [of the decision]. Robots, so far as I know, are currently created in order to perform specific tasks, or to solve specific problems. I have no trouble visualizing a machine solving problems and making decisions in accordance with the given data or with past experience. But a "sentient and independent entity" (i.e. one with self-awareness, willpower, imagination, fully developed emotions etc.) goes a great deal further. The first robot to show emotion and develop bonds ... albeit at the level of a one-year-old child ... is clearly a big leap in this direction, but regardless of my interpretation of the "filter" (which of course you may disagree with), do you think technology really can go all the way?
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Sunday, August 29, 2010, 23:49 (5199 days ago) @ dhw
dhw,

> Please forgive my cherry-picking the quotes, but together they form a pattern with which I largely agree. Only your argument seems to me to confirm the DEPENDENCE of the robot on its designer, so at the risk of repeating arguments, let me try to put the bits and pieces together in my own way.
>
> There are inevitably areas of our own nature (as well as other people's) that we know nothing about. I don't, for instance, know how brave I am. I've never ... fortunately! ... been confronted by a situation that demands extremes of courage. But I know that I'm conscientious, because I worry about even minor problems and can't rest till they're put right. I've always been like that, and I take this to be what you mean when you say you have the same patterns now as when you were a kid. So I would like to modify your statement that a person with no experience at all isn't likely to have a "robust personality". I think the basic foundations of the personality are already there, but neither we nor anyone else can know what they are until they're brought out by experience. Admittedly, some experiences may be so dramatic or traumatic that they can change these foundations, but I think the inborn base is generally pretty determinate. In your words, "generally this filter doesn't change".

Actually, so far it looks like you have a good grasp on my thinking. Eerily, heh.

> ...And so although the designer of your robot will not have programmed the experiences, he will have programmed the responses.

Our disagreement here is probably just due to me having more familiarity with the act of commanding machines; I translate your final sentence as this: "The designer will have to predict a varying number of responses to an equally varying number of stimuli, and predetermine the outcome." To me, you might be thinking that the machine would have to constantly come to the designer for help with some issue. Or you're thinking that if anything like a policy is built in, then it is completely owned by the designer? I don't think so, unless of course our designer makes a "policy" that every time the machine hears "God save the Queen" it runs out into the street to dance. The goal of general AI is to get away from defining a specific problem and a specific solution, such as what I demonstrated here. The goal is a single algorithm that can solve many problems.

I'm thinking that there will be general "policies," if you will, built into the machine that would allow it to solve problems on its own. These would of course come from the designer, but the machine would have the power to override policies if it deemed it necessary, or to adapt a solution method from one intelligence type to another. I'm approaching this from the perspective that the designer built as much free will into the machine as possible, though some futurists disagree with me and think that Asimov's rules should be built into machines. (It is a minor aside, but it's pertinent to any talk of AI.) The policies don't dictate responses, only tendencies. And if the machine has the same power to override tendencies as we do... at what point can we say that the designer is necessary after pressing "on"?

> However, perhaps we're getting way ahead of ourselves here. You specified the "general algorithm" as being the identification of a problem and its possible responses, and then "execution" [of the decision]. Robots, so far as I know, are currently created in order to perform specific tasks, or to solve specific problems. I have no trouble visualizing a machine solving problems and making decisions in accordance with the given data or with past experience. But a "sentient and independent entity" (i.e. one with self-awareness, willpower, imagination, fully developed emotions etc.) goes a great deal further. The first robot to show emotion and develop bonds ... albeit at the level of a one-year-old child ... is clearly a big leap in this direction, but regardless of my interpretation of the "filter" (which of course you may disagree with), do you think technology really can go all the way?

I'm just ignorant enough about the AI field to not be able to say "yes" with certainty, but like Kurzweil, the gentleman who wrote the article I linked here most certainly believes it is possible. Though if pressed, I think that at present I don't see why we couldn't do it. Hard to commit. One thing that could prove the entire endeavor futile would be a valid proof that it is impossible to create such a general algorithm.
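For what it's worth, here is a toy Python sketch of what "policies as overridable tendencies" might look like; the class names and the weighting scheme are invented for illustration only:

```python
# Toy model of designer-seeded "policies" as overridable tendencies.
class Tendency:
    def __init__(self, action, weight):
        self.action = action
        self.weight = weight  # designer-seeded prior strength

class PolicyDrivenAgent:
    def __init__(self, tendencies):
        self.tendencies = tendencies

    def adjust(self, action, delta):
        # Experience reshapes the weights the designer started with.
        for t in self.tendencies:
            if t.action == action:
                t.weight += delta

    def choose(self):
        # The strongest tendency wins, whether inborn or learned;
        # no specific stimulus-response pair is hard-coded.
        return max(self.tendencies, key=lambda t: t.weight).action

# The designer seeds a preference for helping; enough adverse
# experience can override it.
agent = PolicyDrivenAgent([Tendency("help", 1.0), Tendency("withdraw", 0.2)])
agent.adjust("withdraw", 1.5)
print(agent.choose())  # "withdraw": the seeded tendency has been overridden
```

The designer chooses the starting weights, but nothing in the choosing step consults the designer again; whether a seeded tendency survives depends entirely on what the machine experiences.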
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by David Turell, Monday, August 30, 2010, 00:29 (5199 days ago) @ xeno6696
This is very impressive work.
First Robot able to Show Emotion & develop bonds
by dhw, Monday, August 30, 2010, 11:53 (5198 days ago) @ xeno6696
DHW: ...And so although the designer of your robot will not have programmed the experiences, he will have programmed the responses.

MATT: I translate your final sentence as this: "The designer will have to predict a varying number of responses to an equally varying number of stimuli, and predetermine the outcome." To me, you might be thinking that the machine would have to constantly come to the designer for help with some issue.

No, I'm afraid that is a complete mistranslation of my final sentence, and a misunderstanding of my (clumsy) attempts to delve into psychology. My point was that if the robot was to act independently, the programmer would have to build in those inborn characteristics which in us predetermine our responses. For example, I am conscientious, and if something is wrong I need to put it right straight away. These traits will determine my response to a vast range of experiences. So too will my limited range of intelligence and expertise. All inborn. I'm not a very practical person, and so I have taken out an insurance policy. When our lavatory began to leak, I rang the insurance company straight away. They sent a plumber. He put some sticky stuff on the lavatory. It stopped leaking. If I had had a different set of inborn characteristics, I might not have taken out an insurance policy, I myself might have put some sticky stuff on, I might have pretended not to notice (it was only a tiny leak) and hoped no-one else would notice, I might have stuck a cup underneath, I might not have rung the company right away.... When your robot starts leaking lubricants all over your living-room floor, will it mop up the mess, tell you to do it, rush off to the pub, plug the leak itself, ring for a robot plumber...? My point is that the general characteristics determine the responses to the individual experiences. Once these traits are in place, your robot will no doubt act independently, just as I do, but the determining traits will have been put there by the designer.

You go on to say: "The policies don't dictate responses, only tendencies. And if the machine has the same power to override tendencies as we do... at what point can we say that the designer is necessary after pressing "on"?" This all links up very neatly to Romansh's preoccupation with free will. I really don't know what power we have to override our inborn characteristics (= your "tendencies"), but in the context of our discussion, I guess that would be the ultimate test ... can free will be built into the machine? Can technology really go all the way, and enable the robot to override the designer's programme of inborn characteristics? Your answer is a charmingly negative positive: "not [...] able to say "yes" with certainty [...] Though if pressed, I think that at present I don't see why we couldn't do it. Hard to commit." I'm with you all of the part of the way. Or maybe not.
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Wednesday, September 01, 2010, 23:06 (5196 days ago) @ dhw
dhw,

> No, I'm afraid that is a complete mistranslation of my final sentence, and a misunderstanding of my (clumsy) attempts to delve into psychology. My point was that if the robot was to act independently, the programmer would have to build in those inborn characteristics which in us predetermine our responses. [...] My point is that the general characteristics determine the responses to the individual experiences. Once these traits are in place, your robot will no doubt act independently, just as I do, but the determining traits will have been put there by the designer.

Okay, to see if I'm reading this correctly: because tendencies (even with a degree or ten of freedom) had to be put down by a designer, the robot is forever... linked is the only word I can think of--to the notions and whims of its designer? I don't really know what to do with this; it seems like the argument boils down to "Bob was designed." If we go back to the original question I posed in terms of culpability: if Bob murders someone, the only way the designer would be held responsible is if it could be demonstrated that the robot did not commit the crime in self-defense, and that the robot responded in a way unintended. If the intention of the program was for the robot to act in potentially unpredictable ways, I simply don't see how the designer would be held accountable.

I would have to ask what exactly you think the ramifications are of the robot's internal "filter" or "inherited personality traits" being built-in. It doesn't seem to change much to me... Maybe a good question would be: what if Bob builds a friend with a different personality? If we're talking generalized AI, this would be easy...

This has no immediate connection to our discussion, but you'll find it a good read: http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=91421

> You go on to say: "The policies don't dictate responses, only tendencies. And if the machine has the same power to override tendencies as we do... at what point can we say that the designer is necessary after pressing "on"?" This all links up very neatly to Romansh's preoccupation with free will. I really don't know what power we have to override our inborn characteristics (= your "tendencies"), but in the context of our discussion, I guess that would be the ultimate test ... can free will be built into the machine? Can technology really go all the way, and enable the robot to override the designer's programme of inborn characteristics? Your answer is a charmingly negative positive: "not [...] able to say "yes" with certainty [...] Though if pressed, I think that at present I don't see why we couldn't do it. Hard to commit." I'm with you all of the part of the way. Or maybe not.
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Thursday, September 02, 2010, 20:07 (5195 days ago) @ xeno6696
MATT: Okay, to see if I'm reading this correctly: because tendencies (even with a degree or ten of freedom) had to be put down by a designer, the robot is forever... linked is the only word I can think of--to the notions and whims of its designer? I don't really know what to do with this; it seems like the argument boils down to "Bob was designed."

That is exactly my argument, except that I'm not happy with the word "tendencies", which seems too weak to me. Inborn characteristics are far more binding and restrictive. We don't even know the extent of our own free will (see the discussion on the Intelligence thread), but every bit of the robot has to be deliberately designed from scratch, including and especially its programme. But ... and it's a huge "but" ... my moral argument depends on how feasible it is to build a programme that gives the robot complete autonomy. I think that's what lies behind your statement that "if the intention of the program was for the robot to act in potentially unpredictable ways, I simply don't see how the designer would be held accountable". My turn to interpret: I read this as saying that if the programme allows for attitudes, character traits, preferences, modes of thought and behaviour not programmed by the designer, Bob and not the designer is culpable. Agreed. However, while my "but" was huge, your "if" is colossal. That's why I asked if you thought robotic technology could go all the way, and your response was a definite maybe!

On reflection, our discussion may have been at cross purposes (probably my fault). You're saying that if the robot is autonomous, the designer will not be culpable (correct), and I'm saying I don't see how a robot can be autonomous, and if it isn't, the designer will be culpable. It all hinges on the "ifs". As I said earlier, though, my main interest is not moral, but concerns the evidence such a robot would provide that consciousness, emotion, imagination etc. are all the product of materials ... in which case we can dismiss the notion of a soul. You have already done so, but I have not.

You go on to ask what exactly I think the ramifications are of the robot's internal "filter" or "inherited personality traits" being built-in: "It doesn't seem to change much to me... Maybe a good question would be: what if Bob builds a friend with a different personality? If we're talking generalized AI, this would be easy..." As I've tried to explain above, the ramifications are both moral (culpability) and ... for want of a better word ... spiritual, since a completely manufactured, totally independent, self-willed identity would preclude the soul. If Bob built a completely different robot, which had its own independent set of characteristics, I'd say that was the same as our designer building an independent Bob, but in both cases it's a bit like arguing that if we can prove there are other universes, there are other universes.

Thank you very much for putting me onto the robot article. I did indeed find it a good read, and also thought it very pertinent to our discussion. Initially, I gasped at the achievements and the immediate plans, because these already sounded way beyond what I'd expected. But then came the anti-climax: "But while Xpero advances machine learning, it is still far short of the capabilities of a baby," says Kahl. "Of course, the robot can now learn the concept of movability. But it does not understand in the human sense what movability means." It's early days but, like yourself, at this stage I find it "hard to commit" to the belief that we can ever create an autonomous, sentient machine with a human mind.
First Robot able to Show Emotion & develop bonds
by xeno6696, Sonoran Desert, Friday, September 03, 2010, 04:09 (5194 days ago) @ dhw
dhw,

> That is exactly my argument, except that I'm not happy with the word "tendencies", which seems too weak to me. Inborn characteristics are far more binding and restrictive. [...] On reflection, our discussion may have been at cross purposes (probably my fault). You're saying that if the robot is autonomous, the designer will not be culpable (correct), and I'm saying I don't see how a robot can be autonomous, and if it isn't, the designer will be culpable. It all hinges on the "ifs". As I said earlier, though, my main interest is not moral, but concerns the evidence such a robot would provide that consciousness, emotion, imagination etc. are all the product of materials ... in which case we can dismiss the notion of a soul. You have already done so, but I have not.

So: let's explore your question. Culpability is boring anyway! First, let me get my usual nitpicks out of the way: I haven't thrown out the idea of a soul; I think the question is... misguided. If I found out tomorrow that consciousness comes purely from matter, it wouldn't change the way I think any more than if I found out it came from a divine essence: the fact that I can sit here and declare "I am" is irrelevant to (and supersedes) the idea of a soul in my book. But that's my Buddhist tendencies creeping in; the idea of a soul might be a more powerful question for you.

Maybe you should fill the void; I find it difficult to see what difference it would make. Maybe I could make a good Glaucon to your Plato?
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
First Robot able to Show Emotion & develop bonds
by dhw, Friday, September 03, 2010, 12:43 (5194 days ago) @ xeno6696
MATT: First, let me get my usual nitpicks out of the way: I haven't thrown out the idea of a soul; I think the question is... misguided. If I found out tomorrow that consciousness comes purely from matter, it wouldn't change the way I think any more than if I found out it came from a divine essence: the fact that I can sit here and declare "I am" is irrelevant to (and supersedes) the idea of a soul in my book. But that's my Buddhist tendencies creeping in; the idea of a soul might be a more powerful question for you. Maybe you should fill the void; I find it difficult to see what difference it would make. Maybe I could make a good Glaucon to your Plato?

I'm flattered, but hey, didn't Xenophon describe Glaucon as an ignoramus? Besides, I'm the older brother (more like grandfather actually), and have learnt a lot more from you than you will ever learn from me!

However, let me try to fill the void. I should put "soul" in inverted commas ... it's just a word to describe that part of us which we can't explain ... the mind, if you like, as opposed to the brain. If there is a dimension beyond the material one we know ... a dimension in which our identity exists independently of our body ... that will be the dimension in which David's Universal Intelligence exists, and in which we ourselves may survive physical death. You must remember that I have an open mind on NDEs and OBEs. If we found out tomorrow that consciousness came purely from matter, I would be 99% certain that there was no life after death, in which case the question of God's existence would be purely academic. If he does exist, I don't need him to give my life meaning, and he certainly doesn't need me, so as you say, it wouldn't change the way I think. But so long as there is a possibility of life after death, there remains the possibility that some aspects of religion may be true, and God's nature may become directly relevant to us. So if you do succeed in building your autonomous Bob, and if I'm still around, I shall have mixed feelings: sad that it'll all be over soon, sad that I shall never know what power created all this beauty, and relieved that I shall never know what power created all this suffering.