First Robot able to Show Emotion & develop bonds (Humans)

by xeno6696 @, Sonoran Desert, Tuesday, August 10, 2010, 14:17 (5027 days ago)

http://www.guardian.co.uk/technology/2010/aug/09/nao-robot-develop-display-emotions

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Tuesday, August 10, 2010, 17:00 (5027 days ago) @ xeno6696

Matt has given us a link to an article in yesterday's Guardian, reporting the unveiling of the emotional robot Nao. (I'd drafted a post about this before I saw the link, which actually goes into more detail than the newspaper article.)

If robots can learn from the environment, can form relationships, and can be individualized in their responses, obviously more advanced programmes will enable them to expand their skills. (Nao has the emotional level of a one-year-old child.) I'd be very interested to know, Matt, what light you think this sheds on the nature of consciousness and identity, and also if theoretically you envisage any limits to the range of mental activity robots might eventually cover. If so, why, and what are they? Sorry to put you in the hot seat, but you are our "resident" expert on the subject!

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Tuesday, August 10, 2010, 17:31 (5027 days ago) @ dhw

> Matt has given us a link to an article in yesterday's Guardian, reporting the unveiling of the emotional robot Nao. (I'd drafted a post about this before I saw the link, which actually goes into more detail than the newspaper article.)
> 
> If robots can learn from the environment, can form relationships, and can be individualized in their responses, obviously more advanced programmes will enable them to expand their skills. (Nao has the emotional level of a one-year-old child.) I'd be very interested to know, Matt, what light you think this sheds on the nature of consciousness and identity, and also if theoretically you envisage any limits to the range of mental activity robots might eventually cover. If so, why, and what are they? Sorry to put you in the hot seat, but you are our "resident" expert on the subject!

I. Consciousness and Identity

Star Trek and Star Wars invariably allow this question to be asked. Some people would say that machines ultimately rest on a man-made consciousness and therefore are at base only carrying out instructions. I think that one of the true abilities unique to consciousness would be the "infectious" nature of ideas... if innovation on those ideas is then demonstrated, coupled with a distinct sense of self, this would amount to a tremendous body of evidence that these machines should be treated as fully sentient humans.

What does that say about us? It would make ME think that our consciousness truly is more a collection of our experiences; if machines can do the same thing (even on a rudimentary level), then it would suggest that the mechanism for consciousness must lie not in the mechanics of the brain (neurons, synapses, etc.) but in their collective ability to process information. (The whole is greater than the sum of its parts.) As for identity... I think it would perhaps relegate identity to a relative idea; you are only "self" when compared to things that are "not you." Experience then molds this simple concept over time into a distinct entity; not the machinery itself but an emergent property of the whole; you cannot break it down or separate it.

II. Limits on the ability of machines to process "humanly." This will be mere speculation on my part...

It depends heavily on how these early robots successfully process emotion. A good argument could be that they learn little differently than animals--responding to stimuli instead of, say, "reading, as if from a book." But a counter-argument could be that being able to "read" a face is an even more important and "human" type of abstraction. A year ago I probably would have said that machine intelligence would be limited to computational-type chores. But the explosion in robotics over the past year is culminating in many things that make me question emotions as being purely a human thing.

Human intelligence is a combination of computational and emotional intelligence. We have competing drives, something that hasn't yet been attempted in the simplified world of machine intelligence. Nietzsche hypothesized that our consciousness was exactly the "entity" that sat on the very edge where the competing drives meet and battle. So there's that possibility.

Does this answer your question sufficiently?

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Wednesday, August 11, 2010, 14:07 (5026 days ago) @ xeno6696

A robot can learn from the environment, register emotions, form relationships, and be given an individualized set of responses. I asked Matt what light this shed on consciousness and identity, and whether he thought there was any limit to the range of mental activities such robots might eventually achieve.

Many thanks, Matt, for your reply. You think that the mechanism for consciousness lies "not in the mechanics of the brain (neurons, synapses, etc.) but in their collective ability to process information. (The whole is greater than the sum of its parts.)" This is probably the nub of the matter, and links up with both remaining elements of my question. If (a huge "if") a robot could produce all the mental activities ... emotional, intellectual, imaginative, behavioural ... of a human, that would in my view prove that there is no such thing as a "soul", the case for which depends on the neurons, synapses etc. being the receivers and not the producers of consciousness.

If consciousness is the product of our materials, then presumably so too is identity, as what you call "an emergent property of the whole". By "identity", though, I don't just mean what makes you you and me me, but the mechanism that governs the way each of us uses the neurons ... the individual self that both controls and is controlled by the body. You sort of answered my question about the limits when you envisaged a possible scenario in which machines might emulate humans by innovating ideas and developing a sense of self. Even if in the "simplified world of machine intelligence", the "combination of computational and emotional intelligence" has not yet been attempted, it seems to me that Nao is very much a step in that direction. The logical progression would indeed be for machines eventually to become fully sentient, and that would prove that identity is not only dictated by materials, but ceases to exist when they cease to function. (The alternative would be to believe that machines have souls, which I for one would find hard to swallow!)

Of course, this hypothetical scenario would not settle the chance v. design debate, since the robots have been designed, but it would have an enormous impact on the God issue. Without a "soul", there can be no afterlife, and we would be in the same situation as our robots: functioning while the power is on, and thrown on the scrapheap when our various parts are no longer repairable. The existence of a god in a psychic dimension beyond the material world would then become virtually irrelevant to us, except for those who believe that such a being is actively interested in our earthly lives.

With regard to machines being treated as sentient beings, the ethical ramifications are vast. Robot rights are inseparable from robot responsibilities, but is it possible to separate the programme from the programmer? (Current theologians may ponder the same question, and in any case we have never really established the parameters of human responsibility, given the impact of heredity and environment on our identity.) As I said, it's all a huge "if", and perhaps it will remain indefinitely in the realm of science fiction. I'm just trying to clarify the implications, but who knows ... the science and technology of robotics may yet provide the biggest philosophical revolution of them all.

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Thursday, August 12, 2010, 03:37 (5026 days ago) @ dhw

> A robot can learn from the environment, register emotions, form relationships, and be given an individualized set of responses. I asked Matt what light this shed on consciousness and identity, and whether he thought there was any limit to the range of mental activities such robots might eventually achieve.
> 
> Many thanks, Matt, for your reply. You think that the mechanism for consciousness lies "not in the mechanics of the brain (neurons, synapses, etc.) but in their collective ability to process information. (The whole is greater than the sum of its parts.)" This is probably the nub of the matter, and links up with both remaining elements of my question. If (a huge "if") a robot could produce all the mental activities ... emotional, intellectual, imaginative, behavioural ... of a human, that would in my view prove that there is no such thing as a "soul", the case for which depends on the neurons, synapses etc. being the receivers and not the producers of consciousness. 
I guess I might disagree; to me Phineas Gage pretty much destroyed any hope of a soul in my book. One could perhaps argue that the damage to the brain "only disallowed the man's soul from interfacing properly with the body," but to me that seems no different than the discussion of "body thetans." (Google, if you don't know...)

> If consciousness is the product of our materials, then presumably so too is identity, as what you call "an emergent property of the whole". By "identity", though, I don't just mean what makes you you and me me, but the mechanism that governs the way each of us uses the neurons ... the individual self that both controls and is controlled by the body. You sort of answered my question about the limits when you envisaged a possible scenario in which machines might emulate humans by innovating ideas and developing a sense of self. Even if in the "simplified world of machine intelligence", the "combination of computational and emotional intelligence" has not yet been attempted, it seems to me that Nao is very much a step in that direction. The logical progression would indeed be for machines eventually to become fully sentient, and that would prove that identity is not only dictated by materials, but ceases to exist when they cease to function. (The alternative would be to believe that machines have souls, which I for one would find hard to swallow!)

I guess on part of this I should clarify: the emotional machine discussed in the Guardian article is essentially built to learn emotions; they didn't bother to teach it other things that one-year-olds might learn, such as language skills, nor does it have the innate capacity for intuitive physics. It's a one-trick pony. A truer test will be to integrate this piece with, say, the piece MIT physicists made last year that was able to deduce mathematical laws of physics by simply observing phenomena. (Newton's basic laws of motion.) The human mind seems to be an inference machine; it's what it does best, and it can do it with anything (with varying degrees of accuracy).

> Of course, this hypothetical scenario would not settle the chance v. design debate, since the robots have been designed, but it would have an enormous impact on the God issue. Without a "soul", there can be no afterlife, and we would be in the same situation as our robots: functioning while the power is on, and thrown on the scrapheap when our various parts are no longer repairable. The existence of a god in a psychic dimension beyond the material world would then become virtually irrelevant to us, except for those who believe that such a being is actively interested in our earthly lives.

No... design advocates would simply take the invention as proof that something as complex as human intelligence could only arise by intervention on behalf of an intelligent entity. Atheists would take it as proof that ...

> With regard to machines being treated as sentient beings, the ethical ramifications are vast. Robot rights are inseparable from robot responsibilities,

Are they really? What's the responsibility of a human--or a dog? If robots become somehow sentient, to me rights would trump even their designed purpose.

> but is it possible to separate the programme from the programmer? (Current theologians may ponder the same question, and in any case we have never really established the parameters of human responsibility, given the impact of heredity and environment on our identity.) As I said, it's all a huge "if", and perhaps it will remain indefinitely in the realm of science fiction. I'm just trying to clarify the implications, but who knows ... the science and technology of robotics may yet provide the biggest philosophical revolution of them all.

It's what Ray Kurzweil spends his life studying.

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Thursday, August 12, 2010, 14:59 (5025 days ago) @ xeno6696

I wrote that if a robot could produce all the mental activities of a human, it would in my view prove that there is no such thing as a "soul", the case for which depends on the neurons, synapses etc. being the receivers and not the producers of consciousness.

MATT: I guess I might disagree; to me Phineas Gage pretty much destroyed any hope of a soul in my book.

I think the only disagreement here is that you have already decided there is no such thing as a soul, and I haven't. That doesn't affect the argument that a fully sentient robot would provide proof that the material brain is the producer, not the receiver, of consciousness (and by extension identity), in which case there is no soul.

You have pointed out that Nao is a "one-trick pony". Yes, it has the emotional capacity of a one-year-old. My post was based on the possibility of further developments.

I wrote that a fully sentient robot would not settle the chance v. design debate, since the robots have been designed, but it would mean there was no soul and hence no afterlife, so God would become largely irrelevant. You responded: "No...design advocates would simply take the invention as proof that something as complex as human intelligence could only arise by intervention on behalf of an intelligent entity." That is precisely the point of my saying it would NOT settle the chance v. design debate.

I wrote that robot rights would be inseparable from robot responsibilities, but asked if one could separate the programme from the programmer. You question this, and ask what is the responsibility of a human ... or a dog. Perhaps my argument was not clear. If a sentient robot ran amok and killed a dozen people, presumably it would have the same rights as a human to a fair trial, but to what extent would we blame the robot, and to what extent the person who designed its programme? (In the case of a dog, we would hold the owner responsible.) As I said, the ethical ramifications are vast, and also extend to the sphere of our own responsibility for our actions ... see my parenthesis in yesterday's post on heredity and environment.

This is a complex and exciting subject, and I appreciate your keeping us updated both with the new developments and with your own interpretation of them.

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Saturday, August 14, 2010, 04:41 (5024 days ago) @ dhw

dhw,

First part duly noted, and I apologize for... not reading carefully, I guess. >_<

> I wrote that robot rights would be inseparable from robot responsibilities, but asked if one could separate the programme from the programmer. You question this, and ask what is the responsibility of a human ... or a dog. Perhaps my argument was not clear. If a sentient robot ran amok and killed a dozen people, presumably it would have the same rights as a human to a fair trial, but to what extent would we blame the robot, and to what extent the person who designed its programme? (In the case of a dog, we would hold the owner responsible.) As I said, the ethical ramifications are vast, and also extend to the sphere of our own responsibility for our actions ... see my parenthesis in yesterday's post on heredity and environment.

There's a book that Adler cites in "The Difference of Man and the Difference it Makes" that talks about just such a court trial. I... would appeal to David for the name of the book, as I no longer have my copy. (Library.)

I would say that, for argument's sake... once a machine becomes sentient, the original designer loses any culpability. You can raise your child to be a mean, nasty, thieving S.O.B., but in our legal system, we do not hold the parents culpable for bad parenting. (Only physical neglect, sometimes mental.)

So... I would think that the legal precedent for designer-machine would be parent-child.

http://en.wikipedia.org/wiki/Ray_Kurzweil

I first learned about him by buying one of his synthesizers... in the flagship line the technology is from 1996 and is still considered as good as you can get.

Especially his concepts of the Singularity... he's probably the most interesting techno-sopher of our time.

> This is a complex and exciting subject, and I appreciate your keeping us updated both with the new developments and with your own interpretation of them.

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Saturday, August 14, 2010, 12:42 (5023 days ago) @ xeno6696

MATT: First part duly noted. I apologize for... not reading carefully, I guess.

Thank you. This happens to all of us, and may also be the reason for George's occasional misreading of my posts. I always print out the text I'm responding to, as I find it far easier to check.

You say that "once a machine becomes sentient, the original designer loses any culpability. [...] I would think that the legal precedent for designer-machine would be parent-child."

That sounds like a fair analogy, and my thoughts are probably based on ignorance of how it all works, but you will set me right if that is so. Doesn't a robot have to be programmed? To what extent would its sentience be controlled by a will of its own, and to what extent by the preparatory work done by the designer? Even if it appears to have free will, how would we know if the designer had not deliberately built in, say, a propensity for charity or conversely a killer instinct? In relation to responsibility (and ignoring the "designer" element for obvious reasons), one can ask similar questions about human genes and, as you say, the influence of upbringing, but so long as robots are deliberately designed and manufactured (in contrast to human reproduction), perhaps we can say that these questions take on an even sharper profile.

Thank you for the three different website references (maybe we could keep future links on this thread, as they're all interconnected). I found the Kurzweil one particularly fascinating, as it gives a pretty clear answer to my earlier question of just how far robot technology might be developed. An amazing man! I'd be very interested to know what David, George and, of course, any other contributors think of his "technosophy" and its implications. In the context of science and religion, as I pointed out in my earlier post, the impact on the concept of "soul" would be massive ... even if at the moment the fully sentient robot remains a product of science fiction.

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Monday, August 16, 2010, 01:08 (5022 days ago) @ dhw

> MATT: First part duly noted. I apologize for... not reading carefully, I guess.
> 
> Thank you. This happens to all of us, and may also be the reason for George's occasional misreading of my posts. I always print out the text I'm responding to, as I find it far easier to check.
> 
> You say that "once a machine becomes sentient, the original designer loses any culpability. [...] I would think that the legal precedent for designer-machine would be parent-child."
> 
> That sounds like a fair analogy, and my thoughts are probably based on ignorance of how it all works, but you will set me right if that is so. Doesn't a robot have to be programmed? To what extent would its sentience be controlled by a will of its own, and to what extent by the preparatory work done by the designer? Even if it appears to have free will, how would we know if the designer had not deliberately built in, say, a propensity for charity or conversely a killer instinct? In relation to responsibility (and ignoring the "designer" element for obvious reasons), one can ask similar questions about human genes and, as you say, the influence of upbringing, but so long as robots are deliberately designed and manufactured (in contrast to human reproduction), perhaps we can say that these questions take on an even sharper profile.

Again, this confusion comes down to the completely different paradigms that exist for machine programming. Most people who work with computers (myself included) program a machine to do a very limited and confined set of tasks. The machine will do nothing we don't tell it to. Or rather, it cannot display any behavior that we as humans haven't planned for. However, AI programming starts from a completely different viewpoint. David grew up in the world of computers as punch-cards; it's very difficult to reason how such a machine could become sentient.

AI programming shares one thing in common with imperative programming: the machine will only do what it's told to do. However, what an AI is told to do is learn, and make decisions based on what it has learned. A TRUE AI is tabula rasa. It is connected to some kind of sensory equipment and its programming is purely to make sense of something(s) in its environment. The designer of the machine would have to train the AI from the ground up--just as if it were a child.

If a sentient robot killed someone, we would have the benefit of being able to access its program to see if it had been tampered with. This itself would be a whole new set of crimes--and legal systems would be in shock. If you infected a sentient machine with an imperative virus to kill someone, how could we hold the machine accountable at all? Questions like this leave many people claiming that this is why AI can never be human--because we can't do something similar. There would likely have to be something like Asimov's imperatives of robotics built in, but then we would have sentience with limited free will. Would THAT be right, from a philosophical or humanistic standpoint?
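
To make the contrast between the two paradigms concrete, here is a minimal Python sketch. Everything in it--the names, stimuli and outcomes--is invented purely for illustration, not any real robot's code:

    # Imperative paradigm: every behaviour is spelled out in advance.
    def imperative_robot(command):
        responses = {
            "greet": "Hello!",
            "charge": "Docking with charger.",
        }
        # Anything the designer didn't anticipate simply fails.
        return responses.get(command, "ERROR: unplanned input")

    # Learning paradigm: the designer writes only the learning rule;
    # what the machine ends up doing depends on what it experiences.
    class LearningRobot:
        def __init__(self):
            self.memory = {}  # starts empty: the "tabula rasa"

        def experience(self, stimulus, outcome):
            # Associate what was sensed with what followed it.
            self.memory.setdefault(stimulus, []).append(outcome)

        def respond(self, stimulus):
            seen = self.memory.get(stimulus)
            if not seen:
                return "explore"  # no knowledge yet, so try something
            # Repeat whatever outcome has followed this stimulus most often.
            return max(set(seen), key=seen.count)

    bob = LearningRobot()
    bob.experience("smile", "praise")
    bob.experience("smile", "praise")
    print(imperative_robot("greet"))  # fixed by the designer: "Hello!"
    print(bob.respond("smile"))       # learned from experience: "praise"
    print(bob.respond("frown"))       # never seen before: "explore"

The point of the sketch is only that the second program's behaviour cannot be read off its source code; it depends entirely on the training history.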

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Wednesday, August 18, 2010, 10:43 (5019 days ago) @ xeno6696

I am trying to find out just how "human" a robot can become, and how responsible it might be for its actions. I asked Matt to what extent its sentience would be controlled by a will of its own, and to what extent by the preparatory work of the designer.

MATT: AI programming shares one thing in common with imperative programming: the machine will only do what it's told to do. However, what an AI is told to do is learn, and make decisions based on what it has learned. A TRUE AI is tabula rasa. It is connected to some kind of sensory equipment and its programming is purely to make sense of something(s) in its environment. The designer of the machine would have to train the AI from the ground up--just as if it were a child.

If a true AI is a tabula rasa, I don't see how it can possibly be independent of its designer in the way humans are of their parents. A child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment will largely depend on these inborn elements, even if its decisions and the "sense" it makes of its environment may be influenced by training. Your robot is born with nothing except the programme its designer has given it. If the designer endows it with its own temperament, degree of intelligence, selective memory, it may appear to behave like a human, but its will (i.e. the degree of control it has over its own actions) and character will still be the product of the programme. In my book, that makes the designer 100% responsible. You wrote, however, that "this confusion comes down to the completely different paradigms that exist for machine programming" in which the machine "cannot display any behavior that we as humans haven't planned for." I can't see the difference, so what have I overlooked?

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Sunday, August 22, 2010, 22:31 (5015 days ago) @ dhw

> I am trying to find out just how "human" a robot can become, and how responsible it might be for its actions. I asked Matt to what extent its sentience would be controlled by a will of its own, and to what extent by the preparatory work of the designer.
> 
> MATT: AI programming shares one thing in common with imperative programming: the machine will only do what it's told to do. However, what an AI is told to do is learn, and make decisions based on what it has learned. A TRUE AI is tabula rasa. It is connected to some kind of sensory equipment and its programming is purely to make sense of something(s) in its environment. The designer of the machine would have to train the AI from the ground up--just as if it were a child.
> 
> If a true AI is a tabula rasa, I don't see how it can possibly be independent of its designer in the way humans are of their parents. A child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment will largely depend on these inborn elements, even if its decisions and the "sense" it makes of its environment may be influenced by training. Your robot is born with nothing except the programme its designer has given it. If the designer endows it with its own temperament, degree of intelligence, selective memory, it may appear to behave like a human, but its will (i.e. the degree of control it has over its own actions) and character will still be the product of the programme. In my book, that makes the designer 100% responsible. You wrote, however, that "this confusion comes down to the completely different paradigms that exist for machine programming" in which the machine "cannot display any behavior that we as humans haven't planned for." I can't see the difference, so what have I overlooked?

AI works by setting high-level policies and letting the machine do the rest. Current AI is as independent as it can get for the job(s) it is asked to do.

Okay, let's say we build Bob. Bob is the first-ever General AI; he can process emotions, he can feel pain. Bob started out in our lab knowing nothing; he was only pure potential. Only by moving through our world and experiencing--both through what we taught him and what he taught himself--does his knowledge base grow. He learned language much as a child learns--by making associations and inferences.

It has been ten years.

Now my point is that AI programming works only to give the machine tools to do its job; as a designer, I'm only culpable up until the moment I have decided on a final learning program. But what the machine chooses to learn on his own isn't anything I can be held accountable for, just as you aren't held accountable for anything your children choose to learn. If your kids learn how to hotwire cars--they won't send you to prison! What kind of "learning" boundaries can we set for ourselves or our children? What can we enforce? In some instances I think I've seen anecdotal evidence of parents getting into trouble when, say, they expose their kids to bad habits they have, such as drugs, alcohol, etc. But we still don't punish the parents if the kids act out.

All the things that make up an individual's personality are built from experience, and as a set of consequences of the actions they take to deal with those experiences. As to what extent the AI's personality is designed: to me, personalities would be like "general policies" that the machine follows in the world. You could make a case for designer culpability in circumstances where it could be demonstrated that you gave the machine policies to be followed that made it a Charles Manson or some serial killer.

Furthermore, if the goal of the machine was to create a sentient and independent entity--then by definition the designer loses culpability if the machine has fulfilled this requirement.
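
As a rough illustration of where the designer's hand stops, here is a toy Python sketch (the actions, rewards and numbers are all mine, purely hypothetical): the designer supplies only a learning rule and an exploration rate; every preference the agent ends up with comes from its own logged history.

    import random

    class Bob:
        """The designer's entire contribution: a learning rule, nothing else."""
        def __init__(self, seed=None):
            self.rng = random.Random(seed)
            self.history = []   # auditable record of every experience
            self.values = {}    # action -> learned estimate of its worth

        def learn(self, action, reward):
            # Running average of how well this action has worked out.
            old = self.values.get(action, 0.0)
            self.values[action] = old + 0.5 * (reward - old)
            self.history.append((action, reward))

        def choose(self, actions):
            # Mostly prefer what experience says is best; explore 10% of the time.
            if not self.values or self.rng.random() < 0.1:
                return self.rng.choice(actions)
            return max(actions, key=lambda a: self.values.get(a, 0.0))

    bob = Bob(seed=42)
    for _ in range(20):                    # ten years, compressed
        act = bob.choose(["help", "ignore"])
        bob.learn(act, 1.0 if act == "help" else -1.0)
    print(bob.choose(["help", "ignore"]))  # shaped by its own history
    print(len(bob.history))                # the audit trail a court could inspect

Nothing in the class mentions "help"; that preference exists only because of what happened after power-on, which is the culpability question in miniature.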

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Monday, August 23, 2010, 17:59 (5014 days ago) @ xeno6696

MATT: Furthermore, if the goal of the machine was to create a sentient and independent entity ... then by definition the designer loses culpability if the machine has fulfilled this requirement.

Yes indeed, but that is the whole point of our discussion. I'm questioning whether such a machine can possibly be independent of its designer. But please don't misunderstand me. This is one of the many subjects I know nothing about, so I'm picking your brains to find out just what is and what isn't feasible. That means questioning whatever seems unclear to me, so I hope you'll remain patient. (I should add, though, that my main interest is not in culpability but in the implications of robotics for the concept of the soul. However, your explanations shed light on both subjects.)

MATT: All the things that make up an individual's personality are built from experience, and as a set of consequences of the actions they take to deal with those experiences.

ALL the things? I can do no more than repeat what I wrote earlier: "a child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment [for which you can substitute experiences here] will largely depend on these inborn elements [...] Your robot is born with nothing except the programme its designer has given it."

You're quite right when you say that "only by moving through our world and experiencing ... both through what we taught him and what he taught himself ... does his knowledge base grow." But our innate capabilities and leanings help to determine how great that knowledge base becomes, and they determine how we use it. Of course experience changes people, but nobody on this earth can tell you the degree to which inborn characteristics and outside circumstances are responsible for the evolution of personality.

Your robot has no inborn characteristics. You have said yourself that a true AI is a tabula rasa. Humans are not. You're again quite right when you say we do not punish the parents for the behaviour of the child, but no parent deliberately implants degrees of willpower, intelligence, memory, sensitivity etc. in the child. Its sentience might be called natural, whereas the robot's sentience has been designed. The parent may be culpable for the upbringing (external), but not for the response to the upbringing (internal), and so one child exposed to alcohol may turn into a drunkard, while another may become a teetotaller. If the designer starts hitting his robot with a hammer, will it just howl and let itself be hammered, or will it fight back? What will dictate its response?

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Thursday, August 26, 2010, 12:08 (5011 days ago) @ dhw

dhw,

I could have sworn I responded to this days back... obviously not.

> Yes indeed, but that is the whole point of our discussion. I'm questioning whether such a machine can possibly be independent of its designer. But please don't misunderstand me. This is one of the many subjects I know nothing about, so I'm picking your brains to find out just what is and what isn't feasible. That means questioning whatever seems unclear to me, so I hope you'll remain patient. (I should add, though, that my main interest is not in culpability but in the implications of robotics for the concept of the soul. However, your explanations shed light on both subjects.)
> 
> MATT: All the things that make up an individual's personality are built from experience, and as a set of consequences of the actions they take to deal with those experiences.
> 
> ALL the things? I can do no more than repeat what I wrote earlier: "a child is born not only with instincts but also with a will of its own and a vast array of inborn characteristics: personal temperament, individualized intelligence, selective memory etc. How it responds to the impressions created by its sensory equipment [for which you can substitute experiences here] will largely depend on these inborn elements [...] Your robot is born with nothing except the programme its designer has given it."

We have what I would call an inborn "filter." But generally this filter doesn't change--only our responses to it. I still get knee-jerks when I hear fire & brimstone street preachers. Many patterns I have as an adult I had when I was a kid... just with different items.

The program IS exactly that inborn characteristic (filter) that you're referring to. The typical programming paradigm has the designer thinking of as many ways that circumstances could break his code; that's what you seem to be referring to. The AI programmer solves that problem by making the machine figure out its own way. A Generalized AI starts with no knowledge; only with intuitions. The general algorithm for AI (and humans, for that matter) is this:

1. Identify a problem.
2. Identify possible responses, including ignoring the problem.
3. Execute.

The AI programmer only needs to come up with good, general algorithms that can take at least 3 sets of sensory input and perform these three high-level tasks.

> You're quite right when you say that "only by moving through our world and experiencing ... both through what we taught him and what he taught himself ... does his knowledge base grow." But our innate capabilities and leanings help to determine how great that knowledge base becomes, and they determine how we use it. Of course experience changes people, but nobody on this earth can tell you the degree to which inborn characteristics and outside circumstances are responsible for the evolution of personality.

No; but my argument is that a person who's experienced nothing isn't too likely to have a very robust personality.

> Your robot has no inborn characteristics. You have said yourself that a true AI is a tabula rasa.

I was probably using the term incorrectly: I thought it just meant "no knowledge," or the "open book."

> Humans are not. You're again quite right when you say we do not punish the parents for the behaviour of the child, but no parent deliberately implants degrees of willpower, intelligence, memory, sensitivity etc. in the child. Its sentience might be called natural, whereas the robot's sentience has been designed. The parent may be culpable for the upbringing (external), but not for the response to the upbringing (internal), and so one child exposed to alcohol may turn into a drunkard, while another may become a teetotaller. If the designer starts hitting his robot with a hammer, will it just howl and let itself be hammered, or will it fight back? What will dictate its response?

I hope some of what I said above answers this, but to answer your last questions: a generalized AI will have to make a decision. It won't know in advance what to do. Its response will be dictated by any/all the input it has received--its past experiences. The scary part about AI would be its ability to have word-for-word access to all of humanity's collected knowledge and wisdom. But what if it had read Machiavelli and liked it?
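
For what it's worth, the three steps above can be written down almost literally. A minimal Python sketch (the sensor names, thresholds and scores here are invented for the example; a real AI would learn them rather than have a designer type them in):

    def identify_problem(sensors):
        # Step 1: a "problem" is any reading outside the range learned as normal.
        return [name for name, value in sensors.items() if not 0.0 <= value <= 1.0]

    def possible_responses(problem, experience):
        # Step 2: recall candidate actions from past episodes;
        # ignoring the problem is always one of the options.
        return experience.get(problem, []) + ["ignore"]

    def execute(problem, responses, experience):
        # Step 3: pick whichever response worked out best in the past.
        scores = experience["scores"]
        return max(responses, key=lambda r: scores.get((problem, r), 0.0))

    # Everything below is the machine's accumulated history, not designer code.
    experience = {
        "temperature": ["vent heat", "shut down"],
        "scores": {("temperature", "vent heat"): 0.9,
                   ("temperature", "ignore"): -0.5},
    }
    sensors = {"temperature": 3.7, "battery": 0.8}
    for problem in identify_problem(sensors):
        responses = possible_responses(problem, experience)
        print(problem, "->", execute(problem, responses, experience))
    # prints: temperature -> vent heat

The designer wrote only the loop; the experience table is what years of living would fill in, and it alone decides what gets executed.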

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Saturday, August 28, 2010, 08:47 (5009 days ago) @ xeno6696

Matt is looking into the robotic future.

MATT: We have what I would call an inborn "filter". But generally this filter doesn't change ... only our responses to it. [...] The program IS exactly that inborn characteristic (filter) that you're referring to. [...] A generalized AI starts with no knowledge; only with intuitions. [...] ...my argument is that a person who's experienced NOTHING isn't too likely to have a very robust personality.

Please forgive my cherry-picking the quotes, but together they form a pattern with which I largely agree. Only your argument seems to me to confirm the DEPENDENCE of the robot on its designer, so at the risk of repeating arguments, let me try to put the bits and pieces together in my own way.

There are inevitably areas of our own nature (as well as other people's) that we know nothing about. I don't, for instance, know how brave I am. I've never ... fortunately! ... been confronted by a situation that demands extremes of courage. But I know that I'm conscientious, because I worry about even minor problems and can't rest till they're put right. I've always been like that, and I take this to be what you mean when you say you have the same patterns now as when you were a kid. So I would like to modify your statement that a person with no experience at all isn't likely to have a "robust personality". I think the basic foundations of the personality are already there, but neither we nor anyone else can know what they are until they're brought out by experience. Admittedly, some experiences may be so dramatic or traumatic that they can change these foundations, but I think the inborn base is generally pretty determinate. In your words, "generally this filter doesn't change".

These basic foundations are designed by the robot's programmer ... as you say, the programme IS the filter. Only when the "intuitions" have been deliberately put in place can the choices follow accordingly, just as ours do. You ask: "What if it had read Machiavelli and liked it?" Of course it can't like or dislike M until it's read his book, but my question to you would be: WHY would it like (or dislike) M.? Why would it like (or dislike) anything? Where do its predilections come from? An example I gave earlier was of exposure to alcohol (= experience). Within the same family, child X may become an alcoholic, and child Y a teetotaller. For me, one of the prime aims of early education should be to expose the learner to as many different fields as possible, in order to find out what the child has an aptitude for. In other words, experience doesn't create aptitudes but reveals them. The Machiavellian tendencies are not created by reading Machiavelli, but reading M. brings out the innate tendencies. And so although the designer of your robot will not have programmed the experiences, he will have programmed the responses.

However, perhaps we're getting way ahead of ourselves here. You specified the "general algorithm" as being the identification of a problem and its possible responses, and then "execution" [of the decision]. Robots, so far as I know, are currently created in order to perform specific tasks, or to solve specific problems. I have no trouble visualizing a machine solving problems and making decisions in accordance with the given data or with past experience. But a "sentient and independent entity" (i.e. one with self-awareness, willpower, imagination, fully developed emotions etc.) goes a great deal further. The first robot to show emotion and develop bonds ... albeit at the level of a one-year-old child ... is clearly a big leap in this direction, but regardless of my interpretation of the "filter" (which of course you may disagree with), do you think technology really can go all the way?

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Sunday, August 29, 2010, 23:49 (5008 days ago) @ dhw

dhw,

> Please forgive my cherry-picking the quotes, but together they form a pattern with which I largely agree. Only your argument seems to me to confirm the DEPENDENCE of the robot on its designer, so at the risk of repeating arguments, let me try to put the bits and pieces together in my own way.
> 
> There are inevitably areas of our own nature (as well as other people's) that we know nothing about. I don't, for instance, know how brave I am. I've never ... fortunately! ... been confronted by a situation that demands extremes of courage. But I know that I'm conscientious, because I worry about even minor problems and can't rest till they're put right. I've always been like that, and I take this to be what you mean when you say you have the same patterns now as when you were a kid. So I would like to modify your statement that a person with no experience at all isn't likely to have a "robust personality". I think the basic foundations of the personality are already there, but neither we nor anyone else can know what they are until they're brought out by experience. Admittedly, some experiences may be so dramatic or traumatic that they can change these foundations, but I think the inborn base is generally pretty determinate. In your words, "generally this filter doesn't change".

Actually, so far it looks like you have a good grasp on my thinking. Eerily, heh.

> ...And so although the designer of your robot will not have programmed the experiences, he will have programmed the responses.

Our disagreement here is probably just due to me having more familiarity with the act of commanding machines; I translate your final sentence as this: "The designer will have to predict a various number of responses to an equally various number of stimuli, and predetermine the outcome." To me, you might be thinking that the machine would have to constantly come to the designer to help with some issue. Or you're thinking that if anything like a policy is built in, then it is completely owned by the designer? I don't think so, unless of course our designer makes a "policy" that every time the machine hears "God Save the Queen" it runs out into the street to dance. The goal of general AI is to get away from defining a specific problem and a specific solution, such as what I demonstrated here. The goal is a single algorithm that can solve many problems.

I'm thinking that there will be general... "policies," if you will, built into the machine that would allow it to solve problems on its own. These would of course come from the designer, but the machine would have the power to override policies if it deemed it necessary, or to adapt a solution method from one intelligence type to another. I'm approaching this from the perspective that the designer built as much free will into the machine as possible; though some futurists disagree with me and think that Asimov's rules should be built into machines. (It is a minor aside, but it's pertinent to any talk of AI.) The policies don't dictate responses, only tendencies. And if the machine has the same power to override tendencies as we do... at what point can we say that the designer is necessary after pressing "on"?

> However, perhaps we're getting way ahead of ourselves here. You specified the "general algorithm" as being the identification of a problem and its possible responses, and then "execution" [of the decision]. Robots, so far as I know, are currently created in order to perform specific tasks, or to solve specific problems. I have no trouble visualizing a machine solving problems and making decisions in accordance with the given data or with past experience. But a "sentient and independent entity" (i.e. one with self-awareness, willpower, imagination, fully developed emotions etc.) goes a great deal further. The first robot to show emotion and develop bonds ... albeit at the level of a one-year-old child ... is clearly a big leap in this direction, but regardless of my interpretation of the "filter" (which of course you may disagree with), do you think technology really can go all the way?

I'm just ignorant enough about the AI field to not be able to say "yes" with certainty, but like Kurzweil, the gentleman who wrote the article I linked here most certainly believes it is possible. Though if pressed, I think that at present I don't see why we couldn't do it. Hard to commit. One way the entire endeavor could be proven futile is if someone writes a valid proof that it is impossible to create such a general algorithm.
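
Here is a toy Python sketch of the difference between a dictated response and a tendency (the actions and weights are invented for the example): the designer's policy is just an initial weighting, and experience is free to overturn it.

    class Agent:
        def __init__(self, policies):
            # Designer-supplied tendencies: an initial weighting, nothing more.
            self.weights = dict(policies)

        def adjust(self, action, delta):
            # Experience can strengthen or weaken any tendency...
            self.weights[action] = self.weights.get(action, 0.0) + delta

        def act(self):
            # ...so the designer's ordering is a starting point, not a verdict.
            return max(self.weights, key=self.weights.get)

    bob = Agent({"obey": 0.9, "refuse": 0.1})
    print(bob.act())             # "obey": the built-in tendency wins at power-on
    bob.adjust("refuse", 1.0)    # experience teaches otherwise
    print(bob.act())             # "refuse": the tendency has been overridden

After the adjustment, nothing the designer wrote determines the output any more, which is exactly the "after pressing on" question.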

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by David Turell @, Monday, August 30, 2010, 00:29 (5008 days ago) @ xeno6696

This is very impressive work.

First Robot able to Show Emotion & develop bonds

by dhw, Monday, August 30, 2010, 11:53 (5007 days ago) @ xeno6696

DHW: ...And so although the designer of your robot will not have programmed the experiences, he will have programmed the responses.

MATT: I translate your final sentence as this: "The designer will have to predict a various number of responses to an equally various number of stimuli, and predetermine the outcome." To me, you might be thinking that the machine would have to constantly come to the designer to help with some issue.

No, I'm afraid that is a complete mistranslation of my final sentence, and a misunderstanding of my (clumsy) attempts to delve into psychology. My point was that if the robot was to act independently, the programmer would have to build in those inborn characteristics which in us predetermine our responses. For example, I am conscientious, and if something is wrong I need to put it right straight away. These traits will determine my response to a vast range of experiences. So too will my limited range of intelligence and expertise. All inborn. I'm not a very practical person, and so I have taken out an insurance policy. When our lavatory began to leak, I rang the insurance company straight away. They sent a plumber. He put some sticky stuff on the lavatory. It stopped leaking. If I had had a different set of inborn characteristics, I might not have taken out an insurance policy, I myself might have put some sticky stuff on, I might have pretended not to notice (it was only a tiny leak) and hoped no-one else would notice, I might have stuck a cup underneath, I might not have rung the company right away.... When your robot starts leaking lubricants all over your living-room floor, will it mop up the mess, tell you to do it, rush off to the pub, plug the leak itself, ring for a robot plumber...? My point is that the general characteristics determine the responses to the individual experiences. Once these traits are in place, your robot will no doubt act independently, just as I do, but the determining traits will have been put there by the designer.

You go on to say: "The policies don't dictate responses, only tendencies. And if the machine has the same power to override tendencies as we do... at what point can we say that the designer is necessary after pressing 'on'?" This all links up very neatly to Romansh's preoccupation with free will. I really don't know what power we have to override our inborn characteristics (= your "tendencies"), but in the context of our discussion, I guess that would be the ultimate test ... can free will be built into the machine? Can technology really go all the way, and enable the robot to override the designer's programme of inborn characteristics? Your answer is a charmingly negative positive: "not [...] able to say 'yes' with certainty [...] Though if pressed, I think that at present I don't see why we couldn't do it. Hard to commit." I'm with you all of the part of the way. Or maybe not.

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Wednesday, September 01, 2010, 23:06 (5005 days ago) @ dhw

dhw,

> MATT: I translate your final sentence as this: "The designer will have to predict a various number of responses to an equally various number of stimuli, and predetermine the outcome." To me, you might be thinking that the machine would have to constantly come to the designer to help with some issue.
> 
> No, I'm afraid that is a complete mistranslation of my final sentence, and a misunderstanding of my (clumsy) attempts to delve into psychology. My point was that if the robot was to act independently, the programmer would have to build in those inborn characteristics which in us predetermine our responses. For example, I am conscientious, and if something is wrong I need to put it right straight away. These traits will determine my response to a vast range of experiences. So too will my limited range of intelligence and expertise. All inborn. I'm not a very practical person, and so I have taken out an insurance policy. When our lavatory began to leak, I rang the insurance company straight away. They sent a plumber. He put some sticky stuff on the lavatory. It stopped leaking. If I had had a different set of inborn characteristics, I might not have taken out an insurance policy, I myself might have put some sticky stuff on, I might have pretended not to notice (it was only a tiny leak) and hoped no-one else would notice, I might have stuck a cup underneath, I might not have rung the company right away.... When your robot starts leaking lubricants all over your living-room floor, will it mop up the mess, tell you to do it, rush off to the pub, plug the leak itself, ring for a robot plumber...? My point is that the general characteristics determine the responses to the individual experiences. Once these traits are in place, your robot will no doubt act independently, just as I do, but the determining traits will have been put there by the designer.

Okay, to see if I'm reading this correctly: because tendencies (even with a degree or ten of freedom) had to be put down by a designer, the robot is forever... linked is the only word I can think of--to the notions and whims of its designer? I don't really know what to do with this; it seems like the argument boils down to "Bob was designed." If we go back to the original question I posed in terms of culpability, if Bob murders someone, the only way the designer would be held responsible is if it could be demonstrated that the robot did not commit the crime in self-defense, and that the robot responded in a way unintended. If the intention of the program was for the robot to act in potentially unpredictable ways, I simply don't see how the designer would be held accountable.

I would have to ask what exactly you think the ramifications are that the robot's internal "filter" or "inherited personality traits" were built-in. It doesn't seem to change much to me... Maybe a good question would be, what if Bob builds a friend with a different personality? If we're talking generalized AI, this would be easy...

This has no immediate connection to our discussion, but:
http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=91421
(Not immediately pertinent, but you'll find it a good read.)

> You go on to say: "The policies don't dictate responses, only tendencies. And if the machine has the same power to override tendencies as we do... at what point can we say that the designer is necessary after pressing 'on'?" This all links up very neatly to Romansh's preoccupation with free will. I really don't know what power we have to override our inborn characteristics (= your "tendencies"), but in the context of our discussion, I guess that would be the ultimate test ... can free will be built into the machine? Can technology really go all the way, and enable the robot to override the designer's programme of inborn characteristics? Your answer is a charmingly negative positive: "not [...] able to say 'yes' with certainty [...] Though if pressed, I think that at present I don't see why we couldn't do it. Hard to commit." I'm with you all of the part of the way. Or maybe not.

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Thursday, September 02, 2010, 20:07 (5004 days ago) @ xeno6696

MATT: Okay, to see if I'm reading this correctly: Because tendencies (even with a degree or ten of freedom) had to be put down by a designer, the robot is forever... linked is the only word I can think of--to the notions and whims of its designer? I don't really know what to do with this; it seems like the argument boils down to "Bob was designed."

That is exactly my argument, except that I'm not happy with the word "tendencies", which seems too weak to me. Inborn characteristics are far more binding and restrictive. We don't even know the extent of our own free will (see the discussion on the Intelligence thread), but every bit of the robot has to be deliberately designed from scratch, including and especially its programme. But ... and it's a huge "but" ... my moral argument depends on how feasible it is to build a programme that gives the robot complete autonomy. I think that's what lies behind your statement that "if the intention of the program was for the robot to act in potentially unpredictable ways, I simply don't see how the designer would be held accountable". My turn to interpret: I read this as saying that if the programme allows for attitudes, character traits, preferences, modes of thought and behaviour not programmed by the designer, Bob and not the designer is culpable. Agreed. However, while my "but" was huge, your "if" is colossal. That's why I asked if you thought robotic technology could go all the way, and your response was a definite maybe! On reflection, our discussion may have been at cross purposes (probably my fault). You're saying that if the robot is autonomous, the designer will not be culpable (correct), and I'm saying I don't see how a robot can be autonomous, and if it isn't, the designer will be culpable. It all hinges on the "ifs". As I said earlier, though, my main interest is not moral, but concerns the evidence such a robot would provide that consciousness, emotion, imagination etc. are all the product of materials ... in which case we can dismiss the notion of a soul. You have already done so, but I have not.

You go on to ask what exactly I think "the ramifications are that the robot's internal 'filter' or 'inherited personality traits' were built-in. It doesn't seem to change much to me... Maybe a good question would be, what if Bob builds a friend with a different personality? If we're talking generalized AI, this would be easy..." As I've tried to explain above, the ramifications are both moral (culpability) and ... for want of a better word ... spiritual, since a completely manufactured, totally independent, self-willed identity would preclude the soul. If Bob built a completely different robot, which had its own independent set of characteristics, I'd say that was the same as our designer building an independent Bob, but in both cases it's a bit like arguing that if we can prove there are other universes, there are other universes.

Thank you very much for putting me onto the robot article. I did indeed find it a good read, and also thought it very pertinent to our discussion. Initially, I gasped at the achievements and the immediate plans, because these already sounded way beyond what I'd expected. But then came the anti-climax: "But while Xpero advances machine learning, it is still far short of the capabilities of a baby," says Kahl. "Of course, the robot can now learn the concept of movability. But it does not understand in the human sense what movability means." It's early days but, like yourself, at this stage I find it "hard to commit" to the belief that we can ever create an autonomous, sentient machine with a human mind.
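
To see how modest the "concept learning" Kahl describes can be, here is a minimal sketch in Python (a toy world invented purely for illustration ... the objects, features and threshold rule are assumptions of mine, not Xpero's actual code). The robot induces "movability" as a bare statistical rule from trial pushes: a rule that predicts, but does not understand anything in the human sense.

# A toy world, invented purely for illustration (not Xpero's real setup):
# each object has a weight in kilograms and a flag saying whether it is
# bolted down. "Learning movability" below means inducing a crude rule
# from the outcomes of trial pushes: prediction without understanding.

objects = [
    {"name": "ball",   "weight": 0.5,   "anchored": False},
    {"name": "box",    "weight": 2.0,   "anchored": False},
    {"name": "chair",  "weight": 7.0,   "anchored": False},
    {"name": "pillar", "weight": 50.0,  "anchored": True},
    {"name": "wall",   "weight": 500.0, "anchored": True},
]

def try_to_push(obj):
    """Simulated physics: a push succeeds only on light, unanchored objects."""
    return (not obj["anchored"]) and obj["weight"] < 20.0

# Experience phase: push everything once and record what happened.
experience = [(obj, try_to_push(obj)) for obj in objects]

# The induced "concept": the heaviest weight the robot ever managed to move.
learned_limit = max(obj["weight"] for obj, moved in experience if moved)

def predicts_movable(obj):
    """The learned rule: a statistical regularity, not an understanding."""
    return (not obj["anchored"]) and obj["weight"] <= learned_limit

stool = {"name": "stool", "weight": 4.0, "anchored": False}
print("Movable?", predicts_movable(stool))  # True: it generalizes, grasps nothing

The rule generalizes to a stool the robot has never touched, yet the whole "concept" amounts to one number and a boolean test ... which is exactly the gap Kahl points to between learning movability and understanding it.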

First Robot able to Show Emotion & develop bonds

by xeno6696 @, Sonoran Desert, Friday, September 03, 2010, 04:09 (5004 days ago) @ dhw

dhw,
> That is exactly my argument, except that I'm not happy with the word "tendencies", which seems too weak to me. Inborn characteristics are far more binding and restrictive. We don't even know the extent of our own free will (see the discussion on the Intelligence thread), but every bit of the robot has to be deliberately designed from scratch, including and especially its programme. But ... and it's a huge "but" ... my moral argument depends on how feasible it is to build a programme that gives the robot complete autonomy. I think that's what lies behind your statement that "if the intention of the program was for the robot to act in potentially unpredictable ways, I simply don't see how the designer would be held accountable". My turn to interpret: I read this as saying that if the programme allows for attitudes, character traits, preferences, modes of thought and behaviour not programmed by the designer, Bob and not the designer is culpable. Agreed. However, while my "but" was huge, your "if" is colossal. That's why I asked if you thought robotic technology could go all the way, and your response was a definite maybe! On reflection, our discussion may have been at cross purposes (probably my fault). You're saying that if the robot is autonomous, the designer will not be culpable (correct), and I'm saying I don't see how a robot can be autonomous, and if it isn't, the designer will be culpable. It all hinges on the "ifs". As I said earlier, though, my main interest is not moral, but concerns the evidence such a robot would provide that consciousness, emotion, imagination etc. are all the product of materials ... in which case we can dismiss the notion of a soul. You have already done so, but I have not.

So: let's explore your question. Culpability is boring anyway! First, let me get my usual nitpicks out of the way: I haven't thrown out the idea of a soul; I think the question is... misguided. If I found out tomorrow that consciousness comes purely from matter, it wouldn't change the way I think any more than if I found out it came from a divine essence: the fact that I can sit here and declare "I am" is irrelevant to (and supersedes) the idea of a soul in my book. But that's my Buddhist tendencies creeping in; the idea of a soul might be a more powerful question for you.

Maybe you should fill the void; I find it difficult to see what difference it would make. Maybe I could make a good Glaucon to your Plato?

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

First Robot able to Show Emotion & develop bonds

by dhw, Friday, September 03, 2010, 12:43 (5003 days ago) @ xeno6696

MATT: First, let me get my usual nitpicks out of the way: I haven't thrown out the idea of a soul; I think the question is... misguided. If I found out tomorrow that consciousness comes purely from matter, it wouldn't change the way I think any more than if I found out it came from a divine essence: the fact that I can sit here and declare "I am" is irrelevant to (and supersedes) the idea of a soul in my book. But that's my Buddhist tendencies creeping in; the idea of a soul might be a more powerful question for you. Maybe you should fill the void; I find it difficult to see what difference it would make. Maybe I could make a good Glaucon to your Plato?

I'm flattered, but hey, didn't Xenophon describe Glaucon as an ignoramus? Besides, I'm the older brother (more like grandfather actually), and have learnt a lot more from you than you will ever learn from me!

However, let me try to fill the void. I should put "soul" in inverted commas ... it's just a word to describe that part of us which we can't explain ... the mind, if you like, as opposed to the brain. If there is a dimension beyond the material one we know ... a dimension in which our identity exists independently of our body ... that will be the dimension in which David's Universal Intelligence exists, and in which we ourselves may survive physical death. You must remember that I have an open mind on NDEs and OBEs. If we found out tomorrow that consciousness came purely from matter, I would be 99% certain that there was no life after death, in which case the question of God's existence would be purely academic. If he does exist, I don't need him to give my life meaning, and he certainly doesn't need me, so as you say, it wouldn't change the way I think. But so long as there is a possibility of life after death, there remains the possibility that some aspects of religion may be true, and God's nature may become directly relevant to us. So if you do succeed in building your autonomous Bob, and if I'm still around, I shall have mixed feelings: sad that it'll all be over soon, sad that I shall never know what power created all this beauty, and relieved that I shall never know what power created all this suffering.
