First Robot able to Show Emotion & develop bonds (Humans)

by xeno6696 @, Sonoran Desert, Monday, August 16, 2010, 01:08 @ dhw

MATT: First part duly noted. I apologize for...not reading carefully, I guess.
> 
> Thank you. This happens to all of us, and may also be the reason for George's occasional misreading of my posts. I always print out the text I'm responding to, as I find it far easier to check.
> 
> You say that "once a machine becomes sentient, the original designer loses any culpability. [...] I would think that the legal precedent of designer-machine would be parent-child."
> 
> That sounds like a fair analogy, and my thoughts are probably based on ignorance of how it all works, but you will set me right if that is so. Doesn't a robot have to be programmed? To what extent would its sentience be controlled by a will of its own, and to what extent by the preparatory work done by the designer? Even if it appears to have free will, how would we know if the designer had not deliberately built in, say, a propensity for charity or conversely a killer instinct? In relation to responsibility (and ignoring the "designer" element for obvious reasons), one can ask similar questions about human genes and, as you say, the influence of upbringing, but so long as robots are deliberately designed and manufactured (in contrast to human reproduction), perhaps we can say that these questions take on an even sharper profile.
Again, this confusion comes down to the completely different paradigms that exist for machine programming. Most people who work with computers (myself included) program a machine to do a very limited and confined set of tasks. The machine will do nothing we don't tell it to do. Or rather, it cannot display any behavior that we as humans haven't planned for. AI programming, however, starts from a completely different viewpoint. David grew up in the world of computers as punch-cards; it's very difficult to reason about how such a machine could become sentient.

AI programming shares one thing in common with imperative programming: the machine will only do what it's told to do. However, what an AI is told to do is to learn, and to make decisions based on what it has learned. A TRUE AI is tabula rasa. It is connected to some kind of sensory equipment, and its programming is purely to make sense of something(s) in its environment. The designer of the machine would have to train the AI from the ground up--just as if it were a child. (A toy sketch of the contrast follows below.)

If a sentient robot killed someone, we would have the benefit of being able to access its program to see whether it had been tampered with. This itself would open up a whole new class of crimes--and legal systems would be in shock. If you infected a sentient machine with an imperative virus to kill someone, how could we hold the machine accountable at all? Questions like this leave many people claiming that this is why an AI can never be human--because we can't do anything comparable to a person. There would likely have to be something like Asimov's laws of robotics built in, but then we would have sentience with limited free will. Would THAT be right, from a philosophical or humanistic standpoint?
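To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The imperative function can only ever do what its author anticipated; the learning agent starts out knowing nothing and acquires its behavior from feedback. The names here (TabulaRasaAgent, the "wave"/"ignore" actions, the reward scheme) are my own illustration, not taken from any real robotics system:

```python
import random

# Imperative paradigm: every behavior is spelled out in advance.
def imperative_greeting(hour):
    # The machine can only ever do what this rule anticipates.
    return "Good morning" if hour < 12 else "Good evening"

# Learning paradigm: the program specifies HOW to learn, not WHAT to do.
# The agent starts tabula rasa and builds up behavior from feedback,
# the way a designer would "train the AI from the ground up".
class TabulaRasaAgent:
    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}  # starts knowing nothing

    def choose(self, explore=0.1):
        # Occasionally try something at random; otherwise act on
        # whatever has been learned so far.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[a])

    def learn(self, action, reward, rate=0.5):
        # Nudge the estimated value of the action toward the feedback
        # received--the "training", analogous to raising a child.
        self.values[action] += rate * (reward - self.values[action])

# Training loop: the designer supplies feedback, never the behavior itself.
agent = TabulaRasaAgent(["wave", "ignore"])
for _ in range(100):
    act = agent.choose()
    reward = 1.0 if act == "wave" else 0.0  # designer rewards friendliness
    agent.learn(act, reward)

print(agent.values)  # "wave" now dominates the learned value estimates
```

Running it, the agent ends up preferring "wave" even though that behavior appears nowhere in its program--only the capacity to acquire it does. That is the sense in which the designer's culpability blurs: what the machine does depends on its training history, not just its code.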

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

