building AI: never with consciousness (Humans)

by David Turell @, Thursday, May 28, 2015, 19:50 (3466 days ago) @ xeno6696
edited by David Turell, Thursday, May 28, 2015, 19:59


> Matt: There's a fallacy in here, namely that we can't *know* whether or not something is conscious. It's a brain-in-a-vat kind of problem: we just assume that we're conscious, and that other people we meet are conscious.

I know my own consciousness, and cannot know yours. I agree. Just as I know pornography when I see it, yet it defies a definition that satisfies everyone.
> 
> Matt: That assumption of consciousness will necessarily hold true for machines. Why do you think Turing wrote the test for AI as one where a machine could convince people?

Facts and lying are not the same as the development of new concepts. I can do that. A computer cannot. Can a computer ever develop the concept of relativity?
 
> Matt: So if a machine can manage to trick enough people, we have no choice but to assume it is conscious.

That is absolutely a false assumption. Tricks are not facts. Can your computer have this discussion with me?
> Matt: 1. We don't know what consciousness *is.*

We know what we experience.

> Matt: If you can't define it without controversy, then you have no right to claim you possess knowledge about it, and no right to claim a machine is incapable of being conscious--especially when our day-to-day operating principle of consciousness to date is "I have no reason to believe I'm a brain in a vat, so I'll assume I'm not."

I still maintain that is a false premise. It may satisfy you, but not me. I may not be able to give you a satisfactory definition, but I fully know what I experience.
> 
> Matt: 2. You're betting against human ingenuity, and I'll repeat this again: AI researchers know more about how human beings learn about the world than any other researchers, neuroscientists included.

Learning about the world is not conceptualizing, although I agree that what we learn helps to shape the conclusions we reach.
> 
> Matt: A lesser thrust, but it hasn't been demonstrated sufficiently to me that we're not machines ourselves. What I mean by that is, to date, I've still witnessed nothing about the world that makes me believe my assumption of materialism is false. (Dennett holds sway for me here.)

I don't buy Dennett's approach to the brain, but you are right: most of our bodily functions are very automatic. The brain is a whole different story, with its capacity for plasticity and its relationship to intelligence, so responsive to appropriate stimulation. See the articles on juvenile wiring as prime examples.

> Matt: We can program a machine to learn on its own. That isn't automata. It isn't consciousness, but it isn't automata. Our bodies are ruled by the laws of chemistry, which are finite--but with an amazing complexity. Our bodies may just be a biological equivalent to a machine. Our minds--an emergent property of that complex milieu.

Exactly: "emergent," and more than the sum of the parts.

> Matt: If a machine that can learn isn't automata, it doesn't stand to reason that just because we're machines by the laws of biology & chemistry we're automata ourselves. We have free will--just an extremely limited free will.

Really? Limited by what? The constraints of biologic electricity? Billions of neurons, and trillions of adaptable synapses? New synapses throughout a lifetime?

