building AI: never with consciousness (Humans)

by xeno6696, Sonoran Desert, Thursday, May 28, 2015, 15:57 @ David Turell


> > Matt: Human thinking is broadened by contact with other people, but we're still limited by all the amassed knowledge that came before us... there is no real *genius* in this world... only a limited creativity based upon what we've managed to stumble across.
> > 
> > We are a bumbling species, intellectually.
> 
> Welcome back! And you have neatly skipped consciousness, which computers will never have. Computers are simply machines.

There's a fallacy in here, mainly that we can't *know* whether or not something is conscious. It's a brain-in-a-vat kind of problem: we just assume that we're conscious, and that the other people we meet are conscious.

That assumption of consciousness will necessarily have to extend to machines. Why do you think Turing framed his test for AI as one where a machine has to convince people? Partly because lying requires a dynamic of thought that only a conscious being could possess, and partly because people don't *know* what consciousness really is; it's a gray, "it looks like it's conscious" kind of thing.

So if a machine can manage to trick enough people, we have no choice but to assume it is conscious.

So my main disagreement when you say a machine will *never* be conscious is this two-pronged thrust:

1. We don't know what consciousness *is.* If you can't define it without controversy, then you have no right to claim you possess knowledge about it, and no right to claim a machine is incapable of being conscious--especially when our day-to-day operating principle of consciousness to date is "I have no reason to believe I'm a brain in a vat, so I'll assume I'm not."

2. You're betting against human ingenuity, and I'll repeat this again: AI researchers know more about how human beings learn about the world than researchers in any other field, neuroscientists included.

A lesser thrust, but it hasn't been demonstrated sufficiently to me that we're not machines ourselves. What I mean by that is that, to date, I've witnessed nothing about the world that makes me believe my assumption of materialism is false. (Dennett holds sway for me here.) We can program a machine to learn on its own. That isn't an automaton. It isn't consciousness, but it isn't an automaton either. Our bodies are ruled by the laws of chemistry, which are finite, but with an amazing complexity. Our bodies may just be a biological equivalent of a machine; our minds, an emergent property of that complex milieu. If a machine that can learn isn't an automaton, then it doesn't follow that, just because we're machines by the laws of biology and chemistry, we're automata ourselves. We have free will--just an extremely limited free will.
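
To make that concrete, here is a minimal sketch of what I mean by a machine learning on its own: a toy perceptron in Python, trained on a made-up AND data set chosen purely for illustration (the data, names, and numbers are mine, not anything David cited). The point is that the rule the program ends up applying is one it pulled out of the examples, not one a programmer typed in.

    # Toy perceptron: the weights start at zero and are adjusted only by
    # the training examples the program is shown. The data set (the AND
    # function) is an arbitrary illustration.
    training_data = [
        ((0, 0), 0),
        ((0, 1), 0),
        ((1, 0), 0),
        ((1, 1), 1),
    ]

    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    for _ in range(20):                       # pass over the examples a few times
        for (x1, x2), target in training_data:
            # current guess under the rule learned so far
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output           # how wrong that guess was
            # nudge the rule toward the example
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error

    print(weights, bias)                      # a rule the program found, not one we wrote

Trivial, yes, but the distinction stands: the behavior at the end wasn't written down anywhere, it was learned from what the machine was exposed to.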

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

