Matt: Dangerous AI? (Introduction)
> > Matt: It isn't necessary to COPY the human brain via neurons (as in all biochemical processes); it's only necessary to exhibit the same behavior for a given input.

> But will it think like I do and develop thoughts that are not anticipated in an analysis of data or history?

I don't know; I haven't tried building one yet.

> > Matt: It isn't necessary to make new hardware connections because of software. Let me state it another way: programs that rewrite themselves WHILE THEY ARE BEING RUN are trivial.

> That is fascinating.

It's the very basis of computer hacking. There's an attack vector called a "buffer overflow" that lets you overwrite an address in memory and force the next instruction to execute from that address. That address can point to arbitrary code of any kind, usually code constructed from data already sitting in known locations. The same technique is being used (for good) in the field of self-healing systems: systems that can be attacked and yet keep fulfilling their critical functions safely.

Additionally, these techniques of runtime self-modification are precisely how computer viruses are built to evade defense mechanisms like your antivirus software. I'll know a lot more about this in May; I'm finally getting to take the class I wanted this master's degree for: viral analysis, packing, and payload construction.

The final nail in the coffin for the "self-modification is HARD" myth:

http://www.amazon.com/Field-Guide-Genetic-Programming/dp/1409200736/ref=sr_1_2?s=books&ie=UTF8&qid=1387817301&sr=1-2&keywords=genetic+algorithms

I've posted this here before, but I point to it again. All of these algorithms are "write once, and then the machine adapts and learns on its own."

> > Matt: As for the final sentence: "No two brains are the same in finer detail, so it would have to be a generic AI brain."

> It will have an average human personality?

No clue.
But judging from that Japanese research, it looks like they have identical personalities, just different dress, wigs, and speech boxes.

> > Matt: Basically, a generic AI brain dressed differently will be interpreted appropriately.

> Won't it think thoughts programmed into it? Do you expect it to have independent thought?

No to the first, yes to the second.

Expanding on the first:

As I showed above, writing code that modifies itself is trivial; in fact, it is the key entry point into a multibillion-dollar-per-year black market. And I'll remind you of an old point I made: I don't think you're aware that AI programming *is not like the programming I do on a day-to-day basis.*

A machine that is programmed to learn is quite literally *trained*, like an animal or a child, using behavioral techniques. And some of the more sophisticated systems really do generate their own unique trains of thought when given the proper context.

http://www.youtube.com/watch?v=ZS9sIwH0m8k

The Brits are leading the cutting edge here. This one even understands philosophical dilemmas. Does he really "feel"? I don't think so. But the fact is that he can actually handle a real conversation, and he wasn't pre-programmed with speech: he learned how to make the sounds, and the meaning behind the sounds was taught gradually. This isn't a machine that has a word bank and just constructs sentences. He had to be taught to talk--just like you and me.

Expanding on the second:

Well, I've kind of already expanded on that. We're not yet at the point where we have any real answer to "creativity," but I consider that a rather distant problem. What we've been able to accomplish so far is an AI that can learn to talk and actually hold a decent conversation. That's only one aspect.
But the fact that these robots *learn* something that is not pre-defined in their programming (beyond a desire to communicate, akin to our own instincts) is evidence in itself that yes, a machine based on learning algorithms is capable of exceeding its programming; in fact, it rewrites its own programming with new knowledge.
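The "programs that rewrite themselves while they are being run are trivial" point can be shown in a few lines. Here's a minimal sketch in Python (my choice of language, not something from the thread; the function name is made up for illustration): the running program builds new source code as an ordinary string, compiles it, and swaps out one of its own functions.

```python
# Sketch of runtime self-modification: the running program generates new
# source code as data, compiles it, and replaces one of its own functions.

def greet():
    return "hello"

assert greet() == "hello"  # original behavior

# Build replacement source at runtime, as an ordinary string...
new_source = "def greet():\n    return 'rewritten at runtime'\n"

# ...then compile and execute it, overwriting the live definition.
exec(compile(new_source, "<generated>", "exec"))

assert greet() == "rewritten at runtime"  # behavior has changed
```

Real malware does the same kind of thing one level down, rewriting machine code in memory rather than source text, but the principle is identical: code is just data until the moment it runs.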
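The Field Guide linked above is about genetic programming; here's a hedged toy in the same spirit to show what "write once, and then the machine adapts" means. Nothing in the code spells out the answer: a population of random strings converges on a target purely through selection and mutation. The target word and every parameter are my arbitrary choices for the sketch.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

TARGET = "learn"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # count positions that already match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    # copy the parent, flipping each character with small probability
    return "".join(
        random.choice(ALPHABET) if random.random() < 0.2 else ch
        for ch in parent
    )

# start from pure random noise...
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(100)]

# ...and evolve: keep the fittest individual (elitism) plus mutated copies
best = max(population, key=fitness)
for generation in range(2000):
    if best == TARGET:
        break
    population = [best] + [mutate(best) for _ in range(99)]
    best = max(population, key=fitness)

print(best)
```

The programmer wrote the *selection pressure*, not the solution; the solution is discovered. Scale the same idea up from strings to program trees and you have genetic programming proper.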
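The "trained like an animal or a child" point can be sketched too. This is an assumed toy, not how the British robot in the video actually works: the learner starts with no preference among its possible "utterances," and reward-only feedback from a trainer gradually shapes which one it produces. All names and numbers are illustrative.

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

ACTIONS = ["babble", "point", "say_word"]
DESIRED = "say_word"  # what the trainer rewards; the learner never sees this

# the learner's only built-in "programming": a preference score per action
preferences = {action: 0.0 for action in ACTIONS}

def choose():
    # mostly exploit the best-known action, occasionally explore
    if random.random() < 0.1:
        return random.choice(ACTIONS)
    return max(preferences, key=preferences.get)

for trial in range(500):
    action = choose()
    reward = 1.0 if action == DESIRED else 0.0  # behavioral feedback only
    # nudge the preference toward the reward actually received
    preferences[action] += 0.1 * (reward - preferences[action])

print(max(preferences, key=preferences.get))
```

Nowhere does the code say "produce say_word"; that behavior is shaped entirely by the trainer's rewards, which is the behavioral-conditioning analogy in miniature.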
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
Complete thread:
- Matt: Dangerous AI? - David Turell, 2013-12-19, 14:22
- Matt: Dangerous AI? - xeno6696, 2013-12-21, 19:06
- Matt: Dangerous AI? - David Turell, 2013-12-21, 19:56
- Matt: Dangerous AI? - xeno6696, 2013-12-23, 04:26
- Matt: Dangerous AI? - David Turell, 2013-12-23, 15:48
- Matt: Dangerous AI? - xeno6696, 2013-12-23, 17:10
- Matt: Dangerous AI? - David Turell, 2013-12-24, 01:08
- Matt: Dangerous AI? - David Turell, 2013-12-26, 05:16
- Matt: Dangerous AI? - xeno6696, 2013-12-26, 20:26
- Matt: Dangerous AI? - David Turell, 2013-12-27, 00:12
- Matt: Dangerous AI? - xeno6696, 2013-12-27, 03:40
- Matt: Dangerous AI? - David Turell, 2013-12-29, 15:00
- Matt: Negative thoughts about AI - David Turell, 2014-01-02, 15:08