Matt: Dangerous AI? (Introduction)
by David Turell , Thursday, December 19, 2013, 14:22 (3992 days ago)
Do you lie awake at night wondering about this?:

http://www.washingtonpost.com/opinions/matt-miller-artificial-intelligence-our-final-invention/2013/12/18/26ed6be8-67e6-11e3-8b5b-a77187b716a3_story.html?wpisrc=nl_opinions
Matt: Dangerous AI?
by xeno6696 , Sonoran Desert, Saturday, December 21, 2013, 19:06 (3990 days ago) @ David Turell
> Do you lie awake at night wondering about this?:
>
> http://www.washingtonpost.com/opinions/matt-miller-artificial-intelligence-our-final-in...

Anyone who works in the field of programming has thought about it. And it's a ripe idea for movies: Terminator and The Matrix both saw AI as the ultimate apocalypse for our species, with the interesting twist that in the Terminator, one of the machines was co-opted to save it. And there are older greats, such as "2001: A Space Odyssey" and Battlestar Galactica.

To answer the main question: no, it doesn't keep me up at night.

Do I think there's great potential for a problem? If we mismanage it, yes.

My time working in information security has shown me that systems designers typically *only* tend to consider two factors in their solutions:

1. Solving the problem at hand.
2. Solving it within given time constraints.
3. Solving it as elegantly as possible given 1 and 2. This factor is optional.

Security (if it's even considered at all) tends to take a back seat. Look at Target, who potentially lost the account numbers of 40M customers. Long-term maintainability is also something that is often missed.

If we really do get to the point of even limitedly intelligent robotics, I'd be far more concerned that the designers left open security holes that could result in extremely destructive consequences. Imagine a home-health robot, connected to the internet, getting hacked in such a way that the insulin dose for the person it's supposed to care for is upped by two orders of magnitude. How would an intelligent robot be able to tell the difference between instructions IT generated and instructions generated by an outside, malicious actor?

http://www.youtube.com/watch?v=MaTfzYDZG8c

I think that in the next 100 years or so, we'll be able to mimic a great majority of human behaviors, but "true" AI is something that I do not believe will be possible until the underlying hardware more accurately models a brain.
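To make that last question concrete: one standard engineering answer is to authenticate commands cryptographically, so the robot acts only on instructions that carry a valid tag computed with a secret it alone holds, and to enforce hard safety limits regardless. What follows is a minimal sketch of mine, not anything from the article; all names and limits are hypothetical.

import hmac, hashlib

SECRET_KEY = b"device-specific-secret"  # provisioned at manufacture, never transmitted
MAX_DOSE_UNITS = 10.0                   # hard safety envelope, independent of any instruction

def command_is_trusted(message, tag):
    # Recompute the HMAC tag for the message and compare in constant time.
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def apply_dose_command(message, tag):
    if not command_is_trusted(message, tag):
        return False                    # forged or tampered command: refuse it
    dose = float(message.decode())
    if dose > MAX_DOSE_UNITS:
        return False                    # even an authenticated 100x dose is refused
    # ... administer `dose` here ...
    return True

good = b"1.5"
good_tag = hmac.new(SECRET_KEY, good, hashlib.sha256).digest()
print(apply_dose_command(good, good_tag))    # True
print(apply_dose_command(b"150", good_tag))  # False: the tag doesn't match the altered dose

It doesn't settle anything about the robot's own "intent," but it does make "instructions I generated" and "instructions injected from outside" cryptographically distinguishable.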
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
Matt: Dangerous AI?
by David Turell , Saturday, December 21, 2013, 19:56 (3990 days ago) @ xeno6696
> Matt: I think that in the next 100 years or so, we'll be able to mimic a great majority of human behaviors, but "true" AI is something that I do not believe will be possible until the underlying hardware more accurately models a brain.

That will require the computing ability of 100 billion neurons with trillions of synapses, which can themselves modify the electrical impulses they receive. Modelling the brain in humans raises surgical problems, because it will require electrodes in brain areas as well as very careful 3-D depictions of axon/synapse mapping, notwithstanding the problem that brain plasticity throws another curve at the results. No two brains are the same in finer detail, so it would have to be a generic AI brain.
Matt: Dangerous AI?
by xeno6696 , Sonoran Desert, Monday, December 23, 2013, 04:26 (3989 days ago) @ David Turell
> > Matt: I think that in the next 100 years or so, we'll be able to mimic a great majority of human behaviors, but "true" AI is something that I do not believe will be possible until the underlying hardware more accurately models a brain.
>
> That will require the computing ability of 100 billion neurons with trillions of synapses, which can themselves modify the electrical impulses they receive. Modelling the brain in humans raises surgical problems, because it will require electrodes in brain areas as well as very careful 3-D depictions of axon/synapse mapping, notwithstanding the problem that brain plasticity throws another curve at the results. No two brains are the same in finer detail, so it would have to be a generic AI brain.

From an engineering perspective, this really isn't terribly difficult, just costly. Von Neumann architecture is what prevents us from really implementing this, and as I've stated previously, HP has a "memristor" architecture that is a huge step towards a hardware-based neuron. And while the notion of "billions of connections" may flummox you, the history of computer science is littered with the remains of "that can't be done!" With smart selections, we can set aside some parts for hardware computing and do other parts in software. That may mean the result is either too slow or too simplistic, but the key observation is this:

It isn't necessary to COPY the human brain via neurons (as in, all its biochemical processes); it's only necessary to exhibit the same behavior for a given input.

And the self-modification part is downright trivial. Self-modifying programs have been around since the earliest stored-program computers. If the human brain is a computer (and I advocate that it IS), then it follows directly that it is capable of emulation by *any* Turing-complete machine. See the note on behavior above.

As for plasticity: not a problem at all. It isn't necessary to make new hardware connections, because of software. Let me state it another way: programs that rewrite themselves WHILE THEY ARE BEING RUN are trivial (well, to a programming expert). I know this because this kind of programming is also a huge security gap in computing at large.

As for the final sentence, "No two brains are the same in finer detail, so it would have to be a generic AI brain":

That's called window dressing. That part will be addressed by the fact that social context alters meanings:

http://www.youtube.com/watch?v=DF39Ygp53mQ

^^Same exact robot. One is dressed as male, one as female. Basically, a generic AI brain dressed differently will be interpreted appropriately.
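Since that claim keeps coming up, here is a minimal sketch of mine (not anything from the post, and the names are made up) of a program rewriting one of its own functions while it runs. Plain Python stands in for the machine-level self-modification being described.

def behavior(x):
    # Initial behavior: return the input unchanged.
    return x

def rewrite_behavior(new_body):
    # Compile new source code at runtime and swap it in for `behavior`.
    namespace = {}
    exec("def behavior(x):\n    return " + new_body, namespace)
    globals()["behavior"] = namespace["behavior"]

print(behavior(10))         # 10  -- original behavior
rewrite_behavior("x * x")   # the running program rewrites itself
print(behavior(10))         # 100 -- same name, new behavior, no restart needed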
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
Matt: Dangerous AI?
by David Turell , Monday, December 23, 2013, 15:48 (3988 days ago) @ xeno6696
> Matt: It isn't necessary to COPY the human brain via neurons (as in, all its biochemical processes); it's only necessary to exhibit the same behavior for a given input.

But will it think like I do and develop thoughts that are not anticipated in an analysis of data or history?

> Matt: It isn't necessary to make new hardware connections, because of software. Let me state it another way: programs that rewrite themselves WHILE THEY ARE BEING RUN are trivial.

That is fascinating.

> Matt: As for the final sentence, "No two brains are the same in finer detail, so it would have to be a generic AI brain."

It will have an average human personality?

> Matt: Basically, a generic AI brain dressed differently will be interpreted appropriately.

Won't it think thoughts programmed into it? Do you expect it to have independent thought?
Matt: Dangerous AI?
by xeno6696 , Sonoran Desert, Monday, December 23, 2013, 17:10 (3988 days ago) @ David Turell
> > Matt: It isn't necessary to COPY the human brain via neurons (as in, all its biochemical processes); it's only necessary to exhibit the same behavior for a given input.
>
> But will it think like I do and develop thoughts that are not anticipated in an analysis of data or history?

I don't know; I haven't tried building one yet.

> > Matt: It isn't necessary to make new hardware connections, because of software. Let me state it another way: programs that rewrite themselves WHILE THEY ARE BEING RUN are trivial.
>
> That is fascinating.

It's the very basis of computer hacking. There's an attack vector called a "buffer overflow" that lets you overwrite a stored address in memory (classically, a function's saved return address) and force execution to continue at an address of your choosing. What runs there can be arbitrary code of any kind, often constructed from data already sitting in known locations. This technique is being used (for good) in the field of self-healing systems: systems that are capable of being attacked but still fulfilling critical functions safely.

Additionally, the actual techniques of self-modification at runtime are precisely how computer viruses are built to evade defense mechanisms like your antivirus software. I'll know a lot more about this in May; I'm finally getting to take the class I wanted this master's degree for: viral analysis, packing, and payload construction.

For the final nail in the coffin of the "self-modification is HARD" myth:

http://www.amazon.com/Field-Guide-Genetic-Programming/dp/1409200736/ref=sr_1_2?s=books&ie=UTF8&qid=1387817301&sr=1-2&keywords=genetic+algorithms

I've posted this here before, but I point to it again. All of these algorithms are "write once, and then the machine adapts and learns on its own." (A sketch of that idea follows at the end of this post.)

> > Matt: As for the final sentence, "No two brains are the same in finer detail, so it would have to be a generic AI brain."
>
> It will have an average human personality?

No clue. But judging from that Japanese research, it looks like they have identical personalities, just different dress, wigs, and speech boxes.

> > Matt: Basically, a generic AI brain dressed differently will be interpreted appropriately.
>
> Won't it think thoughts programmed into it? Do you expect it to have independent thought?

No to the first, yes to the second.

Expanding on the first:

As I showed above, writing code that modifies itself is trivial; in fact, it is the key entry point in a multibillion-dollar-per-year black market. And I'll remind you of an old point I made: I don't think you're aware that AI programming *is not like the programming I do on a day-to-day basis.*

A machine that is programmed to learn quite literally is *trained*, like an animal or a child, using behavioral techniques. And some of the more sophisticated systems really do generate their own unique trains of thought, even with proper context.

http://www.youtube.com/watch?v=ZS9sIwH0m8k

Brits are leading the cutting edge here. This one even understands philosophical dilemmas. Does he really "feel"? I don't think so. But the fact is that he can actually handle a real conversation, and he wasn't pre-programmed with speech: he learned how to make the sounds, and the meaning behind the sounds was taught gradually. This isn't a machine that has a word bank and just constructs sentences. He had to be taught to talk, just like you and me.

Expanding on the second:

Well, I kind of already expanded on that.
We're not at the point yet where we've got any real answer to "creativity," but I consider that a rather distant problem. What we've been able to accomplish so far is creating an AI that can learn to talk and can actually hold a decent conversation. That's only one aspect. But the fact that these robots *learn* something that is not pre-defined in their programming (beyond a desire to communicate, akin to our own instinct) is evidence in itself that yes, a machine based on learning algorithms is capable of exceeding its programming; in fact, it rewrites its own programming with new knowledge.
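To make "write once, and then the machine adapts and learns on its own" concrete, here is a toy sketch of mine: a simple genetic algorithm (simpler than the genetic programming in the book linked above, but the same principle). The target string plays the role of the environment; the program never contains the answer, it evolves one.

import random

TARGET = "MACHINES CAN LEARN"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Score a candidate by how many characters match the "environment".
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Copy a candidate with occasional random character changes.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from pure noise.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:50]                        # selection
    population = survivors[:1] + [                     # keep the current best as-is
        mutate(random.choice(survivors)) for _ in range(199)]

print("generation", generation, "->", population[0])

The learning in the videos above is of course far richer than this, but the point stands: the programmer writes the selection pressure, not the answer.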
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
Matt: Dangerous AI?
by David Turell , Tuesday, December 24, 2013, 01:08 (3988 days ago) @ xeno6696
> Matt: Expanding on the second:
>
> Well, I kind of already expanded on that. We're not at the point yet where we've got any real answer to "creativity," but I consider that a rather distant problem. What we've been able to accomplish so far is creating an AI that can learn to talk and can actually hold a decent conversation. That's only one aspect. But the fact that these robots *learn* something that is not pre-defined in their programming (beyond a desire to communicate, akin to our own instinct) is evidence in itself that yes, a machine based on learning algorithms is capable of exceeding its programming; in fact, it rewrites its own programming with new knowledge.

All I can say is: amazing. Your descriptions are much appreciated.
Matt: Dangerous AI?
by David Turell , Thursday, December 26, 2013, 05:16 (3986 days ago) @ xeno6696
> > > Matt: I think that in the next 100 years or so, we'll be able to mimic a great majority of human behaviors, but "true" AI is something that I do not believe will be possible until the underlying hardware more accurately models a brain.

The hope for an AI brain just became much more difficult to realize. With trillions of dendrites in the brain, it seems the brain is nothing but a mass of computing areas:

"Dendrites, the branch-like projections of neurons, were once thought to be passive wiring in the brain. But now researchers at the University of North Carolina at Chapel Hill have shown that these dendrites do more than relay information from one neuron to the next. They actively process information, multiplying the brain's computing power. 'Suddenly, it's as if the processing power of the brain is much greater than we had originally thought,' said Spencer Smith, PhD, an assistant professor in the UNC School of Medicine. His team's findings, published October 27 in the journal Nature, could change the way scientists think about long-standing scientific models of how neural circuitry functions in the brain, while also helping researchers better understand neurological disorders."

http://esciencenews.com/articles/2013/10/27/unc.neuroscientists.discover.new.mini.neural.computer.brain
Matt: Dangerous AI?
by xeno6696 , Sonoran Desert, Thursday, December 26, 2013, 20:26 (3985 days ago) @ David Turell
http://spectrum.ieee.org/computing/hardware/lowpower-chips-to-model-a-billion-neurons

~88k chips needed to model 1Bn neurons. Again, the only wall is money.

Key takeaway:

"Unlike the digital circuits in traditional computers, which could take weeks or even months to model a single second of brain operation, these analog circuits can model brain activity as fast as or even faster than it really occurs, and they consume a fraction of the power."

You keep coming back to computers as you know them, which are quite simply only a fraction of all the different kinds of computing architectures in existence. Even Penrose wasn't aware of all of them.
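For a sense of what "modeling a neuron" means computationally, here is a minimal sketch of mine (not from the article) of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models; whether these particular chips use exactly this model is my assumption for illustration, not a claim from the article. A digital simulation has to step through time like this for every neuron; analog circuits let the physics of the chip do the integration directly, which is where the speed and power savings come from.

def simulate_lif(input_current, dt=0.001, tau=0.02,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    # Leaky integrate-and-fire: the membrane potential leaks toward rest,
    # integrates the input, and fires (then resets) when it crosses threshold.
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + current) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step * dt)   # record the spike time in seconds
            v = v_reset
    return spike_times

# One simulated second of constant drive, in 1 ms steps.
spikes = simulate_lif([1.5] * 1000)
print(len(spikes), "spikes in one simulated second")

Multiply that inner loop by a billion neurons and thousands of time steps per second, and the appeal of doing it in analog hardware is obvious.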
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
Matt: Dangerous AI?
by David Turell , Friday, December 27, 2013, 00:12 (3985 days ago) @ xeno6696
> Matt: http://spectrum.ieee.org/computing/hardware/lowpower-chips-to-model-a-billion-neurons
>
> ~88k chips needed to model 1Bn neurons. Again, the only wall is money.

Again, a fascinating article. Thanks. Best comment:

'Cool hardware, but seems a bit wasteful without better hypotheses as to brain functioning.'

> Matt: Key takeaway:
>
> "Unlike the digital circuits in traditional computers, which could take weeks or even months to model a single second of brain operation, these analog circuits can model brain activity as fast as or even faster than it really occurs, and they consume a fraction of the power."

It still must heed the complexity of the brain's functioning. Computing shortcuts may undercut the results.
Matt: Dangerous AI?
by xeno6696 , Sonoran Desert, Friday, December 27, 2013, 03:40 (3985 days ago) @ David Turell
> > Matt: http://spectrum.ieee.org/computing/hardware/lowpower-chips-to-model-a-billion-neurons
> >
> > ~88k chips needed to model 1Bn neurons. Again, the only wall is money.
>
> Again, a fascinating article. Thanks. Best comment:
>
> 'Cool hardware, but seems a bit wasteful without better hypotheses as to brain functioning.'

The comment is short-sighted: we aren't going to GET better hypotheses of brain functioning if we don't attempt construction.

> > Matt: Key takeaway:
> >
> > "Unlike the digital circuits in traditional computers, which could take weeks or even months to model a single second of brain operation, these analog circuits can model brain activity as fast as or even faster than it really occurs, and they consume a fraction of the power."
>
> It still must heed the complexity of the brain's functioning. Computing shortcuts may undercut the results.

Or it may actually create a more efficient brain design, allowing us to drastically reduce the effort it takes to create an AI (and also subsequently show that we're better at optimization than nature), which is ultimately the goal of this kind of research. (Though it was couched in terms of allowing the study of certain kinds of brain diseases.) At any rate, if you want the machine to model inefficient behavior, you just need to program in signal delays.

I'll poke you again:

"Traditional CMOS chips were not invented with parallelism in mind, so it shouldn't come as a big surprise that they have trouble mimicking mammalian brains, the best parallel machines on Earth."

You're not thinking about the problem in the right way, and so the obvious solutions cloud your perspective.
--
\"Why is it, Master, that ascetics fight with ascetics?\"
\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"
Matt: Dangerous AI?
by David Turell , Sunday, December 29, 2013, 15:00 (3982 days ago) @ xeno6696
> Matt: I'll poke you again:
>
> "Traditional CMOS chips were not invented with parallelism in mind, so it shouldn't come as a big surprise that they have trouble mimicking mammalian brains, the best parallel machines on Earth."
>
> You're not thinking about the problem in the right way, and so the obvious solutions cloud your perspective.

This NYT article has really helped my understanding of your point:

http://www.nytimes.com/2013/12/29/science/brainlike-computers-learning-from-experience.html?nl=todaysheadlines&emc=edit_th_20131229&_r=0
Matt: Negative thoughts about AI
by David Turell , Thursday, January 02, 2014, 15:08 (3978 days ago) @ David Turell
From a software "expert":

http://nautil.us/issue/8/home/ai-has-grown-up-and-left-home

Hope you will comment.