A new model for building AI (Humans)
by xeno6696 , Sonoran Desert, Saturday, August 14, 2010, 04:43 (5214 days ago)
http://www.kurzweilai.net/a-new-blueprint-for-artificial-general-intelligence

Ironically, my response to dhw concerning Kurzweil landed me on this article. Good stuff!!!!
--
"Why is it, Master, that ascetics fight with ascetics?"
"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics."
A new model for building AI
by David Turell , Wednesday, November 03, 2010, 18:53 (5133 days ago) @ xeno6696
I don't believe AI is possible. Note the complexity of the biochemistry using DNA to form synapses (junctions) among axons as they form and branch, making new memories for brain intelligence. (There are about one quadrillion synapses in the human brain.) The brain is very dynamic in its formation. A young child becomes more intelligent (develops a higher IQ) if his mother works with him, reading to him, talking to him, stimulating him mentally. He will learn faster in his later life.

http://www.the-scientist.com/article/display/57775/
A new model for building AI
by xeno6696 , Sonoran Desert, Friday, November 05, 2010, 00:55 (5132 days ago) @ David Turell
David knows how to bring me out of my shell!

You actually happened to chance upon a small area of neurology I know something about. One of my old acquaintances was involved in a project that taught biologists to splice jellyfish genes into neurons. His PhD thesis was on the neuronal development of rats; they injected these genes into rat neurons and watched the development.

They discovered over the course of the experiment (from embryo to adult) that, first, the precursor to norepinephrine would trigger pathways through the brain. These pathways would later be followed by axon and neuron growth.

And after birth, when a mother rat touched her offspring, it would trigger a release of norepinephrine; rats who had no contact with their mothers would become highly selfish and antisocial.

To think that we couldn't model this is hubris. I think we'll do well over the next couple of decades, but quantum computers will be the key. If we can't do it there, we'll never be able to do it. It's easy to say it can't be done when it's never been tried.
A new model for building AI
by dhw, Wednesday, November 10, 2010, 16:22 (5126 days ago) @ David Turell
My daughter has sent me a fascinating article by Dr Cynthia Breazeal, Director of the Personal Robots Group at the MIT Media Lab, who acted as a consultant on the movie A.I., and who has since been working with Stan Winston, a "pre-eminent special-effects animatronics expert".

Unfortunately, I can't find the article on the Internet, and it's far too long for me to type out here, but I was particularly struck by certain passages which I will reproduce:

"Robots such as Kismet and Leonardo serve as a mirror, reflecting our humanity back at us as we interact with them and they engage us. As we look into these mirrors, we can better see ourselves ... scientifically, socially and ethically."

"What ultimately matters when we make a judgment as to the authenticity of the emotion or friendship of another? Is it how they treat us? Is it how these attributes are implemented in biological (or silicon) brains? Is it how such things are grounded in experience? How human-like does the exchange have to be? If we are willing to grant other species with genuine emotion ... e.g. dogs with dog emotions, dolphins with dolphin emotions etc. ... then are we willing to grant robots with robot emotions? If so, then what is robot emotion? And what is the nature of the emotion or relationship that it might evoke in us?"

"What distinguishes 'us' from 'them'? Humankind has steadily been retreating from our notion of specialness for hundreds of years. Science has shown us that we are not at the centre of our solar system, that we share a surprising percentage of our genome with other species, that certain animals (and certain machines) are also able to use tools, communicate with language, and solve cognitive problems in at least limited ways. Now technology is even entering into the social and the emotional realms, as computers begin to recognize facial expressions and reason about socio-emotional states of people."

"Robots may eventually incorporate more biological technologies to leverage from biochemical processes, such as the ability to self-repair. [...] Will we still be human? What does it mean to be human? What do we want to preserve of our humanness? What are the implications for granting the status of personhood?"

There are some great questions here, and I can well understand Matt's fascination with this subject. Dr Breazeal has no idea herself how far the technology can go (she says that programming the various human attributes into a robot is "a daunting task, if not impossible"), but it's obviously part of the fascination to keep testing the frontiers.
A new model for building AI
by xeno6696 , Sonoran Desert, Friday, November 12, 2010, 03:11 (5124 days ago) @ dhw
dhw asks many good questions, but I wish to direct to a specific question at a time:

"What do we want to preserve of our humanness?"

What makes you think that we have to choose to preserve anything? That which makes us human is that ineffable river we call man. Never the same thing twice... do we cast man in stone as we did Gods, and thus destroy them?
A new model for building AI
by dhw, Tuesday, November 16, 2010, 13:13 (5120 days ago) @ xeno6696
MATT: dhw asks many good questions, but I wish to direct to a specific question at a time: "What do we want to preserve of our humanness?"

I can't take the credit for any of the questions. They were asked by Dr Cynthia Breazeal, from whose excellent article I quoted several passages.

Your response is: "What makes you think that we have to preserve anything?"

I don't think you can take that question on its own. She leads into it from the statement that robots "may eventually incorporate more biological technologies to leverage from biochemical processes, such as the ability to self-repair." This makes her ask, "Will we still be human? What does it mean to be human? What do we want to preserve of our humanness? What are the implications for granting the status of personhood?" I see all these questions as interconnected. If biological technology is married to robotic technology, we begin to move closer and closer to the fictional scenarios of Frankenstein, Jekyll and Hyde, and The Stepford Wives, in which humans lose control of their identities.

Like David, I'm sceptical as to whether we can ever build a robot indistinguishable in all respects (including intellectual and emotional) from humans, but Cynthia Breazeal's questions enter into fields of philosophy. You call man an "ineffable river ... never the same thing twice" (ineffable means indescribable or inexpressible, so I'm not sure how it applies to a river), and you ask if we should "cast man in stone as we did Gods and thus destroy them". I think that may be the very problem ... that robots could be the same thing twice, and having been programmed they could be metaphorically cast in stone; even organic humans subjected to technological implants could be cast in stone, in the sense that their behaviour might be dictated by technology and not by that still unfathomably mysterious personal identity linked to the source or medium called consciousness.

I don't think the process is stoppable, and I don't think it has to be stopped, because I think it will reach insurmountable barriers, but the moral questions still apply even before we reach those barriers. Imagine the consequences in a totalitarian regime if the government controlled the technology to change and direct people's behaviour. It's already bad enough that they can influence thought (see Orwell's 1984), but at least individuals are still potentially capable of thinking for themselves (hence rebellions and resistance movements). And so I think you have answered your own question. What we have to preserve is individual autonomy ... which is the right not to be "cast in stone".
A new model for building AI
by David Turell , Tuesday, November 16, 2010, 22:42 (5120 days ago) @ dhw
> Like David, I'm sceptical as to whether we can ever build a robot indistinguishable in all respects (including intellectual and emotional) from humans, but Cynthia Breazeal's questions enter into fields of philosophy. You call man an "ineffable river...never the same thing twice" (ineffable means indescribable or inexpressible, so I'm not sure how it applies to a river),

As an old river rafter, I find the metaphor perfect. The water in the river is never the same water moment by moment. Its color may be the same, its temperature the same, but the molecules are always different. The Grand Canyon at its quietest runs at about 10,000 cfs, and 120,000 cfs at flood stage. Never the same, but it looks the same, and it is raftable at both levels.

Like you, I am more than skeptical that a robot can ever be as changeable as a human. Our views of the world vary all through our lifetimes. A robot may represent a moment, a slice of a lifetime.
A new model for building AI
by xeno6696 , Sonoran Desert, Wednesday, November 17, 2010, 01:52 (5120 days ago) @ dhw
> MATT: dhw asks many good questions, but I wish to direct to a specific question at a time: "What do we want to preserve of our humanness?" [...] I see all these questions as interconnected. If biological technology is married to robotic technology, we begin to move closer and closer to the fictional scenarios of Frankenstein, Jekyll and Hyde, the Stepford Wives, in which humans lose control of their identities.

Yet you forget the most prescient analysis from Kurzweil: AI may become superfluous because it will soon be possible to extend human consciousness with the mechanical... think of processors and computer memory installed within the human brain!!! This scenario is, in my mind, more probable... and what do you think of human consciousness then?

> Like David, I'm sceptical as to whether we can ever build a robot indistinguishable in all respects (including intellectual and emotional) from humans [...] And so I think you have answered your own question. What we have to preserve is individual autonomy ... which is the right not to be "cast in stone".

Ah... a tremendous reinterpretation of my words, but beautiful nonetheless! I'm of the opinion that it is mankind's wish to fix gods in place for all eternity that is the mortal blow to any God or religion. But I ask you to imagine a world where we do have machines that seem remarkably human... if you haven't seen the movie AI, this is a very good place to start! Does it really make you think that it would challenge your autonomy? It is precisely my position that states, "Each man is his own island..." The only drastic implication for man that I see is that of over-moralization.

If you see the movie AI... and I hope you do... watch how your emotions react to the 'lost child' of the story. If machines interacted with men in this fashion, I predict that man would over-moralize to the point of effeminacy; the society that would invariably evolve around such a monstrosity as that child would undoubtedly be one in which man would lose his autonomy... we would all have to become Jainists in order to survive! We would become a slave to life in the grossest of senses. Harming such a child would feel identical to harming an actual child.
A new model for building AI
by dhw, Thursday, November 18, 2010, 13:05 (5118 days ago) @ xeno6696
MATT: Yet, you forget the most prescient analysis from Kurzweil; AI may become superfluous because it will soon be possible to extend human consciousness with that of the mechanical... think of processors and computer memory installed within the human brain!!! This scenario ... in my mind more probable ... and what do you think of human consciousness then?

This is precisely the scenario I was envisaging when I wrote about organic humans being "subjected to technological implants", with their behaviour being dictated by technology and not by their personal identity. It's scary enough that an engineer several thousand miles away can tap into my computer even now and take it over. The same thing could be done with a computer implanted in the brain. It's nightmarish but utterly plausible, and poses a threat to our individual autonomy. Human consciousness can be manipulated psychologically (e.g. with propaganda) and also physiologically (e.g. with drugs), but technological manipulation may well be the greatest threat. Even without implants, technology is affecting the way our brains work. How many young people nowadays are capable of the simplest mental arithmetic? However, I don't quite know how to answer your question because I'm not sure I understand what point you're trying to make! Sorry.

MATT: If you see the movie AI ... and I hope you do ... watch how your emotions react to the 'lost child' of the story. If machines interacted with men in this fashion, I predict that man would over-moralize to the point of effeminacy; the society that would invariably evolve around such a monstrosity as that child would undoubtedly be one in which man would lose his autonomy... we would have to all become Jainists in order to survive!

I haven't seen the film AI, but will look out for it. It's not clear to me, however, why an android would lead to us all becoming Jainists. Wouldn't the android be a special case, to be considered on a par with humans, while every other form of life remains the same? If so, won't our attitude towards the rest of life also remain the same?
A new model for building AI
by David Turell , Thursday, November 18, 2010, 15:13 (5118 days ago) @ dhw
MATT: Yet, you forget the most prescient analysis from Kurzweil; AI may become superfluous because it will soon be possible to extend human consciousness with that of the mechanical... [...]

MATT: If you see the movie AI ... and I hope you do ... watch how your emotions react to the 'lost child' of the story.

Good luck with AI. The following article indicates the size of the task:

http://news.cnet.com/8301-27083_3-20023112-247.html
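The "size of the task" invites a quick back-of-the-envelope estimate. The figures below (one quadrillion synapses, a single 4-byte weight per synapse) are illustrative assumptions for the sake of arithmetic, not numbers taken from the article:

```python
# Back-of-the-envelope estimate: memory for a naive synapse-level brain model.
# All figures here are illustrative assumptions, not measurements.

SYNAPSES = 10**15          # ~one quadrillion synapses (rough estimate)
BYTES_PER_SYNAPSE = 4      # a single 32-bit weight per synapse (assumption)

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
petabytes = total_bytes / 10**15

print(f"~{petabytes:.0f} PB just to store one weight per synapse")
```

Even this minimal representation, which ignores neuron state and connectivity structure entirely, lands in the petabyte range, which gives a sense of the scale being discussed.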
A new model for building AI
by xeno6696 , Sonoran Desert, Monday, November 22, 2010, 01:11 (5115 days ago) @ David Turell
> Good luck with AI. The following article indicates the size of the task:
>
> http://news.cnet.com/8301-27083_3-20023112-247.html

No, not at all; it might not be necessary to emulate even THAT many connections. We can already emulate cat brains--I think either dhw or George provided a link for that this summer. And as I stated earlier, quantum computing will make this problem much more tractable. Like I said before, a quantum computer will be able to factor a number on the order of 2^100 efficiently. (The conjecture is 'will be' because at the moment they can only maintain a computational state long enough to factor a number as small as 15.) With current computers, the best known methods take a number of steps that grows exponentially with the size of the number--meaning, in practical terms, the sun will die before the computer finishes. So quantum computing eliminates boundaries. The current constraint for modeling something as big as 150 trillion connections (as the article states) is memory. However, this is another area where quantum computers will not have such a constraint; the 'qubits' are memory themselves.

And before you tell me that it's all a "pipe dream": quantum computing was such a thing in 1993. Now we have working machines. By 2025 we'll have our first "ENIAC," and then some law similar to Moore's takes over--though it will be MUCH faster than Moore's. To quote Kurzweil, "We're entering an exponential age." With quantum computing there's no longer the heat or transistor barrier that has stifled processor speeds since 2002.

So again: the problem, if you want to fully model the human brain, is 150 trillion connections. A quantum computer won't be restricted by the conventional limits we have now, so again--if it is possible at all, it will be here.

Still, a much simpler means of countering an AI aspiration right now, David, has nothing to do with biology and everything to do with practicality: how do you write an equation that solves everything?

I have ready counters for that question, but we'll worry about that another time. For now, just note that we are soon reaching an epoch where the kinds of problems we can use computers for will be limited only by the ingenuity of the programmers, and not at all by the complexity of the problems. If AI is intractable, it will be only because of a lack of human ingenuity and not because of the hardware it sits on.
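The exponential scaling Matt invokes can be illustrated with naive trial division, the simplest classical factoring method. This is only a sketch of why bit-length matters, not a claim about the best known classical algorithms (which are sub-exponential but still superpolynomial):

```python
# Why naive factoring is exponential in the number's bit-length:
# trial division tries divisors up to sqrt(n), i.e. about 2^(bits/2) candidates.
import math

def trial_division(n):
    """Return the smallest prime factor of n via a naive O(sqrt(n)) search."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # n itself is prime

# Worst-case work roughly quadruples every time we add 2 bits to n:
for bits in (16, 24, 32):
    print(f"{bits}-bit number: up to ~2^{bits // 2} = {2 ** (bits // 2):,} divisions")

print(trial_division(15))  # the toy case early quantum demonstrations factored
```

Shor's algorithm, by contrast, factors in time polynomial in the number of bits, which is what makes quantum factoring qualitatively different rather than merely faster.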
A new model for building AI
by xeno6696 , Sonoran Desert, Monday, November 22, 2010, 00:48 (5115 days ago) @ dhw
> MATT: Yet, you forget the most prescient analysis from Kurzweil; AI may become superfluous because it will soon be possible to extend human consciousness with that of the mechanical... [...]
>
> This is precisely the scenario I was envisaging when I wrote about organic humans being "subjected to technological implants", with their behaviour being dictated by technology and not by their personal identity. [...]

As someone who pays attention to computer security, there are plenty of reasons in my mind not to augment, say, computational abilities into the brain. Memory would be one thing.

> MATT: If you see the movie AI ... and I hope you do ... watch how your emotions react to the 'lost child' of the story. [...]
>
> I haven't seen the film AI, but will look out for it. It's not clear to me, however, why an android would lead to us all becoming Jainists. Wouldn't the android be a special case, to be considered on a par with humans, while every other form of life remains the same? If so, won't our attitude towards the rest of life also remain the same?

Maybe it won't happen for you, but for me, the nurturing side of me comes out in spades over the course of the movie, and it's precisely because the machine is a child. If one were to extrapolate forward, imagine an adult beating the child 'machine.' Technically, it's a machine. It also cries out in pain and acts just like a real child... What then do you think our society would say about abusing these machines? To what extent are they "real," considering that we cannot even verify experience? I'm not going to be as extreme as Martin and suggest that these are only mimics and nothing else, but I think this scenario raises interesting points. Would we treat them as animals or as people?
building AI: never with consciousness
by David Turell , Monday, December 22, 2014, 01:28 (3624 days ago) @ xeno6696
I agree with Bishop and Penrose. Great computation, but no free-floating productive thought:

http://www.newscientist.com/article/dn26716-fear-artificial-stupidity-not-artificial-intelligence.html

"First, computers lack genuine understanding. The Chinese Room Argument is a famous thought experiment by US philosopher John Searle that shows how a computer program can appear to understand Chinese stories (by responding to questions about them appropriately) without genuinely understanding anything of the interaction.

"Second, computers lack consciousness. An argument can be made, one I call Dancing with Pixies, that if a robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on. If we reject this wider state of affairs - known as panpsychism - we must reject machine consciousness.

"Lastly, computers lack mathematical insight. In his book The Emperor's New Mind, Oxford mathematical physicist Roger Penrose argued that the way mathematicians provide many of the "unassailable demonstrations" to verify their mathematical assertions is fundamentally non-algorithmic and non-computational."
building AI: never with consciousness
by David Turell , Monday, May 25, 2015, 17:54 (3469 days ago) @ David Turell
Gelernter on AI. Computers are just machines; the mind is not:

http://inference-review.com/article/the-mind-in-its-place

"The mind is its own place, John Milton observed. But where is this place, and what is its structure? Where is the map? I argue that our unwillingness to confront these simple questions has got us into deep trouble. Too many thinkers have let themselves be guided by an analogy instead of a map. The analogy in this case is an important part of modern intellectual history; it is over half a century old, and remains deeply influential and deeply misleading. My emphasis is on the analogy itself and its remarkable influence on science, philosophy, and popular culture.

The brain is not a computer. The mind is not its software."
building AI: never with consciousness
by Balance_Maintained , U.S.A., Tuesday, May 26, 2015, 13:23 (3468 days ago) @ David Turell
> The brain is not a computer. The mind is not its software."

This is very true. Even though there are some mechanical similarities, for every one thing that is similar there are a hundred more that are completely different.
--
What is the purpose of living? How about, 'to reduce needless suffering'? It seems to me to be a worthy purpose.
building AI: never with consciousness
by David Turell , Tuesday, May 26, 2015, 14:30 (3468 days ago) @ Balance_Maintained
> > The brain is not a computer. The mind is not its software."
>
> Tony: This is very true. Even though there are some mechanical similarities, for every one thing that is similar there are a hundred more that are completely different.

Even a summa cum laude game programmer cannot produce a computer program that gives a computer a mind.
building AI: never with consciousness
by Balance_Maintained , U.S.A., Tuesday, May 26, 2015, 17:58 (3468 days ago) @ David Turell
> Even a summa cum laude game programmer cannot produce a computer program that gives a computer a mind.

Game Designer, and no, I couldn't. I do recognize that it IS designed, though. There is entirely too much 'purpose' in the way it is all put together and the way it all works for it to be random. That level of cohesion and organization does not happen by chance.
building AI: never with consciousness
by xeno6696 , Sonoran Desert, Tuesday, May 26, 2015, 23:59 (3468 days ago) @ David Turell
> I agree with Bishop and Penrose. Great computation, but no free-floating productive thought:
>
> http://www.newscientist.com/article/dn26716-fear-artificial-stupidity-not-artificial-in...

I guess I'm here to muddy the waters a bit.

Yes, every (non-viral) program can only execute within its narrow confines.

I'll challenge: same with us. There's never been a real example, in my estimation, of a human being that has quite literally come up with some idea completely from scratch. What do I mean?

If you study the history of religions, we started with animism, and ultimately converged (worldwide) on a mix of pantheism and monotheism. Animism... well, we observed nature, and realized we were a part of it--and more importantly, we realized that things happened that we couldn't directly control. So we worshipped the spirits of the trees and animals. There's no great leap here. Even when we leap to the Pythagoreans... their spiritual world was still bound by the limits of human thought: and the same is true today.

The history of human thought is a slow evolution of one idea leading into the next, sometimes coming full circle, sometimes finding a new plateau, but never... anything new.

How is that any different from a limited computer program? Sure, our program might be the cardinal product of every chemical reaction in our bodies, but because we're run by chemistry... still finite. Still...

...computable...

...even if "computable" means "can only be accomplished by cells."

Now... bacteria are far from people... but one may take note of this software:

https://simtk.org/home/wholecell/

A complete simulation of a bacterium that allows full predictability from genotype to phenotype.

Human thinking is broadened by contact with other people, but we're still limited by all the amassed knowledge that came before us... there is no real *genius* in this world... only a limited creativity based upon what we've managed to stumble across.

We are a bumbling species, intellectually.
building AI: never with consciousness
by Balance_Maintained , U.S.A., Wednesday, May 27, 2015, 00:28 (3468 days ago) @ xeno6696
I agree with Bishop and Penrose. Great computation no free-floating productive thought.: > > > > http://www.newscientist.com/article/dn26716-fear-artificial-stupidity-not-artificial-in... > > > I guess I'm here to muddy the waters a bit. > > Yes, every (non-viral) program can only execute within its narrow confines. > > I'll challenge: Same with us. There's never been a real example, in my estimation, of a human being that has quite literally come up with some idea completely from scratch. What do I mean? > -There is nothing new under the sun.-> If you study the history of religions, we started with animism, and ended up penultimately converging (worldwide) to a mix of pantheism and monotheism. Animism... well we observed nature, and realized we were a part of it--and more importantly, we realized that things happened that we couldn't directly control. So we worshipped the spirits of the trees and animals. There's no great leap here. Even when we leap to the Pythagoreans... their spiritual world was still bound by the limits of human thought: and the same is true today. > > The history of human thought is a slow evolution of one idea leading into the next, sometimes coming full circle, sometimes finding a new plateau, but never... anything new. > > How is that any different than a limited computer program? > Human thinking is broadened by contact with other people, but we're still limited by all the amassed knowledge that came before us... there is no real *genius* in this world... only a limited creativity based upon what we've managed to stumble across. > > We are a bumbling species, intellectually.--Human 'creativity' is not really about coming up with anything 'new'. It is about putting what we already know together in new combinations that form a completely unique perspective. Mathematical formulae, paintings, music, speech, language, none of these things are new in and of themselves. However, I will challenge you on one point. We DO create something new. 
We create new experiences, perspectives, memories.

Now, that may not seem like much. You might say that creating new memories is no special feat, and trying to quantify experiences and perspectives is really hard to pin down. What makes us different is NOT the data or the computational functions, but our ability to FEEL, to empathize, and dare I say it, to love. One of the interesting things about game design is that even though we traffic in algorithms and data, what we really peddle to the masses is an experience. It is emotional. No computer can emulate emotions. Computers cannot feel. Without feeling, their computational capacity will never match a human's, no matter how fast it gets. Feelings allow our creativity to reach beyond preprogrammed patterns. In fact, our brain has a mechanism designed to do just that: when you are tired, it stops enforcing the logical constraints it normally operates under and begins making new and unexpected connections.
--
What is the purpose of living? How about, 'to reduce needless suffering'? It seems to me to be a worthy purpose.
building AI: never with consciousness
by David Turell , Wednesday, May 27, 2015, 00:45 (3468 days ago) @ Balance_Maintained
> Tony: Human 'creativity' is not really about coming up with anything 'new'. It is about putting what we already know together in new combinations that form a completely unique perspective..... However, I will challenge you on one point. We DO create something new. We create new experiences, perspectives, memories.
>
> No computer can emulate emotions. Computers cannot feel. Without feeling, their computational capacity will never match a human's, no matter how fast it gets. Feelings allow our creativity to reach beyond preprogrammed patterns. In fact, our brain has a mechanism designed to do just that: when you are tired, it stops enforcing the logical constraints it normally operates under and begins making new and unexpected connections.

Simply, bravo!
building AI: never with consciousness
by xeno6696 , Sonoran Desert, Wednesday, May 27, 2015, 01:19 (3468 days ago) @ David Turell
> > Tony: Human 'creativity' is not really about coming up with anything 'new'. It is about putting what we already know together in new combinations that form a completely unique perspective..... However, I will challenge you on one point. We DO create something new. We create new experiences, perspectives, memories.
> >
> > No computer can emulate emotions. Computers cannot feel. Without feeling, their computational capacity will never match a human's, no matter how fast it gets. Feelings allow our creativity to reach beyond preprogrammed patterns. In fact, our brain has a mechanism designed to do just that: when you are tired, it stops enforcing the logical constraints it normally operates under and begins making new and unexpected connections.
>
> Simply, bravo!

http://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/

If the audience can't tell the difference, and you have already accepted that all human ingenuity is derivative...

What's the difference?
--
"Why is it, Master, that ascetics fight with ascetics?"
"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics."
building AI: never with consciousness
by Balance_Maintained , U.S.A., Wednesday, May 27, 2015, 11:18 (3467 days ago) @ xeno6696
> > > Tony: Human 'creativity' is not really about coming up with anything 'new'. It is about putting what we already know together in new combinations that form a completely unique perspective..... However, I will challenge you on one point. We DO create something new. We create new experiences, perspectives, memories.
> > >
> > > No computer can emulate emotions. Computers cannot feel. Without feeling, their computational capacity will never match a human's, no matter how fast it gets. Feelings allow our creativity to reach beyond preprogrammed patterns. In fact, our brain has a mechanism designed to do just that: when you are tired, it stops enforcing the logical constraints it normally operates under and begins making new and unexpected connections.
> >
> > Simply, bravo!
>
> Xeno: http://www.theatlantic.com/entertainment/archive/2014/08/computers-that-compose/374916/...
>
> If the audience can't tell the difference, and you have already accepted that all human ingenuity is derivative...
>
> What's the difference?

Well, for starters, those robots will never deviate, never improvise, and never get caught up in the emotion of the music and perform even better than they typically would. Aside from that, they will also never develop a NEW style of music.

Music is not new. It is tuned frequencies of sound put together in a pleasing arrangement. That said, no one could question that Boogie Woogie was new, as a style, compared to all that came before it. No one could question that Jazz, Rock, Blues, Death Metal, Alternative, New Age, and even Classical music were all new. Music is not new. Sound is not new. But the arrangement and composition of the sounds, the tones and timbres used, the number of harmonies, the timing and rhythms: these elements were put together in a new and original way to produce new and powerful emotional experiences. Robots can't do that. They can take a list of rules, and follow the rules.
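To make the "list of rules" point concrete, here is a minimal sketch of what such a rule-following composer looks like. The rules are hypothetical and purely illustrative, not taken from any real composition system: the program picks notes from a scale under two fixed constraints, and nothing in it can step outside those constraints to invent a new style.

```python
import random

# Toy rule-based "composer" (hypothetical rules, for illustration only).
# Rule 1: never jump more than two scale steps at once.
# Rule 2: start and end on the tonic (the cadence rule overrides Rule 1
#         for the final note).
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def compose(length, seed=None):
    rng = random.Random(seed)
    melody = [0]  # start on the tonic (Rule 2)
    for _ in range(length - 2):
        prev = melody[-1]
        # Rule 1: only notes within two scale steps of the previous note.
        candidates = [i for i in range(len(SCALE)) if abs(i - prev) <= 2]
        melody.append(rng.choice(candidates))
    melody.append(0)  # end on the tonic (Rule 2)
    return [SCALE[i] for i in melody]

print(compose(8, seed=1))
```

Every melody it emits obeys the rules, and only the rules. Whether that counts as "composing" is exactly the question the Atlantic article raises.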
--
What is the purpose of living? How about, 'to reduce needless suffering'? It seems to me to be a worthy purpose.
building AI: never with consciousness
by David Turell , Wednesday, May 27, 2015, 00:42 (3468 days ago) @ xeno6696
> Matt: Human thinking is broadened by contact with other people, but we're still limited by all the amassed knowledge that came before us... there is no real *genius* in this world... only a limited creativity based upon what we've managed to stumble across.
>
> We are a bumbling species, intellectually.

Welcome back! And you have neatly skipped consciousness, which computers will never have. Computers are simply machines.
building AI: never with consciousness
by xeno6696 , Sonoran Desert, Thursday, May 28, 2015, 15:57 (3466 days ago) @ David Turell
> > Matt: Human thinking is broadened by contact with other people, but we're still limited by all the amassed knowledge that came before us... there is no real *genius* in this world... only a limited creativity based upon what we've managed to stumble across.
> >
> > We are a bumbling species, intellectually.
>
> Welcome back! And you have neatly skipped consciousness, which computers will never have. Computers are simply machines.

There's a fallacy in here, mainly that we can't *know* whether or not something is conscious. It's a brain-in-the-vat kind of problem: we just assume that we're conscious, and that other people we meet are conscious.

That assumption of consciousness will necessarily hold true for machines. Why do you think Turing wrote the test for AI as one where a machine could convince people? Partly because lying requires a dynamic of thought that only a conscious being could possess, and partly because people don't *know* what consciousness really is--it's kind of a gray "it looks like it's conscious" kind of thing.

So if a machine can manage to trick enough people, we have no choice but to assume it is conscious.

So my main disagreement, when you say a machine will *never* be conscious, is this two-pronged thrust:

1. We don't know what consciousness *is.* If you can't define it without controversy, then you have no right to claim you possess knowledge about it, and no right to claim a machine is incapable of being conscious--especially when our day-to-day operating principle of consciousness to date is "I have no reason to believe I'm a brain in a vat, so I'll assume I'm not."

2. You're betting against human ingenuity, and I'll repeat this again: AI researchers know more about how human beings learn about the world than any other researchers, neuroscientists included.

A lesser thrust, but it hasn't been demonstrated sufficiently to me that we're not machines ourselves.
What I mean by that is that, to date, I've witnessed nothing about the world that makes me believe my assumption of materialism is false. (Dennett holds sway for me here.) We can program a machine to learn on its own. That isn't consciousness, but it isn't automata either. Our bodies are ruled by the laws of chemistry, which are finite--but with an amazing complexity. Our bodies may just be a biological equivalent of a machine. Our minds--an emergent property of that complex milieu. If a machine that can learn isn't automata, then it doesn't follow that we're automata ourselves just because we're machines by the laws of biology and chemistry. We have free will--just an extremely limited free will.
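The distinction between a fixed automaton and a machine that learns on its own can be sketched in a few lines. This is a hypothetical toy (a single-neuron perceptron taught the logical AND function from examples), not a claim about any real AI system: the point is that the program's final behavior is nowhere written in its code, but emerges from the data it is shown.

```python
# Toy perceptron: an illustration of a machine that "learns on its own."
# Its behavior after training is shaped by examples, not hard-coded rules.

def train(examples, epochs=20, lr=0.1):
    """Learn weights for a single threshold neuron from (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward whatever rule the data embodies.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The logical AND function, presented only as examples:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Swap in a different set of examples and the very same code learns a different function: the rule lives in the data, not the program.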
--
"Why is it, Master, that ascetics fight with ascetics?"
"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics."
building AI: never with consciousness
by David Turell , Thursday, May 28, 2015, 19:50 (3466 days ago) @ xeno6696
edited by David Turell, Thursday, May 28, 2015, 19:59
> Matt: There's a fallacy in here, mainly that we can't *know* whether or not something is conscious. It's a brain-in-the-vat kind of problem: we just assume that we're conscious, and that other people we meet are conscious.

I know my consciousness, and cannot know yours. I agree. Just as I know pornography when I see it, but it defies definition to everyone's satisfaction.

> Matt: That assumption of consciousness will necessarily hold true for machines. Why do you think Turing wrote the test for AI as one where a machine could convince people?

Facts and lying are not the same as the development of new concepts. I can do that. A computer cannot. Can a computer ever develop the concept of relativity?

> Matt: So if a machine can manage to trick enough people, we have no choice but to assume it is conscious.

That is absolutely a false assumption. Tricks are not facts. Can your computer have this discussion with me?

> Matt: 1. We don't know what consciousness *is.*

We know what we experience.

> Matt: If you can't define it without controversy, then you have no right to claim you possess knowledge about it, and no right to claim a machine is incapable of being conscious--especially when our day-to-day operating principle of consciousness to date is "I have no reason to believe I'm a brain in a vat, so I'll assume I'm not."

I still maintain that is a false premise. It may satisfy you, but not me. I may not give you a satisfactory definition, but I accept that I fully know what I experience.

> Matt: 2. You're betting against human ingenuity, and I'll repeat this again: AI researchers know more about how human beings learn about the world than any other researchers, neuroscientists included.

Learning about the world is not conceptualizing, although I agree that what we learn helps to create the conclusions we reach.

> Matt: A lesser thrust, but it hasn't been demonstrated sufficiently to me that we're not machines ourselves.
> Matt: What I mean by that is that, to date, I've witnessed nothing about the world that makes me believe my assumption of materialism is false. (Dennett holds sway for me here.)

I don't buy Dennett's approach to the brain, but you are right: most of our bodily functions are very automatic. The brain is a whole different story, with its plasticity and its relationship to intelligence so responsive to appropriate stimulation. See the articles on juvenile wiring as prime examples.

> Matt: We can program a machine to learn on its own. That isn't consciousness, but it isn't automata either. Our bodies are ruled by the laws of chemistry, which are finite--but with an amazing complexity. Our bodies may just be a biological equivalent of a machine. Our minds--an emergent property of that complex milieu.

Exactly: "emergent," and more than the sum of the parts.

> Matt: If a machine that can learn isn't automata, then it doesn't follow that we're automata ourselves just because we're machines by the laws of biology and chemistry. We have free will--just an extremely limited free will.

Really? Limited by what? The constraints of biologic electricity? Billions of neurons, and trillions of adaptable synapses? New synapses throughout a lifetime?