Identity (Identity)

by xeno6696 @, Sonoran Desert, Wednesday, September 09, 2009, 18:04 (5553 days ago) @ David Turell

Apropos of my comment that recognizing a brain area that lights up for a given stimulus doesn't really tell you how it is working 'inside': 
> 
> The study shown below identifies an area of the hippocampus that may be a schizophrenia trigger:
> 
> http://sciencenow.sciencemag.org/cgi/content/full/2009/908/1
> 
> 
> 
Awesome, this study.

> > Computing (as I mentioned when I first joined) takes an entirely different approach to such a question. It asks us to literally build a model of what it is we're trying to study, because at least in computer science, it can't be said you understand something without being able to build a working model of it.
> > 
> >> For the first time--through the "magic" of model-building we're figuring out the complexities of biochemistry. 
> 
> I doubt that we can ever model any area of the brain. Do you do a proxy model, or an exact replication of each synaptic connection, with their innate ability to modify? Computers do not live and self-modify.

It might not be a requirement [EDIT] to "live" in the biological sense of the word in order to have consciousness.

Machines most certainly DO self-modify. Any time you run Java code on your machine, the compiler sitting in the virtual machine does this frequently.

http://en.wikipedia.org/wiki/Self-modifying_code

AI is self-modifying in all but its most rudimentary forms. Pattern recognition allows a machine to solve unknown problems using only the solutions it already "knows." When it first learns, it is a "tabula rasa": the learning algorithms alter its behavior (often using genetic algorithms), and it codes its own solutions. So here you are incorrect: computers DO self-modify. You're thinking in the mindset of traditional programming, not AI programming: completely different skill sets and paradigms.

To my counterparts who study biology: cells are actually quite easily modeled as finite automata. The problem of complexity is that you need a powerful enough network of processor cores (like the Conficker network) to adequately model a complex system. The first paper I showed you was a model of a single cell, and the processing power required for that one sim is immense. Assuming Moore's law hasn't died yet, in about 20 years (as opposed to the 10 in the original Science Daily "make a brain from scratch" article) we should have the baseline technology to run our first *true* brain models.

http://en.wikipedia.org/wiki/Finite_automata

Note here, as I seem to run into this: I am not making any claims concerning the origin of consciousness or about the likelihood of a creator. All I'm saying is that the complexity of even things such as the human mind *may* not be a mystery forever.

EDITED
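To make the self-modification point concrete, here is a minimal Python sketch of a program that generates, compiles, and swaps out part of its own decision code at runtime. The threshold rule and the toy "learning" step are my own illustration, not drawn from any particular AI system.

```python
# Minimal sketch of self-modifying behavior (illustrative only):
# the program writes new source for its own decision function,
# compiles it at runtime, and replaces the old version.

def make_strategy(threshold):
    # Generate fresh code and exec() it into a namespace; the program
    # is rewriting part of itself rather than following fixed logic.
    src = f"def strategy(x):\n    return x > {threshold}\n"
    namespace = {}
    exec(src, namespace)
    return namespace["strategy"]

# Start with an initial, hand-picked rule.
strategy = make_strategy(10)
print(strategy(15))  # True: 15 > 10

# "Learn" a better threshold from toy data, then replace our own code.
samples = [3, 4, 5, 20, 21, 22]
learned = sum(samples) / len(samples)  # 12.5
strategy = make_strategy(learned)
print(strategy(11))  # False: 11 is below the learned threshold
```

The same idea, scaled up, is what genetic algorithms do: candidate programs are varied and the better-performing code survives to replace the old.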
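And to illustrate the cells-as-finite-automata claim, a toy state machine in Python. The states and input signals here (quiescent/growing/dividing, "nutrient", and so on) are invented placeholders for the sake of the example, not real biochemistry; a serious model would have vastly more states and transitions, which is exactly where the processing-power problem comes in.

```python
# Toy finite automaton sketching a "cell" as states plus transitions.
# State and signal names are illustrative inventions, not biology.

# Transition table: (current_state, input_signal) -> next_state
TRANSITIONS = {
    ("quiescent", "nutrient"): "growing",
    ("growing", "growth_signal"): "dividing",
    ("growing", "stress"): "quiescent",
    ("dividing", "division_done"): "quiescent",
}

def step(state, signal):
    """Advance one step; inputs with no defined transition leave the state unchanged."""
    return TRANSITIONS.get((state, signal), state)

def run(start, signals):
    """Feed a sequence of signals through the automaton, returning the final state."""
    state = start
    for signal in signals:
        state = step(state, signal)
    return state

print(run("quiescent", ["nutrient", "growth_signal", "division_done"]))
# one full cycle returns the automaton to "quiescent"
```

The model itself is trivial; the cost is in the scale, since simulating a brain means running enormous numbers of such automata (each far richer than this) in parallel.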

--
\"Why is it, Master, that ascetics fight with ascetics?\"

\"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics.\"

