Identity

by xeno6696 @, Sonoran Desert, Wednesday, September 30, 2009, 05:58 @ David Turell

David,

I'll do my best to explain this... it'll actually help me a bit, as I'm moving into some more "advanced data structures" in my program.

In computer science, the simplest data structures are arrays. In short, an array is a list identified by an index: item 0 through x. The only way to access the data is to pull something out by its index. Most languages won't allow you to break that rule... though this year I've been introduced to one that lets you break a ton of rules. (Beyond scope, BAD Matt...)

The next level up is the linked list. The difference between a linked list and an array is that a linked list needs no index: as each data object is created, it is linked to the previous one in the chain. The benefit is that you don't need an index; the drawback is that you have to crawl through each 'node' to find out where your desired data object is.

The node is the central concept in neural-net programming--and in more 'generic' types of programming as well. A higher level of data-structure complexity is the graph: where a linked list can be viewed as a line of cups, a graph is better represented by a spider's web. (If you look up "graph theory" on Wikipedia, it will give you enough background to get the flavor.) Each node can be connected (by memory reference) to any number of other nodes. For modeling something biological this has tremendous advantages, since we can model a neuron by creating a node and simply storing all of its individual references.

Neural-net programming involves creating a series of nodes, limiting their individual abilities, and letting emergent behaviors play out. Just as a symphony is more than its pieces, so are neural nets. Emergent properties emerge, and it turns out that neural nets tend to make better decisions than anything we can "forcefully" program.

The basics are actually explained pretty well here: http://en.wikipedia.org/wiki/Artificial_neural_network
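To make the array-versus-linked-list contrast concrete, here's a toy Python sketch (the language choice and the names `Node`, `build_list`, and `find` are mine, purely illustrative): an array hands you item x directly, while a linked list must crawl node by node.

```python
class Node:
    """One 'node': a data object plus a reference to the next one in the chain."""
    def __init__(self, value):
        self.value = value
        self.next = None


def build_list(values):
    """Chain each newly created node to the previous one."""
    head = tail = None
    for v in values:
        node = Node(v)
        if head is None:
            head = node
        else:
            tail.next = node
        tail = node
    return head


def find(head, target):
    """Crawl through each node to locate the desired data object.

    Returns how many links were followed, or -1 if not found.
    """
    steps, node = 0, head
    while node is not None:
        if node.value == target:
            return steps
        node = node.next
        steps += 1
    return -1


items = ["a", "b", "c", "d"]
print(items[2])                        # array: direct access by index -> c
print(find(build_list(items), "c"))    # linked list: 2 links crawled -> 2
```

Same data, two access patterns: the index jumps straight to the slot, while the list walk costs one hop per node.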
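The graph idea above, a node storing references to any number of other nodes, can be sketched the same way; the `Neuron` class here is my own toy illustration of that modeling approach, not a working neuron model.

```python
class Neuron:
    """A graph node: a name plus memory references to other nodes."""
    def __init__(self, name):
        self.name = name
        self.connections = []  # references to other Neuron objects

    def connect(self, other):
        """Store a reference to another node; any number are allowed."""
        self.connections.append(other)


# A tiny spider's-web of three nodes:
a, b, c = Neuron("a"), Neuron("b"), Neuron("c")
a.connect(b)
a.connect(c)
b.connect(c)

print([n.name for n in a.connections])  # -> ['b', 'c']
```

Because each node just holds references, the web can grow in any direction, which is exactly what makes the graph a better fit than an array or list for something like a tangle of neurons.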
The Wikipedia article loses some coherency in the areas where the formulas come in. Not that I can't piece them together, but the writing style is... lackluster. In about a year I should be able to say more on the subject beyond a 30,000-ft overview.

Modeling a human brain would essentially mean having several billion blocks of memory connected to each other along various paths. The biochemistry COULD be modeled, but to me that would be unnecessary complexity at this point. We need a 'best case' working model before we can start worrying about the finer chemical complexities... which we don't understand yet. We need both UNO's and Yale's systems to collect a ton of data before we can really do that part of it justice.

The basic ideas are there, and the computer languages we have available are, in my opinion, sophisticated enough to let us give this a real try. While I mute my optimism, there can be no better way to understand something than to try to build it. Of course, this is the humble opinion of an engineer, so take that as my own brand of perspectivism.
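As a flavor of "nodes with limited abilities" producing something the individual pieces can't do, here's a toy Python sketch. Each unit only does a weighted sum and a threshold; the weights and the XOR task are my own illustration (hand-picked, not learned), chosen because XOR famously can't be computed by any single such unit, yet three of them wired together manage it.

```python
def step(x):
    """Threshold activation: fire (1.0) if the summed input is non-negative."""
    return 1.0 if x >= 0 else 0.0


def unit(inputs, weights, bias):
    """One node's entire ability: weighted sum plus bias, then threshold."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)


def xor(x1, x2):
    """Three limited units wired into a tiny net that computes XOR."""
    h1 = unit([x1, x2], [1, 1], -0.5)     # OR-like unit
    h2 = unit([x1, x2], [-1, -1], 1.5)    # NAND-like unit
    return unit([h1, h2], [1, 1], -1.5)   # AND of the two hidden units

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # -> [0.0, 1.0, 1.0, 0.0]
```

In a real neural net the weights would be learned from data rather than hand-picked, which is where the "better decisions than we can forcefully program" part comes in.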

--
"Why is it, Master, that ascetics fight with ascetics?"

"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics."

