Life as Evolving Software... (Chaitin) (Humans)

by xeno6696 @, Sonoran Desert, Tuesday, December 20, 2011, 03:21 (4721 days ago) @ xeno6696

What I do and what it means.

The past semester of independent research has put me in quite a reflective mood about what it is that computer scientists actually do. Undergrad touches on many of the concepts, but things really do move to a whole different level in grad school.

Computer Science, at its core, is about taking a very complex problem and making it solvable. There are different ratings for solutions (we call them "Big-Ohs"), and if you can remember back far enough to high school algebra, they map to simple graph functions.

On the site I just posted, if you scroll to the very bottom, you get the useful bits that relate to what I'm talking about. When I create an algorithm, it's rated by its growth. The numbers from left to right represent the size of the input; the numbers from bottom to top represent how many operations the algorithm needs to perform to complete. (Roughly; we're dealing with estimates here.) The most important thing to note, though, is that flatter is better. The fastest possible time is the line for "1," which means one operation no matter how big the input gets.
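To make that concrete, here's a quick sketch (my own illustration, not the chart from that site) of how the common growth rates compare as the input gets bigger:

```python
import math

# Rough operation counts under the common Big-O growth rates.
# (Illustrative only; the exact constants don't matter, just the shape.)
def operations(n):
    return {
        "1":     1,             # constant: the flat line, best possible
        "log n": math.log2(n),  # logarithmic
        "n":     n,             # linear
        "n^2":   n ** 2,        # quadratic
        "2^n":   2 ** n,        # exponential: blows up almost immediately
    }

for n in (10, 20, 30):
    print(n, operations(n))
```

Even at n = 30, the exponential line needs over a billion operations while the linear one needs thirty. That's why flatter is better.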

Going back to Chaitin's paper: he said that previous work had demonstrated a worst-case complexity for completely random walks at exponential time. That corresponds to the line 2^n. He then introduces "Intelligent Design." By being able to select each mutation perfectly (by hand, so to speak), the time complexity becomes linear. This is the line represented by just "n."

What this does is provide boundaries: evolution can be no better than "n" and no slower than "2^n." EVERYTHING in between is fair game.

It's here that I need to explain what he's doing with "Intelligent Design." In operating systems research, we study something called "page replacement," which is just a fancy way of saying "how we schedule memory swaps." What's important here is a theoretical algorithm called "OPT." OPT is similar in spirit to Chaitin's "Intelligent Design": with perfect knowledge of the future, you can create the optimal schedule. (Obviously impossible in practice.)
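For the curious, OPT (also called Belady's algorithm) is simple to state even though no real system can run it, since it needs the whole future request sequence up front. A minimal sketch, with names of my own choosing:

```python
# Belady's OPT page-replacement policy: given perfect knowledge of
# future requests, evict the resident page whose next use is farthest
# in the future (or that is never used again). Clairvoyant, hence
# purely a theoretical benchmark.

def opt_faults(requests, frames):
    """Count page faults under OPT for a fully known request sequence."""
    memory = []
    faults = 0
    for i, page in enumerate(requests):
        if page in memory:
            continue  # hit: nothing to do
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        # Evict the page used farthest in the future (inf = never again).
        future = requests[i + 1:]
        victim = max(
            memory,
            key=lambda p: future.index(p) if p in future else float("inf"),
        )
        memory[memory.index(victim)] = page
    return faults

print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # → 7
```

No implementable policy can fault less often than this on the same sequence, which is exactly the role "Intelligent Design" plays in Chaitin's setup: a perfect-knowledge lower bound to measure everything else against.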

The stunner in this paper is that, with some small tweaks, Chaitin was able to create a complete random walk that reaches fitness in roughly n^2 time. That's an extremely significant feat. It's still not as good as the "Intelligent Design" benchmark, but it's MUCH MUCH better than exponential complexity.
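You can get the flavor of why cumulative random selection beats blind search from a toy model. To be clear, this is NOT Chaitin's actual construction (his organisms are programs and his fitness function involves the Busy Beaver problem); it only illustrates the selection logic: mutate at random, keep only what improves fitness.

```python
import random

# Toy cumulative random walk toward a target bit-string.
# (Illustrative only; not Chaitin's program-mutation model.)
def cumulative_walk(target, rng):
    """Flip one random bit per step; keep the flip only if fitness improves."""
    n = len(target)
    current = [0] * n
    steps = 0
    while current != target:
        steps += 1
        i = rng.randrange(n)
        trial = current.copy()
        trial[i] ^= 1
        # Fitness = number of bits matching the target; accept improvements only.
        if sum(a == b for a, b in zip(trial, target)) > \
           sum(a == b for a, b in zip(current, target)):
            current = trial
    return steps

rng = random.Random(0)
target = [rng.randrange(2) for _ in range(16)]
# Far fewer steps than the ~2^16 tries blind whole-string guessing needs.
print(cumulative_walk(target, rng))
```

Blind guessing of whole strings expects on the order of 2^n tries; keeping each lucky mutation gets there in a tiny fraction of that. Chaitin's result is the rigorous version of this intuition for a mathematically honest model of evolution.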

What this does, on the theoretical level, is entirely demolish Dembski's information-theoretic argument for ID. (Dembski claimed to prove that random walks were impossible.)

What this paper does for our debate:

Not a whole lot. The easy philosophical counter is that, from the bottom up, we're dealing with man-made software artifacts. Even IF we prove a reasonable time bound for a completely random mutation schedule, the ID response will be "these are intelligently designed." However, it makes the time argument they make much less pointed.

The other way: if it's demonstrated that complete random walks can actually reach the same time complexity as Intelligent Design, then there are three philosophical interpretations to be drawn:

1. God is random.

2. We cannot tell the difference between design and randomness.

3. Clearly life was intelligently designed. (Drawn from the conclusion that evolution has the same complexity as ID.)

So... at least as far as Chaitin's research goes, don't let Uncommon Descent try to spin things in directions they simply cannot go...

--
"Why is it, Master, that ascetics fight with ascetics?"

"It is, brahmin, because of attachment to views, adherence to views, fixation on views, addiction to views, obsession with views, holding firmly to views that ascetics fight with ascetics."

