A little knowledge...

Posted 10 February 2007 by Ian Musgrave

Over at Uncommon Descent, the poster PaV has a post entitled "Programmers Only Need Apply". In it, they note, but fail to discuss, this paper: Xue W, et al., Senescence and tumour clearance is triggered by p53 restoration in murine liver carcinomas. Nature 2007; 445: 656-660. What gets the poster excited is not the finding that restoration of the protein p53 can stop tumor growth (which, amusingly, drives yet another nail in the coffin of Discovery Institute Fellow Jonathan Wells's non-mutational model of cancer), but that the authors use the word "program" to describe the cellular senescence pathway activated by p53.
The use of the word "program" highlights that proponents of NDE have an even sterner task at hand: explaining how the logical loop of a "program" can be built up using NDE mechanisms. There is a ring of "irreducibility" to the idea of a "program", since each part of a "program" is indispensable and likewise an integral part of the program's intended output. Genetics is looking everyday to be more and more like an exercise in computer programming--just as IDists have predicted.
Uh, guys, the use of the word "program" is a convenient analogy. We use the term "program" to help us grasp the timing of activation of the cell death and senescence pathways, but they aren't human programs. An instructive example comes from John R. Searle:
Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. ('What else could it be?') I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer.
Get that, DI folks? It's a metaphor, not an actual computer program. That PaV gets so exercised over the term "program" is a bit puzzling. The program metaphor is used extensively in biology, from developmental programs to programmed cell death (in which p53 also plays a key role). In fact, the term "programmed cell death" has been around since 1964, so one is forced to conclude that PaV doesn't know very much about the biology of cell death (or possibly biology at all).

Nonetheless, PaV goes on to claim that the p53 cell senescence pathways must be irreducibly complex, because Xue et al. used the term "program" in their abstract. Would it be too much to ask PaV to actually look up the literature on cell senescence and cell death, rather than pontificate on the basis of an abstract? A little reading would show that the p53 pathway cannot be IC: when you knock out p53, other systems take over. p53 is the major, but not the only, gatekeeper of cell senescence. Losing it certainly makes the system more fragile, but knocking out p53 doesn't make the system fail completely, which is Behe's criterion for irreducible complexity. Indeed, there is quite a large literature on the origin and evolution of the programmed cell death pathway.

So, the take-home message, folks: don't build an elaborate scenario on the basis of a metaphor; learn a bit of the biology of the system instead.

PS. I was amused by this statement:
Behe and Snoke's paper shows the huge improbability of placing two amino acids side-by-side via gene duplication and random mutation.
Actually, it shows that even in the complete absence of selection, binding sites such as the DPG binding site in haemoglobin can evolve in quite reasonable time frames, and the bacterial populations in a bucket of soil will do it much faster. Nice own goal there.

77 Comments

sparc · 11 February 2007

so one is forced to conclude that PaV doesn't know very much about the biology of cell death (or possibly biology at all).
PaV definitely knows nothing about biology at all. There is not a single reasonable PaV post at UD. E.g., in a recent post he confused changes in inversion frequencies (inversions present in a given Drosophila species) under different climatic conditions with the inversion rate (the frequency of occurrence of new inversions). However, this is the normal "quality" of biological knowledge displayed at UD. Just remember DaveScot's 1n Jesus speculations and what he wrote about translocations. Recently, Dave experienced occasional visions that he interpreted as Taxonomy, the Neutral Theory, the Molecular Clock and "Survival of the Fittest" exploding. If your IQ is north of 150 there is of course no need to verify such claims. Don't they have a single biologist over there who could prevent these guys from posting such bullshit? Dembski either doesn't care or his biological knowledge is no better. BTW, over at the overwhelmingevidence training camp, quizzlestick is collecting stupidity points by "proving" that ID articles have been published in peer-reviewed journals with this example. Thus, we are not only observing a lack of biological knowledge but an absence of common sense.

sparc · 11 February 2007

Dave had visions again: This time he noticed Mendelian Genetics and the Genetic Code exploding. Soon there'll be nothing left from common biological knowledge.

Ian Musgrave · 11 February 2007

THE STUPID! IT BURNS!

I mean, geez, sparc, did you have to do that? I felt my eyeballs melting from the concentrated idiocy. Couldn't they at least put a minute fraction of effort into actual science, instead of this concentrated nonsense? It makes the UFO fanatics seem logical.

And DaveScot, yeah, according to him every new discovery is the death of some aspect of biology. Pity it wasn't predicted by any ID types, and it's just another "materialistic" genetic mechanism that follows the central dogma. Sheesh!

Anton Mates · 11 February 2007

There is a ring of "irreducibility" to the idea of a "program", since each part of a "program" is indispensable and likewise an integral part of the program's intended output. Genetics is looking everyday to be more and more like an exercise in computer programming---just as IDists have predicted.

How can a person who has access to a computer--the Internet, even--claim that "each part of a program is indispensable?" Look, here's a non-irreducibly complex program:

function y=addoneto(x)
y=x+1
y=y+0
end

Can you find the part which is not indispensable? Look carefully now.

Rupert · 11 February 2007

Look, here's a non-irreducibly complex program:

function y=addoneto(x)
y=x+1
y=y+0
end

Ah, but any random mutation to that program can't make it add one to x any better! It can only make it WORSE! You've just DISPROVED DARWINISM!

Sorry. I've just been reading the comments on the link sparc provided and I think I've lost half my upper brain function. If the original post was bad, those comments are... words fail me.

In British soap opera, a genre as ritualistic and stylised as Noh, there's a popular set piece. A bloke and his girlfriend are in a pub, and another bloke starts giving him some lip. First bloke gets upset, and squares up to land a punch on his antagonist. "Leave it, Dave," says his girl. "'E's not worth it." There's then either a bit of a barney followed by police, or glares, readjustment of jackets and a storming out.

It's astonishing how often ID arguments make me think of this, these days.

R

Kevin W. Parker · 11 February 2007

That whole post can be summed up as "Heh! The evolutionists said 'program'!"

infamous · 11 February 2007

...from that post:

"This is just very good programming.

Go God!"

steve s · 11 February 2007

(reposted from AtBC)

"Programmers Only Need Apply"

So, PaV, the scientists who made this discovery weren't qualified to make this discovery, and they should have stepped out of the way for the likes of you? No thanks.

"Just as IDists have predicted"

Oh, did you predict this discovery about RNAi? Did you publish that prediction somewhere? Is it in your pathetic little journal? Not only has that 'journal' failed to publish its last 5 issues, but nobody can even tell me if there's a new one coming out ever.

"Behe and Snoke's paper shows the huge improbability of placing two amino acids side-by-side via gene duplication and random mutation."

Yeah, as long as the earth is 10,000 years old and the size of a pickle barrel. If you use the real Earth, Behe's numbers come out, shall we say, a bit differently:

Based on the math presented there [in Behe & Snoke], it appears that this sort of mutation combination could arise about 10^14 times a year, or something like 100 trillion times a year.

http://www.pandasthumb.org/archives/2005/10/behe_disproves.html#comment-53184 http://www.pandasthumb.org/archives/2005/08/critique_of_beh_1.html http://scienceblogs.com/dispatches/2005/10/behe_disproves_irreducible_com.php PaV continues the tard in the comments:

Rule #2: Every evolutionary biologist must take a class in computer programming.

Rule #3: Every IDer has to drive to their local community college and take Algebra 101, Biology 101, and Genetics 101. Utterly uneducated creationism is boring. Get some book learnin and bring a little meat to the dinner table.

steve s · 11 February 2007

(waits for someone to check the approval queue...)

Duncan Buell · 11 February 2007

Yes, but we know that computer programs can in fact be irreducibly complex. A couple of lifetimes ago I submitted a short job to the university computer center. Unburst from the one page of output that was mine was the entire 100 pages or so of the university's Cobol program for printing payroll checks.

Naturally, I browsed through this to see what real Cobol looked like. There was one page of code, with comments at the top:

"This routine is never executed because it is no longer called by any other routine. However, when I delete this code, the program no longer works."

The comment was signed and dated, as one might expect, by the maintenance programmer.

Come to think of it, the whole history of software engineering has been the result of realizing that all programs eventually become irreducibly complex through the normal evolutionary path we call the software lifecycle.

I just realized I have about a dozen journal articles I should write on this topic, viz. irreducible complexity evolving from unintelligent design. But I am not sure off the top of my head that I know which side of the PT argument I'm going to be on by the time I finish these masterpieces.

Ian Musgrave · 11 February 2007

(waits for someone to check the approval queue...)

— Steve S
Can take a while as I snooze away on the other side of the world.

Henry J · 11 February 2007

Re "This routine is never executed because it is no longer called by any other routine. However, when I delete this code, the program no longer works."

Maybe the module defined something - a variable or a smaller function - that does get used somewhere?

Maybe the code generated for the module inserts space between two other things that don't work without the spacing?

Maybe I'm getting a bit off topic with these speculations?

Bye now. Also exit return and logoff.

Henry

Dr Block · 11 February 2007

Overwhelming Evidence now has a Wikipedia entry.

http://en.wikipedia.org/wiki/Overwhelming_Evidence

Perhaps something about their references to non-journal articles as examples of journal articles needs to be included?

yiela · 11 February 2007

I think that a lot of these folks are very literal thinkers. (This is my own opinion based on my own observations). PaV took the word "program" literally and is so literal minded it seems he can not even conceive of the idea of a metaphor. They also tend to think highly of technicalities and will hold them up as if they are real evidence or even feel that a technicality will trump real evidence. It's about winning the game more than trying to understand anything. At first I thought that literal thinking was always a sign of unintelligence but now I think it's more like a learning disability.

Andrew · 11 February 2007

Re "This routine is never executed because it is no longer called by any other routine. However, when I delete this code, the program no longer works." Maybe the module defined something - a variable or a smaller function - that does get used somewhere? Maybe the code generated for the module inserts space between two other things that don't work without the spacing? Maybe I'm getting a bit off topic with these speculations? Bye now. Also exit return and logoff. Henry

— Henry J
Quite likely there was a bug in another area of the program that overwrote the memory that the routine occupied (i.e. a buffer overrun). With the routine removed, the bug may have overwritten something important instead. Weirder problems have happened.
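Something like this made-up C fragment gives the flavour of it (the names and layout are hypothetical; where the stray write actually lands depends entirely on how the linker happens to arrange memory, which is the point):

#include <stdio.h>
#include <string.h>

char buf[8];
char padding[8];     /* say, data belonging to the never-called routine */
int  critical = 42;  /* delete padding above, and this may get clobbered */

int main(void) {
    memset(buf, 0, 16);  /* the bug: writes 16 bytes into an 8-byte buffer */
    printf("critical = %d\n", critical);
    return 0;
}

With padding present, the overrun harmlessly zeroes dead data; remove it, and the very same bug may start corrupting critical instead.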

Popper's Ghost · 11 February 2007

There is a ring of "irreducibility" to the idea of a "program", since each part of a "program" is indispensable and likewise an integral part of the program's intended output.

If every part of Windows XP is indispensable, then why weren't they all in Windows 2000, or Windows NT, etc.? It seems that the IDiots are morons, too.

Wheels · 11 February 2007

That's easy, PG.
Because they share a common designer.

Sir_Toejam · 12 February 2007

That's easy, PG. Because they share a common designer.

Bill Gates is the Intelligent Designer? well, that WOULD explain a lot. damn buggy crap.

Popper's Ghost · 12 February 2007

That's easy, PG. Because they share a common designer.

And how exactly would that make their parts indispensable? Your response is a complete non sequitur, moron.

Popper's Ghost · 12 February 2007

BTW, if I mistook Wheels for a creationist troll but he isn't one, my bad, but his response is still absurd.

Wheels · 12 February 2007

I was parodying the standard Creationist explanation for why there are many variations among similar living things, or why so much of the genome is common to all modern life, *ahem* imbecile.
:)

k.e. · 12 February 2007

Gee PG, you should get that thing checked; it seems to go off at the slightest touch. I had a shotgun like that once, damn near had a Cheney moment.

Popper's Ghost · 12 February 2007

Yes, but we know that computer programs can in fact be irreducibly complex.

Your anecdote certainly doesn't demonstrate it. All it shows is that (if the comment is even true, which it may not be) there is some code that can't be removed without the program failing; that certainly doesn't mean that no part of the code can be removed -- for all we know, even that routine could be removed as long as one of its statements (probably a data declaration) is retained. And if some code is removed and the program fails to function as intended, so what? Unless it loops indefinitely, it still produces some output, even if that output is the empty string. And even if there did exist an "irreducibly complex" program somewhere, what of it?

Come to think of it, the whole history of software engineering has been the result of realizing that all programs eventually become irreducibly complex through the normal evolutionary path we call the software lifecycle.

Uh, no, it hasn't. You seem to have confused inscrutability with irreducible complexity, and even then, there's a lot more to the history of software engineering than making code comprehensible.

Popper's Ghost · 12 February 2007

I was parodying the standard Creationist explanation for why there are many variations among similar living things, or why so much of the genome is common to all modern life, *ahem* imbecile.

Even as a parody it missed the point, cretin.

Popper's Ghost · 12 February 2007

Gee PG you should get that thing checked it seems to go off at the slightest touch

Not really; "Wheels" is similar to moniker of a creationist troll who has posted here, and his comment was incredibly dumb, even as parody, because there being a single designer wouldn't be any reason at all why parts would be indispensable.

argystokes · 12 February 2007

so one is forced to conclude that PaV doesn't know very much about the biology of cell death (or possibly biology at all).
He certainly knows next to nothing about biology (he argued that frameshift mutations require 2 mutations... right next to one another). On the other hand, I believe he actually has a biology degree, which makes him UD's resident academic in biological sciences. Ouch.

Ian Musgrave · 12 February 2007

Alright, calm down every one and stop the name calling.

Wheels · 12 February 2007

Even as a parody it missed the point, cretin.

— Popper's Ghost
Well if you'll take a breather for a moment from your leg-lifting insult campaign, I'll explain why I said it: it's the standard Creationist non-answer that misses the point it's supposed to address. Whether the question is why there are similarities among different organisms in similar niches, dissimilarities among closely related organisms, the commonality of many genes among all organisms (somewhat like the commonalities among the various Windows releases), or even the fact that all living things are made of basically the same atoms, you can count on the "savvy" Creationist to simply ascribe it to the work of the same designer (must be an unimaginative one for all that intelligence) and claim it as evidence that said designer is at work behind the scenes. It's such a question-begging response that it can't really address anything, but it's used to cover any number of bases from biology to basic physics and chemistry. The point is supposed to be that the answer given doesn't address the question; it's a stock response thrown around whenever it's seen as convenient, because the people using it don't understand the criticisms or refutations.

On another level, though, there actually is a "common designer" of sorts for the various Windows releases, but Windows often goes "non-functional" anyway due to one of its parts not working right, which as we all know from Behe the Ever-Bloviating is a sign of irreducible complexity! I understand your criticism of the idea that "a program," as in "the entirety of any given program," is irreducibly complex; I'm just making an off-the-cuff remark that ties into inept Creationist rhetoric and the subject of Windows.

And which Creationist troll used to post with "Wheels" in their handle? I used to post under this handle around here semi-regularly, but the atmosphere took a turn for the rank so I've been scarce.

Ian Musgrave · 12 February 2007

Anyone remember AVIDA, the artificial life platform in which digital organisms evolved irreducible complexity? The research made the cover of Discover Magazine.

David vun Kannon · 12 February 2007

Avida seems to be a particular sore point for DaveScot.

Personally, I find the Humies awards handed out by John Koza in the GP world a great talking point with ID folk. Here are programs that have not only evolved, but evolved to the point of doing something better than any human, not just better than the programmer.

steve s · 12 February 2007

Anyone remember AVIDA, the artificial life platform in which digital organisms evolved irreducible complexity? The research made the cover of Discover Magazine.

— Ian Musgrave
You just think that because you're a Darwinist. According to the best ID scientists, the mere act of choosing which software to use, or having any intent while doing the research, infuses Intelligent Design into the experiment, and you're no longer dealing with evolution.

Torbjörn Larsson · 12 February 2007

At first I thought that literal thinking was always a sign of unintelligence but now I think it's more like a learning disability.
(OT discussion considering some of the comments here. :-)

I think so. It seems easy to get literal and humorless when one gets very tired. Granted, IQ is lowered too, but not that much. That suggests IMHO that it may not be a one-to-one relationship. Also, many circumstances of programming seem to make up for some cognitive deficits. For example, I have heard dyslexics say that programming fits them since they get feedback on spelling errors.

Gwen · 12 February 2007

You just think that because you're a Darwinist. According to the best ID scientists, the mere act of choosing which software to use, or having any intent while doing the research, infuses Intelligent Design into the experiment, and you're no longer dealing with evolution.

Why don't we co-opt that strategy? Their trick is to say that any time anyone does anything on purpose in a model of evolution, they're only creating intelligent design (even if they design via evolution). So our trick ought to be to say that any idea that they come up with, including intelligent design, is actually the product of evolution, because they are the product of evolution. Turn the classic "God is the creator of the universe (or "you"), so the universe is (or "you are") evidence for God" back on them--"evolution is the creator of you, so you are evidence for evolution". What? actually explain the link? What do you mean, "begging the question" and "circular reasoning"? I don't know what you're talking about...

RBH · 12 February 2007

Ian asked
Anyone remember AVIDA, the artificial life platform in which digital organisms evolved irreducible complexity? The research made the cover of Discover Magazine.
Erm, I just finished teaching a semester-long undergraduate biology seminar in evolutionary modeling using Avida as the principal platform. From the feedback I got (and the departmental colloquium on their final projects that my students gave last week), it was highly successful, and I'll teach it again next academic year.

Rob Pennock is finishing up an educational version, Avida-ED, that should be available later this year. My class was among the beta-version evaluators, along with versions 1.3.0 and 2.06b of the 'standard' research platform for Windows. The Linux version is up to around version 2.6 now, I think. (I'm not running a Linux box on my desk, so I haven't kept up with that version lately.) Wes Elsberry is at this moment in the process of moving to Michigan State to work with Pennock for a year on extending Avida.

A comment from one of my students (who happened to be a senior physics major; the rest were upperclass bio majors):
The evolutionary modeling class was a fantastic experience. The understanding of dynamical systems, from fractals to schools of fish, is essential for a complete view of biology. And there is no better example of a system governed by a fixed rule that describes the time dependence of the system than evolution. In my experience, evolution is usually taught by presentation of fossil evidence and by appealing to a sense of reason, but without a visual demonstration it is virtually impossible to imagine and predict the effect of even the simplest rules. The computers were paramount to the class, allowing us to observe and indeed control evolution for the first time in a timely manner.
The students worked with the program every class meeting, using the department's laptops in both planned familiarization exercises and (semi-)independent projects that diverged over the semester as each student followed his/her own line of interest, ranging from the role of varying mutation rates in the acquisition and retention of non-executed ("junk") instructions to the relationship between resource variations and population diversity.

That direct experience with controlling the relevant variables and watching the consequences in real time in class, followed by post-run data analyses and individual in-class presentations of results and the final departmental colloquium, was critical to the success of the course.

RBH

DragonScholar · 12 February 2007

I am reminded of Patrick Harpur's statements on the dangers of literalism negatively affecting one's ability to function, and this is a pretty good example.

I'm noticing a lot of this in the ID/Creationist sphere lately - perhaps it's just me noticing an existing trend, but it seems that "misquoting" is a big thing now, kind of a Bible Code where they seek the hidden evidence of Intelligent Design.

AC · 12 February 2007

It's about winning the game more than trying to understand anything.

— yiela
Bingo. It's related to the fact that, if you're wrong about a thing, being able to admit you're wrong is essential to understanding that thing. These are people who cannot admit error. They must win the game.

Sir_Toejam · 12 February 2007

It's related to the fact that, if you're wrong about a thing, being able to admit you're wrong is essential to understanding that thing.

is it? hmm. You might want to weigh in on the practical application of your statement given this: http://scienceblogs.com/pharyngula/2007/02/trained_parrot_awarded_phd.php

This person was granted a PhD in the very subject that, as a YEC, is most diametrically opposed to his philosophical position. I'm not sure I can legitimately say he doesn't understand the field he received his PhD in. His application of his understanding leaves much to be desired (obviously his placement at a religious university after he received his degree was appropriate), but it would be wrong to say his actual understanding of the material itself is in doubt, regardless of whether he thinks it "right" or not.

Keith Douglas · 12 February 2007

Anton Mates: It is worse than that - finding "dead code" - code which cannot be reached by any path of execution of the program - is a computationally unsolvable problem.
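Here's a rough C-flavoured sketch of why (mystery and candidate are placeholder names): a perfect dead-code finder looking at this fragment would have to decide whether mystery() ever returns, and that is exactly the unsolvable halting problem.

void mystery(void);    /* stands in for an arbitrary program */
void candidate(void);  /* dead code if and only if mystery() never returns */

void analyzed(void) {
    mystery();
    candidate();
}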

Duncan Buell · 12 February 2007

Judging by the number of responses that suggest people were taking my post seriously, I think I have to point out that literal thinking seems to exist in more than one place. I would have thought that this crowd would take for granted that yes, there had to be some sort of bug in the overall program, and yes, with sufficient analysis and diagnosis (something akin to the scientific method) that bug could be detected...

I am still going to write those masterful journal articles, though. Maybe somewhere out there is someone with a sense of gallows humor about the human foibles that surface when writing code.

AJ · 13 February 2007

Back when I used to actually write programs for my Ph.D. thesis (in FORTRAN of course!) I was always told that every program can be reduced by at least one line, and that every program contains at least one bug. By induction, every program can be reduced to a single line that doesn't work :)

I have refused to learn Java until they put a GOTO statement in it :)

Dizzy · 13 February 2007

Anton Mates: It is worse than that - finding "dead code" - code which cannot be reached by any path of execution of the program - is a computationally unsolvable problem.

— Keith Douglas
Is that true? Most of the IDEs I've worked with seem to have no trouble finding out where each method/parameter is used...seems like a simple text search for the method name would reveal if it's used anywhere. Or am I misunderstanding you?

GuyeFaux · 13 February 2007

Is that true? Most of the IDEs I've worked with seem to have no trouble finding out where each method/parameter is used...seems like a simple text search for the method name would reveal if it's used anywhere. Or am I misunderstanding you?

E.g.:

if (PIsNP()) {
  foo()
} else {
  bar()
}

So is bar() used? Here, PIsNP() is difficult to solve.

Dizzy · 13 February 2007

I see what you mean now. That's a different category from what I had in mind.

A text search would reveal if a method is not called by any other method or by the main branch. It seems pretty straightforward to identify those methods as "dead."

As for PIsNP(), I see two possible situations:

1) PIsNP() always (computationally) resolves to true. I think most compilers can identify this?

2) PIsNP() depends on external input.

I would guess that 2) indicates that bar() is not dead, since one cannot be sure that PIsNP() will always resolve to true? And 1) indicates that bar() is definitely dead.

Sorry for the OT...just curious about this.

Flint · 13 February 2007

It is worse than that - finding "dead code" - code which cannot be reached by any path of execution of the program - is a computationally unsolvable problem.

I was always told that every program can be reduced by at least one line, and that every program contains at least one bug. By induction, every program can be reduced to a single line that doesn't work :)

Not a whole lot of programming experience being demonstrated here.

First, there are degrees of "dead" code. There is code that can be removed from a program without changing the functionality of that program in any way, which is truly dead. But there is what we might call "zombie code", code which is never executed, but which just by existing has certain side effects - changing the length of calls and jumps around it, changing the memory location where other code resides, occupying space that otherwise wouldn't be occupied. It's pretty common to find that removing unexecuted code introduces unexpected behaviors, because what it does is change the symptoms of bugs elsewhere in the code simply by virtue of moving things around. Now that uninitialized pointer, that once harmlessly trashed unexecuted code, points into the middle of critical code. Crash! Now the few extra CPU cycles of delay while a cache line filled that aren't there anymore mean a device doesn't quite answer in time, leading to a race condition the device *always* lost (so that was OK) before the dead code was removed. And so on.

Finding dead code (or zombie code) isn't just a matter of searching for label references. The simple statement IF X, GOTO Y creates dead code if X can never be false or never be true. Finding this dead code requires that one analyze all possible runtime values of X under all conditions, in advance. It is not practically possible.

As for the induction that a bug is a line of code that doesn't work, this is almost too simplistic for words. In the first place, a "line of code" is not a quantifiable entity in any real sense. In the second place, many bugs work fine most of the time, causing symptoms only under very unusual conditions or situations. Removing the "line of code" is almost sure to wreck everything. By and large, fixing bugs adds to the total code size, rather than reducing it. This happens because most bugs arise from the programmer's failure to consider all possible paths and conditions. It's a truism in programming that any state not anticipated will be entered at the worst possible time in the field, and the default (what the code actually falls into in that case) will be as bad as possible.

Perhaps all this is off topic, perhaps not. Computer programs are not abstract logical exercises; they are sequences of instructions written to cause hardware to produce intended side-effects. They're also what I've been writing and then repairing for decades.
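To make that IF X point concrete, a made-up C fragment (get_input() and recover() are hypothetical):

int  get_input(void);  /* hypothetical: returns some runtime value */
void recover(void);    /* hypothetical error handler */

void example(void) {
    int x = get_input() * 2;  /* whatever comes in, x ends up even */
    if (x % 2 != 0)           /* ...so this condition can never be true */
        recover();            /* dead code, but proving it means reasoning
                                 about every value x can ever take */
}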

Torbjörn Larsson · 13 February 2007

finding "dead code" - code which cannot be reached by any path of execution of the program - is a computationally unsolvable problem.
Looks natural to me, since finding all error states and/or misconstructed dead circuits in VLSI designs is also a dud AFAIK. Probabilistic methods cover some of this in somewhat realistic test times. The problem for creationists here is that they want certainty.

Dizzy · 13 February 2007

First, there are degrees of "dead" code.

I think that's what was confusing me. The original quote maintained that "finding dead code" is "computationally unsolvable." I think what he really meant was "finding ALL dead code" is "highly impractical."

The simple statement IF X, GOTO Y creates dead code if X can never be false or never be true. Finding this dead code requires that one analyze all possible runtime values of X under all conditions, in advance. It is not practically possible.

Doesn't this depend on the definition of X? I can certainly imagine cases where it would be extremely impractical (X depends on 20 parameters, each of which depends on 50 other parameters, etc. etc.), but "not practically possible" doesn't necessarily mean "impossible," right? I'm only asking these things to attempt to tie back into the analogy to biology...no human could ever practically simulate every possible combination over the past 4 billion years, but nature can...

As for the induction that a bug is a line of code that doesn't work, this is almost too simplistic for words.

Agreed... not sure what the original poster intended by that. In my experience, though, which is apparently far more limited than yours, a lot of "bugs" have to do with interpretation, rather than inherent problems with the code. In other words, the program is executing exactly as coded, but the outcome is not acceptable to the user. A security loophole in an authentication system, for example, is not a "technical" bug in the sense that it causes the system to crash or breaks other functionality. The system is functioning "normally," but it does not meet the user's expectations because it allows unauthorized users through.

Going back to the biological analogy, a "technical" bug can be seen as a coding error that is immediately fatal or crippling to the organism - i.e., requires no interaction with external entities in order to fail. A "business expectations" bug is probably analogous to a mutation that produces a working organism, but one that is less well adapted to its environment than other organisms. Does that sound about right?

Anton Mates · 13 February 2007

Anton Mates: It is worse than that - finding "dead code" - code which cannot be reached by any path of execution of the program - is a computationally unsolvable problem.

— Dizzy
Is that true? Most of the IDEs I've worked with seem to have no trouble finding out where each method/parameter is used...seems like a simple text search for the method name would reveal if it's used anywhere. Or am I misunderstanding you?

I think Keith is talking about the general case--you can't construct a procedure which can find dead code in any finite-length program. Finding such in a particular, human-crafted set of programs may be comparatively easy.

GuyeFaux · 13 February 2007

1) PIsNP() always (computationally) resolves to true. I think most compilers can identify this?

That's some compiler. No, the idea here is that PIsNP() is such a hard problem that the compiler is not going to be able to figure it out without actually running the program. So you don't know which way the if clause is gonna go. More realistically, in actual code you can have exception-handling clauses for cases that never come up. The compiler will never know that the clauses are useless.

Flint · 13 February 2007

Dizzy:

The original quote maintained that "finding dead code" is "computationally unsolvable." I think what he really meant was "finding ALL dead code" is "highly impractical."

These evaluate to the same thing. If we can show that today's most powerful computers would need thousands of trillions of years to compute the answer, this is close to unsolvable.

"not practically possible" doesn't necessarily mean "impossible," right?

You are correct, but so what? If you have until next week to show what would require a trillion years to show unequivocally, then you call it impossible.

no human could ever practically simulate every possible combination over the past 4 billion years, but nature can

Not at all. Nature showed ONE path, not all possible paths.

In my experience, though, which is apparently far more limited than yours, a lot of "bugs" have to do with interpretation, rather than inherent problems with the code. In other words, the program is executing exactly as coded, but the outcome is not acceptable to the user.

Oh no! What you're talking about is called a "feature"!

A security loophole in an authentication system, for example, is not a "technical" bug in the sense that it causes the system to crash or breaks other functionality.

OK, more seriously, the programmer starts with a functional specification - what it is the code is required to accomplish. There are tests and standards established in advance, that the code must meet or pass. Failure to do so isn't considered a bug, as you say. If it fails, the code is regarded as unfinished, rather than wrong. Often, it's the spec that's unfinished rather than the code.

A "business expectations" bug is probably analogous to a mutation that produces a working organism, but one that is less well adapted to its environment than other organisms. Does that sound about right?

Yes, that sounds right to me. Think of all the different programs that might have been written to meet a functional requirement. They all do so, more or less, in different ways. They all have relative strengths and weaknesses. A lot of programming houses in fact do exactly this: assign the same task to different teams, then pick the one that comes closest to what was actually wanted (which might not be exactly what was requested!). There is a temptation to take the best features of each effort and combine them all together, but this is asking for trouble - nothing exactly fits. Truly modular code has a ways to go yet, above the level of primitive functions. For example, pasting on an entire user interface you like better is going to be more effort than it's worth. Much better to pick the best candidate, and everyone start tweaking it. So there certainly are biological analogies to some extent.

GuyeFaux · 13 February 2007

As for the induction that a bug is a line of code that doesn't work, this is almost too simplistic for words.

Agreed... not sure what the original poster intended by that.

It was a joke... I thought it was decent.

Dizzy · 13 February 2007

That's some compiler. No, the idea here is that PIsNP() is such a hard problem that the compiler is not going to be able to figure it out without actually running the program. So you don't know which way to if clause is gonna go. More realistically, in actual code you can have exception-handling clauses for cases that never come up. The compiler will never know that the clauses are useless.

Ok. I guess I just don't know enough about low-level operations to understand this...my thought was that since compilers eventually reduce code to a set of simple logical operations, and since you can reduce logical operations down to the point where you can identify if the output will always be 1 or 0, it should be possible. But I guess you'd probably have to know all the details of the software *and* the hardware to do that? The exception handling example makes sense...in fact, as I recall there are methods that REQUIRE a try/catch even if there is other code that ensures the exception will never occur.

GuyeFaux · 13 February 2007

I guess I just don't know enough about low-level operations to understand this... my thought was that since compilers eventually reduce code to a set of simple logical operations, and since you can reduce logical operations down to the point where you can identify if the output will always be 1 or 0, it should be possible.

This is true iff the program doesn't receive input and it definitely halts and the output is constrained to be yes/no. But such classes of programs are not very useful; you can run them once and then replace their entirety with that result. But, notably, except in some trivial cases, a compiler will not be able to tell if the code will halt. Also, most interesting programs take some sort of input. If you can narrow the set of possible inputs to a finite set, you'll run into the halting problem still. Furthermore, many programs are written with an infinite set of inputs in mind. Once again, the halting problem is an issue as well as undecidability.

Dizzy · 13 February 2007

Flint: Thanks for the responses - glad to hear I'm not totally out of it. Just to continue the more relevant (to me) parts of the discussion:

Nature showed ONE path, not all possible paths.

I think my use of the term "every possible" was ill-advised. What I was getting at was "all or very many possible" paths *within the constraints* of the situation at that time. The only time that literally "every possible" path is available is when there is utter nothingness... as soon as there is *something*, the possible paths from that point forward are constrained.

I'm not sure Nature showed just one path, depending on how you define it. If you look at the extant species today, I'm guessing that yes, you can identify a single path for each one - in which case, it would be one for each extant species. But of course, there are extinctions both known and unknown, each of which could be considered a "possible" path that was tried, but eventually failed. My point in taking up the "impossibility" issue is that Nature has probably "tried" far more possibilities than we can imagine.

I think, in our discussions back and forth, we've seen that parallels between programming and evolution *can* be helpful, so long as they are not (deliberately) misused. So I wouldn't fault any biologist for drawing that parallel (not that anyone in this thread is doing so). I'm just wondering how far it goes. Is the distinction between "impossible" and "highly impractical" simply an ID-style "I can't imagine it, so it can't be possible" trap?

Flint · 13 February 2007

I don't think extinctions can be directly equated with failures; evolution has no control over external environmental events. Additionally, evolution can only select among those variations that occur; we regard mutation as random with respect to fitness. There is no requirement that the most optimal possible mutation ever occur.

Similarly, there's no guarantee that even a perfect program will fit next year's organizational needs, and there certainly is no guarantee that the best programmers will be employed there. I know for sure that large chunks of my code will "fail" on next week's hardware, because I write code to control specific hardware.

Dizzy · 13 February 2007

I don't think extinctions can be directly equated with failures; evolution has no control over external environmental events.

I'm talking about evolution by natural selection, though; true, it doesn't have control over external environmental events, but it is influenced by them, yes? The environment is what enables natural selection to act. By "failure" I meant the inability to reproduce further, which seems to be the most basic criterion for "success" from an evo-bio standpoint - i.e., survival. One parallel I see is in your example of multiple teams coding for the same functional spec; the "failures" are the ones that are ultimately discarded.

Additionally, evolution can only select among those variations that occur;

Yes, that was my point regarding my misuse of the term "every possible." Given the existing environment and the existing variations, Nature probably has tried far more of the "possible" variations - possible within those existing constraints - than we can imagine.

we regard mutation as random with respect to fitness. There is no requirement that the most optimal possible mutation ever occur.

Definitely agreed... but in a similar vein, back to your previous analogy of multiple candidate programs, there is no requirement that the selected program be the most optimal possible program for that functional spec (even if you write the code from scratch, you are still constrained by the creativity of your programmers, the limitations of your language and OS/hardware, etc.). It's simply the best among the candidates you have available... at least, according to your knowledge at the time.

I'm sorry if it seems like I'm nitpicking an irrelevant point, but my real point is this: One of the standard ID lines, *if* the programming analogy is apt in some ways, would be something like this: "Evolution is like saying a bug in a program would get fixed if you inserted random characters into the code." There are a great many things wrong with that statement, and I haven't heard an IDist specifically say it, but it is in line with many of the other comparisons they generally make. In fact, it *is* possible for a bug to get fixed via random mutation, *IF* there is a process like natural selection that removes the more-buggy programs from the environment, and there are many, many candidates. It would speed things up quite a bit if the mutations themselves used valid, pre-existing building blocks (i.e. valid statements or methods, rather than random characters), which I think is a more accurate parallel to the biological world, but I'm not sure that's a requirement.

The core idea of natural selection, in my mind, is that the process doesn't need to be a person or "intelligent" arbiter; it can simply be a mindless algorithmic process. True, we define a "target" outcome with the program analogy; but so does Nature define a "target" outcome for all living organisms, i.e. survival. The "trap" I often see is that people equate a "very large" number of possibilities with "infinite" possibilities, and hence "highly improbable" becomes "impossible." There is a huge, huge difference between the two, in my mind.
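To make that concrete, here's a toy C sketch (the target string, scoring, and mutation scheme are all made up for illustration - it's essentially Dawkins' weasel). Random one-character mutations plus keep-the-no-worse-copy selection repair a thoroughly "buggy" string in a modest number of steps; random typing without the selection step would essentially never get there:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static const char *target = "the quick brown fox";  /* the "spec" */

/* score: positions matching the spec (higher = fewer "bugs") */
static int score(const char *s) {
    int n = 0;
    for (size_t i = 0; target[i] != '\0'; i++)
        if (s[i] == target[i]) n++;
    return n;
}

/* mutate: overwrite one random position with a random printable character */
static void mutate(char *s) {
    s[rand() % (int)strlen(target)] = (char)(' ' + rand() % 95);
}

int main(void) {
    char parent[32] = "###################";  /* 19 characters, all wrong */
    srand((unsigned)time(NULL));
    int gen = 0;
    while (score(parent) < (int)strlen(target)) {
        char child[32];
        strcpy(child, parent);
        mutate(child);
        if (score(child) >= score(parent))  /* selection: keep the no-worse copy */
            strcpy(parent, child);
        gen++;
    }
    printf("reached \"%s\" after %d mutations\n", parent, gen);
    return 0;
}

Drop the selection test (accept every child) and the string just wanders forever, which is the difference the ID version of the analogy always leaves out.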

Henry J · 13 February 2007

Re "people equate a "very large" number of possibilities with "infinite" possibilities,"

And, a decillion to the decillionth power isn't any closer to infinite than a hundred. Or even just one.

Henry

Popper's ghost · 14 February 2007

Anton Mates: It is worse than that - finding "dead code" - code which cannot be reached by any path of execution of the program - is a computationally unsolvable problem.

No, actually, it's much worse that trivial cases such as the one Anton offered contradict PaV's silliness.

Popper's ghost · 14 February 2007

The "trap" I often see is that people equate a "very large" number of possibilities with "infinite" possibilities, and hence "highly improbable" becomes "impossible." There is a huge, huge difference between the two, in my mind.

But it's only a difference in the mind; in the real world, there's no distinction of any value between "too many to instantiate in practice" and "infinite".

Popper's ghost · 14 February 2007

my thought was that since compilers eventually reduce code to a set of simple logical operations

No, code cannot be reduced to a set of logical operations; "goto" is not a logical operation.

Popper's ghost · 14 February 2007

But, notably, except in some trivial cases, a compiler will not be able to tell if the code will halt.

This is a common misunderstanding of the unsolvability of the halting problem. There are programs that can determine whether any of an infinite set of non-trivial programs halts. However, there is no program that can determine whether every program halts for all possible inputs; that's because, for any purported such program P, one can construct a program P' such that P can't determine whether P' halts. But P' isn't likely to be a program that we care about, so the unsolvability of the halting problem has little practical significance.
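For the record, the construction runs roughly like this (a C-flavoured sketch with made-up names, assuming a purported universal decider halts(p, x) that reports whether program p halts on input x):

typedef const char *program;      /* made-up: a program as source text */

int halts(program p, program x);  /* the purported universal halt-decider */

int contrary(program p) {
    if (halts(p, p))              /* decider says p halts when fed itself... */
        for (;;) ;                /* ...so do the opposite: loop forever */
    return 0;                     /* otherwise, halt at once */
}

Feed contrary its own source and it halts exactly when halts says it doesn't, so no such halts can exist. But contrary is deliberately pathological; it isn't a program anyone cares about in practice.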

Popper's ghost · 14 February 2007

The original quote maintained that "finding dead code" is "computationally unsolvable." I think what he really meant was "finding ALL dead code" is "highly impractical."

No, he didn't mean that. He meant that the problem of finding all dead code is formally, mathematically, provably, unsolvable -- a program that finds all dead code is truly impossible.

Popper's ghost · 14 February 2007

Finding such in a particular, human-crafted set of programs may be comparatively easy.

Well, of course there are many human-crafted programs for which it is easy to determine whether they halt. But there are also human-crafted programs for which no one knows how to show whether they halt. For instance, this one halts if and only if Goldbach's conjecture is false:

for( t = 4; ; t += 2 ){
    int found = 0;
    for( a = 2; a <= t - 2; a++ ){
        if( is_prime(a) && is_prime(t - a) ){
            found = 1;   /* t is a sum of two primes: conjecture holds for t */
            break;
        }
    }
    if( !found ){
        printf("Goldbach's conjecture is false for %d\n", t);
        exit(0);
    }
}

Popper's ghost · 14 February 2007

Judging by the number of responses that suggest people were taking my post seriously,

A common response when someone screws something up is "I was just joking".

Popper's ghost · 14 February 2007

In my experience, though, which is apparently far more limited than yours, a lot of "bugs" have to do with interpretation, rather than inherent problems with the code. In other words, the program is executing exactly as coded, but the outcome is not acceptable to the user.

The discussion was about bugs, not "bugs". A bug is a failure of a program to perform according to its specification.

Popper's ghost · 14 February 2007


if (PIsNP()) {
foo()
} else {
bar()
}

So is bar() used? Here, PIsNP() is difficult to solve.

This is not a good example. That no one has proven whether P is NP does not imply that it would be difficult to evaluate PIsNP(), which can only be written once someone has managed to prove that P is or is not NP. And at that point, an implementation of PIsNP() could simply be return(true) or return(false). Even if PIsNP() encodes the proof itself, that proof won't necessarily be difficult to evaluate.

Dizzy · 14 February 2007

But it's only a difference in the mind; in the real world, there's no distinction of any value between "too many to instantiate in practice" and "infinite".

I wasn't talking about "too many to instantiate in practice" in the context of nature. My whole point was that it's *not* only a difference in mind. The standard ID "trick" is to make people assume that "more possibilities than you can imagine" is "infinite." Nature has little problem trying out "more possibilities than you can imagine." I thought this was clear in my response.

Dizzy · 14 February 2007

No, he didn't mean that. He meant that the problem of finding all dead code is formally, mathematically, provably, unsolvable --- a program that finds all dead code is truly impossible.

I think the key was that I read "finding dead code" (in the original) as "finding ANY dead code," whereas the OP (and you) mean "finding ALL dead code." Hence, my capitalization of "ALL" in my response.

Dizzy · 14 February 2007

The discussion was about bugs, not "bugs". A bug is a failure of a program to perform according to its specification.

Wikipedia defines a software bug as "an error, flaw, mistake, failure, or fault in a computer program that prevents it from behaving as intended (e.g., producing an incorrect result)." I.e. "intended," not "specified."

GuyeFaux · 14 February 2007

This is not a good example. That no one has proven whether P is NP does not imply that it would be difficult to evaluate PIsNP(), which can only be written once someone has managed to prove that P is or is not NP. And at that point, an implementation of PIsNP() could simply be return(true) or return(false). Even if PIsNP() encodes the proof itself, that proof won't necessarily be difficult to evaluate.

Dizzy caught this. I clarified later:

No, the idea here is that PIsNP() is such a hard problem that the compiler is not going to be able to figure it out without actually running the program.

GuyeFaux · 14 February 2007

But, notably, except in some trivial cases, a compiler will not be able to tell if the code will halt.

This is a common misunderstanding of the unsolvability of the halting problem. There are programs that can determine whether any of an infinite set of non-trivial programs halts.

What is the misunderstanding?

...so the unsolvability of the halting problem has little practical significance.

I brought it up to illustrate that this is not plausible:

...since you can reduce logical operations down to the point where you can identify if the output will always be 1 or 0, it should be possible.

If by "logical operations" Dizzy meant like term, then "reducing" it is not plausible in all cases. Even a really smart compiler is not going to be able to do this, since that would solve the halting problem.

Flint · 14 February 2007

I.e. "intended," not "specified."

I have to agree with PG's interpretation here. Otherwise, we are faced with the notion that an entirely error-free program can develop bugs simply because the job the program was written to perform has changed. In fact, I think this makes a useful dividing line. A program that does what the programmer intended under all possible inputs has no bugs. If the programmer misunderstood the assignment, or if the program's goals are changed externally, that's not a bug. Making a program do something different from what the programmers intended is regarded as changing the feature set. Making a program perform the programmers' intentions without error is fixing bugs. This distinction is critical. If my accounting program does a lousy job of word processing, this is NOT a bug in the accounting program!

David B. Benson · 14 February 2007

Computer program maintenance --- There are two aspects. One is so-called bug finding and fixing. The other is that the information environment of the program changes, so new functionality is required. A trivial example of the latter is changes in the law regarding required deductions from pay.

There are some who call the latter activity bug fixing in some settings, because in many distributed computing applications it is difficult to ascertain the difference.

A more professional term is 'fault'. This is failure to meet the specification. The settings that I know about wherein this term is used are ones in which there are written specifications. In these settings, if the specifications change, the program requires so-called re-engineering.

dhogaza · 14 February 2007

A trivial example of the latter is changes in the law regarding required deductions from pay.

Or that here in the US we'll be starting daylight savings time on March 11th this year, rather than in April ... Y'all ready? :)

Henry J · 14 February 2007

Or that here in the US we'll be starting daylight savings time on March 11th this year, rather than in April ... Y'all ready? :)

Get our time zones out of the hands of the politicians! ;) (Well, somebody needs to say that - sorry if it's off topic.) Henry

PvM · 15 February 2007

A common response when someone screws something up is "I was just joking".

— PG
The inescapable conclusion should thus be that PG is just joking.

Ensjo · 15 February 2007

What does "NDE" stand for? "Non-Divine Explanations"?

Henry J · 16 February 2007

Re "What does "NDE" stand for? "Non-Divine Explanations"?"

It might be "non-directed evolution", but I'm just guessing based on the context in which I've seen it used.

Henry