The Sad State of Intelligent Design: Or Why It Shuns 'Peer Review'

Posted 15 January 2007 by

On Uncommon Descent, our friend Davescot shows once again why Intelligent Design has to hide in the shadows of our ignorance. Richard B. Hoppe has dealt with most of this in what he called Dissent Out of Bounds on Uncommon Dissent (Oops, make that "Descent"); this posting is meant to archive the excellent comments by Febble which caused so much concern at UcD. UcD is well known for its aggressive moderation policies, deleting much of anything critical of ID, quickly banning those who expose ID's scientific and religious vacuities, and squashing unfavorable reviews of its theses.

In a thread titled ID in the UK, ID activist Bill Dembski invited people in the United Kingdom to comment on the recent 'activities' of ID in that country. A poster named Febble complied with the invitation and politely expressed her feelings. Soon thereafter, Davescot banned Febble from participating on UcD. Why? Read on.

Febble introduced herself as "... a scientist, a Christian theist, and a UK citizen, with a son in a UK secondary school. This is my response to your encouragement to comment." She points out that ID, as a scientific hypothesis (sic), defines intelligence in a manner which includes natural selection as the designer. This conclusion was long since reached by others, including Wesley Elsberry, but it seems that ID activists are largely unfamiliar with the implications of Dembski's musings, and unwilling to take them where they lead.

I am happy to accept "Intelligent Design" as a scientific hypothesis to account for the development of life, as proposed by yourself, Dr Dembski, as long as you stand by this definition of intelligence: 'by intelligence I mean the power and facility to choose between options--this coincides with the Latin etymology of "intelligence," namely, "to choose between"' (http://www.designinference.com/documents/2000.11.ID_coming_clean.htm). However, such a hypothesis need not (and should not) be presented as an "alternative to evolution" as it is described in the Truth In Science materials. Far from rejecting an agent "with the power and facility to choose between options", this is exactly what the Theory of Evolution postulates as the agent of evolutionary change - a process of _selection_ (aka "choice") between options.

— Febble
The responses were varied but invariably hasty, and they avoided the issue Febble had raised. But first let's look at Febble's full response and save it for posterity, since ID sites have a history of making embarrassing postings disappear.

The fact that the selection process postulated by the ToE is a "natural" one ("Natural Selection") does not disqualify it from being an agent "with the power and facility to choose between options". This is exactly what it does, by means of a simple IF...THEN selection algorithm. IF a variant survives THEN it replicates. Variants with greater capacity to survive are selected (chosen), while those with lesser capacity are rejected. Certainly Natural Selection has no _intentionality_, but you yourself, Dr Dembski, have made it clear that the "intentionality" problem, together with the "ethical", "aesthetics" and "identity" problems, "are not questions of science". Yes, patterns we see in life-forms indicate an "intelligent" (as per your definition) design process. But they do not imply anything not also implied by Natural Selection. Suggesting that the appearance of "intelligent design" (by your definition of intelligence) contradicts the Theory of Evolution is therefore illogical, and it would appear that the Truth In Science materials do just that. Suggesting that life-forms have the appearance of "intelligent design" using a definition of intelligence that would NOT cover Natural Selection (e.g. one that invoked intentionality) would not, as you say, be science at all. I am therefore opposed to the Truth In Science materials.

— Febble
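Febble's "IF a variant survives THEN it replicates" rule is concrete enough to run. The sketch below is my own toy illustration, not code from the thread: the uniform "capacity" scores and the survival threshold are assumptions chosen purely for demonstration. The point is that a single if...then statement does all the choosing.

```python
import random

random.seed(1)

# Toy model (my own, not Febble's or Dembski's): each "variant" is just a
# number standing for its capacity to survive in some environment.
THRESHOLD = 0.5  # hypothetical survival cutoff imposed by the environment

def generation(population):
    """One round of Febble's IF...THEN rule:
    IF a variant survives THEN it replicates (here: two copies);
    otherwise it leaves no descendants."""
    next_gen = []
    for variant in population:
        if variant > THRESHOLD:                  # the "choice between options"
            next_gen.extend([variant, variant])  # the survivor replicates
        # non-survivors are simply not copied
    return next_gen

population = [random.random() for _ in range(1000)]
survivors = generation(population)
# Every variant that made it through the filter exceeds the threshold:
print(min(survivors) > THRESHOLD)  # prints: True
```

Nothing in the loop has intentions or foresight; the "selection" is a bare conditional, which is exactly the sense in which Febble says natural selection satisfies Dembski's operational definition.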
After her introduction, Febble continued to participate in a very polite manner, educating ID activists as to the vacuity of their arguments and claims. In the following response, she also unveiled her true identity as Elizabeth Liddle.

DaveScot: Yes, I am aware that cells are complex. But you are arguing ID from degree of complexity, not kind. Dr Dembski's ID argument is that it is the quality ("specified"), not the quantity, of the complexity exhibited by living things that identifies them as having been produced by an intelligent agent, for which he provides an operational definition. Natural Selection possesses "intelligence" as that operational definition defines it. As such, it can produce specified complexity. Whether it can produce enough "complexity" to account for the variety of life is a separate issue.

IDist: Nature may not be "intelligent" by many definitions. I do not ascribe to it intention or foresight, or consciousness, for example. But I refer you again to Dr Dembski's definition of intelligence for the purpose of inferring an intelligent designer from observed patterns (e.g. in a signal picked up by SETI), namely "the power and facility to choose between options". Natural Selection has this power and facility. It's how it works. It's also why it's called selection. (BTW, you assumed I'm male. I'm female, as it happens.)

shaner74: Again, we were talking about ID as a scientific theory. As a scientific theory ID is sound, if "intelligence" is defined as Dr Dembski defines it. And, as Dr Dembski defines it, an intelligence "with the power and facility to choose between options" is indeed capable of, as you put it, "inputting new information into the system". It's how computers work. You may not believe it is capable of making a cell work, but that is not the debate here. The debate is whether ID, as defined by Dr Dembski, is a scientific hypothesis. It is. And it describes Natural Selection very nicely. Natural Selection cannot, of course, account for the existence of my immortal soul. But the origin of my immortal soul cannot be investigated by means available to science. Cheers Elizabeth Liddle

— Febble
Elizabeth Liddle is quite well known on the Internet. On Mystery Pollster she is revealed to be the developer of a model that analyzed voting behavior.

The Liddle Model That Could

Regular readers of this blog may know her as "Febble," the diarist from DailyKos. Her real name is Elizabeth Liddle, a 50-something mother and native of Scotland who originally trained as a classical musician and spent most of her life performing, composing and recording renaissance and baroque music. She also studied architecture and urban design before enrolling in a PhD program in Cognitive Psychology at the University of Nottingham, where she is currently at work on a dissertation on the neural correlates of dyslexia. In her spare time, she wrote a children's book and developed an interest in American politics while posting about the British Labour party on DailyKos. A self-proclaimed "fraudster," she came to believe our election "may well have been rigged," that "real and massive vote suppression" occurred in Ohio, where the recount was "a sham" and the Secretary of State "should be jailed."

Quite a history... But let's return to Febble's plight on UcD: her insightful and polite postings led Davescot to quickly ban her from participating

febble is no longer with us - anyone who doesn't understand how natural selection works to conserve (or not) genomic information yet insists on writing long winded anti-ID comments filled with errors due to lack of understanding of the basics is just not a constructive member - good luck on your next blog febble

— Davescot
What is truly fascinating is the level of ignorance found in the comments by ID activists, including our dear friend Davescot, who seems quite confused about the concepts of natural selection and information. This is not surprising, given that most of these posters have gotten their ill-founded information from Bill Dembski's writings. But once again I digress from the remarkable performance of "Febble".

DaveScot: I'm sorry, I did not respond to this point:

"I'll accept random mutation and natural selection as the cause of life and its diversity right after someone shows me how to design a computer by trial and error starting from nothing but the simple unorganized elements (silicon, copper, etc.) that make it up. Start explaining that to me if you would."

— Febble
Firstly, of course, "random mutation and natural selection" are not postulated as the cause of life, only of its diversity. Darwin said nothing about how the first cell came into existence, and nor do evolutionary biologists. So the equivalent operation in computer terms is not "design[ing] a computer by trial and error starting from nothing but the simple unorganized elements (silicon, copper, etc.) that make it up". The theory of evolution tells us nothing about how the first DNA molecule was assembled from "simple unorganized elements", nor indeed how any molecule is assembled from "simple unorganized elements", although chemistry tells us a lot, particularly about how complex organic molecules are formed from carbon and hydrogen. However, if you want me to explain how, given a computer and an operating system, complex algorithms can be created by trial and error, then I'm happy to do so. It involves a random number generator, and a series of "if...then" statements. In other words, the computer equivalent of the random mutation and natural selection proposed by the ToE. Such a program has the "power and facility to choose between options", and is thus capable of producing patterns with "specified complexity", and possesses "intelligence" as defined in this context by Dr Dembski. As does the system of random mutation and natural selection. Dr. Dembski's Intelligent Design theory is not, therefore, an alternative to the Theory of Evolution. Rather, the system proposed in the Theory of Evolution is, by Dr. Dembski's own definition, an example of "intelligent" design.
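The "random number generator, and a series of if...then statements" Lizzie describes can be sketched concretely. What follows is my own toy (a variant of Richard Dawkins' well-known "weasel" demonstration, not anything from the thread): the target phrase, alphabet and mutation rate are assumptions chosen for illustration. Random mutation supplies candidates, and a single if...then statement does the selecting, replicating whatever works.

```python
import random

random.seed(42)

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent, rate=0.05):
    """Random mutation: each character may be replaced by a random one."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def fitness(candidate):
    """How many positions already match the target ("what works")."""
    return sum(a == b for a, b in zip(candidate, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
trials = 0
while current != TARGET:
    child = mutate(current)
    if fitness(child) >= fitness(current):  # the selective "if...then"
        current = child                     # successes are replicated
    trials += 1

print(trials)  # on the order of thousands of trials, not 27**28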

Faced with the impeccable logic of "Febble", ID activists are scurrying for responses; thus we find a poster named Patrick telling Febble to "... go read the UD archives. That claim has been repeated so many times it's not funny." Febble, not easily distracted, inquires politely as to the nature of the claim to which Patrick seems to be referring. Patrick, not constrained by any logic, 'responds'

Febbles: First off, Dave and everyone here is fully aware that OOL is different from TOE. Whenever a Darwinist comes on here and blindly starts "lecturing" I just roll my eyes. Secondly, run a search on "genetic algorithms" because that is obviously what you're referring to. And Bunjo's hypothetical discussion is a false caricature of ID. Only a teacher who is either ignorant of ID or purposely distorting it would respond as such. So do you guys have anything truly interesting to add to UD or is going to be another repetition of common nonsense?

No real response, but then again, addressing "Febble's" comments is no easy task; in fact, it is a task which Dembski himself has mostly neglected. Lizzie then goes for the jugular in a very polite manner

Hi, Patrick I'm sorry if I made an erroneous assumption. DaveScot's comment began: "I'll accept random mutation and natural selection as the cause of life and its diversity..." (my bold) which suggested that he might not make the distinction that you assure me he does. In which case, my comments were redundant. They were not intended to lecture, merely to address the point he asked me to address. I am happy to run a search on "genetic algorithms" but I would appreciate it if you would say what claim of mine you were referring to. My point was simply that if intelligence, for the purpose of inferring an "intelligent designer" from an example of "specified complexity" is defined as "the power and facility to choose between options" (and I agree with Dr. Dembski that an agent with such power is required to produce a pattern with "specified complexity"), then natural selection has that power and facility. Genetic algorithms are interesting (and I work on learning algorithms myself) but they were not central to the point I was making. A simple if...then statement is all that is required to satisfy the requirement of "the power and facility to choose between options". Cheers Lizzie

— Febble
Patrick realizes that he cannot withstand the onslaught and responds

Didn't notice that myself. Dave was too quick with a response. DaveGoofPoints++;

Patrick continues to make some vague references to Lizzie's claim, and she takes the opportunity to invite him to give some specifics

Hi, Patrick Well, I'm still not sure what you are regarding as my "claim". My "claim", as I see it, is simply that yes, the complexity we see in life forms has a quality that Dr. Dembski has defined as "specified" - the quality that emerges when a pattern is produced from what he calls an "intelligence". And he defines that "intelligence" as "the power and facility to choose between options". My argument is simply that natural selection is an agent that "has the power and facility to choose between options" - which is why life forms exhibit "specified complexity" - and is therefore a form of "intelligence" as defined by Dr Dembski. But I certainly didn't claim to have a computer algorithm that generated 10 letter words. What I have is a learning algorithm. It learns by trial and error - it repeats responses that lead to success. In fact, what it learns is optimal response times. It learns that if it is too slow it may miss its target, but if it is too fast it may respond to the wrong target. It actually starts with a flat distribution of reaction times, and I end with a distribution with a peak. Sometimes, depending on the contingencies, I get a bimodal distribution. When I change the contingencies, it has to learn a new set of responses, and the distribution of response times changes. It's very simple though, just a fairly basic population model. But like actual populations, it tends towards the currently optimal solution. And what it models is actual intelligence - learning and set shifting, as seen in actual human behaviour. Cheers Lizzie

— Febble
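Lizzie's learner is easy to caricature in code. The sketch below is my own toy population model, not her actual research algorithm: the rewarded response-time window, population size, jitter and generation count are all assumed values for illustration. Successful responses are replicated with a little noise, and the initially flat distribution of response times develops a peak.

```python
import random
import statistics

random.seed(7)

# Assumed task (mine, for illustration): responses between 200 ms and
# 1000 ms, where anything faster than 400 ms hits the wrong target and
# anything slower than 600 ms misses the target entirely.
LOW, HIGH = 200.0, 1000.0
WINDOW = (400.0, 600.0)  # hypothetical band of successful response times

def trial(rt):
    """IF the response lands in the rewarded window THEN it 'succeeds'."""
    return WINDOW[0] <= rt <= WINDOW[1]

# Start with a flat (uniform) distribution of candidate response times.
population = [random.uniform(LOW, HIGH) for _ in range(500)]

for _ in range(200):  # generations of trial and error
    successes = [rt for rt in population if trial(rt)]
    if not successes:
        continue
    # Successful responses are replicated (with a little random jitter);
    # unsuccessful ones die out.
    population = [min(HIGH, max(LOW,
                     random.choice(successes) + random.gauss(0, 20)))
                  for _ in population]

print(round(statistics.mean(population)))  # settles near the rewarded window
```

Change the window mid-run and the peak migrates, which is the "set shifting" she mentions: the same replicate-what-works rule tracks whatever the current contingencies reward.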
In her next posting, Lizzie introduces us to her literary skills

Hi, avocationist

My question to Lizzie is, was God surprised when humans popped up?

— avocationist
Well, not being omniscient myself, I don't know. But I think it's an interesting question, and it's one my son asked, a few years ago. So I wrote this story for him. It's called "Perhaps...." http://www.geocities.com/lizzielid/Perhaps.pdf Cheers Lizzie

She also responds to Patrick's question about genetic algorithms

Patrick: Thanks for suggesting "genetic algorithm" as a search term. I have read at least some of the posts in which that term, or "evolutionary algorithm" is used. My own view is that life is a profoundly algorithmic phenomenon, and it is the richness of its algorithmic structure that gives rise to its "specified complexity". It did not arise by chance, it arose from rules - algorithms. And a key algorithm is the if...then statement. In that sense, I consider Dr Dembski correct - biological systems are intelligent systems, arising from intelligent processes. Where I part company from the ID movement, as opposed to the concept of ID itself, is the frequent implication that intelligent design is coterminous with intentional design. I am happy with Dr. Dembski's operational definition of intelligence, which includes the concept of choice between options, but does not include consciousness or intention. Dr Dembski does not argue, as I understand him, that consciousness or intention are necessary to produce a pattern with "specified complexity", merely the "power and capacity to choose between options". As a neuroscientist (that's my field) I would argue that consciousness and intention emerge from simpler selection algorithms, of course. But I do not accept that consciousness and intention are necessary components of "the power and capacity to choose between options".

— Lizzie
Patrick, lost for logic, attempts the appeal-to-authority approach

I'd suggest you read "No Free Lunch" since I doubt Dembski would agree with your redefinition of his work. Specifically, you should look at his law of conservation of information.

— Patrick
Lizzie then presents her masterpiece

I have read a fair number of Dr. Dembski's monographs and writings, although I have not read the book "No Free Lunch". However, I have read his piece: Intelligent Design as a Theory of Information http://www.arn.org/docs/dembski/wd_idtheory.htm several times, and I agreed with it (after considerable reflection), on first reading, up until the final section entitled "The Law of Conservation of Information". On re-reading it, I still find that up until that point in the article, Dr. Dembski makes perfect and elegant sense. CSI has to be the result of Actualisation-Exclusion-Specification, and Specification must be the result of choice. To quote Dr Dembski:

The word "intelligent" derives from two Latin words, the preposition inter, meaning between, and the verb lego, meaning to choose or select. Thus according to its etymology, intelligence consists in choosing between. It follows that the etymology of the word "intelligent" parallels the formal analysis of intelligent causation just given. "Intelligent design" is therefore a thoroughly apt phrase, signifying that design is inferred precisely because an intelligent cause has done what only an intelligent cause can do--make a choice.

— Lizzie
However, to my mind, Dr. Dembski at that point takes a leap. He claims:

Natural causes are in-principle incapable of explaining the origin of CSI.

His reasoning appears to be as follows: he states:

Natural causes comprise chance and necessity

(citing Monod), and proceeds to rule out both chance and necessity as a source of information. He then states:

Contingency can assume only one of two forms. Either the contingency is a blind, purposeless contingency-which is chance; or it is a guided, purposeful contingency-which is intelligent causation.

In other words, he defines chance as "blind, purposeless contingency" - and ascribes "intelligent causation" to any other kind. Now, I'd still agree with him, using his own operational definition of "intelligence". Where I disagree with him is that such "intelligence" cannot be "natural". He writes:

If chance and necessity left to themselves cannot generate CSI, is it possible that chance and necessity working together might generate CSI? The answer is No. Whenever chance and necessity work together, the respective contributions of chance and necessity can be arranged sequentially. But by arranging the respective contributions of chance and necessity sequentially, it becomes clear that at no point in the sequence is CSI generated. Consider the case of trial-and-error (trial corresponds to necessity and error to chance). Once considered a crude method of problem solving, trial-and-error has so risen in the estimation of scientists that it is now regarded as the ultimate source of wisdom and creativity in nature. The probabilistic algorithms of computer science (e.g., genetic algorithms-see Forrest, 1993) all depend on trial-and-error. So too, the Darwinian mechanism of mutation and natural selection is a trial-and-error combination in which mutation supplies the error and selection the trial. An error is committed after which a trial is made. But at no point is CSI generated. Natural causes are therefore incapable of generating CSI. This broad conclusion I call the Law of Conservation of Information

This is where Dr Dembski's reasoning is not clear to me. Indeed, earlier in the paper, Dr Dembski gives an example of rats learning a maze, and notes:

Only if the rat executes the sequence of right and left turns specified by the psychologist will the psychologist recognize that the rat has learned how to traverse the maze. Now it is precisely the learned behaviors we regard as intelligent in animals.

But trial and error is the method by which rats learn to find their way through a maze -- or at least, if it is not, there is no way of telling that it is not. All we observe is that the rat has learned the maze. And Dr Dembski tells us, correctly of course, that if the rat makes the correct sequence of left and right turns, we can infer, from the fact that the pattern of its behaviour exhibits CSI, that it was produced by an intelligent agent, namely the rat. In his own next words:

Hence it is no surprise that the same scheme for recognizing animal learning recurs for recognizing intelligent causes generally, to wit, actualization, exclusion, and specification.

So it would appear that Dr. Dembski believes that learned behaviour can be recognized by its CSI, and inferred to be intelligent behaviour. Which is fine. But we know that learning can proceed by trial-and-error -- indeed many of the cognitive tasks we use in cognitive psychology can only be solved by trial and error. So it does not follow, to my mind, that a pattern that is arrived at by trial and error cannot generate CSI. The rat demonstrates that it can. So I took a closer look at Dr. Dembski's analysis of chance and necessity. Dr Dembski claims that:

Natural causes comprise chance and necessity.

He rules out chance as a source of CSI, as of course we must do. Chance is, after all, the null hypothesis in any signal detection test. But he also rules out necessity. He does so by reasoning that if A must lead to B, observing B tells us no more about A than we know already from A.

But consider this: if we have a "natural" choice-maker, such as a perfect filter, or sieve, and a supply of particles of varying size, then we will find ourselves with a sorted arrangement of particles that cannot have arisen by chance -- the pattern of the sorted particles exhibits CSI. We can infer the rule that generated the pattern: particles smaller than a certain threshold pass, but larger particles are retained. I assume that Dr. Dembski would not want to call such a perfect filter "intelligent" -- although it clearly has "the power and capacity to choose between options": it chooses the large particles and releases the small ones. And I would agree that a sieve is not what we would normally call intelligent, although it fulfills Dr. Dembski's operational definition.

So why would Dr Dembski not infer an intelligent agent from a pattern that had resulted from a natural sieve? Well, he tells us that the pattern generated by the sieve cannot exhibit CSI because no new information has been added. And it is true that if we knew the precise mesh size of the sieve in advance, and the sizes of every particle, the piles of particles wouldn't tell us anything new about what pattern would be generated by the sieve. But in that case would a sieve with randomly fluctuating mesh size produce piles of particles that exhibited CSI? And could we then infer an intelligent sieve? Well, clearly not. All we'd have is an unreliable sieve instead of a perfect one. An unreliable sieve is not more intelligent than a perfect sieve.

In other words, the pattern of sorted sand, which appears to me to exhibit CSI, cannot, according to Dr. Dembski, have CSI, simply because we know everything about the sieve. His argument therefore appears to be that we infer intelligence not, as he claims, from a pattern that appears to exhibit CSI, but from the degree to which we have less-than-perfect knowledge about the mechanism that created the pattern. Thus we can infer that the rat is intelligent, not from the CSI generated by its behaviour, but because we do not know everything about what determined the rat's choices. Conversely, we infer that the sieve is not intelligent, not because the pattern it generates does not have CSI (it does), but because we can know, in principle, everything about the sieve.

This is the problem I have with Dr. Dembski's analysis. I agree with him that CSI is detectable. I agree with him that if it is detectable we can infer that it was generated by something with "a power and capacity to choose between options". I do not agree with him that we can distinguish between a "natural" choice-maker, like a sieve, and an "intelligent" choice-maker, like a rat, by observing the differences between the patterns they generate. Both will generate patterns that exhibit CSI. But if intelligence is to be inferred from the amount of new information contained in the pattern, this quantity will depend not simply on the pattern, but on what we know about the factors that determine the choice-making of the choice-making agent. The more we know about the choice algorithm (whether rat or sieve), the less new information will be gained by examining the pattern. Once we know everything about the rat's brain, will the rat cease to be intelligent? No, because, in my view, there is no difference between the two agents. The amount of new information (in Dr. Dembski's terms) contained in the pattern generated by a trial-and-error learning algorithm in a computer only differs from the trial-and-error learning process in a rat because we know the algorithm -- because we wrote it.
And the pattern produced by a "natural" filter only differs from the pattern produced by the rat in that the rules that govern the pattern are more amenable to inference by a diligent scientist.

As a Christian, I believe in free will. I believe we are responsible, in some cosmic and meaningful sense, for our own actions. But I do not believe free will can be inferred from our behaviour. To an outside observer, our behaviour could as easily be entirely deterministic. If so, even the patterns generated by our intelligent behaviour would add no information to what an ideal observer already knew from the initial conditions of our neurons. Belief in free will seems to me to be as much an act of faith as belief in God -- indeed, belief in God is not possible without belief in free will. And I make that act of faith by believing in God. But we cannot infer an agent with free will from patterns that exhibit CSI. We can only infer "intelligence", as defined by Dr. Dembski: the "power and capacity to choose between options". Which is also possessed by a sieve. Cheers Lizzie

Not to be outdone, Davescot enters

I have been skimming your writing waiting for you to get out all your objections and just now I searched all the comments on this thread for the word "probability". No discussion of Dembski's work and CSI can proceed very far without getting into probabilities yet you didn't use the word (or any derivative) even once.

— Davescot
Never mind that Davescot does not address Lizzie's comments; he quickly focuses on what he perceives to be a shortcoming in her arguments, namely that she does not use the term probability. Of course, in her analysis of Dembski's own words, Lizzie has to do nothing more than quote and analyze where Dembski's 'arguments' lead. Perhaps Davescot was not aware of Lizzie's history in data analysis and probabilities, or he would not have chosen this particular path. In hindsight...

Ha! Well, now, look at that. It must be almost a record for me. I'm really quite reassured I have managed a probability-free history on a discussion board (try googling Febble...). Well, I think I probably (no pun intended) assumed a shared understanding that we are talking about probabilities, on such a rarefied board. I have probably had my fill of explaining probabilities 101 elsewhere. And I did, of course, talk about "chance" and about the "null hypothesis" in signal detection. Because, as you rightly draw attention to, the detection of a signal depends on there being a low probability that what you are interpreting as signal is merely random noise. So in fact I plead not guilty to the charge that I avoided the issue of probability (chance and null hypotheses are certainly derivatives). OK, let's move on. You state:

Given you have enough time and probabalistic resources anything is possible. Given an infinite amount of time or trials every possible pattern will be generated an infinite number of times.

— Davescot
Ah, but we do not have to generate every possible pattern. For a highly specified pattern (say a unique pattern of 100 ones and zeros), and where even a near miss was as bad as a complete failure, then, yes, you would need to search a very large space with a very small probability of success. But that is a very specialised kind of search. Memory of near-misses won't help, because you have no feedback until you hit the jackpot. You just have to keep trying every combination, until you win. But random mutation and natural selection isn't that kind of search. It's more like the game of hangman (even a game of hangman where the target is a random letter string). You guess at random, but when you get a correct answer for one slot, you get to keep it. You replicate what works, in other words. You don't start from scratch each time. This is the sense in which random mutation and natural selection is a learning algorithm (and why learning algorithms tend to use random number generators and feedback). Trial-and-error proceeds very quickly for this kind of learning, as successes are replicated, and the search space is rapidly reduced.

Thus when discussing CSI we come to terms like probabilistic resources and probability bounds. What do you know of these?

Well, a fair bit. I'm a professional data analyst. I deal with probabilities daily. Right now, in fact.

and how do you explain life at the simplest level being composed of intricate interdependent networks of objects

Well, by incremental improvements to the ability of each organism to replicate itself. But, as you yourself pointed out, even modern single-celled organisms are far from simple, and have had (as I believe the evidence strongly suggests) several billion years in which to get their act together. My assumption is that the first prototypes of life (whether on this planet or elsewhere in the universe) would have been replicating molecules, with the replication governed by catalysts. As replication is the key to "learning" (in both evolving biological systems and elementary schools) then something that replicates has to be the starting point. Once you have a molecule that replicates, then you have a key element for a learning algorithm in place. Natural Selection being the other. All you need in addition is a bit of stochastic resonance. Actually, the concept of resonance is the key to all this, IMO - positive feedback loops. As for your point that those objects...

are themselves represented by digital codes that must be translated from codes to actual objects before they can be employed?

My simple one-word answer is: chemistry. The "codes" themselves are "actual objects" and they catalyse the synthesis of other "actual objects". This happens in inorganic as well as organic chemistry. As of course you will be aware. Sure, the code is digital, but all chemistry is digital - atoms are discrete (for the purposes of chemistry). But my argument, as I said upthread, is not that life could have begun without an intelligent designer (I do not find it necessary myself to invoke God for that part of the process, but I certainly don't know how the first DNA molecule formed). I am simply arguing that the variety and complexity of life does indeed indicate a) intelligent design, where intelligence is defined as Dr. Dembski defines it, but that b) such intelligence is possessed by the system comprised of natural selection and modified replication. Cheers Lizzie

Davescot struggles, and Lizzie responds further

Hi DaveScot Well, as you can imagine, I read the evidence somewhat differently. However, I should say, firstly, that I agree that natural selection will fluctuate in strength as a "feedback" mechanism. The more unforgiving the environment, the more powerful the feedback. However, I disagree with your characterisation of the mechanism. I do not agree that the evidence suggests that "Natural Selection is a conservative force", and nor do I consider that the evidence supports the hypothesis of less-than-disastrous mutations piling up. I'd argue that there is a sole criterion for evaluating a mutation: does it increase the replication rate, or does it reduce it? If it does neither, it is neutral. And although I'd agree that when times are good, mutations that decrease an individual's probability of replication, but do not rule it out, will tend to be propagated through the population, the same will be true of those mutations that increase it. And where animals are in competition for resources, environment and genes will often interact, with the healthiest animals gaining more benefit - in terms of reproductive success - from the environmental bounty than the less healthy. So there is no particular reason for expecting "build up" of deleterious mutations in a population, even in good times. And, in contrast to your model, when times are hard, those individuals carrying genes that already compromise their chances of successful reproduction are more likely to be "purged" from the population than those carrying more reproductively advantageous genes. Although clearly, some mutations will be advantageous under certain environmental conditions, and disadvantageous under others. So you would expect, under this hypothesis, for times of rapid environmental events to coincide with rapid extinction events and rapid rates of speciation. So I'd agree that stasis and abrupt extinctions are "handily explained by rm+ns." 
But I'd maintain that relatively rapid rates of evolution are also handily explained by the same mechanism. Because, to return to the subject of Dr. Dembski's OP, that "rm+ns" is an intelligent system. It chooses and it learns, and thus tends towards optimal solutions for current conditions. You write:

The bit of evolution rm+ns can't adequately explain is the abrupt origin of new species with markedly different and unique anatomical features, which is also part of the indisputable testimony of the fossil record.

— Davescot
Well, hmm, not exactly "indisputable". Sure, it's a discrete record, because fossilisation is a discrete event, but there are some pretty impressive transitional series out there. But in any case, if you are arguing (as I think you should not) that the fossil record indicates the abrupt emergence of novel features, then it puzzles me that you should then comment:

Origination of novel cell types, tissue types, organs, and body plans has not been observed in historical times.

because of course I would agree. Incremental change, which is what is postulated by the Theory of Evolution, would not lead to the prediction of "novel tissue types, organs and body plans" when observed at the sampling rate possible through observations recorded by human beings in historical times. What we see instead, as predicted by the ToE, is incremental adaptation, and occasionally the beginnings of speciation. The "abrupt" changes in the fossil record are not abrupt over a historical time-scale.

Any stories of how these things originated, and they must have originated thousands of times to get from bacteria to baboons, are works of fiction. Adding insult to injury, when the probabilistic resources of rm+ns are scrutinized with 21st century knowledge of the complexities involved, that particular fiction doesn't even pass the giggle test.

Well, there we will probably have to agree to differ. I do not consider it likely that "novel cell types, tissue types, organs, and body plans" ever appeared. It seems much more probable to me, and consistent with both the fossil and the genetic evidence, that modern bacteria and baboons are the end products of separate lines of incremental change from a common ancestor that probably resembles neither. So I don't even share your premise. And so I don't find the postulate either incredible or funny. But all postulates are, in a sense, "fiction" - science isn't about certainty, as I'm sure you would agree, but about provisional models of reality that are always subject to potential falsification. In peace, Lizzie

I think I am falling in love with Lizzie... Not to be left behind, our friend and Young Earth Creationist Salvador Cordova joins the fray

I encourage your consideration of literature that is highly critical of Darwinian evolution on simple theoretical and empirical grounds.

— Sal
Nothing so far about Lizzie's arguments against Dembski. But then again, Sal of 'I will take a grenade for Dembski' fame is not there to argue these uncomfortable points. Febble detects Sal's weakness and responds

Thanks for the welcome. I think what I find odd about the Intelligent Design versus Theory of Evolution debate is that there seems to be very little discussion of what constitutes intelligence. And as a neuroscientist, intelligence - or cognition - is what I am interested in. So, having spent a day at the workface trying to figure out the kinds of algorithms an intelligent person might be using to solve a cognitive task I have set them (and trying to get measures of the patterns of neural firing that might shed light on this problem), it feels odd to be confronted by an argument about whether or not a design could be the product of "natural" or "intelligent" processes. For me, intelligent processes are natural, and figuring out their nature is what earns me my living. So in the post of yours that you link to (and thanks for the link), you define something as "designed" (presumably "intelligently") if it cannot be accounted for by "natural law" or "chance". Now I understand that there may be legitimate philosophical and theological debate about whether cognition is the product of anything more than a vast array of neural processes, and my own view, as a theist, is that there is more to the person than the sum of the neurons by which they know and interact with the "natural" world. But I regard that view as an act of faith - I do believe, as I said upthread, in individual responsibility on some cosmic scale that matters. I think it matters to God. I define God to myself as the entity to whom my actions ultimately matter, and who is present in everyone. But enough theology... As a neuroscientist, I consider that the bit of me that does that kind of theosophizing is my brain. And it's made of neurons, which, through a cascade of processes, transmit electrical impulses through my nervous system. It's a complex system. But I see no reason to suppose it is not a natural one.
So the question as to how that complex system came to be "designed" is not for me a particularly burning one. I assume it was designed by the same kinds of process by which I think - in fact, by which I myself "design". By a natural intelligence. And I consider the mechanism by which variance in our genetic inheritance interacts with natural selective pressures to be an intelligent system. An extremely intelligent system, though not, I suggest, a conscious one (although I suppose you might conceivably call it the mind of God....). So my problem with the arguments I have read that pit ID arguments against the ToE is not that I think the evidence invoked for ID is invalid, but that the intelligence it is evidence for is the intelligence of a complex natural system. I don't know at what stage one can sensibly call a system of rules "intelligent". As I said upthread, ordinary English usage would not allow a sieve, and certainly not an unreliable sieve, to be called "intelligent". But if we define intelligence, as Dr. Dembski has done, as essentially the power to choose between options, then the reductio ad absurdum is a sieve. Or, more sensibly, any "natural law". I think one of the most misleading words in the whole debate, in fact, is the word "random". Without getting into whether or not the universe is deterministic (and I'll stick with the quantum physicists who say that it is not), for practical purposes, things have causes. Chemistry is full of rules. Some things bind to other things in a particular way. Some things have affinities for other things; some things don't mix, like oil and water. Some things are catalysts, and affect the way other things bond. This system of rules means that natural algorithms are occurring all the time, and varied, often complex structures and compounds are the result. Now this all might be the product of a vastly intelligent First Mover, or it might just be the Way Things Are.
But it is, nonetheless, the Way Things Are, now, and was, as I consider the evidence suggests, on the earth four billion years ago. And I see no intrinsic reason to doubt that, given that we are here now, as complex organisms whose minds and bodies function by means of complex cascades of chemical "if...then" algorithms, we emerged from much simpler chemical algorithms four billion years ago. The universe, as I see it, is an intelligent system. As soon as it diversified, with different forces having different rules, it became a vast algorithm, generating complexity, not by "chance" but by a sequence of algorithms so complex (and stochastic at a quantum level) that "random" is often a convenient shorthand by which to describe it. I don't think we are the result of random processes. I think we are the result of intelligent processes, and that that intelligence, as with our own, is embodied in the "natural laws" that govern the matter of which we - and the entire universe - are made. For which, of course, as a theist, I give thanks to God. Cheers Lizzie

— Febble
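Febble's picture of an intelligent system embodied in "natural laws", choosing between variants without foresight, is straightforward to illustrate in code. The sketch below is a toy in the spirit of Dawkins' well-known "weasel" program; the target string, alphabet, mutation rate, and population size are arbitrary demo values, not a model of any real chemistry or biology:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Count of positions matching the "environment" (here, a fixed target).
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Random modification during replication.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(seed=0, pop=100):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while fitness(parent) < len(TARGET):
        # Selection "chooses between" variants: keep whatever worked best.
        offspring = [mutate(parent) for _ in range(pop)] + [parent]
        parent = max(offspring, key=fitness)
        generations += 1
    return parent, generations
```

Nothing in the loop plans ahead or knows the target as anything but an environment to be survived; random modification proposes options and selection chooses between them, yet the process climbs to the optimum in a few hundred generations rather than the astronomically long wait a pure random search would need.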
Back to Davescot who attempts to (re)define intelligence

One of the hallmarks I use in defining intelligence in the context of intelligent design is the ability to plan for the future. Natural selection can only select among things that have been realized. Proactive vs. reactive.

— Davescot
Lizzie returns

Yes, I would agree that planning is a key component of intelligence as-we-know-it. However, I do not attribute intelligence to "natural selection", but to the entire system, including replication with modification. And there are different levels of planning. As intelligent animals, we certainly plan - and one model of the way in which we do this is that we make neural models of possible actions and their consequences - and only enact the one that suits our purposes best. We leave the rest as models. Natural selection + replication with modification doesn't do that, of course. It cannot rehearse possible future courses of action, and choose the best. It's gotta do what it's gotta do. However, it does do a form of planning that we also do, and so do less intelligent animals, which is that it learns. While it may not plan novel strategies de novo or from observation, it learns from direct experience, as we do. If it makes a mistake, it doesn't repeat the mistake. It makes sure that in the future it does what worked last time. So in that limited sense, yes, it "plans". It "chooses" what worked, rather than what didn't. And like us, sometimes it gets lucky by accident, and remembers that trick too. And as a strategy, that works pretty well. It may be described as "trial and error" learning, but that is a bit of a misnomer, as "trial-and-error" could just as easily describe random search. Trial-and-error learning involves, well, learning. It's much more efficient than random search because you learn from your successes and your mistakes. Natural selection + replication with modification also learns from both its successes and its mistakes, which makes it moderately intelligent. JasonTheGreek: Well, I think it's a semantic issue. I'm probably predisposed to see the evolutionary mechanism as "intelligent" because it's the kind of mechanism I model when I am trying to model intelligent behaviour - particularly learning.
And one thing that human beings do when they learn is learn bad habits. And instead of correcting their bad habits, they learn to compensate with some learned behaviour. The classic icons of "bad design" would, in my terms be "bad habits". Some of them are cool though. I do like those weird wasps. Cheers Lizzie

— Lizzie
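Lizzie's distinction between random search and trial-and-error learning (keeping what worked) can be sketched in a few lines. Everything here is an invented toy: the 20-bit target stands in for "what the environment rewards", and the budget of 500 evaluations is arbitrary:

```python
import random

N = 20
TARGET = [1] * N    # arbitrary stand-in for "what the environment rewards"

def score(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def random_search(tries, seed=0):
    # Pure random search: every guess is independent; nothing is learned.
    rng = random.Random(seed)
    best = 0
    for _ in range(tries):
        guess = [rng.randint(0, 1) for _ in range(N)]
        best = max(best, score(guess))
    return best

def trial_and_error(tries, seed=0):
    # Trial-and-error learning: tweak the current solution and
    # retain what worked; selection acting as memory.
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(N)]
    for _ in range(tries):
        candidate = list(current)
        candidate[rng.randrange(N)] ^= 1    # one random modification
        if score(candidate) >= score(current):
            current = candidate             # keep the success
    return score(current)
```

With the same budget of 500 evaluations, the learning version reliably reaches a perfect score of 20, while independent guessing almost never does (a perfect random guess has probability 2^-20 per try). That retained-success asymmetry is exactly what "learning" adds to "trial and error".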
Causing Davescot to express his ignorance for all to see

You're still making mistakes in describing rm+ns. Saying it learns from mistakes is misleading. It needs constant reinforcement of what it learns or it forgets even faster than it learned. This is known as conservation of genomic information. Anything that is not immediately useful (no selection value) is not conserved within the genome forever. The genomic information with no immediate use gets peppered with random mutations and quickly becomes useless as a result. This is really basic stuff you don't know.

— Davescot
Of course, Davescot did not allow Febble to respond to his ignorant posting. And that, my dear friends, is the sad story of Intelligent Design, which invites people to comment on a particular event and then, when a well informed person politely responds in a manner devastating to the cause, treats that person to non sequiturs and other fallacies. And when it becomes clear that she has gained the upper hand in the discussion, she is unceremoniously booted out. Not only is ID scientifically and theologically vacuous, it is also not ready for any peer review. I wonder if ID activists who are rooting for 'teaching the controversy' were expecting a form of 'controversy' about their own thesis... How ironically appropriate to see ID forced back into the shadows of our ignorance. Next time, I bet that ID activists will be more careful in choosing their opponents, and avoid anyone who is familiar with uncovering 'fraud'.

The pattern instead is consistent with the E-M hypothesis of "reluctant Bush responders", provided we postulate a large degree of variance in the degree and direction of bias across precinct types. Mathematically, the observed pattern could arise from widespread fraud as well as from widespread response bias; differential vote spoilage rates for Kerry votes, or "ballot stuffing" of Bush votes, would produce results indistinguishable from "reluctant Bush responders." However, this is not the inference currently drawn from the data by USCV in their report.

PvM: Updated with reformatting some of the quotes per Febble's request

136 Comments

DrFrank · 15 January 2007

*rapturous applause for Febble*

Flint · 15 January 2007

The point of all this kind of fades in and out. ID is of course anti-science, and "discussion" of ID is like "discussion" of any other religious doctrine: the goal is to convert the Infidel, not discuss science. Conversion is definitely NOT facilitated by permitting disagreement. Perhaps polite and informed disagreement is least tolerable, but I think this doesn't matter a whole lot.

It's misleading, though, to say that ID shuns peer review. Religious peer review operates under different rules. Religious peers review doctrine by saying "Amen" lest they be evicted from the church. You don't debate or evaluate Received Truth, you worship it.

Febble is a Unitarian who decided to attend a Baptist service, for the purpose of politely explaining the fatal shortcomings, inconsistencies, and fallacies of Baptist doctrine. Should we have expected an open-minded reception?

Jon Fleming · 15 January 2007

I have to post this lovely bit of Dave Scottery from Larry Farflungdung's blog:

What PT does is bans the most effective opponents and then lets the roaring crowd of sycophants drown out and/or discourage the rest through browbeating and incivility. I ban least effective adversaries (there are many of those) and don't allow anywhere near the level of stupid, incivil cacaphony used by PT to discourage adversarial commenters.

— DaveScot
Words fail me.

Katarina · 15 January 2007

Febble's approach is great. She holds Dembski accountable for his definition of intelligence and goes from there. I've learned a whole new angle to the debate from her comments, which I'll most definitely use.

J-Dog · 15 January 2007

The Irony Of Ironies is that Dembski, DaveScot and their sycophants would support something called Intelligent Design...

J. G. Cox · 15 January 2007

Well, yes, because the purpose of peer review is not to detect fraud, as many laypeople seem to think it is whenever a retraction is made by a big journal. Peer review is there to ensure that (purported) experimental design and methods are sufficiently rigorous, that relevant papers and theory have been cited, that data analysis is done correctly, that the conclusions drawn from the research are actually justified, and that the work has solid theoretical grounding.

Febble's comments were a discussion of the theoretical grounding of ID. She merely pointed out that, as formulated by Dembski, ID is indistinguishable from mainstream evolutionary theory. ID merely attached some words to the periphery (e.g., 'intelligent' and 'design') to sell itself. ID cannot be anything else until it posits a mechanistic basis for what it claims to observe, and it will never do that for political reasons.

Hmmm... I wonder if the creationist movement, as it learns from numerous legal defeats, would be defined as 'intelligent' under Dembski's definition...

Roland Anderson · 15 January 2007

I really enjoyed reading Febble's posts. This is really the final nail in the coffin for any pretence that ID is about serious intellectual endeavour, rather than empty word-games and smug self-congratulation. Second the *rapturous applause*.

_Arthur · 15 January 2007

The IDalists are all in favor of Teaching the Controversy, but will not allow any controversy or discussion on their website...

Mike Elzinga · 15 January 2007

Lizzie Liddle (Febble) has very articulately highlighted the issues surrounding the use of the word "intelligence".

As in any scientific field, it eventually becomes necessary to adopt more technically restricted definitions of colloquially understood words in order to proceed. This usually happens as ideas get clarified and classified (sometimes by trial-and-error), after which, a spurt in progress takes place (recall the early confusions in physics over momentum and energy, or about degrees of heat and temperature, etc.).

In the case of the word "intelligence", it seems that there will eventually have to be some technical distinctions made about what it refers to. We have a tendency to anthropomorphize when we attribute that word to patterns that seem to develop in situations that stir our emotions (e.g., to a diabolical intelligence when we notice that the check-out queue into which we step "always" comes to a halt when we are in a hurry).

I can certainly agree that random mutation plus natural selection has a seemingly "intelligent" character about it, but that word becomes both a clever analogy as well as a liability in trying to understand or describe what is actually taking place. It is one of those terms (like "work" or "force" in physics) that can be confusing and misleading, and can be used deliberately to obfuscate.

One of the problems with confronting the Intelligent Design/Creationist movement is that people too often allow them to define the terms used in science while continuing to carry on the discussion in their terms. This does not work because that movement has a history of deliberately confusing definitions and ideas. And their world view appears to obstruct the proper learning of well-understood scientific concepts.

If there has been any "good" that has come from having to deal with the ID/Creationism movement (and I don't believe there has been), it would be that it forces us to clarify our thinking and our descriptions of scientific ideas to the general public. Unfortunately, we usually have to do this to clean up messes created by people bent on generating confusion.

Ric · 15 January 2007

Ahahahahaha! That was beautiful. Elizabeth gave Davescot a royal whupping. She took him out behind the woodshed and sent him back with a red bottom... in the most civil way.

Fross · 15 January 2007

It's a good angle. The reason is that it forces the IDers to admit that their type of ID requires the tinkering of a supernatural anthropomorphic type "mind".
In other words, they are forced to admit that the designer can't be natural.

VJB · 15 January 2007

I too have fallen in love with Febble. She is wonderfully Socratic, and neither ironic nor patronising (a very difficult stance to take with the IDers). Her story 'Perhaps' should be published, and by all rights should become a classic. Go with God, Febble (girls' night out).

KL · 15 January 2007

Truly Febble is a class act. I do not have the intellect, am not as articulate, and would not have demonstrated the patience that she did. My hat is off to her. Bravo!

VitamanC · 15 January 2007

In other words, they are forced to admit that the designer can't be natural.

— Flint
Seems like they are pretty much explicitly stating this over at UD (which recently got a makeover):

Uncommon Descent holds that... Materialistic ideology has subverted the study of biological and cosmological origins so that the actual content of these sciences has become corrupted. The problem, therefore, is not merely that science is being used illegitimately to promote a materialistic worldview, but that this worldview is actively undermining scientific inquiry, leading to incorrect and unsupported conclusions about biological and cosmological origins. At the same time, intelligent design (ID) offers a promising scientific alternative to materialistic theories of biological and cosmological evolution -- an alternative that is finding increasing theoretical and empirical support. Hence, ID needs to be vigorously developed as a scientific, intellectual, and cultural project.

Was this text always on the site somewhere? Seems a tad more "creationist" to me... I can almost hear "God" being muttered under Dembski's breath. It must drive him nuts to not be able to just say it.

normdoering · 15 January 2007

DrFrank wrote:

*rapturous applause for Febble*

I can't really give such rapturous applause. She did okay, she hit the basics, but she left out the deeper side of the connection between evolution and intelligence and she didn't mention a lot of important work in that area. Genetic algorithms are actually used in artificial intelligence: http://www.aaai.org/AITopics/html/genalg.html http://library.thinkquest.org/18242/ga.shtml Genetic algorithms are compared with neural nets, and people like William Calvin have theories about the brain as a Darwinian machine. She presents it almost like it's her own private and brilliant insight, but it's really the dominant theory in neurophysiology and should be basic, common knowledge for anyone in her field and easily supported by lots of URL links.
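For readers unfamiliar with the genetic algorithms norm mentions: at their core they are only a few lines long. The sketch below is a minimal textbook-style version, not taken from any of the linked sources; the bit-counting "OneMax" objective and all the parameters are arbitrary demo choices:

```python
import random

def genetic_algorithm(n_bits=16, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)   # "OneMax": reward the number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def pick():
        # Tournament selection: the fitter of two random individuals wins.
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = [max(pop, key=fitness)]        # elitism: never lose the best
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:           # occasional point mutation
                child[rng.randrange(n_bits)] ^= 1
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)
```

Selection, crossover, and mutation are exactly the ingredients Febble describes in evolution; the same skeleton, with a serious fitness function, is what the AI literature norm links to applies to real search and optimisation problems.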

Flint · 15 January 2007

No, that was Fross (who is quite correct). My position is, the pretense that there is anything scientific about ID is simply false. Febble has taken their ostensible claims at face value, and carefully and politely hand-held them to the obvious and unmistakeable conclusions their claims imply. As dry a sense of humor as I've ever encountered, and a true joy. Pretending to accept that a creationist is honest is a wonderful technique.

Doc Bill · 15 January 2007

I, too, bow to Febble's patience, scholarship and technical excellence.

Having dealt with creationists for over 20 years I confess that I have lost interest in any kind of intellectual engagement, rather I derive my pleasure through satire and ridicule. Oh, to have fallen so far!

However, I was amused by both DaveScot and Patrick resorting to my favorite Creationist Tactic: Read the Book.

Read Darwin's Black Box, or No Free Lunch, or War in Peace, they say. And when you come back with more questions or arguments, well, laddie, you obviously either didn't read the book or you didn't understand what you read! Go back and read it again!

It was nice to see the creationists stifled by their own words.

Unfortunately, on their Very Own Website the creationists can't resort to my second favorite Creationist Tactic: Run and Hide

When a creationist's "argument" starts to go very badly, as is always the case, they have no choice but to suddenly drop the thread and disappear. How well we know our favorite creationist Houdini, Sal "Nothing Up My Sleeve" Cordova!

But on their own site, in their own discussion thread?
Where can they go? It's their site! So, all they can do is ban the infidel.

And, so, in the spirit of the previous thread on this subject, yes, I, Doc Bill, am an Uncommon Descentaholic (now retired) posting under Charles1859 and I have been banned by the Great Dembski himself. Multiple times.

Once again, kudos to Febble. Well done, scholar.

John · 15 January 2007

Norm wrote:
"I can't really give such rapturous applause. She did okay, she hit the basics, but she left out the deeper side of the connection between evolution and intelligence and she didn't mention a lot of important work in that area."

I think you're missing the point. The point was that she shamed them in a way that THEY understood.

"She presents it almost like its her own private and brilliant insight..."

She didn't come across that way to me at all.

"... but it's really the dominant theory in neurophysiology and should be basic, common knowledge for anyone in her field and easily supported by lots of URL links."

URL links don't work with the UD crowd. She hammered on a single point, and did so politely and relentlessly.

MarkP · 15 January 2007

Febble's performance just highlighted again the steamroller heading for the IDers: AI. Once we have robots that are indistinguishable from humans in the way they create, the whole design inference is rendered ridiculous. They will have been shown in the most glaring of terms that they cannot tell the difference between "intelligence" and the impersonal machine. The Steiner problem illustrated this pretty clearly, as do chess programs if one gives it some thought. Once the robots are debating us, one won't have to. IDers will go take a place next to the flat earthers.

So its only a matter of time before we see an aged, past-the-combover Dembski, sitting on a street corner with a sign that says "Will debate for food".

Katarina · 15 January 2007

I think I am falling in love with Lizzie...

I too have fallen in love with Febble.

Truly Febble is a class act.

Lol! Careful, guys. You fall in love with them one day, the next day they disappear from the internet! (sniffle);)

wright · 15 January 2007

Well, here is another highly entertaining application of the "vise strategy".

I can only echo previous posts: Febble has enough class and skill for a couple hundred duplicates of her detractors. If, as DaveScot implies, he really considers her among the "least effective adversaries" of ID, then ID is even deader than it currently appears.

Donald M · 15 January 2007

Apparently both Lizie and Pim overlooked that Dembski also wrote that "ID seeks to separate intelligent causes on the one the hand and undeirected, natural (emphasis mine) causes on the other." That eliminates calling NS an "intelligence" as Lizzie attempted to do at UD and Pim attempts to do here. You see, the 'N' in NS stands for natural. The only entities that can "choose between" are agents with intelligence. Copting Dembski's definition in an attempt call "NS" 'intelligent' is ludicrous. NS does not "choose" anything...it is an a posteriori observation of an event.

PvM · 15 January 2007

Donald M's response shows the depth of the confusion generated by Dembski's equivocation on the term intelligence.

Apparently both Lizie and Pim overlooked that Dembski also wrote that "ID seeks to separate intelligent causes on the one the hand and undeirected, natural (emphasis mine) causes on the other." That eliminates calling NS an "intelligence" as Lizzie attempted to do at UD and Pim attempts to do here. You see, the 'N' in NS stands for natural. The only entities that can "choose between" are agents with intelligence. Copting Dembski's definition in an attempt call "NS" 'intelligent' is ludicrous. NS does not "choose" anything...it is an a posteriori observation of an event.

— Donald M
So NS is perhaps a directed natural cause? Remember that Dembski has accepted the existence of apparent versus actual specified complexity, the former generated by algorithms, the latter by 'true intelligence'. Of course, no attempt has been made so far to separate the two or to provide us with the means to do so.

William A. Dembski's writings claim, among other things, that algorithms cannot produce Complex Specified Information (CSI), but intelligent agents can. A recent posting of Dembski's introduced qualifiers to CSI, so that we now have "apparent CSI" and "actual CSI". Dembski categorizes as "apparent CSI" those solutions which meet the formerly given criteria of CSI, but which are produced via evolutionary computation. This is contrasted with "actual CSI", in which a solution meets the CSI criteria and which an intelligent agent produces. See my Dembski link page and follow the link for "Explaining Specified Complexity".

— Elsberry
Link

Natural selection indeed chooses, as it affects the probability distributions in Dembski's definition of intelligence:

By intelligence, here, I mean something quite definite, namely, the causal factors that change one probability distribution into another and thus, in the present discussion, transform a blind search into an assisted search.

By all standards, NS fits the bill. Remarkably, Dembski has yet to address these findings, which trace back to Wesley Elsberry:

The apparent, but unstated, logic behind the move from design to agency can be given as follows: 1. There exists an attribute in common of some subset of objects known to be designed by an intelligent agent. 2. This attribute is never found in objects known not to be designed by an intelligent agent. 3. The attribute encapsulates the property of directed contingency or choice. 4. For all objects, if this attribute is found in an object, then we may conclude that the object was designed by an intelligent agent. This is an inductive argument. Notice that by the second step, one must eliminate from consideration precisely those biological phenomena which Dembski wishes to categorize. In order to conclude intelligent agency for biological examples, the possibility that intelligent agency is not operative is excluded a priori. One large problem is that directed contingency or choice is not solely an attribute of events due to the intervention of an intelligent agent. The "actualization-exclusion-specification" triad mentioned above also fits natural selection rather precisely. One might thus conclude that Dembski's argument establishes that natural selection can be recognized as an intelligent agent.

— Elsberry
Link
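Dembski's definition quoted above, intelligence as the "causal factors that change one probability distribution into another", can be made concrete with a toy calculation. The three variants and their fitness values below are invented purely for illustration:

```python
variants = ["A", "B", "C"]
fitness = {"A": 1.0, "B": 2.0, "C": 7.0}   # arbitrary demo values

# Blind search: every option equally probable.
blind = {v: 1 / len(variants) for v in variants}

# Natural selection: probability of propagation proportional to
# replication rate. The same options, a different distribution.
total = sum(fitness.values())
assisted = {v: fitness[v] / total for v in variants}

print(blind)     # each variant ~0.333
print(assisted)  # {'A': 0.1, 'B': 0.2, 'C': 0.7}
```

Differential replication has transformed a uniform ("blind") distribution over the same options into a sharply biased ("assisted") one, which is precisely the operation Dembski's definition labels intelligence.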

Steviepinhead · 15 January 2007

Hmmm.

...Donald M.

Ah, I get it. It must be "that time of month" again.

Vroom vroom!

Who was that Masked Man, Mommy?

normdoering · 15 January 2007

John wrote:

"She presents it almost like its her own private and brilliant insight..." She didn't come across that way to me at all.

My feeling is this: anyone who starts talking about "evolution being intelligent" should mention at least one of these people: Danny Hillis, for making that massively parallel Connection Machine and running some evolutionary programs that prove the point; John Holland, for the seminal design notions for genetic algorithms; or Marvin Minsky, for writing "The Society of Mind". Or someone else like them, to demonstrate this is what people in the field generally think. When she says things like "My own view is that life is a profoundly algorithmic phenomenon,..." or "I read the evidence somewhat differently. However, I should say..." or any similar phrases, it's a bit pretentious, because "her own view" of life being an algorithmic phenomenon is one I found years ago in books like Daniel Dennett's "Darwin's Dangerous Idea." It's her view because she learned it. She's not the only one who "reads the evidence somewhat differently"; almost everyone working in AI and neurophysiology would agree. Why ignore all that support?

Henry J · 15 January 2007

Re "In other words, they are forced to admit that the designer can't be natural."

Since when do Creationists and/or IDers admit something simply because they've been forced to by descriptions of the contrary evidence? ;)

---

Re "She did okay, she hit the basics, but she left out the deeper side of the connection between evolution and intelligence and she didn't mention a lot of important work in that area."

She might have been trying to maximize the amount of info presented before getting kicked out of the place?

---

Re "If it makes a mistake, it doesn't repeat the mistake. It makes sure that in the future it does what worked last time."

The first of those two sentences puzzles me somewhat: if a mistake isn't propagated to the next generation, then (afaik) it is quite possible that it might occur again, esp. if it was due to a simple mutation to a fairly common allele of the affected gene.

Henry

PvM · 15 January 2007

Why ignore all that support?

— Norm
Because it is irrelevant. Febble is defending her position, not the position of others and as such is far more able to keep the discussion on track. If this suggests to you that she does not give sufficient credit to other great thinkers then perhaps you need to differentiate between scientific papers and a rhetorical argument.

John · 15 January 2007

Norm wrote,
"My feeling is this, anyone who starts talking about "evolution being intelligent" should mention at least one of these people..."

No, if she's pulling down the shorts of the yahoos at UD, she should only cite Dembski.

"Or someone else like them to demonstrate this is what people in the field generally think."

News flash, Norm: the yahoos don't care what people in any field generally think; that's why they think they know more about evolution than biologists do. Look at poor, pathetic Donald as an example--he thinks that pointing out that Dembski contradicts himself constitutes a coherent rebuttal.

"It's her view because she learned it. She's not the only one who "reads the evidence somewhat differently,"..."

She never claimed she was the only one! UD is kabuki comedy, not science. The thing that's so funny about them is that you can predict every one of their rhetorical moves, to their faces, and they still can't stop themselves. The same thing applies to antivaccination and animal-rights loons. Febble just did this in a beautiful way through her wily understatement and avoidance of standard strategies.

Popper's ghost · 16 January 2007

The reason is that it forces the IDers to admit that their type of ID requires the tinkering of a supernatural anthropomorphic type "mind". In other words, they are forced to admit that the designer can't be natural.

— Fross
No, that's very wrong. Her whole point is that "their type of ID", using Dembski's definition, does not require anything supernatural, anthropomorphic, or -- most importantly -- mindful. By Dembski's definition of ID, the process of evolution as understood in the ToE is an intelligent designer.

The only entities that can "choose between" are agents with intelligence.

— Donald M
Don't you folks bother to read what you're commenting on? Febble explained at length, carefully, patiently, and with great intelligence (as commonly understood, not Dembski's reduced definition), that "choosing between" does not require an "agent" -- that it does not require intention, foreplanning, etc. Any mechanism executing an algorithm containing an if...then statement has the capacity to "choose between".
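Popper's ghost's minimal criterion is easy to make concrete. A hypothetical thermostat (my example, not one anyone in the thread offered) executes a single if...then and thereby "chooses between" two options with no intention or foreplanning:

```python
def thermostat(temp_c, setpoint=20.0):
    # One bare if...then: the mechanism "chooses between" two options
    # (heat on / heat off) without intention or foreplanning.
    if temp_c < setpoint:
        return "heat on"
    return "heat off"

print(thermostat(18.5))  # heat on
print(thermostat(22.0))  # heat off
```

On Dembski's quoted definition, even this trivially mechanical device has "the power and facility to choose between options" -- which is exactly Febble's point.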

It's her view because she learned it.

— normdoering
Duh. As if she had ever suggested otherwise. It is common for people to say "It is my view that ..." without specifying everyone and everything that influenced their view. Dan Dennett is particularly well known for using such an expression, as in http://books.guardian.co.uk/review/story/0,,1192975,00.html

My view is that free will is indeed real; it just isn't quite what you probably thought it was.

yet this view, compatibilism, was not invented by Dennett. Also http://www.pbs.org/wgbh/evolution/library/08/1/text_pop/l_081_05.html

My view is that creation itself, the universe itself, is the most wonderful thing deserving awe and respect. And that satisfies me as my substitute for God. Now, that's a view with an ancient tradition. Spinoza had a famous phrase, "God or nature," one and the same thing; I agree with Spinoza.

Had Dennett omitted mention of Spinoza or ancient tradition, he would not have thereby deserved the sort of ridiculous uncharitable ad hominem characterizations that you toss at Febble (keep in mind that Dennett, after his mentor Quine, is a major proponent of the principle of charity).

Popper's ghost · 16 January 2007

No, if she's pulling down the shorts of the yahoos at UD, she should only cite Dembski.

Quite so. Mentioning anyone else or providing a link would have provided an opening for attacking her source rather than her argument. Sadly, Norm turns it around and -- ignoring her argument -- attacks her for something quite extraneous. I suspect jealousy or Asperger's Syndrome.

Popper's ghost · 16 January 2007

Oops, I forgot the other obvious motivation for Norm's hostility to Febble: she's a Christian.

Jon Fleming · 16 January 2007

Apparently both Lizzie and Pim overlooked that Dembski also wrote that "ID seeks to separate intelligent causes on the one hand and undirected, natural (emphasis mine) causes on the other."

Calling a tail a leg doesn't make it one. Febble just carried Dembski's definition to its logical conclusion, which shows that his definition contradicts the sentence that you quoted. But inconsistency in ID is not news.

Donald M · 16 January 2007

Popper's Ghost
Don't you folks bother to read what you're commenting on? Febble explained at length, carefully, patiently, and with great intelligence (as commonly understood, not Dembski's reduced definition), that "choosing between" does not require an "agent" --- that it does not require intention, foreplanning, etc. Any mechanism executing an algorithm containing an if...then statement has the capacity to "choose between".
Well, I beg leave to differ with you and Lizzie on this point. Algorithmic "choosing" is nothing more than following a pre-scripted program, if you will. That doesn't explain where the algorithm came from, or who or what made the choice that it would be that algorithm as opposed to some other one. The algorithm isn't choosing anything...it is only following what it was programmed to do. In natural systems that program might be reduced to the simple laws of physics and chemistry, but that doesn't explain where the simple laws came from. Observing the results of this algorithmic choosing a posteriori and then claiming it was an "intelligent" choice made by the algorithm is equivocating terms and explaining nothing. Only intelligent agents can make real choices. Algorithms can only do what they are programmed to do.

Randi Mooney · 16 January 2007

Observing the results of this algorithmic choosing a posteriori and then claiming it was an "intelligent" choice made by the algorithm is equivocating terms and explaining nothing. Only intelligent agents can make real choices.

So how do you detect intelligence? How do you know the choices made by the natural system are any different from the choices made by something that you know is intelligent?

Darth Robo · 16 January 2007

Donald M's time of the month again:

"Algorithmic "choosing" is nothing more than following a pre-scripted program, if you will."

Why is it I'm reminded of an old thread from last year? The Steiner Solution thread, was it?

Dean Morrison · 16 January 2007

Donald M wrote: That doesn't explain where the algorithm came from, or who or what made the choice that it would be that algorithm as opposed to some other one.

I thought that she rather eloquently explained that as a Christian she puts her faith in the thought that God is responsible for the fundamental laws of nature.

The universe, as I see it, is an intelligent system. As soon as it diversified, with different forces having different rules, it became a vast algorithm, generating complexity, not by "chance" but by a sequence of algorithms so complex (and stochastic at a quantum level) that "random" is often a convenient shorthand by which to describe it. I don't think we are the result of random processes. I think we are the result of intelligent processes, and that that intelligence, as with our own, is embodied in the "natural laws" that govern the matter of which we - and the entire universe - are made. For which, of course, as a theist, I give thanks to God.

As an atheist I disagree, of course - but I find her vision of God rather more inspiring than the ID version of some kind of handyman with a lot of time on his hands: one that is subject to the same learning algorithms as the rest of us, in fact. Creationists believe that God has had to learn from his mistakes, and even apologise for them - I'd cite the case of Noah's ark as an example. Perhaps that book on the geology of the Grand Canyon should point out it's all there because God made a pretty massive mistake, and every time you see a rainbow it's his way of saying sorry. Febble is scarily intelligent herself; it's such an irony that 'Intelligent Designists' run away when confronted by real intelligence.

Mark Lindeman · 16 January 2007

One more thing about the UcD scene astonishes me, and perhaps is worthy of archiving. DaveScot, in his response to RBH's earlier thread here, posts this update:
Update: It has been suggested to me that Liddle did not write that Bush stole the 2004 election through voter fraud. Well, here's what she wrote. You be the judge.

"Snark-free Exit Poll analysis", by Febble, Wed Apr 06, 2005 at 05:00:28 AM PST: Regarding my "fraudster credentials": I am a fraudster. I believe your election was inexcusably riggable and may well have been rigged. It was also inexcusably unauditable. I am convinced that there was real and massive voter suppression in Ohio, and that it was probably deliberate. I think the recount in Ohio was a sham, and the subversion of the recount is in itself suggestive of coverup of fraud. I think Kenneth Blackwell should be jailed.

Maybe someone can explain to me how to parse this language into a claim that Liddle doesn't think Bush won in 2004 through election fraud. Good luck.
Maybe I know the topic domain waaaaaay too well by now, but I don't understand how a normally intelligent reader could convince himself that any special "pars[ing]" is required here. I suppose we could speculate about what Liddle thought, but there is no room for speculation about what Liddle wrote. And, in fact, she "did not write that Bush stole the 2004 election through voter fraud." Can DaveScot really count on the assumption that no regular reader of UcD will notice? The irony is that Febble has spent considerable time over the last two years trying to counter poorly founded arguments that "Bush won in 2004 through election fraud." Indeed, the post quoted by DaveScot is an example. But that didn't stop him from getting her position wrong. (Disclosure: I am a founding member of International Friends of Febble.)

PvM · 16 January 2007

Donald M equivocates

Well, I beg leave to differ with you and Lizzie on this point. Algorithmic "choosing" is nothing more than following a pre-scripted program, if you will.

Much of intelligent choosing is following a "pre-scripted" program. The question, however, is: is natural selection a pre-scripted program, or does it interact with sources of variation and the environment? In fact, what NS does is transfer information from the environment into the genome.

That doesn't explain where the algorithm came from, or who or what made the choice that it would be that algorithm as opposed to some other one. The algorithm is choosing anything...it is only following what it was programmed to do. In natural systems that program might be reduced to the simple laws of physics and chemistry, but that doesn't explain where the simple laws came from. Observing the results of this algorithmic choosing a posteriori and then claiming it was an "intelligent" choice made by the algorithm is equivocating terms and explaining nothing. Only intelligent agents can make real choices. Algorithms can only do what they are programmed to do.

Yawn... Explain to us how Dembski intends to differentiate between actual and apparent intelligence/specified complexity. How do intelligent agents make 'real choices'?

PvM · 16 January 2007

Donald still misses the point

Observing the results of this algorithmic choosing a posteriori and then claiming it was an "intelligent" choice made by the algorithm is equivocating terms and explaining nothing. Only intelligent agents can make real choices. Algorithms can only do what they are programmed to do.

What Febble has shown is that according to Dembski's own definition, NS is "intelligent". Now, Dembski has at least two options: 1. accept the conclusion with all its consequences, or 2. redefine intelligence so that it becomes a tautology. Dembski so far seems to have accepted the existence of actual versus apparent Complex Specified Information, where the latter is caused by algorithms. However, he seems unable to present a way to discriminate between the two, suggesting that Febble's argument is a real problem. So who is really equivocating here? If Donald wants to state that intelligence is whatever an intelligent designer (hint hint) does, then fine, and let's get over the idea that ID is a scientifically relevant concept.

PvM · 16 January 2007

Donald M has raised an interesting question: Is 'real intelligence' different from an algorithm? So far he has argued that this is the case but I have seen no rational argument. In fact, I argue that intelligence is not dissimilar from regularity and chance.

For instance, science has shown how innate behaviors exist which are triggered by genes and cascading proteins/hormones. In other words, what may appear to be an intelligently designed action, is clearly driven by chemicals.

But not all behavior is innate; behavior also involves learning. So is it learning that makes 'real intelligence' different from algorithms? Not really: RM&NS, for instance, involves learning as well. This capacity is called evolvability, and it is not dissimilar from learning.

So both RM&NS and "real intelligence" involve choices based on chance as well as based on regularities where regularities may involve innate responses as well as learned responses.

It's up to ID proponents to show that "real intelligence" is somehow significantly different. Watch the ensuing equivocation about purpose, and one quickly realizes that ID remains once again vacuous in its ignorance, an ignorance which is a foundational principle of ID.

normdoering · 16 January 2007

Popper's ghost wrote:

Oops, I forgot the other obvious motivation for Norm's hostility to Febble: she's a Christian.

Did I say anything about her Christianity? You think I'd hide it if that were a problem for me? I said she only covered the basics and refused to mention sources, sources which are important for learning more and for clinching her argument. But you're right about something in your previous post: a link would indeed have provided an opening for attacking her source rather than her argument. And going beyond your point, those attacks would probably have been straw man attacks and ad hominems if they involved Dennett (I know from experience they have a wide array of straw-man Dennett attacks). One of the reasons she lasted so long was indeed that she played ignorant. A case in point, she writes:

However, if you want me to explain how, given a computer and an operating system, complex algorithms can be created by trial and error, then I'm happy to do so. It involves a random number generator, and a series of "if...then" statements. In other words, the computer equivalent of random mutation, and natural selection proposed by the ToE. Such a program has the "power and facility to choose between options", and is thus capable of producing patterns with "specified complexity", and possesses "intelligence" as defined in this context by Dr Dembski. As does the system of random mutation and natural selection.

As if no one ever told the guys at UD about genetic algorithms. But they already knew, as Patrick demonstrates:

Febbles: .... Secondly, run a search on "genetic algorithms" because that is obviously what you're referring to.

She responds:

Genetic algorithms are interesting (and I work on learning algorithms myself) but they were not central to the point I was making. A simple if...then statement is all that is required to satisfy the requirement of "the power and facility to choose between options".

This is where I think she takes a seriously wrong turn. Genetic algorithms are more central to her point than she seems to realize: they are models of evolution. She wants to reduce this to a simple if...then statement. I think that's going too far, and it obscures her point rather than clarifying it. While an if...then statement is exactly the minimum required by the Dembski definition she uses, and it could be a dumb agent of a larger intelligence in Marvin Minsky's scheme, as in "The Society of Mind," it's too simple a scheme to support her argument about evolution being "intelligent." The UD guys already see the if...then of natural selection as a filter that gets rid of what wasn't "designed." They're arguing that evolution cannot build complexity and specificities. To go there she needed to talk about things like "searchspace" or Dennett's skyhooks-and-cranes ideas. She never got there. She never made what, to me, are the most important points.
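Febble's earlier description -- a random number generator plus if...then statements -- can be sketched as a toy program. This is my own illustrative sketch (borrowing Dawkins's "weasel" target string), not code anyone in the thread posted: random mutation proposes options, and a single if...then selects among them.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Fitness: number of positions matching the "environment" (the target).
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(seed=1, max_steps=200_000):
    # Random mutation plus one if...then statement: Dembski's
    # "power and facility to choose between options".
    rng = random.Random(seed)
    current = "".join(rng.choice(CHARS) for _ in TARGET)
    for _ in range(max_steps):
        i = rng.randrange(len(TARGET))
        candidate = current[:i] + rng.choice(CHARS) + current[i + 1:]
        if score(candidate) >= score(current):   # the "choice"
            current = candidate                  # keep what worked
        if current == TARGET:
            break
    return current

print(evolve())
```

The selecting step has no foresight; it merely keeps whichever variant scores at least as well against its "environment", yet the if...then does all of the "choosing".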

wamba · 16 January 2007

Febble's performance just highlighted again the steamroller heading for the IDers: AI. Once we have robots that are indistinguishable from humans in the way they create, the whole design inference is rendered ridiculous...

Sadly, once we have such implementations of AI, the proponents of ID will hold them up as proof of ID. See how they already reverently point to biomimetic engineering as evidence of ID, although it is exactly the opposite. See how they (deliberately?) misconstrue genetic algorithms.

John · 16 January 2007

Mark asked:
"Can DaveScot really count on the assumption that no regular reader of UcD will notice?"

Absolutely. At least he can count on no regular commenter noticing. Anyone commenting that he/she noticed can simply be banned.

John · 16 January 2007

Norm wrote:
"I said she only covered the basics and refused to mention sources,..."

At what point did she refuse to mention anything?

"... sources which are important for learning more and clinching her argument."

Her audience isn't interested in learning more, Norm.

"This is where I think she takes a seriously wrong turn."

I disagree. She kept it simple, which is why she was so devastating.

"Genetic algorithms are more central to her point than she seems to realize."

Her point is that a single choice satisfies Dembski's definition. It's therefore obvious that a truckload of choices satisfies it as well. She was effective because she kept it simple. That's hard to do.

"They are models of evolution. She wants to reduce this to a "simple if_then" statement. I think that's going too far and it obscures her point rather than clarifies it."

You're missing her point by a mile. You could babble to them and think that you scored points, but in their minds, you were irrelevant. They know (with the possible exception of the dimmest bulbs like Joseph and Jehu) that Febble made complete fools of them. That's nearly impossible to do with a group of seriously deluded people, especially in the relative absence of allies to reinforce your simple point.

There's always the empirical route, Norm. Why don't you start commenting over there? I predict that after you are banned, no one will express the slightest acknowledgment that you had ever made a point worthy of discussion. Several expressed that WRT Febble.

Elizabeth Liddle · 16 January 2007

Actually, I'm a Turing machine. (Just kidding.) Thanks, guys, for your warm words. Panda's Thumb has been bookmarked for quite a while now, but I never expected to be a topic, let alone two.

A few responses:

Norm: My words probably could have been read as though I was claiming the ideas as my own, but my intention was really just to state that those were my views (they are). I didn't mean to imply they were wildly original - and my point, essentially, is one of logic. It might have benefited from appeal to authority, but I didn't think it needed it.

Co-opting Dembski's definition in an attempt to call "NS" 'intelligent' is ludicrous.

In that case, he needs to formulate a different definition. The problem for him is that the one I quoted (which he has used several times) actually is sufficient to describe an agent capable of generating "Complex Specified Information" - which presumably is why he used it. He specifically rules out intentionality as a necessary factor for producing CSI.

PvM: Thank you very much for archiving my UD posts. One small request: my blockquotes have got lost in transcription - is there any chance you could re-format them in such a way that my words are distinguished from the people I quote? I think my prose style is sufficiently different from those I was arguing with for the boundaries to be clear, but you never know!

Thanks, everyone. The world suddenly seems a sunnier place.

Lizzie (Febble)

normdoering · 16 January 2007

There's always the empirical route, Norm. Why don't you start commenting over there? I predict that after you are banned, no one will express the slightest acknowledgment that you had ever made a point worthy of discussion. Several expressed that WRT Febble.

I've already been banned at UD, twice. I got banned the second time after one short post mentioning Marvin Minsky's "The Society of Mind" and how it also uses the term "intelligent agents." You say "They know (with the possible exception of the dimmest bulbs like Joseph and Jehu) that Febble made complete fools of them," but I don't think they do know that.

normdoering · 16 January 2007

Elizabeth Liddle wrote:

My words probably could have been read as though I was claiming the ideas as my own, but my intention was really just to state that those were my views (they are). I didn't mean to imply they were wildly original - and my point, essentially, is one of logic. It might have benefited from appeal to authority, but I didn't think it needed it.

I can accept that, and also Popper's note that if you had, they would have gotten out their straw men and attacked your sources, or banned you much sooner. By not going there you kept them guessing as to where you were going. You managed to last longer than most people before getting banned. So, I'll give you that -- it's an accomplishment. But did you make your points sink in for anyone on UD? Did you do as John claims and shame them in a way that they'd know it? That I'm not so sure about. I'll give you a maybe on that score. I don't share John's certainty.

Sarcastro · 16 January 2007

One of the hallmarks I use in defining intelligence in the context of intelligent design is the ability to plan for the future.

I'm just an agnostic so maybe I don't "get" something these theists do, but doesn't this statement, like so much of ID, reduce the supposed intelligence to something far less than an omniscient and omnipotent being?

I mean, such a God doesn't plan for the future, He planned the future. Period. If He is such hot doodoo, why does He have to keep tinkering with crap all the time? Would not a deity who was truly sublime simply have to light off creation and be done with it? It seems to me that a deity working within the known cosmological framework -- that is, one whose influence is limited to being the prime mover and, perhaps, a little stir in the first few nanoseconds of reality -- would be a whole lot more impressive than this fiddling God who can't seem to get anything right the first time.

If we are the crown of creation, I'd be more impressed if God rolled the dice 15 billion years ago just so perfectly that, eventually and by whatever super-clever means such as natural selection, the bones came up "sentience!" -- and thus we are in His image -- than I am at the prospect of a God who can't shoot straight.

This garbage isn't about anything but the usual ignorant ideas that permeate the stupid among us. To these goobers, God is just them writ large.

So let me correct that quote:
"One of the hallmarks I use in defining intelligence in the context of intelligent design is the ability to think just like me."

Raging Bee · 16 January 2007

norm's response to Febble:

But did you make your points sink in for anyone on UD? Did you do as John claims and shame them in a way that they'd know it?

My response to norm:

Did you?

Unless, and until, you can show superior results, you should not waste time sniping at your betters.

John · 16 January 2007

Norm wrote:
"You say "They know (with the possible exception of the dimmest bulbs like Joseph and Jehu) that Febble made complete fools of them," but I don't think they do know that."

Read these:
http://www.uncommondescent.com/archives/1940#comments

"Dave, I'm not sure how saying your opponent is so starved for news that they'll put your banning of Febble on their front page, then putting that article about the banning on your own front page, makes your point."

"I also would have preferred to have corrected this misconception in another manner." That's from a MODERATOR, Norm!

"I think it appropriate we be seen as quite willing to engage the varsity among the critics including Matzke, Hoppe, Bottaro, Musgrave, Inlay, Rosenhouse, Theobold, or any of the Talk Origin crew. As dismissive as we may feel toward some of them, some of our readers, particularly the new comers and young would enjoy seeing the exchanges." Another moderator.

GuyeFaux · 16 January 2007

One of the hallmarks I use in defining intelligence in the context of intelligent design is the ability to plan for the future.

Echoing Febble (and others), NS does a pretty decent job at predicting the future. NS encourages behaviors that work today in the hope that they will work tomorrow and discourages others.

MarkP · 16 January 2007

Donald said: Algorithmic "choosing" is nothing more than following a pre-scripted program, if you will. That doesn't explain where the algorithm came from, or who or what made the choice that it would be that algorithm as opposed to some other one.
Yes folks, that's a double helping of Moving The Goalposts with a twist of the No True Scotsman fallacy. Let's hear it for The Donald, for whom algorithms can't make real choices, because only an intelligence can make real choices, even if the choices and the process of reaching them are indistinguishable, as in the problem of the Steiner solutions. Real choices are tacitly defined as "those choices made by intelligent agents", and that is defined, conveniently, as "people"...or "god". Then of course, after it is demonstrated that real choices, by any non-question-begging definition, are indeed made by nonhuman actors, he falls back on "but you can't explain where that algorithm came from", which manages to beg an unasked question.

Pursued further, what one finds at the end of this rabbit trail is Pinker's "ghost in the machine", the "soul" in the religious vernacular. No matter how sophisticated the machine, or its form, be it metal or biochemical, the source of what people like Donald consider "real" thinking is the "soul", a decidedly non-scientific presumption for a great science like ID.

Tell me Donald, does Commander Data in Star Trek make "real" choices? If not, what is not real about them? Is he just a machine following programming? How do you know that we are not merely machines following our programming (if you'll pardon the borrowed metaphor), and that our sense of "will" is mere illusion? Many psychological experiments done with people with separated brain halves make that case quite strongly. Is it because we have a soul and he doesn't? Why doesn't he? How do you know that?

Henry J · 16 January 2007

Re "Is it because we have a soul and he [Cmdr. Data] doesn't? Why doesn't he? How do you know that?"

Not to mention the same question for chimpanzees, dolphins, whales, elephants, pigs, parrots, octopi, and maybe others.

(The ones I named are known to show various signs of intelligence.)

Henry

normdoering · 16 January 2007

John quoted these from UD:

"I also would have preferred to have corrected this misconception in another manner." That's from a MODERATOR, Norm!

"I think it appropriate we be seen as quite willing to engage the varsity among the critics including Matzke, Hoppe, Bottaro, Musgrave, Inlay, Rosenhouse, Theobold, or any of the Talk Origin crew. As dismissive as we may feel toward some of them, some of our readers, particularly the new comers and young would enjoy seeing the exchanges." Another moderator.

Those two only demonstrate some regret about how they handled the issue. I'll only admit that they've been successfully shamed when they change their banning policy and allow open debate, or when DaveScot stops acting like an arrogant know-it-all jerk. This they haven't done, and Liddle has had no real effect except for creating more noise. DaveScot is still acting like an arrogant know-it-all jerk, even while bringing up some valid questions. For example, DaveScot notes:

Febble wrote: If it [natural selection] makes a mistake, it doesn't repeat the mistake. It makes sure that in the future it does what worked last time. So in that limited sense, yes, it "plans". It "chooses" what worked, rather than what didn't. And like us, sometimes it gets lucky by accident, and remembers that trick too.

Gee, I wonder how that works. When a mutation causes death of the individual before it can reproduce, how exactly does natural selection not repeat that mistake? Does it send a memo to all the other members of its species saying "don't try this, it's a mistake"?

That reminds me that I didn't get Liddle's comment about evolution learning from mistakes either. DaveScot is no doubt twisting her meaning -- she doesn't mean mutations -- but what was her meaning? Can you explain? Is Liddle still here; can she explain? I would not have said it that way myself. I think neural nets are far superior at learning from mistakes, and I'm not sure you can even say that natural selection learns from mistakes -- at least not the mistakes DaveScot is talking about. DaveScot seems to mean things like Down's Syndrome or color blindness or any of thousands of genetic mistakes that happen over and over again. What kind of mistakes was Liddle talking about? Does anyone know? It's ambiguous at least. How does using anthropomorphic phrases like "it 'plans'. It 'chooses'" help? In Liddle's defense, someone wrote:

To not repeat a mistake is to be unable to reproduce. To repeat a good trick is to reproduce better than average. It makes all sense to say that a population "remembers" what works and what doesn't, because this information indeed accumulates into its gene pool.

Ummm, not repeating a mistake isn't the same as learning from mistakes. Evolution doesn't remember what the mistake was, it only remembers the successes, and those are recorded in our DNA. Where are these mistakes recorded? Where are they remembered? It seems to me that the thing that makes evolution both slower and better (more complete in its search of searchspace) is the fact that neural nets remember the bad moves (mistakes) and the good moves (successes), but evolution only remembers the good moves (successes) -- that's what gets recorded in our DNA. Imagine having a genetic algorithm and a neural net learning chess, or go, or some other complex game and then playing thousands of games against each other every week. My intuition about this suggests that the neural net would surge ahead early and beat the genetic algorithm. But then, one day, years in the future, the genetic algorithm would start kicking the neural net's ass because in the end we sometimes learn the wrong lessons from mistakes.
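Norm's contrast between remembering mistakes and remembering only successes can be made concrete with a toy blind search. This is my own illustration, nothing posted in the thread: two searchers draw candidates at random, but only one records its failures so that no known mistake is ever evaluated twice.

```python
import random

def search(space, is_good, remember_mistakes, seed=0, max_trials=100_000):
    # Blind trial and error. With remember_mistakes=True the searcher
    # also records failures, so no "mistake" is ever evaluated twice.
    rng = random.Random(seed)
    failed = set()
    trials = 0
    while trials < max_trials:
        if remember_mistakes and len(failed) == len(space):
            break  # every option is a known mistake; give up
        candidate = rng.choice(space)
        if remember_mistakes and candidate in failed:
            continue  # skip a remembered mistake at no cost
        trials += 1
        if is_good(candidate):
            return candidate, trials
        if remember_mistakes:
            failed.add(candidate)
    return None, trials

space = list(range(50))
hit, with_memory = search(space, lambda x: x == 42, remember_mistakes=True)
hit2, without_memory = search(space, lambda x: x == 42, remember_mistakes=False)
print(with_memory, without_memory)
```

With a mistake memory the searcher needs at most one evaluation per distinct option, while the success-only searcher can retry the same dead end indefinitely -- roughly why norm expects a neural net to surge ahead of a genetic algorithm early on.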

Raging Bee · 16 January 2007

norm wrote:

I'll only admit that they've been successfully shamed when they change their banning policy and allow open debate or when DaveScot stops acting like an arrogant, know it all, jerk. This they haven't done and Liddle has had no real effect except for creating more noise.

Did it ever occur to you that the reaction you sneer at may be a necessary step toward the one you demand?

Any nincompoop can look at someone else's accomplishments and say "That's not good enough." Demonstrating superior accomplishments of one's own is another matter. We're still waiting for you to remind us of yours.

Evolution doesn't remember what the mistake was, it only remembers the successes, and those are recorded in our DNA.

That's tantamount to remembering and learning from mistakes. Once a certain species adapts to "correct" a "mistake" (i.e., using a swim bladder to get oxygen while out of water), then any creature that fails to follow suit will be crowded out by those that have -- sort of a reminder/bitch-slap at the biological level. Sounds like remembering and learning to me.

John · 17 January 2007

Norm wrote:
"Those two only demonstrates some regret about how they handled the issue."

Yes, it illustrates that they know their pants are around their ankles.

"I'll only admit that they've been successfully shamed..."

Now you're moving the goalposts, Norm. The issue is whether they know that Febble made fools of them. They clearly do, with a few exceptions.

The question's still on the table: how did YOU do at UD?

"It seems to me that the thing that makes evolution both slower and better (more complete in its search of searchspace)..."

Ya know, before you claim that "better" = "more complete in its search of searchspace," you might want to actually read a paper or two that shows that only a minuscule fraction of "search space" has ever been "searched." In fact, evolution isn't searching for anything, which is a huge hole in the notion of ID.

normdoering · 17 January 2007

John wrote:

Now you're moving the goalposts, Norm. The issue is whether they know that Febble made fools of them. They clearly do, with a few exceptions.

When did I set up any goalposts? How can I move them if they were never there? You're repeating clichés that have no meaning in this context. All I started with was my subjective opinion: I didn't feel like giving her rapturous applause, and I stated something true, namely that these views of hers have sources she didn't talk about. She admitted that and explained her position, and I accept her explanation.

The question's still on the table: how did YOU do at UD?

Do what? I got banned like anyone who argues for evolution on UD, same as Elizabeth Liddle, but I didn't have to write as much.

Elizabeth Liddle · 17 January 2007

Thanks to PvM for the re-formatting! I really appreciate this.

That reminds me that I didn't get Liddle's comment about evolution learning from mistakes either. DaveScot is no doubt twisting her meaning; she doesn't mean mutations, but what did she mean? Can you explain? Is Liddle still here? Can she explain?

— norm
Well, what I meant is that "evolution" (or whatever we want to call the system that comprises replication-with-modifications + natural selection) repeats - i.e. replicates - its successes - genotypes that lead to fecund phenotypes - more readily than its failures.

To pursue the neural analogy: the "phenotype" might be broadly analogous to short-term memory - a neural representation that is maintained by continuous neural firing. The phenotype "remembers" its genotype - it replicates it through meiosis, continuously, and expresses it as an individual whose fitness is tested against the environment. But that "memory" lasts only for the lifetime of the individual. However, the replication of the genotype through reproduction might be broadly analogous to the dendritic growth that "hard wires" memories into long-term form.

Hebb famously (and possibly apocryphally) said "what fires together, wires together". What fires in short-term memory, and is reinforced, will tend to be wired together in long-term memory, resulting in what we call "learning". Similarly, a genotype that is expressed as a successful phenotype will be "wired" into the genotypes of the offspring of that phenotype. Thus the population will tend to "learn" "representations" of solutions that work (genotypes that code for fit phenotypes), and tend to "forget" those that don't.

I would not have said it that way myself. I think neural nets are far superior at learning from mistakes, and I'm not sure you can even say that natural selection learns from mistakes --- at least not the mistakes DaveScot is talking about. DaveScot seems to mean things like Down's syndrome or color blindness or any of thousands of genetic mistakes that happen over and over again.

— norm
Sure, errors will recur. But if they are less likely to be hard-wired into the gene-pool (and people with Down's syndrome are both less fertile, and more likely to have children with reduced fertility) they will recur less frequently than successes. And indeed the "mistakes" that DaveScot mentioned are far less frequent than healthy births.

In the same way, human learning is less-than-perfect. We are prone to certain kinds of errors, even when we have learned that they are errors. In fact, my own current investigation is into the monitoring of errors of commission - into the ways in which the brain detects errors, actually, spookily, slightly before they are committed. And for some tasks, optimum performance is not achievable without errors. Conditions like Down's syndrome may be a necessary byproduct of a replication system that confers net benefit to the gene pool. In other words, even if an error (say a trisomy) itself is not actually replicated, it may be prone to recur ab initio as a result of a process that mostly works rather well but is too complex to be bug-free.

In the same way, my participants learn not to respond to a stimulus until they have properly evaluated it. However, they cannot completely eradicate responses that are too rapid to be successfully inhibited, because (being rat-bags) we also impose penalty points if they are too slow. In other words, they are not capable of producing error-free performance. What "evolves" instead is a "population" of responses with a peak frequency in the optimum response time-window, but with the tails of its distribution in the two failure zones (too fast for adequate evaluation; too slow to avoid a time penalty). I have not myself modelled neural networks as such, and would be interested in your views.
But it strikes me that even my simple model is, essentially, a neural network, in that the probability of a particular response is altered by feedback, just as weights that determine the probability of a particular pathway are adjusted in a neural network when exposed to training data, and as the probability of replication of a genotype is a function of fitness of the phenotype to replicate.

Elizabeth Liddle · 17 January 2007

Correction to the last sentence of the above:

I should have said "the fitness of the phenotype to breed". The phenotype is not, of course what is actually replicated.

normdoering · 17 January 2007

Elizabeth Liddle wrote:

- into the ways in which the brain detects errors, actually, spookily, slightly before they are committed.

That sounds utterly fascinating. I haven't got time to absorb your post fully and comment but based on where you're going with your research you might want to think about blogging on ScienceBlogs here: http://scienceblogs.com/ I'll pick up the rest after I've had some sleep. I think you went off track somewhere but I'm too tired to put my finger on it.

PvM · 17 January 2007

It seems to me that Febble (Liddle) is talking about the concept of evolvability but from a different perspective.

Elizabeth Liddle · 17 January 2007

It seems to me that Febble (Liddle) is talking about the concept of evolvability but from a different perspective.

— PvM
Well, I didn't think I was, but I was going to. One interesting thing I found with my learning model is that if I don't introduce a bit of stochastic variation into my replications, the thing learns very efficiently, but then can't adapt to new conditions. If I want it to emulate human "set-shifting" - learning a new set of responses when contingencies change, it helps if there is a bit of random noise in the system. And as "set-shifting" is an aspect of human intelligence that is actually tapped by IQ tests, that suggests to me that anything that can set-shift is quite smart. And evolution can set-shift. Probably as a function of "evolvability". Thanks! Lizzie
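Liddle's observation about stochastic variation and set-shifting is easy to reproduce in a toy model. The sketch below is illustrative only; the function names and parameters are invented for this post, not taken from her actual model. A population of numbers evolves toward one target and then the target moves: with a little mutation noise the population tracks the shift, while with none it has collapsed onto a single value and cannot follow.

```python
import random

def evolve(pop, target, noise, generations=200):
    """Evolve a population of numbers toward `target`: each generation,
    the half of the population closest to the target survives, and each
    survivor leaves one copy perturbed by Gaussian `noise`."""
    for _ in range(generations):
        pop = sorted(pop, key=lambda x: abs(x - target))
        survivors = pop[: len(pop) // 2]
        pop = survivors + [x + random.gauss(0, noise) for x in survivors]
    return pop

random.seed(1)
start = [random.uniform(0.0, 10.0) for _ in range(40)]

# With a little mutation noise the population first converges on 5.0,
# then "set-shifts" and tracks the new target 9.0.
noisy = evolve(list(start), target=5.0, noise=0.1)
noisy = evolve(noisy, target=9.0, noise=0.1)
noisy_mean = sum(noisy) / len(noisy)

# With zero noise the population collapses onto one value near the first
# target and has no variation left with which to follow the shift.
frozen = evolve(list(start), target=5.0, noise=0.0)
frozen = evolve(frozen, target=9.0, noise=0.0)
frozen_mean = sum(frozen) / len(frozen)

print(noisy_mean, frozen_mean)
```

The zero-noise run learns the first task perfectly and then cannot "set-shift", which is exactly the trade-off described above.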

normdoering · 17 January 2007

Elizabeth Liddle wrote:

To pursue the neural analogy: the "phenotype" might be broadly analogous to short-term memory - a neural representation that is maintained by continuous neural firing. The phenotype "remembers" its genotype - it replicates it through meiosis, continuously, and expresses it as an individual whose fitness is tested against the environment. But that "memory" lasts only for the lifetime of the individual.

The phenotype broadly analogous to short-term memory? Okay, I sense you're going somewhere interesting with that metaphor, but it looks like a somewhat complicated and sloppy metaphor on the surface. I'm not getting what you're after there. I can see the phenotype stage as the active learning stage, the point where the genotype/genome meets the environment and selections get made, where the critters live or die. Unless you're a solipsist, I think the phenotype would have to constitute more than just short-term memory; it would constitute the sensory organs and pathways too. The whole training scheme of a neural net. Saying it's like "short-term memory" seems too limiting.

The phenotype "remembers" its genotype - it replicates it through meiosis, continuously, and expresses it as an individual whose fitness is tested against the environment. But that "memory" lasts only for the lifetime of the individual.

Okay, the phenotype: one generation of actual living creatures out there breeding, living and dying. But their genetic memory, their DNA, doesn't last only their lifetime; the successes pass on their DNA/genome/memory, the failures do not -- the failures are forgotten. The "memory" of the failures lasts only for the lifetime of the individual. That's one meaning of "learning from mistakes" that these types of evolutionary programs don't do. But it is an anthropomorphic metaphor with other potential meanings. And you seem to be saying there's another kind of "learning from mistakes" in this system. I think I agree. There is. However, when you start comparing this type of evolutionary algorithm with a neural net I think you're going off target:

What fires in short-term memory, and is reinforced, will tend to be wired together in long-term memory, resulting in what we call "learning". Similarly, a genotype that is expressed as a successful phenotype will be "wired" into the genotypes of the offspring of that phenotype. Thus the population will tend to "learn" "representations" of solutions that work (genotypes that code for fit phenotypes), and tend to "forget" those that don't.

Yes, but it's different for a neural net (at least some neural nets, there are different kinds) compared to a genetic algorithm. Things are recorded differently, the memory of a mistake gets into a neural net in a way it doesn't, as far as I can tell, get into a genetic algorithm. This seems to lead you to this mistake:

...it strikes me that even my simple model is, essentially, a neural network, ...

I'm not so sure it's kosher to call genetic algorithms or evolutionary programming "essentially, a neural network." There are definite similarities: neural nets are trained by "reward and punishment" while genetic algorithms are shaped by "life and death." They both belong to some kind of general class of algorithms (parallel distributed processing networks?). What distinguishes them, as far as I can tell, is how they remember. Punishments get worked into the memory of a neural net in ways different from the forgetting of dead phenotypes in evolutionary algorithms. But I really don't know, and I'm not sure anyone does. Proving or disproving that genetic algorithms are the same as neural nets might win you a Nobel Prize for all I know.

Mike Elzinga · 17 January 2007

Elizabeth (comment # 155721) points out a very interesting phenomenon: injecting random noise into the system under study enables the system to adapt to changing conditions more easily.

This is a beautiful analog to what has been known in physics for a long time, and has often been referred to as "stochastic resonance". The sensitivity of detection systems (e.g., touch, hearing, motion detectors, accelerometers, etc.) can be greatly enhanced by injecting small amounts of noise into the system. In mechanical systems, as an obvious example, small amounts of friction can be overcome by small amounts of random jiggling. But the idea extends to many other systems as well, even atomic and molecular systems.

Any time a system has to explore a range of configurations and adjust to them, there are underlying forces of various kinds (depending on the kind of system) that impede changes of state in the adapting system. These can be hysteresis-like forces and effects, or potential wells of various sorts (e.g., van der Waals potentials, surface potentials, binding energies, etc.). Ultimately all of these come down to the fundamental forces of physics. The introduction of a small randomly varying force, on top of the external forces that push the system into other nearby states, facilitates the jump to those states by preventing the system from getting "stuck" in its current state or in a metastable state nearby.

GuyeFaux · 17 January 2007

The introduction of a small randomly varying force on top of the external forces that push the system into other nearby states facilitates the jump to the nearby states more easily by preventing the system from getting "stuck" in its current state or in a metastable state nearby.

This is kind of the idea behind annealing as well. What's good for physicists gets copied by computer scientists: this is why it's a good idea to introduce some sort of noise to any hill-climbing algorithm (such as gradient descent). Hence the popularity of simulated annealing and genetic algorithms to help in optimization problems.
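The contrast GuyeFaux describes can be shown in a few lines. This is a hedged sketch in which the double-well cost function, step sizes, and cooling schedule are my own illustrative choices: pure greedy descent started in the shallow well stays stuck there, while simulated annealing's occasional uphill moves carry the search over the barrier into the deeper well.

```python
import math
import random

def g(x):
    """Double-well cost: a shallow local minimum near x = +0.96 and a
    deeper global minimum near x = -1.03, separated by a barrier."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def hill_climb(x, steps=5000, step=0.2):
    """Pure greedy descent: accept a move only if it lowers the cost,
    so the search can never leave the basin it starts in."""
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        if g(cand) < g(x):
            x = cand
    return x

def anneal(x, steps=5000, step=0.2, temp=2.0, cooling=0.999):
    """Simulated annealing: uphill moves are accepted with probability
    exp(-delta/T), letting the search hop over barriers while the
    temperature is high. Returns the best accepted point seen."""
    best = x
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        delta = g(cand) - g(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand
            if g(x) < g(best):
                best = x
        temp *= cooling
    return best

random.seed(0)
greedy_x = hill_climb(1.0)   # stays stuck in the shallow well
anneal_x = anneal(1.0)       # reaches the deeper well on the other side
print(greedy_x, anneal_x)
```

The geometric cooling plays the role of slowly turning the "rms noise" down, which is the same knob Mike Elzinga describes in physical terms below.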

tgibbs · 17 January 2007

normdoering wrote:
Okay, the phenotype, one generation of actual living creatures out there breeding, living and dying. But their genetic memory, their DNA, doesn't last only their life time, the successes pass on their DNA/genome/memory, the failures do not --- the failures are forgotten.
Not necessarily. Failures---mutations that do not improve, or that even harm, reproductive success---can persist in the population for many generations, particularly if the trait is recessive or if there are compensating traits in the population. The frequency of that allele in the population can be thought of as constituting the "memory" of the success/failure of that strategy.
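tgibbs's point about recessive alleles persisting can be made quantitative with the textbook single-locus selection recursion (a standard population-genetics model, not something from the thread): even an allele that is lethal when homozygous is purged only slowly, because most of its copies hide from selection in heterozygotes.

```python
def next_q(q, s):
    """One generation of selection against a fully recessive allele under
    random mating; genotype fitnesses are AA = 1, Aa = 1, aa = 1 - s."""
    mean_fitness = 1.0 - s * q * q
    return q * (1.0 - s * q) / mean_fitness

q = 0.1
for _ in range(100):
    q = next_q(q, s=1.0)  # s = 1: the aa homozygote never reproduces

# Even a lethal recessive is purged only slowly: for s = 1 the recursion
# reduces to q' = q/(1+q), so q_n = q0/(1 + n*q0), and after 100
# generations q is still about 0.009 -- far from forgotten.
print(q)
```

The slowly decaying frequency is exactly the "memory trace" of failure that tgibbs describes.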

David B. Benson · 17 January 2007

Henry J --- You forgot cats!

Definitely intelligent and soulful are cats.

Cats.

Henry J · 17 January 2007

Does noise in the system help avoid getting stuck in local optima instead of continuing to climb?

Henry

Mike Elzinga · 17 January 2007

Henry asks (comment # 155782) if random noise in a system helps avoid getting stuck in a local optimum instead of continuing to climb.

The basic idea is to make the rms value of the noise as large as or slightly larger than any local optimum you wish NOT to explore. So you need to have some idea of what this size is ahead of time or you have to find it empirically.

You can also adjust the rms value of the noise to explore the shape of local optima. The smaller the rms value, the deeper (higher) into the optima you go. Of course, the main "force" on top of which this noise is placed, is varied to explore the region of interest.

Anton Mates · 17 January 2007

Okay, the phenotype, one generation of actual living creatures out there breeding, living and dying. But their genetic memory, their DNA, doesn't last only their life time, the successes pass on their DNA/genome/memory, the failures do not --- the failures are forgotten.

— tgibbs
Not necessarily. Failures---mutations that do not improve, or that even harm, reproductive success---can persist in the population for many generations, particularly if the trait is recessive or if there are compensating traits in the population. The frequency of that allele in the population can be thought of as constituting the "memory" of the success/failure of that strategy.

I think that's the opposite of "remembering failures" in the sense norm and Elizabeth are using it. They're talking about systems which try out a solution, find it to be a poor one, and then never go back to it again. It would be as if an organism was born with a harmful mutation, had lousy reproductive success, and thereafter that mutation never popped up again in that species. Which, actually, you could make a case for with extended fitness. A harmful mutation not only damages its owner's fitness but also their parents' and relatives', so any trait which makes that mutation more likely to occur in your offspring will be selected against as well. An extinct and harmful trait can leave a "memory" behind in the depressed frequencies of any alleles which made its appearance more likely. I agree with Pim that evolvability would largely be a product of such memory.

Henry J · 17 January 2007

Mike Elzinga,
Re "The basic idea is to make the rms value of the noise as large as or slightly larger than any local optimum you wish NOT to explore."

Ah so. So for an evolving species, too low a "noise" level would limit its adaptability.

Henry

Mike Elzinga · 17 January 2007

From Henry: "Ah so. So for an evolving species, too low a "noise" level would limit its adaptability."

If I understand what you are asking, it's a bit more subtle than that. A dimple at the top of a mountain would no longer be a stable point if the "noise level" is too high. It has a lot to do with the complexity of the terrain of hills and wells being explored. It depends on whether or not getting "stuck" in a particular state is detrimental at a given time and what else is happening on the terrain. Getting "stuck" is not necessarily "bad" if it's "good enough".

Popper's ghost · 18 January 2007

NS encourages behaviors that work today in the hope that they will work tomorrow and discourages others.

This is very wrong; NS does not have hopes, and that can't be construed as a metaphor for something NS does have. Nor does NS "encourage" or "discourage" anything. NS simply selects those organisms that are fit now. However, the organisms from which it selects reflect all the selections for all the past "nows", so the effect is a cumulative transfer of information about the environment into the organism (the additional information is provided via random mutations that are selected among -- similar to the way repeatedly dropping a needle on a field of parallel lines and tracking how often the needle crosses a line produces a more and more accurate approximation of pi). As long as the environment remains stable or only changes gradually, the organisms necessarily become increasingly fit for that environment over time -- there's no "hope" involved.
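The needle-dropping analogy is Buffon's needle: a needle no longer than the line spacing crosses a line with probability 2l/(pi*d), so the observed crossing rate yields an ever-better estimate of pi. A quick Monte Carlo sketch (the function and parameter names are mine):

```python
import math
import random

def buffon_pi(trials, needle=1.0, spacing=1.0):
    """Estimate pi by dropping a needle (length <= line spacing) onto a
    floor ruled with parallel lines. A needle whose centre lies a
    distance y from the nearest line, at angle theta, crosses it when
    y <= (needle/2)*sin(theta); that happens with probability
    2*needle/(pi*spacing), so pi falls out of the crossing rate."""
    hits = 0
    for _ in range(trials):
        y = random.uniform(0.0, spacing / 2.0)
        theta = random.uniform(0.0, math.pi / 2.0)
        if y <= (needle / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle * trials / (spacing * hits)

random.seed(42)
estimate = buffon_pi(200_000)
print(estimate)  # approaches math.pi as the number of drops grows
```

As with natural selection, no single drop "knows" anything about pi; the information accumulates across many random trials.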

Popper's ghost · 18 January 2007

Failures---mutations that do not improve or even harm reproductive success

Elizabeth defined successes as "genotypes that lead to fecund phenotypes". Presumably then, failures are genotypes that do not lead to fecund phenotypes -- not any sort of mutation at all.

Popper's ghost · 18 January 2007

for some tasks, optimum performance is not achievable without errors. Conditions like Down's syndrome may be a necessary byproduct of a replication system that confers net benefit to the gene pool. In other words, even if an error (say a trisomy) itself is not actually replicated, it may be prone to recur ab initio as a result of a process that mostly works rather well but is too complex to be bug-free.

— Elizabeth Liddle
The beginning and ending of that paragraph say different things, and I think the beginning is more accurate; you seem to be treating "error" and "bug" as synonyms, when they aren't. Consider the Traveling Salesman problem, whose decision version is NP-complete. Yet there are fast algorithms that produce near-optimal solutions. The problem is not too complex for bug-free exact algorithms to exist; it's that those algorithms are too inefficient. So we use efficient (bug-free) approximation algorithms that are known to produce "errors" -- longer-than-optimal routes. Such algorithms are known as heuristics, and evolution is a marvelous generator of heuristics -- algorithms that favor efficiency over perfection -- since perfection has no meaning in the absence of teleology (we may think that Down syndrome is an error, but evolution doesn't "care"), and efficiency translates into fecundity and fitness. A system that avoided all trisomies and other "bugs" or "errors" (in our view) but was far more complex than the current cell machinery is not the sort of thing we should expect evolution to produce, regardless of whether it could.
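The TSP point can be made concrete. A hedged sketch on a tiny instance of my own invention: the nearest-neighbour heuristic is a bug-free algorithm that knowingly tolerates "errors", returning a valid tour that is typically somewhat longer than the brute-force optimum.

```python
import itertools
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting `points` in `order`."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points):
    """Greedy TSP heuristic: from each city, move to the nearest
    unvisited city. Bug-free and fast, but deliberately "errorful":
    the tour is valid yet usually somewhat longer than optimal."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(8)]
greedy_len = tour_length(pts, nearest_neighbour(pts))
# Brute force over all tours starting at city 0 (feasible only because
# the instance is tiny: 7! = 5040 permutations).
optimal_len = min(tour_length(pts, (0,) + p)
                  for p in itertools.permutations(range(1, len(pts))))
print(greedy_len, optimal_len)  # heuristic tour is never shorter than optimal
```

The heuristic's "errors" (extra tour length) are the price of efficiency, which is the distinction between an error and a bug being drawn above.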

Popper's ghost · 18 January 2007

P.S.

When I say that evolution doesn't "care" about Down syndrome, I understand that people with Down syndrome are less fecund, but that doesn't matter, because Down syndrome offspring aren't the only offspring. From the POV of evolution, only fecund offspring are relevant; the "mistakes" are only relevant if they expend so many resources of the parents that they fail to produce other offspring.

Popper's ghost · 18 January 2007

Well, I beg leave to differ with you and Lizzie on this point.

Ah, ok, so it's not that you didn't read what you commented on, it's that you're an intellectually dishonest moron. Here's a clue for you: decision tables

Popper's ghost · 18 January 2007

This is where I think she takes a seriously wrong turn. Genetic algorithms are more central to her point than she seems to realize. They are models of evolution. She wants to reduce this to a simple "if-then" statement. I think that's going too far and it obscures her point rather than clarifies it. While an if-then statement is exactly the minimum of the Dembski definition she uses, and it could be a dumb agent of a larger intelligence in Marvin Minsky's scheme, as in "The Society of Mind," it's too simple a scheme to support her argument about evolution being "intelligent."

It's hard to get this more backwards. It's a corollary to Dembski's argument that evolution is "intelligent", because it is Dembski who defines "intelligent" as having the capacity to make a choice. You completely fail to understand the argument she was making, and obscure and muddy it with a quite different agenda -- to show that evolution is intelligent in a broader sense. Your references to "her point" and "her argument" aren't about her point or her argument at all, they are about yours.

Elizabeth Liddle · 18 January 2007

The beginning and ending of that paragraph say different things, and I think the beginning is more accurate; you seem to be treating "error" and "bug" as synonyms, when they aren't.

— PvM
Yes, there was a bit of unjustified elision there... there are two kinds of "error". One is a replication error that is expressed as a phenotype with reduced fecundity. This error might be further replicated, but at a reduced rate compared to genotypes without the error, or it might not be replicated at all (the phenotype might bear no viable offspring). Then there is what I was calling a "bug" - a replication system that is prone to particular replication errors, such as those that produce Down's syndrome. And just as a delicate and sophisticated piece of kit may be less reliable than a simpler, cruder item, errors such as those that result in Down's syndrome may be the price we pay for a system that allows us to be a fecund population. Does that make more sense?

Popper's ghost · 18 January 2007

PvM wrote:

Not PvM.

And just as a delicate and sophisticated piece of kit may be less reliable than a simpler, cruder item,

You seem to have quite missed my point (I suggest that you go back and read my comment a few times for its own content, rather than merely as a call to modify your own). It may take a "delicate and sophisticated piece of kit", as opposed to a "simpler, cruder item", to be 100% reliable, but 100% reliability isn't of value. Unreliable systems are preferable if they are more efficient, as long as they are reliable enough -- in the case of evolution they merely need to be fecund; so-called "errors" simply don't matter much.

Does that make more sense?

I wasn't complaining that you weren't making sense, but rather that you were making errors -- such as suggesting that the replication mechanisms may be "buggy" because they are too complex not to be. What you consider a "bug" simply isn't one from the POV of evolution; trisomies and other such "errors" are, for the most part, irrelevant, as long as they are relatively rare, so evolution simply won't bother to produce systems that avoid them if those systems are more resource-consuming, or harder to reach in the "search space".

Elizabeth Liddle · 18 January 2007

Not PvM.

— Popper's Ghost
Sorry. My STM is terrible.

You seem to have quite missed my point (I suggest that you go back and read my comment a few times for its own content, rather than merely as a call to modify your own).

More apologies.

It may take a "delicate and sophisticated piece of kit", as opposed to a "simpler, cruder item", to be 100% reliable, but 100% reliability isn't of value. Unreliable systems are preferable if they are more efficient, as long as they are reliable enough --- in the case of evolution they merely need to be fecund; so-called "errors" simply don't matter much.

Well, I agree with that. It was the point I thought I was trying to make. In which case I still seemed to have missed yours.

I wasn't complaining that you weren't making sense, but rather that you were making errors --- such as suggesting that the replication mechanisms may be "buggy" because they are too complex not to be. What you consider a "bug" simply isn't one from the POV of evolution; trisomies and other such "errors" are, for the most part, irrelevant, as long as they are relatively rare, so evolution simply won't bother to produce systems that avoid them if those systems are more resource-consuming, or harder to reach in the "search space".

I agree. And one of my points upthread was that a "buggier" version of my learning algorithm (one that imperfectly replicates successes) was better than the non-buggy one, even though it made more errors, because it was able to "set-shift" - adapt to changing contingencies. Although I also agree that efficiency is not the same as perfection. I agree with the rest of your post. I think. Unless I am still missing your point.

Popper's ghost · 18 January 2007

Well, I agree with that. It was the point I thought I was trying to make. In which case I still seemed to have missed yours.

Well, you wrote that "just as a delicate and sophisticated piece of kit may be less reliable than a simpler, cruder item", whereas I am making the point that sophistication may be required to make a mechanism more reliable than what can be obtained from something simple and crude, so if we're agreeing, it's an odd sort of agreement. :-)

Popper's ghost · 18 January 2007

I should have added that the other point I was making was that reliability isn't necessarily a good thing -- on that we certainly agree (now), with your mention of the value of introducing noise. So your previous post, in which you suggested that the "bug" (but you didn't use scare quotes then) was a consequence of complexity, seems to disagree with both your earlier and later posts. Such inconsistencies aren't unusual -- we make a mistake in thinking that people "have beliefs"; what people have are behavioral dispositions, which vary dynamically; "beliefs" are folk-psychological fictions that we infer from behavior. Folk psychology is a powerful heuristic, a rough approximate model of the working of human minds that serves remarkably well in practice, considering that it is entirely based on external observation and is completely ignorant of actual brain mechanisms.

Anyway, the bottom line is that all these anthropocentric normative terms, like "bug", "error", "reliable", and "perfect", can be very misleading when applied to evolution, which has a quite different, all-consuming "propagate genes forward" norm.

Elizabeth Liddle · 18 January 2007

Well, you wrote that "just as a delicate and sophisticated piece of kit may be less reliable than a simpler, cruder item", whereas I am making the point that sophistication may be required to make a mechanism more reliable than what can be obtained from something simple and crude, so if we're agreeing, it's an odd sort of agreement. :-)

— Popper's ghost
OK. "less reliable" meant "make more errors, but still be more efficient overall". So I do think we agree. Cheers Lizzie

Elizabeth Liddle · 18 January 2007

Sorry, Popper's ghost, I'm struggling with the formatting here!

Well, by "less reliable" I meant more prone to error/breakdown - but that doesn't stop it being more efficient overall. That's the sense in which I agree with you.

Popper's ghost · 18 January 2007

Finally, having mentioned "anthropocentric", I want to go back to:

errors such as those that result in Down's syndrome may be the price we pay for a system that allows us to be a fecund population.

The price who pays? As long as we are a fecund population, no price is being paid. By analogy, if, in order to be rich, I have to work, or sell my soul, then I am paying a price for it -- there's a tradeoff. But if I have a huge inheritance that has the condition that I must give away 1% of it, I'm not paying to be rich, I simply am rich, though less than I would have been in some alternate universe where there was no such condition. If we are less fecund than we would have been in some alternate universe where all the Down syndrome offspring were magically born normal, we are still fecund; there's no tradeoff as far as our genes are concerned, much as we as persons might dislike the fact that some offspring are born with Down syndrome. The system we have is the most fecund that evolution was able to cobble together with the existing materials, and that it includes the production of trisomies is no more relevant than any of its other characteristics -- no more relevant from the POV of evolution, while certainly relevant to the parents of children with Down syndrome.

Popper's ghost · 18 January 2007

Well, by "less reliable" I meant more prone to error/breakdown - but that doesn't stop it being more efficient overall.

Being prone to error and being prone to breakdown are two different things. "Simple, crude" systems may be more prone to error while "complex and fragile" systems may be more prone to breakdown. But you seem to have assumed that "complex" systems are necessarily "fragile"; yet complex systems may include redundancy and error correction, making them less fragile than "simple, crude" systems. So much as you may think you were agreeing with me, I don't think you were getting all of the points I was trying to make. However, it is gratifying that, each time I have written something, you have found yourself agreeing with it. :-)

Popper's ghost · 18 January 2007

Oops, I wrote "fragile" where you wrote "delicate". There doesn't seem to me to be a relevant difference in meaning, but sorry if there is.

Elizabeth Liddle · 18 January 2007

Being prone to error and being prone to breakdown are two different things. "Simple, crude" systems may be more prone to error while "complex and fragile" systems may be more prone to breakdown. But you seem to have assumed that "complex" systems are necessarily "fragile"; yet complex systems may include redundancy and error correction, making them less fragile than "simple, crude" systems.

No, I didn't assume it. I just gave it as an example. My fancy toaster breaks down more often than my grill, but it's still more efficient overall for making toast. But it wasn't a very apt example, because, as you point out, some complex systems confer robustness (which was what I meant when I gave the example of introducing stochastic noise) at the price of error. So I do take your point, and appreciate the further parsing you gave mine.

So much as you may think you were agreeing with me, I don't think you were getting all of the points I was trying to make. However, it is gratifying that, each time I have written something, you have found yourself agreeing with it. :-)

Gratifying? I'd have found it maddening, myself. But I agree that you have teased a nice double dissociation out of a rather over-general point of mine.

Popper's ghost · 18 January 2007

I just gave it as an example.

Hmm; you wrote "may be" ... ah, so you did. Sorry.

Gratifying? I'd have found it maddening, myself.

Not to me, as it was all forward progress.

But I agree that you have teased a nice double dissociation out of a rather over-general point of mine.

Now that I find very gratifying. It's been a pleasure!

tgibbs · 18 January 2007

They're talking about systems which try out a solution, find it to be a poor one, and then never go back to it again. It would be as if an organism were born with a harmful mutation, had lousy reproductive success, and thereafter that mutation never popped up again in that species.

But this kind of absolute binary learning is not really very common in biological systems. There are a few examples, such as the way an animal may acquire a permanent distaste for a type of food if it gets sick immediately after eating it. But for most learning, it makes more sense to think in terms of probabilities. If a behavioral strategy is unsuccessful, it becomes less likely to be tried in the future, but that doesn't mean that it will never recur. So the frequency of the trait in the population can be seen as the memory trace of the level of success of this strategy. A low frequency of an allele constitutes a "memory of failure".
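The "memory of failure" picture above can be made concrete with a small simulation (an illustrative sketch of my own, not from the thread; the single-locus haploid selection model and all parameter values are invented for the example). The allele's frequency decays under selection but never snaps to zero, so the "failed" strategy can still recur:

```python
def simulate_allele(freq, fitness_cost, generations):
    """Frequency of a deleterious allele under selection (haploid model).

    The declining-but-nonzero frequency is the "memory of failure":
    the strategy becomes less likely to be tried, but can still recur.
    """
    history = [freq]
    for _ in range(generations):
        # Standard single-locus selection: p' = p(1 - s) / (1 - p*s)
        freq = freq * (1 - fitness_cost) / (1 - freq * fitness_cost)
        history.append(freq)
    return history

hist = simulate_allele(freq=0.10, fitness_cost=0.2, generations=50)
# The frequency shrinks every generation but never reaches exactly zero.
```
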

Raging Bee · 18 January 2007

Dr. Martin: Thanks for the apology and clarification. Just one question: would you care to tell us what Creation Science has contributed to the field of neuroscience that evolution cannot?

normdoering · 18 January 2007

Popper's ghost wrote:

This is where I think she takes a seriously wrong turn. Genetic algorithms are more central to her point than she seems to realize. They are models of evolution. She wants to reduce this to a simple "if...then" statement. I think that's going too far and it obscures her point rather than clarifies it. While an "if...then" statement is exactly the minimum of the Dembski definition she uses, and it could be a dumb agent of a larger intelligence in Marvin Minsky's scheme, as in "The Society of Mind," it's too simple a scheme to support her argument about evolution being "intelligent."

It's hard to get this more backwards. It's a corollary to Dembski's argument that evolution is "intelligent", because it is Dembski who defines "intelligent" as having the capacity to make a choice. You completely fail to understand the argument she was making, and obscure and muddy it with a quite different agenda -- to show that evolution is intelligent in a broader sense. Your references to "her point" and "her argument" aren't about her point or her argument at all, they are about yours.

Maybe I missed the point of her argument, but my point, which you cut off in your quote, was:

The UD guys already see the "if...then" of natural selection as a filter that gets rid of what wasn't "designed." They're arguing that evolution cannot build complexity and specificities. To go there she needed to talk about things like "search space" or Dennett's ideas about skyhooks and cranes. She never got there. She never made what are, to me, the most important points.

My point deals with the objections that DaveScot was bringing up towards the end, perhaps not in line with Dembski's definition, and which I thought she failed to address. DaveScot thinks the only kind of "if...then" choosing is one of maintaining what the designer designed. There is no need to argue for how evolution can do more than maintain a design when you can simply point to people like Danny Hillis and the Connection Machine, which he used to evolve programs that sorted long strings of numbers into numerical order, and say, "There, see: his evolutionary program created a sorting program close to the best any human has ever done. Evolution made it, it didn't just maintain it." Or point to hundreds of other examples of things researchers evolve with genetic algorithms. They will of course object and say, "you put that intelligence in there by designing the environment," but that is in the end the theistic evolution she was arguing for. Just say, "Yeah, so what? Couldn't God have done it that way too?"

And what's with you turning into Raging Bee and attacking me with this bogus stuff? And now Liddle too. Should I suspect jealousy or Asperger's Syndrome?
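The genetic-algorithm point above can be sketched in a few lines (a toy OneMax example of my own, not Hillis's sorting-network work; the population size, mutation rate, and seed are all arbitrary). Selection here is nothing but Dembski-style "choice between options", yet the process builds an all-ones bit string it was never handed:

```python
import random

def onemax_ga(length=32, pop_size=40, generations=1000, seed=1):
    """Minimal steady-state genetic algorithm on the OneMax problem.

    Fitness = number of 1-bits. Each step: pick two random parents,
    keep the fitter (selection = "choice between options"), mutate
    the copy, and replace the current worst individual with it.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        a, b = rng.sample(pop, 2)
        winner = a if sum(a) >= sum(b) else b                          # selection
        child = [bit ^ (rng.random() < 1 / length) for bit in winner]  # mutation
        pop.remove(min(pop, key=sum))                                  # forget a failure
        pop.append(child)
    return max(sum(ind) for ind in pop)

best = onemax_ga()
# best climbs well above the ~16 ones expected of a random 32-bit string
```
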

Elizabeth Liddle · 18 January 2007

And what's with you turning into Raging Bee and attacking me with this bogus stuff? And now Liddle too. Should I suspect jealousy or Asperger's Syndrome?

— normdoering
Did you think I attacked you? Could you point out where? But I certainly didn't mean to make the point that a simple if...then algorithm could account for evolution. I said it could account for CSI, and would also comply with Dembski's definition of intelligence. I certainly consider replication with modification + natural selection to be very much more complex than that. And more akin to what we would truly call "intelligence". Which is why I think that "ID" covers the ToE, which was the point of my original post on UD.

Cheers, Lizzie

normdoering · 18 January 2007

Elizabeth Liddle asked:

Did you think I attacked you?

No, that was supposed to mean Popper's ghost attacking you. Instead of saying "And now Liddle too," I should have said "And now you're attacking Liddle too." My bad, I wasn't clear.

Steviepinhead · 18 January 2007

I hope nobody here is stupid enough to bet against me when I claim that the new, improved "Dr. Michael Martin" will turn out to be just as much of a lying cretin as the old one.

And, in all likelihood, he'll also turn out to be exactly the same as the old one, even if he's found a new computer to post from...

Anton Mates · 18 January 2007

But this kind of absolute binary learning is not really very common in biological systems. There are a few examples, such as the way an animal may acquire a permanent distaste for a type of food if it gets sick immediately after eating it. But for most learning, it makes more sense to think in terms of probabilities. If a behavioral strategy is unsuccessful, it becomes less likely to be tried in the future, but that doesn't mean that it will never recur.

— tgibbs
You're quite right. I was simply using the extreme case as a hypothetical.

So the frequency of the trait in the population can be seen as the memory trace of the level of success of this strategy. A low frequency of an allele constitutes a "memory of failure".

But that doesn't match the description you give above. Such a "failed" allele has a higher likelihood of recurring than most other alleles, simply because it's already in the population, even if at a low frequency. Most possible alleles, whether they'd succeed or fail, are currently at frequency 0 and will never show up unless produced by a mutation. If the allele fails completely, it drops out of the population, but its chance of recurring (or a phenotypically similar allele occurring) is no lower than it was before the allele first appeared. Evolution has forgotten that it ever existed. That's why norm says only the successes are remembered--the failures are merely dumped in the "not currently in the population" pile where virtually all possible traits reside. If there were a memory mechanism in place, a trait which appeared once and then failed would be less likely to recur than if it had never appeared at all--just as a person is less likely to touch a flame after they burn themselves than before.

But again, I do think this is a possibility in evolution thanks to extended fitness. If a kid's born with Down Syndrome, that lowers his parents' total reproductive success somewhat, which in turn lowers the success of any trait they had which made having a kid with Down Syndrome more likely. That depression of the frequencies of associated traits is what can persist after a trait's appeared and failed, making its reappearance less likely. This may well be the case with Down Syndrome, in fact, as apparently some people are genetically predisposed to have children with the condition.

trrll · 18 January 2007

Such a "failed" allele has a higher likelihood of recurring than most other alleles, simply because it's already in the population, even if at a low frequency. Most possible alleles, whether they'd succeed or fail, are currently at frequency 0 and will never show up unless produced by a mutation.

The term "allele" implies presence in the population.

If the allele fails completely, it drops out of the population, but its chance of recurring (or a phenotypically similar allele occurring) is no lower than it was before the allele first appeared. Evolution has forgotten that it ever existed.

Yes, some things are forgotten by evolution. But I forget some things, too. And there is a large continuum of genetic "memory traces": alleles ranging in frequency from those that have almost always had a positive effect on fitness, to those that are occasionally positive (e.g. sickle cell anemia), to those that are neutral, to those that have been negative in every case "tried" so far, and that are likely destined to eventually be forgotten.

Raging Bee · 19 January 2007

norm wrote:

She never got there. She never made what are, to me, the most important points.

Instead of wasting a lot of time complaining that someone else didn't make your points for you, why not just make them yourself? There's no rule saying Lizzie has to make anyone else's points but her own, nor is there any rule saying no one else can post while she's here.

Popper's ghost · 19 January 2007

And what's with you turning into Raging Bee and attacking me with this bogus stuff? And now Liddle too. Should I suspect jealousy or Asperger's Syndrome?

That you think I attacked Liddle adds evidence to my hypothesis that you have Asperger's Syndrome.

RememberFebble · 19 January 2007

I tried to join the UD underground with the username "RememberFebble", but I never had my password emailed to me...wonder why? Then, I registered as "ScienceConspiracy" and they accepted me immediately, lol.

Of course, when I tried to post this...

"Get a load of this Doonesbury cartoon...

http://www.doonesbury.com/strip/dailydose/index.html?uc_full_date=20070114"

My post somehow never got up there. Boy, they've got strict controls on what gets said, huh? On this site, you don't even have to register!

Anton Mates · 19 January 2007

The term "allele" implies presence in the population.

— trrll
Not really. The literature contains statements like "The extinct allele reappears later because of mutation." If a particular hypothetical variant of a gene could arise by mutation/recombination, I don't see any problem in calling it an "allele" even if it has yet to show up in nature.

Yes, some things are forgotten by evolution. But I forget some things, too. And there is a large continuum of genetic "memory traces": alleles ranging in frequency from those that have almost always had a positive effect on fitness, to those that are occasionally positive (e.g. sickle cell anemia), to those that are neutral, to those that have been negative in every case "tried" so far, and that are likely destined to eventually be forgotten.

Which is exactly what norm was saying. Evolution remembers successes and forgets failures. (And sure, both success/failure and remembering/forgetting are continua here.) Whereas intelligent organisms, as well as neural nets AFAIK, remember what didn't work and avoid doing it again, as well as trying to repeat what did work.

normdoering · 19 January 2007

RememberFebble wrote:

...My post somehow never got up there. Boy, they've got strict controls on what gets said, huh?

You have no idea how much control! It was perhaps over a year ago, when Dembski was running the site himself, that he decided to run a contest asking for examples of technology evolving from simple machines to more complex ones. I entered the contest and noted here that my entry and comments weren't appearing. Then I went back and my comments were there, but someone else on Panda's Thumb said they couldn't see them when they went there. After a bit of diddling in their comments section, we began to realize that which comments you saw or didn't see depended on your ISP.

normdoering · 19 January 2007

Popper's ghost wrote:

... adds evidence to my hypothesis that you have Asperger's Syndrome.

That has to be the most brilliant diagnosis since Bill Frist diagnosed Terri Schiavo using only a videotape. And here's a snark emoticon to go with that: ;->

Katarina · 19 January 2007

I wouldn't be too offended, Norm; he can probably relate.

trrll · 20 January 2007

Which is exactly what norm was saying. Evolution remembers successes and forgets failures. (And sure, both success/failure and remembering/forgetting are continua here.) Whereas intelligent organisms, as well as neural nets AFAIK, remember what didn't work and avoid doing it again, as well as trying to repeat what did work.

But in fact, we don't. As I noted previously, binary single-trial learning is not normally seen in biological intelligences, aside from apparently special-cased exceptions such as developing an aversion to a food that makes you sick. So if a particular behavior does not yield a positive result, we may be less likely to repeat it, but that doesn't mean that it won't occur again. And there are good reasons for this, because circumstances change. A particular behavioral strategy has to be tested repeatedly, in multiple circumstances, before one can rationally conclude that it has no value. Similarly, a trait that has a negative effect on fitness will often persist in the population for long periods of time (particularly if it is recessive). So even apparently failed traits may be remembered for a long period of time (albeit with a low frequency representing a "memory" of failure), and occasionally tried in different circumstances.

In both cases, the memory can be quantified as the probability that the trait will recur in the future. If a behavior never yields a positive result, eventually it will be extinguished, meaning that there is a very low probability that it will occur again. The population genetics analog of this is of course the complete elimination of the allele from the population, at which point its likelihood of recurring is very low (corresponding to the mutation rate).

Popper's ghost · 20 January 2007

That has to be the most brilliant diagnosis since Bill Frist diagnosed Terri Schiavo using only a videotape.

Norm, I know that you understand the difference between a diagnosis, and a hypothesis against which one tests evidence. I have not concluded that you have Asperger's Syndrome.

he can probably relate.

I can -- which indicates that I don't have AS.

Popper's ghost · 20 January 2007

a trait that has a negative effect on fitness will often persist in the population for long periods of time (particularly if it is recessive). So even apparently failed traits may be remembered for a long period of time

What a bizarre equivocation. Persistence of a trait with negative effect would indicate a lack of memory that it has a negative effect.

tgibbs · 20 January 2007

What a bizarre equivocation. Persistence of a trait with negative effect would indicate a lack of memory that it has a negative effect.

The "memory" is not the trait itself, but rather the change in its population frequency over time. To carry the analogy a bit further, memory in the brain seems to be encoded by correlations in strength of the synaptic connections among specific subsets of neurons. Learning appears to be implemented by changes in the numbers of receptors at those synapses. If the behavior is reinforced (leads to a positive outcome) then receptor levels change to increase those correlations, while if the outcome is negative, receptor levels change to decrease those correlations, and that decrease can be said to constitute a memory of a negative outcome. In the extreme case that a behavior consistently has a negative outcome, those correlations might vanish entirely. It would never occur to you to behave that way. You could say that you have "learned not to do that." So a parallel can be drawn between allele frequencies in a population and the strength of synaptic correlations in the brain.
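The synaptic analogy above can be written down as a toy update rule (an invented illustration, not a model of real receptor dynamics; the linear update, learning rate, and clamping are all my own assumptions). Repeated negative outcomes drive the "connection strength" to the floor, the analog of an allele eliminated from the population:

```python
def reinforce(weight, outcome, lr=0.2):
    """Toy "synaptic strength" update: positive outcomes strengthen the
    connection, negative ones weaken it; strength is clamped to [0, 1].
    A low-but-nonzero weight plays the role of a low allele frequency."""
    return max(0.0, min(1.0, weight + lr * outcome))

w = 0.5
for _ in range(10):          # the behavior is punished ten times in a row
    w = reinforce(w, outcome=-1)
# w is driven to the floor (0.0): the correlation has "vanished entirely"
# and the behavior is fully extinguished.
```
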

Katarina · 20 January 2007

I can --- which indicates that I don't have AS.

Ah! I should have seen that coming! :)

Torbjörn Larsson · 20 January 2007

it could account for CSI

The terminology is a problem here. There has never been a demonstration of 'CSI'. The best descriptions of structures produced by evolution are phylogenetic trees. When we start to discuss all possible phenotypes or alleles, or describing what life, species, and genes are, there is no general description. Some of Dembski's definitions of SC have been shown to be contradictory. That is not to say that for example complexity measures exists, for example characterizing neural nets. But Stephen Weinberg has noted that no single measure can capture all possible regularities on theoretical grounds. (It is an analogy to Gödel incompleteness.)

"ID" covers the ToE

More precisely, ToE covers the intelligence definition of ID.

AS

What exactly is the diagnostic difference between AS and AR (Anal Retentive)? :-)

If the behavior is reinforced (leads to a positive outcome) then receptor levels change to increase those correlations, while if the outcome is negative, receptor levels change to decrease those correlations, and that decrease can be said to constitute a memory of a negative outcome.

This doesn't seem quite like the neuroscience I have seen. Synapses can be both positive (reinforcing) and negative (inhibitory). When stimulus decreases, receptor levels decrease. Learning and possibly memory depend on synaptic growth and pruning. And pruning may happen when a synapse isn't used. So behavior can be inhibited from a negative outcome by inhibitory synapses or growth of more of them. Inhibition seems to be a verified method of behavior regulation. We learn not to make mistakes again.

In the extreme case that a behavior consistently has a negative outcome, those correlations might vanish entirely. It would never occur to you to behave that way. You could say that you have "learned not to do that."

It seems memories can be forgotten in several ways. If a behavior isn't used (by coincidence or possibly from early inhibition), synaptic pruning may happen. Another way is that remembering the situation forces re-memory, i.e. the memory is reinforced. It seems that this can happen so that we lose the reference to the memory - it is still there but can't be accessed. This is one hypothesis for why most people lose their memories from early childhood - children seem to have good contingent memories lasting for years, but the references are sufficiently messed up to affect conscious recall. Children may live in a kind of conscious "now". But all that aside, it seems pruning as memory loss is a good analogy to the decrease of alleles, as you say.

Torbjörn Larsson · 20 January 2007

"That is not to say that for example complexity measures exists, for example characterizing neural nets." - That is not to say that for example complexity measures don't exist, for example characterizing neural nets.

Elizabeth Liddle · 20 January 2007

The terminology is a problem here. There has never been a demonstration of 'CSI'.

— torbjorn larsson
I took it to mean a pattern displaying the "triad" of features - Actualisation, Exclusion, Specification - which, I argued back in the other thread, was observable in Chesil Beach, a beach on the south coast of England in which pebbles are sorted from small to large along an 18-mile stretch. It's an actual beach; a huge number of patterns are excluded; the pattern of pebbles is one of a small class of comparable specifiable patterns. And indeed, I understand from PvM's reference to Elsberry's post that Dembski now distinguishes between "actual" and "apparent" specified complexity. That was what I attempted to address in one of my original posts on UD, in response to a request that I consider Dembski's "law of conservation of information": Dembski appears to distinguish between the two by claiming that in the case of "apparent" specified complexity, the pattern tells us nothing new (no new information) because we can predict the pattern from knowledge of the "natural law" that created it. Which strikes me as being a perfectly circular argument, and means that whether a pattern has "actual", as opposed to "apparent", CSI has nothing to do with the pattern but with how much we know about the algorithm that created it. Presumably that's why he invented his "explanatory filter", which seems to me simply to say that if we don't know the algorithm, we must infer an "unnatural" law, i.e. that what we can't explain can't be natural. Which is no less circular.

The best description of structures produced by evolution are phylogenetic trees. When we start to discuss all possible phenotypes, alleles, or describing what live, species, genes are, there is no general description.

I'm not exactly sure what you are saying here. Sorry, I should have asked for clarification when you made a similar point earlier.

Some of Dembski's definitions of SC have been shown to be contradictory. That is not to say that for example complexity measures exists, for example characterizing neural nets. But Stephen Weinberg has noted that no single measure can capture all possible regularities on theoretical grounds. (It is an analogy to Gödel incompleteness.)

Well, yes. Could you explain Gödel incompleteness?

Anton Mates · 20 January 2007

The "memory" is not the trait itself, but rather the change in its population frequency over time. To carry the analogy a bit further, memory in the brain seems to be encoded by correlations in strength of the synaptic connections among specific subsets of neurons. Learning appears to be implemented by changes in the numbers of receptors at those synapses. If the behavior is reinforced (leads to a positive outcome) then receptor levels change to increase those correlations, while if the outcome is negative, receptor levels change to decrease those correlations, and that decrease can be said to constitute a memory of a negative outcome.

— tgibbs
If this were true, learning through negative reinforcement would be impossible. But in fact negative outcomes also strengthen synaptic connections--they simply strengthen different ones than positive outcomes do. What weakens connections are neutral/undetectable/unpredictable outcomes.

In the extreme case that a behavior consistently has a negative outcome, those correlations might vanish entirely. It would never occur to you to behave that way. You could say that you have "learned not to do that."

No, you would have "forgotten to do that," which is a different case. And vanished correlations don't correspond to never performing a certain behavior, or one would have to have a specific set of synaptic connections present for every behavior one might ever perform in a lifetime. Again, take a child who burns herself. She doesn't just forget that she wanted to touch a flame; she remembers that touching fire is bad. In the future, she will actively avoid getting too near a flame even by accident.

tgibbs · 20 January 2007

If this were true, learning through negative reinforcement would be impossible. But in fact negative outcomes also strengthen synaptic connections---they simply strengthen different ones than positive outcomes do. What weakens connections are neutral/undetectable/unpredictable outcomes.
"Learning by negative reinforcement" refers to reinforcement by termination of a noxious stimulus (as opposed to providing a positive reward), such as when a rat learns to press a lever to terminate a shock. There is no reason to believe that it requires a different neural mechanism than learning by positive reinforcement.

No, you would have "forgotten to do that," which is a different case.

The question of whether forgetting represents a different process from learning remains an open one, but there is evidence to argue that it does not. From a neural point of view, there is no particular reason to think that such a distinction is required.

And vanished correlations don't correspond to never performing a certain behavior, or one would have to have a specific set of synaptic connections present for every behavior one might ever perform in a lifetime.

And you see this as a problem for what reason? The number of permutations of neuronal connections is enormous, so there could well be a connection for every behavior or thought one could ever perform or experience.

Marek 14 · 20 January 2007

Torbjörn Larsson wrote:

"What exactly is the diagnostic difference between AS and AR (Anal Retentive)? :-)"

Well, for one thing, I happen to have Asperger's Syndrome, but I really don't consider myself anal retentive, insofar as it refers to overt attention to detail. I usually don't notice details at all.

Though, when I DO notice, I tend to get peeved when people get them wrong, so... maybe there's some truth. But really - has AS already gone the way of idiocy and cretinism, from medical term to widely-used disparaging word?

Elizabeth Liddle · 20 January 2007

The question of whether forgetting represents a different process from learning remains an open one, but there is evidence to argue that it does not. From a neural point of view, there is no particular reason to think that such a distinction is required.

— tgibbs
Well, there is reason to think that some representations are actually inhibited. There is, for instance, the well-documented phenomenon of "inhibition of return" by which saccadic eye-movements to previously searched locations are inhibited, thus increasing the efficiency of visual search. And also evidence of "top-down" (not an expression I like) inhibition of certain "bottom up" representations, as evidenced by attentional modulation of activation in primary visual cortex. But in any case, I'm not sure how far one needs to stretch the analogy. It seems to me reasonable to describe replication with modification plus natural selection as a learning algorithm, with memory. I'm not sure if one could, or needs, to stretch it to cover learned inhibition. Although if the entire system is regarded as a "mind", then the less-fit phenotypes that result from certain phenotypes demonstrate, in effect, "inhibited" fecundity (tendency to die, for example). So in that sense, the system does actually "avoid" its errors - if it finds itself in the process of making one (e.g. if a genotype with genes that tend to code for unfit phenotypes has been conceived) then it produces a phenotype that is less likely to reproduce. Just as when my participants in error-monitoring paradigms find themselves about to make an error, they try to abort (pun intended in this instance) the response. Not always successfully, just as some crap genotypes produce viable phenotypes.

Elizabeth Liddle · 20 January 2007

But really - has AS already gone the way of idiocy and cretinism, from medical term to widely-used disparaging word?

— torbjorn larsson
I am appalled to observe that it seems to have done, especially as it so often seems to confer a clear-sightedness that can evade those of us more prone to social distraction.

Elizabeth Liddle · 20 January 2007

Sorry, still struggling with the format of this blog. The above comment of mine should have attributed the quoted comment to Marek.

normdoering · 20 January 2007

tgibbs wrote:

"Learning by negative reinforcement" refers to reinforcement by termination of a noxious stimulus (as opposed to providing a positive reward), such as when a rat learns to press a lever to terminate a shock. There is no reason to believe that it requires a different neural mechanism than learning by positive reinforcement.

"Learning by negative reinforcement" sounds like something you do to a flatworm. And that is one meaning of learning from mistakes (as I said many posts ago, these anthropomorphic terms can throw us off). However, modern neural nets are doing more than just learning by negative reinforcement. We're getting real memory -- there are sensory systems, eyes and ears, specific memories made with no pressure from external reward and punishment systems. Patterns are abstracted and classified in seconds. A system that "learns by negative reinforcement" doesn't need that kind of sophisticated memory. That kind of sophisticated memory you can get from a neural net, but not from a genetic algorithm. We might get real intelligence from neural nets. Genetic algorithms and evolutionary programs can only give us dumb agents of intelligence. Look up the "Blue Brain" project on Google to see how far neural nets are going.

While one might evolve a system with that kind of sophisticated memory, the genetic algorithms or evolutionary programs you are using don't have that kind of memory anywhere else in the system... at least as far as I know. But who knows what we will find happening in our cells? Is it possible that they are sensing a blind and deaf world of molecular signals in our environment and recording abstracted memories in our DNA that are used like computer code in some ribosome-like computer? If there's something like that happening in our cells then our evolution may be the result of something more sophisticated than a genetic algorithm. This is not an argument for Intelligent Design; it's just an argument against calling evolution as we know it through genetic algorithms "intelligent", or saying that genetic algorithms and neural nets are the same.

normdoering · 20 January 2007

Marek asked:

... has AS already gone the way of idiocy and cretinism, from medical term to widely-used disparaging word?

Not really. Popper's ghost actually thought he might guess a diagnosis to explain why I am so rude and insensitive. I don't think he meant it to be a disparaging word, even though his comments were.

Anton Mates · 20 January 2007

"Learning by negative reinforcement" refers to reinforcement by termination of a noxious stimulus (as opposed to providing a positive reward), such as when a rat learns to press a lever to terminate a shock.

You're right, I used the wrong term re: operant conditioning. I should have said, learning through positive punishment. That's the triggering of a noxious stimulus following a behavior, leading to its inhibition, as when a rat receives a shock for pressing a lever and learns to avoid doing so.

There is no reason to believe that it requires a different neural mechanism than learning by positive reinforcement.

True, and likewise for punishment. Both learning to perform a behavior and learning not to perform it can involve both strengthening and weakening synaptic connections; it's just a matter of whether they are excitatory or inhibitory, as Torbjörn wrote.

The question of whether forgetting represents a different process from learning remains an open one, but there is evidence to argue that it does not. From a neural point of view, there is no particular reason to think that such a distinction is required.

But from a functional point of view, it's very different. Someone who's learned that touching live wires is a bad idea is much less likely to do so in the future than someone who's merely forgotten their original belief that it's a good idea.

And vanished correlations don't correspond to never performing a certain behavior, or one would have to have a specific set of synaptic connections present for every behavior one might ever perform in a lifetime.

And you see this as a problem for what reason? The number of permutations of neuronal connections is enormous, so there could well be a connection for every behavior or thought one could ever perform or experience.

With fewer than a quadrillion synapses total in a human brain? There are far more than a quadrillion behaviors any given human could learn in a lifetime. Heck, memorizing two out-of-state phone numbers presents more than a quadrillion possibilities.
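As a quick sanity check of that arithmetic (assuming 10-digit numbers, and the rough figure of 10^15 synapses used elsewhere in the thread):

```python
synapses = 10**15          # rough order-of-magnitude figure used in the thread
per_number = 10**10        # possible 10-digit phone numbers
two_numbers = per_number ** 2
print(two_numbers > synapses)   # True: ~10^20 pairs vs ~10^15 synapses
```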

Marek 14 · 21 January 2007

Elizabeth Liddle wrote:
"I am appalled to observe that it seems to have done, especially as it so often seems to confer a clear-sightedness that can evade those of us more prone to social distraction."

I wondered about that. But of course, this clear-sightedness seems to be limited to relatively simple fields like science (as compared to extraordinarily complex human interactions). It's been about two years since I started to read this blog, and in that time I never actually managed to understand the creationists. Your dialogue with them, however, was masterful, and I find it sad how it ended. If they ruled the world, is this how we would end - banned from existence?

I also never took to religion. I think I was about eight years old before I even got the notion that such a thing exists. I must confess that the whole concept deeply disturbed me, and still does, on many levels. Is this the result of AS? Did the same subtle changes which boosted my mental capacities, but impaired my emotions and social consciousness, also strip me of the capacity to believe? Or perhaps the need to?

I don't really know. But thanks for your comment :)

Elizabeth Liddle · 21 January 2007

But of course, this clear-sightedness seems to be limited to relatively simple fields like science (as compared to extraordinarily complex human interactions).

— Marek
I think for most people, human interactions are easier than science!

I also never took to religion. I think I was about eight years old before I even got the notion that such a thing exists. I must confess that the whole concept deeply disturbed me, and still does, on many levels. Is this the result of AS? Did the same subtle changes which boosted my mental capacities, but impaired my emotions and social consciousness, also strip me of the capacity to believe? Or perhaps the need to?

I would have thought that was possible, although many would argue that the "capacity to believe" is what causes the world's problems. My own view is that religion is a model - akin to a scientific model, in some ways - but if it doesn't work for you - and especially if it disturbs you - then you are certainly better off without it! I think we all have to work out our own ways of relating to the universe. Nice to talk to you Lizzie

tgibbs · 21 January 2007

With less than a quadrillion synapses total in a human brain? There's far more than a quadrillion behaviors any given human could learn in a lifetime.
Yes, if you want to make a one-to-one correspondence between synapses and behaviors. But that wasn't what I suggested. Rather, the idea is that a behavior is encoded in the correlations between a subset of neurons. So let's pull a number out of a hat and suppose that a typical behavior or memory is encoded in 100 synapses. Assuming that any of the synapses in the brain could be involved, the number of permutations involved would be on the order of your quadrillion raised to the 100th power. Moreover, presumably the order of connectivity of those synapses matters as well, so you also have to consider all of the different ways in which those 100 synapses could be linked. We are talking about some quite literally astronomical numbers here, easily exceeding the number of particles in the visible universe.
Heck, memorizing two out-of-state phone numbers presents more than a quadrillion possibilities.
So what? Are you suggesting that to memorize two phone numbers requires enough memory to memorize the entire phone book?
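tgibbs's subset-counting argument can be checked numerically. A rough sketch (the 10^15 synapse count and the 100-synapse subset size are the thread's own assumptions; connectivity order is ignored, which only understates the total):

```python
import math

n = 10**15      # assumed total synapses in a human brain
k = 100         # assumed synapses per encoded behavior
subsets = math.comb(n, k)   # distinct 100-synapse subsets, order ignored
print(subsets > 10**80)     # True: dwarfs ~10^80 particles in the visible universe
```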

Anton Mates · 21 January 2007

Yes, if you want to make a one to one correspondence between synapses and behaviors. But that wasn't what I suggested.

— tgibbs
I must have misinterpreted what you wrote previously: "The number of permutations of neuronal connections is enormous, so there could well be a connection for every behavior or thought one could ever perform or experience." I take it you meant that there might be a set of connections for every behavior?

Rather, the idea is that a behavior is encoded in the correlations between a subset of neurons. So let's pull a number out of a hat and suppose that a typical behavior or memory is encoded in 100 synapses. Assuming that any of the synapses in the brain could be involved, the number of permutations involved would be on the order of your quadrillion raised to the 100th power. Moreover, presumably the order of connectivity of those synapses matters as well, so you also have to consider all of the different ways in which those 100 synapses could be linked. We are talking about some quite literally astronomical numbers here, easily exceeding the number of particles in the visible universe.

Sure. But those 100 synapses must each be usable to help encode many other possible behaviors--not necessarily simultaneously--rather than being associated from birth with that behavior and only that one. Otherwise there would be less than a quadrillion/100 behaviors which a given human could conceivably perform...even though the number of behaviors performable by any human would be astronomical, as you say, due to interindividual differences in neural wiring. And if each synapse may be used to encode multiple behaviors, then the learned inhibition of a behavior--even if it's totally stopped--is unlikely to push either the associated synapse strengths or the correlations between associated neurons to zero.

So what? Are you suggesting that to memorize two phone numbers requires enough memory to memorize the entire phone book?

It would if you had to rely on a particular pre-existing neuron array suited to that particular number.

Marek 14 · 22 January 2007

I think for most people, human interactions are easier than science!

— Elizabeth Liddle
Because science requires reason, while interacting with fellow humans works mostly via instincts; evolution preferred such instincts because humans live in groups. In my mind, I always connected this with the notion of power: a group is substantially more powerful than an individual, so living in groups is advantageous. But what if you simply lack those instincts? What if you don't see the patterns in humans, if you have only the foggiest idea of what they want to hear from you? In that case, science is certainly easier, because it's easier to analyse by reason than humans are. It's like arguing whether it's easier to be a painter than a musician. Being a painter is much easier - as long as you're deaf.

I would have thought that was possible, although many would argue that the "capacity to believe" is what causes the world's problems. My own view is that religion is a model - akin to a scientific model, in some ways - but if it doesn't work for you - and especially if it disturbs you - then you are certainly better off without it! I think we all have to work out our own ways of relating to the universe. Nice to talk to you Lizzie

— Elizabeth Liddle
I wasn't complaining, mind you. And I relate to the universe, in my own way. I see the universe as a blank slate, waiting for what we, or others like us, make of it. I see all the galaxies and stars as sources of knowledge just waiting to be tapped and understood. I see the universe as order to be learned, combined with chaos to be amazed by.

Popper's ghost · 22 January 2007

has AS already gone the way of idiocy and cretinism, from medical term to widely-used disparaging word?

That's not how it was used here. I used it in connection with an apparent difficulty in grasping the subtexts of social interaction.

tgibbs · 22 January 2007

Sure. But those 100 synapses must each be usable to help encode many other possible behaviors---not necessarily simultaneously---rather than being associated from birth with that behavior and only that one. Otherwise there would be less than a quadrillion/100 behaviors which a given human could conceivably perform...even though the number of behaviors performable by any human would be astronomical, as you say, due to interindividual differences in neural wiring. And if each synapse may be used to encode multiple behaviors, then the learned inhibition of a behavior---even if it's totally stopped---is unlikely to push either the associated synapse strengths or the correlations between associated neurons to zero.
Learned inhibition of a behavior would apply only to those synapses that are simultaneously active and associated with that behavior. Different behaviors would share few if any synapses. Think of it like a "hash code." So crosstalk of inhibition of one behavior on other behaviors would be minimal. Inhibition of a behavior could be due to individually small reductions in the strength of multiple synapses that add up over the entire subset, with little impact on another behavior that might share only one of those synapses. There could even be mechanisms for resolving "hash collisions," such as adding additional neurons/synapses, changing the subset of synapses to be noncolliding, or adjusting the weighting of other synapses in the connection subset to compensate for the impact of crosstalk.
So what? Are you suggesting that to memorize two phone numbers requires enough memory to memorize the entire phone book?
It would if you had to rely on a particular pre-existing neuron array suited to that particular number.
However, even if the neuron array is static (there is some neurogenesis in the brain, which may be important for learning, so even this may not be true), the connections among the neurons are not static. So an appropriate set of correlations (created by adjusting the connections, i.e. synaptic strength) can be created on the fly.
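The "hash code" picture sketched above can be simulated directly. A minimal sketch (the pool size, subset size, and the 50% weakening are all invented numbers): two behaviors get random 100-synapse subsets, behavior A's subset is weakened, and the crosstalk onto behavior B stays negligible because the subsets barely overlap.

```python
import random

random.seed(0)
N = 10_000                       # toy synapse pool (real brains: vastly larger)
weights = [1.0] * N

# Each behavior is "hashed" onto its own random 100-synapse subset.
a = random.sample(range(N), 100)
b = random.sample(range(N), 100)

# Inhibit behavior A via individually small reductions across its subset.
for s in a:
    weights[s] *= 0.5

def strength(code):
    """Mean weight over a behavior's synapse subset."""
    return sum(weights[s] for s in code) / len(code)

print(strength(a))   # 0.5 exactly: behavior A is strongly inhibited
print(strength(b))   # ~1.0: B barely affected (expected overlap is ~1 synapse)
```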

Torbjörn Larsson · 23 January 2007

Dembski appears to distinguish between the two by claiming that in the case of "apparent" specified complexity, the pattern tells us nothing new (no new information) because we can predict the pattern from knowledge of the "natural law" that created it. Which strikes me as being a perfectly circular argument,
I'm aware that you were relying on Dembski's definition for the purpose of your discussion. And I agree that some of his definitions amount to a circular argument. But I was thinking of him (or any ID'er) not demonstrating an actual calculation.
The best description of structures produced by evolution is phylogenetic trees. When we start to discuss all possible phenotypes or alleles, or to describe what life, species, and genes are, there is no general description.
Well, the basic prediction of evolution is roughly "common descent with modification", i.e. we will observe phylogenetic trees in the fossil record. It is a kind of structure of life. But when we start to discuss the details it becomes fuzzy. What possible phenotypes will life show (within constraints of mass et cetera)? What possible combinations of genes express these phenotypes? What possible polymer (protein or RNA) sequences will these alleles consist of? How to define a species? (Wilkins counts 26 definitions, depending on model.) How to define genes? (Also many definitions.)
Could you explain Gödel incompleteness?
First, I made a mistake; I didn't check the reference. It was Murray Gell-Mann, not Steven Weinberg, who discussed complexity:

"A measure that corresponds much better to what is usually meant by complexity in ordinary conversation, as well as in scientific discourse, refers not to the length of the most concise description of an entity (which is roughly what AIC is), but to the length of a concise description of a set of the entity's regularities. Thus something almost entirely random, with practically no regularities, would have effective complexity near zero. So would something completely regular, such as a bit string consisting entirely of zeroes. Effective complexity can be high only in a region intermediate between total order and complete disorder. There can exist no procedure for finding the set of all regularities of an entity. But classes of regularities can be identified." [Bold added.] ( http://golem.ph.utexas.edu/category/2006/12/common_applications.html#c006990 )

Second, what Gell-Mann states is AFAIK not based on Gödel incompleteness; that analogy was mine. Gödel arrived at two incompleteness theorems, which were later complemented by Tarski's undefinability theorem. Gödel's first incompleteness theorem famously states what amounts to "any theory capable of expressing elementary arithmetic cannot be both consistent and complete". (Tarski's undefinability theorem states what amounts to "arithmetical truth cannot be defined in arithmetic", which is quite another kind of 'incompleteness'.)

The first Gödel theorem is what I am thinking of. We want to keep a sufficiently powerful theory, such as arithmetic (and more), consistent. When we do that, it will not be complete. What this amounts to is that we need (and thus can) add the independent theorems we discover as axioms. AFAIK at least one such arithmetic theorem has already been discovered, though I can't remember if it was analytically proven to be independent or exhaustively tested to be so by computer.
In any case, we can't predict beforehand from the basic axioms all the regularities our theorems will express (for the 'entity' of our theory).
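Gell-Mann's contrast between AIC and effective complexity can be loosely illustrated with a general-purpose compressor as a crude stand-in for description length (an analogy only; zlib is nothing like Kolmogorov complexity, and the byte counts below are just proxies):

```python
import random
import zlib

random.seed(1)

def desc_len(data):
    """Compressed size in bytes: a crude proxy for algorithmic information content."""
    return len(zlib.compress(data))

ordered = bytes(10_000)                                      # total order: all zeroes
noise = bytes(random.getrandbits(8) for _ in range(10_000))  # near-total disorder
regular = b"the quick brown fox jumps over the lazy dog " * 222  # rich in regularities

# AIC-style ordering: order compresses best, noise worst; yet on Gell-Mann's
# *effective* complexity both extremes score near zero, since neither the
# zeroes nor the noise has an interesting set of regularities to describe.
print(desc_len(ordered) < desc_len(regular) < desc_len(noise))   # True
```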

Torbjörn Larsson · 23 January 2007

Well, for one thing, I happen to have Asperger's Syndrome, but I really don't consider myself anal retentive, as so far as it refers to overt attention to detail. I usually don't notice details at all.
I am really sorry if I offended you. I was trying to pull the legs of those who threw the characterization around on rather loose grounds. And yes, it was attention to detail I was thinking of.
But really - has AS already gone the way of idiocy and cretinism, from medical term to widely-used disparaging word?
Well, it shouldn't, which was my clumsy background point - I felt that it was used in the latter capacity. At my earlier workplace there were two persons who self-identified as having Asperger's. Great guys both.

Torbjörn Larsson · 23 January 2007

Humf! It's not a panda's thumb - it's a ketchup bottle.

Marek 14 · 23 January 2007

I am sorry if I offended you. I was trying to pull the legs of those who threw the characterization around on rather loose grounds. And yes, it was attention to detail I was thinking of.

— Torbjörn Larsson
I don't know if it's a characteristic of AS or not, but it's actually VERY hard to offend me :) No problem here.