Genetic Evolution of Machine Language Software
Ronald L. Crepeau, NCCOSC RDTE Division, San Diego, CA 92152-5000, July 1995

Genetic Programming (GP) has a proven capability to routinely evolve software that provides a solution function for the specified problem. Prior work in this area has been based upon the use of relatively small sets of pre-defined operators and terminals germane to the problem domain. This paper reports on GP experiments involving a large set of general purpose operators and terminals. Specifically, a microprocessor architecture with 660 instructions and 255 bytes of memory provides the operators and terminals for a GP environment. Using this environment, GP is applied to the beginning programmer problem of generating a desired string output, e.g., "Hello World". Results are presented on: the feasibility of using this large operator set and architectural representation; and the computations required to breed string-outputting programs vs. the size of the string and the GP parameters employed.

The results?
From Figure 5 it can be seen that this run achieved a correct output (fitness = 352) at about 150,000 spawnings (100 to 1200 generations). By about 450,000 spawnings, the agent was composed of less than 100 instructions. Ultimately, the agent size reduced to 58 instructions before the process was terminated.
Source

Pretty darn good as I have shown. What is the probability of arriving at our Hello World program by random mutation and natural selection?
— GilDodgen
Source

I'd love to see some research in this area. Appeal to ignorance is not really that appealing to me. These are excellent questions and should be answered before rejecting the plausibility in a somewhat ad hoc fashion. It's time to abandon these 'just so stories' and do some real scientific work. Of course, one may object to my choice of method, and one may raise a myriad of objections based on the (unjustified) claim that the method required significant intelligent design, or the fact that the fitness function is smooth, and so on; but it shows that under 'reasonable assumptions' natural selection and variation can indeed create the required output string. In fact, this is hardly surprising given the state of knowledge about evolutionary computing. Perhaps it would be better if ID activists would present an argument based on an analogy which shows at least some minimum similarity with evolution, such as for instance a redundant genotype-phenotype mapping, self-replication, a way to introduce selection into the process in an acceptable format, or replacing a single fixed goal with a more realistic evolutionary goal. As Lenski and others have shown, however, the processes of variation and chance can indeed generate complexity and even irreducibly complex systems. In the end the question is not much dissimilar from Dawkins's "Weasel" example, and thus all known limitations apply. So what have we learned from this example?

How many simpler precursors are functional, what gaps must be crossed to arrive at those islands of function, and how many simultaneous random changes must be made to cross those gaps? How many random variants of these 66 characters will compile? How many will link and execute at all, or execute without fatal errors? Assuming that our program has already been written, what is the chance of evolving it into another, more complex program that will compile, link, execute and produce meaningful output?
— GilDodgen
I can't answer these questions, but this example should give you a feel for the unfathomable probabilistic hurdles that must be overcome to produce the simplest of all computer programs by Darwinian mechanisms.
— GilDodgen
145 Comments
Mark Frank · 11 June 2006
I was struck by the relationship between this posting and the previous one on Kirschner's work. If I understand Kirschner correctly, he is proposing that "random" variation might be biased towards mutations at the subroutine or object level rather than at the level of the individual instruction, which I guess would make the evolution of quite complex programs much quicker.
normdoering · 11 June 2006
GilDodgen muddies the waters further when he asks "How many simpler precursors are functional?" Translated to biology that can be seen as asking "what was the first self-replicator (or other-replicator plus symbiosis)?"
This changes the problem into a question of abiogenesis rather than of evolution with a given replicator. Genetic algorithms all assume self-replication in their modeling of biology.
normdoering · 11 June 2006
Here's another distortion: "what gaps must be crossed to arrive at those islands of function,...?"
The term "islands of function" is also misleading. Drain his pool a little and instead of islands you get mountain ranges and hilly valleys that let a "hill-climbing" algorithm do its work.
Island-hopping algorithms might need foresight; hill climbers can be blind.
http://gaul.sourceforge.net/tutorial/hillclimbing.html
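The hill-climbing strategy behind that tutorial can be sketched in a few lines of Python. This is a generic toy (the target string, character set, and function names are my own choices, not anything from the linked GAUL tutorial): the climber is blind, trying one random one-character mutation at a time and keeping it whenever fitness doesn't drop.

```python
import random

TARGET = "Hello World"  # the fitness landscape is defined by closeness to this string
CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Count characters matching the target: higher is 'uphill'."""
    return sum(a == b for a, b in zip(s, TARGET))

def climb(seed=0):
    rng = random.Random(seed)
    current = "".join(rng.choice(CHARS) for _ in TARGET)
    steps = 0
    while fitness(current) < len(TARGET):
        pos = rng.randrange(len(TARGET))
        mutant = current[:pos] + rng.choice(CHARS) + current[pos + 1:]
        # blind acceptance rule: keep the mutant only if it is no worse
        if fitness(mutant) >= fitness(current):
            current = mutant
        steps += 1
    return current, steps

result, steps = climb()
print(result, steps)
```

Because an accepted mutation can never lose an already-matching character, progress is cumulative, which is the whole point of the hill-climbing rebuttal.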
k.e. · 11 June 2006
PvM · 11 June 2006
snaxalotl · 11 June 2006
staring, intrigued, at the description "Dembski personality cult". can anybody think of an example of a personality cult which revolved around a smaller amount of personality?
Jeannot · 11 June 2006
I fail to see how this 'hello world' program can be taken as an example of evolution by mutations and natural selection. AFAIK, this process is not supposed to reach any objective or solution.
The probability of producing Homo sapiens from Homo habilis with RM + NS is almost zero, of course. This number is also completely irrelevant.
???
Corkscrew · 11 June 2006
one may raise a myriad of objections based on the (unjustified) claim that the method required significant intelligent design
I'm always amused by that claim. It's like saying that, by the mere act of painting a bunch of concentric circles on the wall, you've given someone all they need to be a master archer.
dave42 · 11 June 2006
Good to see that GP research is feeding back into the biology field. This kind of experiment is directly testing Darwin's hypothesis in the true scientific spirit and method: predict (there exists a system that, implementing Darwin's evolutionary rules, can produce a behavior meaningful to us but not to the system), design build and run the experiment so that all factors other than those operated on by the process under test are known and fixed, and see it produce the hypothesized result. Very nice.
Somewhat irrelevant are the questions about the logical level chosen for the experiment - it could be any level, or all of them, from atomic through processor architecture through instructions to algorithmic entities. In life, all levels are simultaneously undergoing selection and testing. But an experiment is only useful when the questioned independent variables are the only ones manipulated, in order to show the relationships. So choosing the highest level while keeping the lower levels fixed and known is the way. A point of interest on scale: this result is obtained with a very small collection of parallel processes and spawnings. Scale up several trillion trillion times in both the number of parallel experiments and the run time and you begin to approach what actually happened here on Earth.
Now I'll bet the virtual machine itself underwent evolutionary changes along the path to its final design - that is the normal creative process at work, and anyone whose life work involves the development of ideas is familiar with this, though most probably do not (yet) relate their intellectual experience to its being a direct experience of the evolutionary process in fact, not merely a passing fancy.
Torbjörn Larsson · 11 June 2006
Jeannot says:
"I fail to see how this 'hello world' program can be taken as an example of evolution by mutations and natural selection."
"GEMS employs mutation of two types." (Crepeau, p5.)
"As each off-spring is bred, it is evaluated for insertion in the pool using a modification of the process which [Altenberg 1994] calls "upward mobility" selection." (Crepeau, p4.)
"AFAIK, this process is not supposed to reach any objective or solution."
The immediate objective is to increase fitness for survival to reproduce. What that means varies as the species and environment changes so there is no overall objective.
"The probability of producing Homo sapiens from Homo habilis with RM + NS is almost zero, of course. This number is also completely irrelevant."
Exactly. We know it happened, however unlikely it was. We also know evolution happens, so there is no probability associated with that either. What was your point to raising this straw man?
"???"
:| :| :| !!!
Torbjörn Larsson · 11 June 2006
Mark,
Re Kirschner, Crepeau says that he thinks his complicated multiple-instruction environment succeeds in outputting strings because he intentionally specified a minimum number of input and output instructions. He also speculates that his phase 2, shortening the successful programs, will work better if he intentionally specifies a minimum number of halt instructions at that phase.
Corkscrew,
A nice view and metaphor.
The claim also misunderstands experiments. All our experiments, instruments and data handling are designed in some manner, or they wouldn't give results. That doesn't make them illustrations of creationist theory. It is 'thinking' analogous to their requirement for supernatural explanations. It is also wrong to single out biology.
You are right, it is laughable.
jeannot · 11 June 2006
Gerard Harbison · 11 June 2006
Gerard Harbison · 11 June 2006
Caledonian · 11 June 2006
Jeannot · 11 June 2006
Caledonian, I think you missed something in what I said.
But we have the same ideas. What if I told you I am a graduate student in evolutionary biology? ;-)
I said that a fitness increase was not a specific objective (goal if you prefer) comparable to the 'hello world' program; I didn't say it wasn't an objective concept (that's another matter).
I totally agree that nylonase never was an objective. Thus, it would be useless to calculate the probability of producing the nylonase gene in an initial bacterial population.
jeannot · 11 June 2006
nilekim · 11 June 2006
Jeannot, I think you raise an important distinction, but I also think it's a little harsh to call GAs/GP "totally irrelevant" to evolutionary biology. When we plate bacteria on nylon-rich media, we effectively define a fitness function which favors the ability to produce nylonase. When, lo and behold, nylonase-producing bacteria arise and flourish, should we not attribute this to RM/NS because we (effectively) set an artificial objective? After all, this objective is not one that would arise in nature (without manmade nylon).
I think it can be argued that in the GP example, the objective is similarly contained within the fitness function, which is this time explicitly defined; the "organisms" are not "aware" of it and cannot "consciously" work towards it. Of course, there are a lot of other issues in drawing any direct comparisons, but I think GAs can be helpful to evolutionary biologists insofar as they illustrate the general computational principles underlying RM/NS.
On the other point, I also can't believe that at this point anyone is still parading around "Sequence length n, alphabet size k, probability 1/k^n, HA!". It's a total embarrassment.
jeannot · 11 June 2006
Yes, I may have been a little harsh with my 'totally irrelevant' if these simulations can be assimilated to a climb toward an adaptive peak.
But at most, regarding evolutionary biology (not informatics), these simulations are rather useless, since the fundamental theorem of natural selection was demonstrated by Fisher 76 years ago.
Todd · 11 June 2006
Mark Isaak · 11 June 2006
Another point, quite apart from genetic algorithms, may be worth mentioning. The beginning programmer quite likely did not get his "Hello world" program correct the first time. Then he had to make modifications to it and select the version which worked best. In other words, his program evolved. This is certainly the case in more complicated computer programs. The programmers of these take much of their code from previous programs, and even then most of their work goes into fixing bugs which were inadvertently introduced. The process is different in important ways from biological evolution, to be sure, but the process still embodies the basics of evolution: modification and selection of existing forms. In short, design is a kind of evolution. To accept design is to accept evolution, and to reject evolution is to reject design.
secondclass · 11 June 2006
jeannot · 11 June 2006
The frequency of structures that reproduce more efficiently increases, independently of the notions of problem and solution. These notions only exist in intelligent minds that can identify them.
Similarly, adaptive peaks don't exist before they are reached.
But this is just a problem of semantics.
stevaroni · 11 June 2006
Several things pop out immediately from this data.
The first is the incredible power of a little bit of natural selection pressure.
The odds of generating the final 58-instruction program by random chance are vanishingly small; the number of possible programs is truly huge (660^58). That kind of number has the flavor of the improbability numbers that the ID proponents like to throw out every day. I doubt that there's enough storage space on the planet to hold all those permutations.
Yet throw in a little survival-of-the-fittest pressure and you can get answers like that in a few thousand generations.
Secondly, look at the graph of fitness over time. Damned if there wasn't a little mini "Cambrian Explosion" right at generation 400, where the program suddenly became much, much more fit in a very short span.
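stevaroni's 660^58 is easy to check with Python's arbitrary-precision integers; the comparison below is illustrative arithmetic only (the spawning count is the rough figure from the run described above).

```python
import math

search_space = 660 ** 58   # possible 58-instruction programs over 660 opcodes
spawnings = 450_000        # roughly where the run in the paper settled

# 660^58 is a number of about 164 digits; the run sampled a vanishing fraction
print(f"660^58 ~ 10^{math.log10(search_space):.0f}")
print(f"fraction of the space examined ~ 10^{math.log10(spawnings / search_space):.0f}")
```

The point of the comparison is exactly stevaroni's: exhaustive or purely random search is hopeless, yet selection found an answer within a few hundred thousand spawnings.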
stevaroni · 11 June 2006
Oh, and the third thing is how quickly a primitive precursor of "Hello World" established itself.
secondclass · 11 June 2006
stevaroni · 11 June 2006
Todd · 11 June 2006
Jim Harrison · 11 June 2006
Circuits can be and are designed both by versions of the genetic algorithm and by conscious human design. While both kinds of solutions work, those produced by artificial natural selection tend to be more robust in actual use than those designed rationally. Like living systems such as metabolic pathways, artificially evolved systems go on functioning even when some of their parts are damaged, while designed systems tend to be much more brittle. Andreas Wagner discusses this contrast in his book Robustness and Evolvability in Living Systems. It is the zillionth reason to believe that living things were produced by chance and selection rather than conscious design.
normdoering · 11 June 2006
steve s · 11 June 2006
steve s · 11 June 2006
stevaroni · 11 June 2006
Caledonian · 11 June 2006
steve s · 11 June 2006
PvM · 11 June 2006
'Rev Dr' Lenny Flank · 11 June 2006
Sir_Toejam · 11 June 2006
steve s · 11 June 2006
Lenny would love AFDave
David B. Benson · 11 June 2006
Second Law of Thermodynamics and Biology: Read "Into the Cool", a cool book about NET (Non-Equilibrium Thermodynamics) and its relationship to biological processes.
PvM · 11 June 2006
PvM · 11 June 2006
RBH · 11 June 2006
Discussions of computer models of evolutionary processes typically dissolve into confusion due to the failure to carefully distinguish between two kinds of models that differ in the information used to calculate fitness.
1. Models with global fitness calculations. These are Dawkinsian METHINKSITISLIKEAWEASEL sorts of models, where the fitness of a replicator is calculated as the distance of its phenotype from some target phenotype. The fitness equation "knows" the target state, and replicators are more or less fit (and therefore survive to replicate and/or recombine) based on relative similarity (e.g. the Hamming distance) to that target state. These kinds of models are not models of biological evolution, and claims that they are such models flatly misconstrue biological evolution. However, they are useful in demonstrating the power of cumulative selection, which is all Dawkins sought to do with his METHINKS illustration. He explicitly said that the METHINKS program was not a model of evolution, but only of cumulative selection and its power to transform tiny probability into high probability. Creationists have consistently and persistently misconstrued that program since it was published, and Dodgen's post is yet another example of that misconstrual.
2. Models with local fitness calculations. These are models in which the algorithm does not "know" what a target phenotypic state might be, but "knows" only what is better or worse in the local environment, where "local environment" means the values of relevant environmental variables in the volume of phenotype space actually occupied by the current population. The fitness equation of the algorithm can calculate relative fitness among the replicators in the population, based on some defined properties of the replicators in that environment. Most GAs are of this nature. If I want to evolve a population of artificial stock traders, I cannot write down the specific target phenotype -- if I could, I wouldn't bother to use a GA, I'd just write it down and trade on it. However, I know some properties a good artificial trader should have, and I can write a fitness function that tests for values of those properties. For example, I might use risk-adjusted return over some historical data as the relevant property. All members of the population paper trade over that period, and the algorithm calculates each artificial trader's risk-adjusted return and ranks the traders on that measure to determine which survive to replicate, their probability of entering into recombination, and so on. The algorithm "knows" a property of artificial traders -- relative risk-adjusted return, where better traders are higher -- but "knows" neither what an excellent trader's phenotype would look like nor what the global maximum risk-adjusted return might be. It "knows" only how risk-adjusted return differs among the members of the current population.
Biological evolution is an algorithm of the second sort. The algorithm does not "know" a target phenotype in order to determine fitness on the basis of similarity to that target phenotype. Rather, the algorithm of biological evolution "knows" only locally determined fitness, where fitness is "calculated" implicitly as survival and relative reproductive success of the actual replicators in the population in a specific environment composed of physical variables and biological variables (conspecifics and other species).
As a consequence, any algorithm that incorporates a fitness calculation that refers to some phenotype (or genotype) not currently in the population is not a model of biological evolution. Biological evolution "knows" what's better or worse in the current population only by virtue of the differential survival and reproduction of the members of that population; it does not "know" an optimal phenotype or genotype toward which it should evolve.
RBH
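RBH's two kinds of models can be made concrete with toy fitness functions over bit-string replicators. Everything below is my own sketch (the motif-counting "local" score is an arbitrary stand-in for a replicator property like risk-adjusted return); the selection loop is identical for both, and only the fitness function differs.

```python
import random

rng = random.Random(1)
N, L = 30, 20  # population size, genome length

def mutate(g):
    """Flip one random bit of a tuple genome."""
    i = rng.randrange(len(g))
    return g[:i] + (1 - g[i],) + g[i + 1:]

# 1. Global fitness: distance to a known target phenotype (Weasel-style).
TARGET = tuple(rng.randint(0, 1) for _ in range(L))
def global_fitness(g):
    return sum(a == b for a, b in zip(g, TARGET))

# 2. Local fitness: a property of the replicator itself; the algorithm
#    never references a target genotype, only ranks the current population.
def local_fitness(g):
    return sum(g[i] == 1 and g[i + 1] == 0 for i in range(L - 1))  # count 1-0 motifs

def evolve(fit, generations=200):
    pop = [tuple(rng.randint(0, 1) for _ in range(L)) for _ in range(N)]
    for _ in range(generations):
        pop.sort(key=fit, reverse=True)
        pop = pop[:N // 2]                # truncation selection on relative rank
        pop += [mutate(g) for g in pop]   # replicate with mutation
    return max(fit(g) for g in pop)

print("global:", evolve(global_fitness))
print("local: ", evolve(local_fitness))
```

The global version measures similarity to a known target; the local version ranks replicators only against each other on a property of their own genomes, which is the sense in which most GAs (and biological evolution) "know" nothing about where they are headed.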
PvM · 11 June 2006
Caledonian · 11 June 2006
METHINKS is a representation of biological evolution; it's just that it explicitly defines what the environment will favor. Other simulations implicitly define this by generating environments in which certain traits will prove more robust.
From a mathematical and theoretical perspective, it makes no difference if you set up the fitness space explicitly or implicitly. The best solutions within the fitness space will be the states which the simulation will tend towards either way.
steve s · 11 June 2006
jeannot · 11 June 2006
steve s · 11 June 2006
If Dembski was serious about having an intellectually respectable blog, he'd fire all those idiots and get somebody who had some familiarity with science. A guy with a degree in some kind of actual science.
I presume he doesn't do this because he enjoys the comedy as much as I do.
PvM · 11 June 2006
jeannot · 11 June 2006
Flint · 11 June 2006
So would it be correct to say that theistic evolutionists picture their god as surreptitiously diddling with the environment, manipulating the fitness function so as to direct RM+NS to produce the desired lineages? Would this manipulation be sufficient in and of itself, or would this flavor of god also be required to interfere in other ways, such as drift control, or directing mutations according to the bias required to get the target results?
In any case, it's quite an elegant technique if the target result isn't known in advance, but its capabilities are.
RBH · 11 June 2006
Reed A. Cartwright · 11 June 2006
I'm going to have to disagree with RBH. Models that involve a global optimum can be models of biological evolution. They are not going to approximate a complete biological fitness function, but they can approximate parts of fitness functions, i.e. areas with optima. I've used them to look at adaptation to environmental disturbance and stochasticity. I have a friend who's used them to look at domestication and recombination.
I should also point out that the WEASEL program is not a model of mutation and selection, but rather of substitution and selection.
Reed A. Cartwright · 11 June 2006
One can take a goal-oriented fitness function and cast it in terms that are goal-less.
The optimal phenotype is 1111.
versus
Each 1 adds 1/4 to the fitness.
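Reed's two phrasings define exactly the same fitness surface, which a brute-force check over all 4-bit genotypes confirms (the function names here are mine):

```python
from itertools import product

def goal_oriented(g):
    """Fitness as similarity to the stated optimum 1111."""
    optimum = (1, 1, 1, 1)
    return sum(a == b for a, b in zip(g, optimum)) / 4

def goal_less(g):
    """Each 1 adds 1/4 to the fitness; no optimum is named."""
    return sum(g) / 4

# the two functions agree on every one of the 16 possible genotypes
for g in product((0, 1), repeat=4):
    assert goal_oriented(g) == goal_less(g)
print("surfaces identical")
```

Whether the optimum is named explicitly or merely implied by the scoring rule makes no difference to the landscape the population climbs.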
Wheels · 11 June 2006
It's like these chuckleheads don't understand what the word "MODEL" means.
normdoering · 11 June 2006
Caledonian · 11 June 2006
Torbjörn Larsson · 11 June 2006
jeannot says,
"Torbjörn, you didn't understand me. But my English might not be perfect.
Evolution doesn't have any specific objective nor problem to resolve. A fitness increase is not a specific objective, contrary to the program described here."
No, it is my English that is failing. "Objective" implies teology, so I shouldn't have used that; it is anthropomorphising of software.
However, the software or algorithm solves a problem and finds a solution (at least if we naively think of the fitness function as constant), since it can find a local maximum of fitness. This is what it's called in math, physics, or software, so it should preferably be possible to say so without confusing it with anthropomorphic purpose.
"Therefore, I concluded that this computer program is totally irrelevant to evolutionary biology."
I see that you resolved this later.
Torbjörn Larsson · 11 June 2006
"DaveScot wrote:
"The way this is accomplished is not disclosed and if it were disclosed I'm sure we'd find the program is cheating by sneaking information in via the filter which ranks the "fitness" of the intermediate outputs.""
IDiots still don't understand peer review! The paper should not pass peer review if not enough information is disclosed for repetition of the experiments. If it passes anyway, an attempted repetition with the help of the authors will resolve the issue, or the paper is easily refuted, perhaps even deemed a fake. This makes it desirable for the authors to keep, for a reasonable time, whatever information they felt was redundant or too complex to put in the paper.
Perhaps this explains why ID so easily claims 'peer review' on some of its papers.
Torbjörn Larsson · 11 June 2006
""Objective" implies teology"
"Objective" implies teleology. (Not that the difference matters much. :-)
RBH · 11 June 2006
steve s · 11 June 2006
steve s · 11 June 2006
Astrophysicists program astrophysics models.
Chemists program chemistry models.
Biologists program biology models.
The IDists try to dismiss any computer simulations of evolution with hand waving garbage like "Well, a programmer acted intelligently on the system, thereby injecting information, so it's not really evolution."
Take that stupidity to its conclusion: it can only be a computer model of evolution if nobody set up the model. If that were true, evolution would be uniquely prohibited from ever being modeled by scientists on a computer.
That's the kind of Grade A thinking you can only get from The Short Bus on the Information Superhighway, Uncommonly Dense®.
Caledonian · 11 June 2006
PvM · 12 June 2006
Reed A. Cartwright · 12 June 2006
RBH, I agree that in biological evolution fitnesses are not determined by the distance to an optimal genotype. That does not mean that they can't in some way correlate with distance from an optimal genotype. (Phages are an example where optimal genotypes have been observed.) Model builders can exploit such correlation when employing simple fitness functions in their research.
Jim Harrison · 12 June 2006
In a corporation, the CEO always has an advantage over his engineers because they are judged by how well they meet their goals, while the CEO can always claim that whatever happened was what he had in mind. Natural selection has the same kind of executive edge. Just as the Big Cheese in a company only cares about profits, evolution only searches for higher levels of fitness. To that end, anything goes, including becoming a blind parasite in the bladder of an octopus.
The American philosopher C. S. Peirce made a similar point about human creativity, which involves a search of what Peirce called "the Sea of Musement" for results that fit vague and ambiguous goals such as Beauty, Goodness, or Truth. Whenever people have to find something specific, they automatically become stupider, since the probabilities of locating the desired result are so small. Finding something interesting and then retroactively declaring it's what you wanted is much easier. Unfortunately, in most areas of human endeavor only a minority of individuals have the right to operate in this fashion. As Mel Brooks pointed out, "It's good to be the king."
jeffw · 12 June 2006
Caledonian · 12 June 2006
It doesn't matter one whit whether the slope of the fitness function is calculated by referencing a known strategy, or whether it's generated by a specific equation that doesn't explicitly refer to any particular strategy, if the shapes of the functions are identical.
As long as the fitness function is continuous, there are going to be local regions where the surrounding terrain slopes towards optima. The size of the regions depends on how steep that slope is, granted, but the slope remains.
Claiming that a simulation with such slopes doesn't speak to biological evolution is not only incorrect, but almost certainly disingenuous.
jeannot · 12 June 2006
Sir_Toejam · 12 June 2006
RupertG · 12 June 2006
Reed A. Cartwright · 12 June 2006
Caledonian · 12 June 2006
jeannnot · 12 June 2006
JAllen · 12 June 2006
stevaroni · 12 June 2006
It seems like we're arguing about the shape of the container when the important point is that the fluid flows in such a way to fill it.
The given criterion, spell "Hello World", is a simple random filter and the fact that some intelligent force picked it out with the highly technical method of "hmm, that's kinda funny, I think I'll test this one" doesn't mean anything significant.
There are endless possible "fitness criteria". I have a bowl of M&M's on my desk and it seems in that particular species the red M&Ms die young and dark brown M&M's live into old age.
Even in a single environment there may be many possible survival strategies competing at the same time. You can stay away from the lions by getting faster (gazelles), growing bigger (elephants), learning to climb trees (monkeys), or out-thinking them (H. erectus).
The really important point isn't that the fitness criterion draws you to a pre-arranged solution; it's that it makes you move in the first place.
Secondly, the date on the paper we're talking about is 1995. That's at least a century and a half ago in computer years. Isn't there follow-on data available about these kinds of experiments that shows their strengths and weaknesses?
Sir_Toejam · 12 June 2006
jeannot · 12 June 2006
David B. Benson · 12 June 2006
Re #105001: Jim Harrison --- You state something to the effect that circuits which have been artificially evolved are more robust than designed circuits. I would greatly appreciate a reference. Thanks.
RBH · 12 June 2006
Jim Harrison · 12 June 2006
David Benson asks: Re #105001: Jim Harrison --- You state something to the effect that circuits which have been artificially evolved are more robust than designed circuits. I would greatly appreciate a reference. Thanks.
The complete references are in a late chapter of Andreas Wagner's book Robustness and Evolvability in Living Systems (2005). I'd give you page numbers, but I think I've loaned out my copy of the book.
David B. Benson · 12 June 2006
Jim, thanks. I'll check it out of my library.
jeffw · 12 June 2006
steve_h · 12 June 2006
Warning: Well-adjusted people should just skip this.
The problem faced by the PT-linked program is much simpler than the one posed by UD. AIUI, the PT program simulates a computer which uses a subset of the Z80 machine, in which a sequence of 8-bit bytes represents instructions and/or data. Any of the 256 possible byte values represents a hexadecimal number, and most will also represent a valid instruction of some sort depending on context.
[Warning: the following may contain a lot of errors in detail, especially with interpreting byte ordering (0102 hex as 258 decimal or 513 decimal) or the order in which bytes get loaded into a register]
For instance the sequence: 21 11 01 65 66, occupying consecutive memory locations from 0100 to 0104 can be interpreted rather differently depending on which of those bytes you look at first.
[geeky aside]
The Z80 has several 8-bit registers, each of which can hold a number from 00 to FF (hex), or 0-255 (unsigned decimal), or -128 to 127 (signed decimal), or a character such as "A", depending on how you are looking at them.
These registers are labeled A, B, C, D, E, F, H, and L. Some of the registers are combined to form 16-bit values by some instructions, e.g. AF, BC, DE, HL. In such a case, if D=1F and E=03, then DE=1F03 (or 7939 decimal).
There are some other 16-bit registers, IX and IY, which I'll skip, and one called PC (Program Counter) which contains the memory address of the next instruction. Generally, 16-bit values can be numbers from 0-65535 (or -32768 to +32767), or can reference any single byte of memory in a computer with 65536 bytes of RAM (=64K).
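The register pairing steve_h describes is just base-256 place value; a tiny helper reproduces his DE = 1F03 example (the helper name is mine, not part of any emulator):

```python
def pair(high, low):
    """Combine two 8-bit registers into one 16-bit value."""
    return high * 256 + low   # equivalently (high << 8) | low

# steve_h's example: D=1F, E=03 gives DE=1F03 hex, 7939 decimal
D, E = 0x1F, 0x03
print(hex(pair(D, E)), pair(D, E))
```
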
If your Program Counter (PC) contains 0100, those numbers will be interpreted as

0100 21 11 01  LD HL, 1101  (21 means load register pair H and L with the next two bytes,
                             17 and 1 respectively [11 hex represents the decimal 16*1 + 1];
                             alternatively, HL combined = 273 decimal)
0103 65        LD H, L      (65 means copy the number in register L to register H,
                             so now both H and L contain 01)
0104 66        LD H, (HL)   (66 means take the 16-bit value of H and L combined and
                             use that to form an address; 0101 in hex is 257 in decimal,
                             so we take whatever 8-bit value is stored in memory location
                             257 and copy that to register H)

and this would have the effect of setting L to 01, and H to whatever was in memory location 257.
If, OTOH, the program counter started at 0101, these instructions would be interpreted as

0101 11 01 65  LD DE, 0165  (load register D with 1 and E with 65 hex = 101 decimal = 'e' in ASCII)
0104 66        LD H, (HL)   (as in the previous example)

and if PC was 0102 you'd get

0102 01 65 66  LD BC, 6566  (BC = 25958 decimal, or B = 65 hex = 101 = 'e' and C = 66 hex = 102 = 'f')

[/geeky aside][resume slightly less geeky mode]

If you swap 21 and 11 you get

0100 11 21 01  LD DE, 2101
0103 65        LD H, L
0104 66        LD H, (HL)
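The offset-dependence of the byte stream can be mimicked with a toy decoder. The tables below cover only the five opcodes in steve_h's example (this is not a full Z80 decoder, and the lengths/mnemonics are copied from his walkthrough), but they reproduce how the same bytes disassemble differently depending on where the program counter starts:

```python
# instruction lengths and mnemonics for the handful of opcodes in the example
LENGTHS = {0x21: 3, 0x11: 3, 0x01: 3, 0x65: 1, 0x66: 1}
NAMES = {0x21: "LD HL,nn", 0x11: "LD DE,nn", 0x01: "LD BC,nn",
         0x65: "LD H,L", 0x66: "LD H,(HL)"}

def decode(mem, pc):
    """Return (offset, mnemonic) pairs from pc until we run off the end."""
    out = []
    while pc < len(mem):
        op = mem[pc]
        out.append((pc, NAMES[op]))
        pc += LENGTHS[op]   # skip over any operand bytes
    return out

mem = [0x21, 0x11, 0x01, 0x65, 0x66]
print(decode(mem, 0))
print(decode(mem, 1))
print(decode(mem, 2))
```

Starting at offset 0 yields LD HL,nn / LD H,L / LD H,(HL); starting one byte later yields LD DE,nn / LD H,(HL); two bytes later, LD BC,nn alone, exactly the frame shifts described above.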
Almost any input is valid to the computer, but the results vary in usefulness to its human owner.
OTOH, if you write "rpintf("Hello World");", or misspell printf in any way in a C program, you get a link error (function 'rpintf' not found) and your program doesn't run. If you forget the ";" or either of the quote characters, you get a compiler error and your program doesn't run. In fact, almost any mistake not inside the string "Hello World" will result in your program failing to compile. This is because high-level compilers are designed to trap common human errors - that is, really designed, not just "I don't understand it therefore it was designed"-designed. They allow us to write human-friendly programs which free the programmer from a lot of tedious detail. One mistake anywhere in a 10,000-line program prevents the program from working. This gives the human programmer a chance to fix the program before it causes any damage; a program which doesn't run is better than a program which introduces subtle errors. Conversely, in simple machine code almost any input causes something to happen, but often not what was intended.
DNA is rather different from a high level computer language; everyone's DNA contains some replication errors. These errors are not caught by any compiler or any programming tool - instead you live your life and then maybe one day you get some "access violation" or infinite loop and you die (and maybe get submitted for analysis), or you're just constantly sick. There's no friendly "'G' expected but got 'T' at location 1F374C47" message or "some quote was unmatched" from the compiler for the midwife to correct.
But that's not to say that any DNA sequence will produce something - I just don't know - but it is by no means clear to me whether DNA is better represented by a high-level computer language (which I think was chosen to inflate the numbers) or a low-level one, if indeed it's a formal language at all and not just part of a really complex series of chemical reactions.
And of course, I am not arguing that the Z80 was not designed; It was, and by humans.
Also, a lot of the posts at the UD site are going on about the additional complexity required by the operating system and the BIOS and so on, but the PT paper doesn't need those. Operating systems, assemblers and compilers came later in the '(non-biological) evolution' of computers in order to make life easier for humans, but the computer does fine without them. C is easier for humans but harder for machines or evolution models. To a computer, the horrible hex stuff is a given, and it doesn't matter if it was preprogrammed, random, entered bit by bit using electrical switches, paper tape or a fancy compiler program. Output is produced by executing an OUT instruction or by copying the number representing output into a memory location mapped to the display unit, etc. The paper dealt with a simple virtual computer chip using a subset of the Z80 with no operating system or BIOS. Their VM writes output by writing to 'physical' ports directly. The chances of hitting an "OUT" instruction and producing some sort of output on the Z80 are better than 1 in 256, whereas your chance of stumbling upon 'printf("X")' is much worse (1 in 256^8). Also they didn't implement block transfers, which would allow the output of whole strings with one two-byte sequence and a few bytes of setup.
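For a rough sense of those odds, here is my own back-of-envelope arithmetic (the opcode counts are assumptions: the one-byte D3 opcode is OUT (n),A, and the ED prefix opens up roughly eight more OUT (C),r variants):

```python
# Back-of-envelope comparison (my numbers, not the paper's): the chance that
# a random byte begins some Z80 output instruction, vs. the chance that eight
# random bytes happen to spell out a specific high-level call.

# D3 is OUT (n),A; assume ~8 ED-prefixed OUT (C),r variants as well.
p_out = 1/256 + (1/256) * (8/256)   # a single byte starts an OUT instruction
p_printf = (1/256) ** 8             # eight specific bytes in a row

print(p_out)              # a little over 1/256
print(p_printf)           # about 5.4e-20
print(p_out / p_printf)   # many orders of magnitude apart
```

The exact variant count doesn't matter much; the point is that the machine-code odds are better than 1 in 256 while the high-level odds are astronomically worse.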
Granted, the Z80 architecture itself was no accident, but we are taking that as a given.
One thing that bugs me. Many IDers and creationists describe DNA as a computer code. So, OK you guys, what are the basic instructions in DNA? How many bits are used to represent them? Does the architecture have registers, busses, stacks, formal syntax, debugging tools, formal methodologies, data structures, standard algorithms? What's the DNA equivalent of a GOTO, or a CALL/RETURN, or indirect or indexed addressing modes, or, at the level of 'C', formal function parameters, preprocessor directives, loops, conditionals, BNF? What library functions are available (and what arguments do they take)?
We've known about DNA for about half a century now, and the ID/creationist side argue that an inability to produce a full mutation-by-mutation history is proof that science is bogus and the desi- I mean, Goddidit - so I think it's only fair that I ask for a detailed blow-by-blow account of the DNA-based computer architecture.
When we have that, we can look at how a mutation in an instruction affects the program, how that affects the individual, and, ultimately, how individual mutations affect the individual's survival in a complex environment of interacting DNA machines. After that, we can decide whether the required step-by-step mutation history of any given life form is impossible or not.
steve s · 12 June 2006
Caledonian · 13 June 2006
jeannot · 13 June 2006
Rilke's Granddaughter · 13 June 2006
RBH · 13 June 2006
Sir_Toejam · 13 June 2006
RupertG · 13 June 2006
stevaroni · 13 June 2006
steve_h · 13 June 2006
Andreas Bombe · 13 June 2006
Henry J · 13 June 2006
Andreas,
Re "that grew to one of those long posts nobody ever reads anyway..."
Oops, guess that makes me nobody? ;) Course, being a software engineer myself I found that analysis interesting. If it's computer code, it's not the traditional sequential kind - it's either multiple CPUs or multiple threads on one CPU.
I wonder though if a recipe might be a better analogy than computer code (and never mind that my idea of "cooking" is punching buttons on a microwave), since at least a large part of the function seems to be the adding of "ingredients" when they're called for.
Henry
Caledonian · 14 June 2006
Sir_Toejam · 14 June 2006
Caledonian · 14 June 2006
Erik 12345 · 14 June 2006
RBH · 14 June 2006
jeannot · 14 June 2006
I agree with RBH. Even if an optimum exists, estimates of fitness using that optimum as a parameter are approximations, which may be useful and precise enough in many cases, but may not be applicable in others.
It reminds me of the comparison between Newton's mechanics and relativity.
RBH · 14 June 2006
Erik 12345 · 14 June 2006
Erik 12345 · 14 June 2006
Erik 12345 · 14 June 2006
RBH · 14 June 2006
Caledonian · 14 June 2006
These objections leveled against the example are inane.
IDists frequently claim that because the statistical likelihood of producing a pattern through raw chance is small, that pattern could not have evolved. This simulation does not need to reproduce fitness spaces commonly found in nature to refute that claim, nor does it need to attempt to emulate the development of any organism, nor does it need to utilize randomly generated fitness spaces.
All it has to do is have a defined fitness algorithm and a system for "reproducing" programs, with occasional mutations, to which the fitness algorithm can be applied. It fulfils both of those criteria, and in the process, nicely demonstrates how natural selection can very quickly produce seemingly improbable results.
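Those two ingredients are easy to demonstrate. Below is my own sketch, essentially Dawkins' weasel with the same admitted limitations (a fixed target string and a smooth fitness function), showing a mutation-plus-selection loop finding "Hello World" in a number of generations many orders of magnitude below the roughly 95^11 guesses blind chance would need:

```python
import random

# Toy mutation-plus-selection loop (my sketch, in the spirit of Dawkins'
# weasel): fixed target, smooth fitness function, elitist selection.
# It illustrates cumulative selection, not biology.

TARGET = "Hello World"
ALPHABET = [chr(c) for c in range(32, 127)]  # the 95 printable ASCII characters

def fitness(s):
    """Number of positions where s matches the target string."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy s, replacing each character with a random one with probability rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(pop_size=100, seed=1):
    """Breed mutated copies and keep the fittest until the target appears."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(pop_size)]
        # Keeping the parent in the pool means fitness never goes backwards.
        parent = max(offspring + [parent], key=fitness)
        generations += 1
    return generations

print(evolve())  # generations needed to reach "Hello World"
```

All the standard caveats from the thread apply: the single fixed goal and the smooth fitness landscape are simplifications, but they are exactly the simplifications the raw-chance argument ignores.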
Rilke's Granddaughter · 14 June 2006
Caledonian · 14 June 2006
Rilke's Granddaughter · 14 June 2006
Rilke's Granddaughter · 14 June 2006
Erik 12345 · 14 June 2006
Erik 12345 · 14 June 2006
Rilke's Granddaughter · 14 June 2006
Erik 12345 · 14 June 2006
jeannot · 14 June 2006
Caledonian · 14 June 2006
Rilke's Granddaughter · 14 June 2006
Rilke's Granddaughter · 14 June 2006
Andreas Bombe · 14 June 2006
Erik 12345 · 14 June 2006
Caledonian · 14 June 2006
Henry J · 14 June 2006
Andreas,
Re "the genome is equivalent to a chip."
Except that evolution of the gene pool can rewire that "chip" a whole lot easier than a hardware chip can be rewired. ;)
Come to think of it, I guess some aspects of that "chip" would get rewired during development of the organism, as well.
Henry
Gil Dodgen · 14 June 2006
The fact of the matter remains: Random mutation and natural selection as an explanation for all of life's complexity, functionally integrated machinery, and information content is wishful speculation, unsupported by convincing hard evidence. This should simply be admitted.
steve s · 14 June 2006
Oh, don't worry, Gil. In a week or so, Paul Nelson's going to be presenting Ontogenetic Depth v 2.0 at the Society of Developmental Biology meeting, and I'm sure that will obliterate Darwinism, you know, like the Explanatory Filter did, and the NFL theorems, and your analogies to computers, and Irreducible Complexity, and Sal's plane anecdotes, and the last 400-500 dumb things you guys have said, and Intelligent Evolution will in the future, &c, &c, &c....
Gil Dodgen · 14 June 2006
Dear Steve,
I appreciate your intellectually satisfying refutation of my thesis.
'Rev Dr' Lenny Flank · 14 June 2006
steve s · 14 June 2006
Glad you simply admitted it.
steve s · 14 June 2006
And I look forward to all the analogies I'm sure you'll present in the future, and the concomitant incredulity.
Rilke's Granddaughter · 15 June 2006
David B. Benson · 15 June 2006
I suppose this is old-fashioned of me, but
optimum == the best
hence, 'global optimum' is redundant whilst 'local optima' is at best confusing. For the latter, 'local maxima' is certainly to be preferred.
Rilke's Granddaughter · 15 June 2006
Erik 12345 · 16 June 2006
Erik 12345 · 16 June 2006
Caledonian · 16 June 2006
I believe what he's trying to say is that any perceived difference between the model and real life, no matter how trivial or irrelevant, will be seized upon as rhetorical evidence that evolutionary theory is false.
Erik 12345 · 16 June 2006
Aureola Nominee, FCD · 16 June 2006
I apologize in advance for what will surely turn out to be a totally clueless question:
since in reality organisms do not appear to be "assessed for fitness" by reference to an abstract Platonic ideal, isn't there something fundamentally wrong with doing so in modelling?
Erik 12345 · 16 June 2006
Aureola Nominee, FCD · 16 June 2006
Thank you, Erik.
First, I don't see those two equations as different; I see them as the same equation, written in two different ways. Therefore they both suffer from the same fundamental flaw (if it is a flaw), or neither does (if it isn't).
Second, my point is that "reproductive fitness", in my layman's eyes, does not depend on how close or how far a given organism is from a theoretical optimum, because it is relative, not absolute.
In other words, if I have a population of organisms, the reproductive success of each individual depends on how much better or worse than the others it is, not on how much worse than the local optimum it is.
So, for instance, if I take the function you mention
w(z) = 1 - (z - 0.17)^2
and use it to calculate a relative fitness
w(z1) - w(z2) = (z2 - 0.17)^2 - (z1 - 0.17)^2
I obtain
w(z1) - w(z2) = (z2)^2 - (z1)^2 - (0.34 * (z2 - z1))
which seems to me to be a very different kettle of fish!
As I said, I do not presume to correct people who have devoted their careers to this stuff; but I would really like to understand why, instead of using relative fitness (which would avoid the whole problem of "comparing to a non-existing ideal"), we seem to be using absolute fitness. Where is my mistake?
P.S. Your remark on modelling trajectories seems to me not to address this aspect.
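[A quick numeric check of the expansion in the comment above - my own, added only to confirm the algebra: the expanded form of w(z1) - w(z2) agrees with the difference of the squared terms.]

```python
# Verify that, for w(z) = 1 - (z - 0.17)^2,
#   w(z1) - w(z2) = (z2 - 0.17)^2 - (z1 - 0.17)^2
#                 = z2^2 - z1^2 - 0.34*(z2 - z1)
# holds numerically at a few sample points.

def w(z):
    return 1 - (z - 0.17) ** 2

for z1, z2 in [(0.0, 1.0), (0.3, 0.17), (-2.5, 4.2)]:
    direct   = w(z1) - w(z2)
    expanded = z2**2 - z1**2 - 0.34 * (z2 - z1)
    assert abs(direct - expanded) < 1e-12
print("algebra checks out")
```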
jeannot · 16 June 2006
Erik 12345 · 16 June 2006
'Rev Dr' Lenny Flank · 16 June 2006
Mathematical equations . . . now my head hurts. Owwwwwwwwwww.
(grin)
Sorry, but I've always been mathematically-challenged. It's why I was an English major and not a science major.
Aureola Nominee, FCD · 16 June 2006
Erik:
My point is precisely that using the difference is not equivalent to using the absolute value. I'm not entirely convinced that the effects of modelling absolute fitness vs. relative fitness are negligible; however, not being a professional in this field, I'll defer to expert opinion.
Erik 12345 · 17 June 2006