Evolution of Signaling in Artificial Agents
Some time ago I wrote about the evolution of novel strategies for cooperation in computer models of evolutionary processes involving artificial agents with very rudimentary sensory-motor capabilities. Now another such study has appeared, showing the evolution of meaningful signaling among artificial agents. I was in the process of writing a PT post on it when Carl Zimmer beat me to it. So I'll only say that, starting from scratch (random neural nets), robots that could sense their environment, move, and emit light themselves evolved in a 'field' containing a food source and a poison source, both of which also emitted light. Under those conditions the robots evolved to signal either the location of the food or the location of the poison. Especially in populations composed of 'kin' -- genetically related robots -- the evolution of signaling resulted in substantially more efficient food gathering and poison avoidance.
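The kin effect described above can be illustrated with a toy group-selection model. This is a minimal sketch, not the authors' code: the group structure, fitness numbers, and mutation rate are invented for illustration. A costly "signal" gene spreads when groupmates are close relatives, because the benefit falls on carriers of the same gene.

```python
import random

random.seed(0)

N_GROUPS, GROUP_SIZE, GENS = 30, 5, 300
COST, BENEFIT, MUT = 0.1, 0.5, 0.02   # illustrative parameters, not from the paper

def group_fitness(genes):
    """Mean fitness of one kin group: signalling is costly to the signaller
    but guides every groupmate to the food."""
    fit = [1.0] * len(genes)
    for i, signals in enumerate(genes):
        if signals:
            fit[i] -= COST
            for j in range(len(genes)):
                if j != i:
                    fit[j] += BENEFIT
    return sum(fit) / len(fit)

# Each group is founded by one robot; groupmates are near-clones (kin).
founders = [False] * N_GROUPS
founders[0] = True                    # one rare signalling mutant to start
for _ in range(GENS):
    groups = [[g if random.random() > MUT else not g for _ in range(GROUP_SIZE)]
              for g in founders]
    weights = [group_fitness(g) for g in groups]
    # Next generation's founders are drawn from fitter groups.
    founders = [random.choice(random.choices(groups, weights=weights)[0])
                for _ in range(N_GROUPS)]

signal_freq = sum(founders) / N_GROUPS
print(signal_freq)   # frequency of the signalling gene after selection
```

With kin groups the costly signalling gene reaches high frequency; scramble the groups so members are unrelated and the same gene is selected against, which is the pattern the paper reports.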
Zimmer's post is here and the original paper is here. Read and enjoy.
RBH
85 Comments
paul flocken · 25 February 2007
realpc · 25 February 2007
Fitness may increase, but I doubt there will be any increases in complexity or information. Depending on how you define complexity and information, of course.
Genetic algorithms have been around a long time, as arguments for NDE. Now, many decades later, they are still primitive.
I am not saying it means nothing. Just hold off on congratulating yourselves on knowing all about evolution.
caerbannog · 25 February 2007
Fitness may increase, but I doubt there will be any increases in complexity or information. Depending on how you define complexity and information, of course.
So you are basically acknowledging that evolution can take place without increases in complexity or information. Depending on how you define complexity and information, of course.
Glad you cleared things up.
Sir_Toejam · 25 February 2007
Sir_Toejam · 25 February 2007
in case anybody is wondering where this realpc troll came from, just do a google search on "RealPC evolution" to see where he usually posts and some of the "gems" he's put up.
hilarious.
stevaroni · 25 February 2007
PvM · 25 February 2007
By any measure, information and complexity increased; however, IDers are quick to point out that 'they don't know' and likely won't tell.
Why is it that ID revels in ignorance time after time when confronted with real science?
Sir_Toejam · 25 February 2007
Henry J · 25 February 2007
Re "Under those conditions the robots evolved to signal either the location of the food or the location of the poison."
"Danger, Will Robinson!"
Henry
realpc2 · 26 February 2007
While a great number of things are explained by reductive theory (e.g. galaxies, solar systems, etc.), biology is somewhat novel. Is there a reductive theory for the natural evolution of intelligence? Is it a fluke, or is there a guiding being that led to this particular outcome?
Evolution might be acceptable, but without guidance, would it lead to the forms we have now? Is the general notion of complexity -- especially that of complex forms -- yet addressable by science? For a materialist there is a lot of research potential; for a spiritualist there are a lot of gaps for god.
I have no agenda, but I do think that all the realpc's out there are tipping over the same problem: how does simplistic in/organic chemistry result in an average animal that could write what I have written?
M
KL · 26 February 2007
realpc, you might stay on previous threads and answer questions posed to you before moving to a new thread. Folks might get the impression that you are dodging those questions.
djlactin · 26 February 2007
Flint · 26 February 2007
paul flocken · 26 February 2007
realpc · 26 February 2007
"Dawkins Weasel" programs, for instance, annoyingly manage to create computer programs that write Shakespearean quotes all by themselves.
What exactly is that? Are you citing a computer trick as evidence for NDE?
Flint · 26 February 2007
GuyeFaux · 26 February 2007
GuyeFaux · 26 February 2007
And now there are three threads where Realpc has failed to back up his assertions and answer his critics.
PvM · 26 February 2007
Raging Bee · 26 February 2007
Depending on how you define complexity and information, of course.
Well, that seems a rather crucial thing to leave out of an argument based on "information theory," doesn't it?
Do you have a definition of these terms? If so, you have yet to describe it here, or stand by it. If not, you have no argument.
Just because the "conservation of energy" theory has repeatedly proven "true," does not mean you can simply replace "energy" with "information" and still have a true theory.
djlactin · 26 February 2007
PvM · 26 February 2007
MarkP · 26 February 2007
David vun Kannon · 26 February 2007
Picking nits on Stevaroni's comment 162812 -
Weasel programs write strings, not programs. In this sense, they are GA, not GP.
Saying GA/GP successfully create complex information "out of nothing without outside information" is an invitation to a front loading argument.
What makes this experiment a much stronger argument is that the factor of co-evolution is explicit. The typical GA example (of the Weasel type) does not make the phenotypes of the population part of the fitness function.
GuyeFaux · 26 February 2007
realpc · 26 February 2007
The program is a vivid demonstration that the preservation of small changes in an evolving string of characters (or genes) can produce meaningful combinations
(Dawkins)
No, Dawkins' program did not produce meaningful combinations. The original string was "random," or meaningless, and the final string just happened to mean something within our culture.
It could have gone from
METHINKS IT IS LIKE A WEASEL
to
WDLMNLT DTJBKWIRZREZLMQCO P
with exactly the same kind of process. The program doesn't know the difference.
Yes, I understand Dawkins' point -- that shuffling plus selection works much faster than shuffling alone. Well of course, what else would we expect?
But Dawkins, probably without realizing it, makes a great unjustified leap by saying the program has produced something meaningful.
If there were no target string, and the program spontaneously generated "Hey Dawkins, you're my creator!" for example, then I would have to acknowledge the accidental generation of meaning.
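The cumulative-selection procedure under debate here is easy to state precisely. Below is a minimal Weasel-style sketch; the 5% mutation rate and 100 offspring per generation are conventional illustrative choices, not Dawkins' exact parameters, and keeping the parent among the candidates is a small variation that guarantees monotone progress.

```python
import random
import string

random.seed(42)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(s):
    """Number of characters matching the target string."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy the string, randomizing each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
generations = 0
while parent != TARGET:
    generations += 1
    # Cumulative selection: keep the best of the parent plus 100 mutant copies.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)

print(generations, parent)
```

Selection reaches the target in a modest number of generations, whereas unselected shuffling would essentially never hit a 28-character string; both sides in this exchange agree on that point, and disagree only about whether the result counts as "meaningful."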
Carl Rennie · 26 February 2007
MarkP · 26 February 2007
Sir_Toejam · 26 February 2007
amazing.
is there any thread on this board where RealPC hasn't managed to show his utter lack of knowledge of the subject matter?
AFAICT, he's batting 1000.
anyone who can point to where anything he has said actually exhibits even basic level understanding of the material?
really, am I missing something, or is this person just about as pure a troll as one can get?
stevaroni · 26 February 2007
realpc · 27 February 2007
nonintelligent entities can produce intelligent results, such as solving problems that in some cases the programmers themselves didn't know the answer to
Of course a program can give me an answer I didn't know -- there would be no reason to write a program if it couldn't!
But I have to give it the algorithms, or algorithms to generate the algorithms, etc.
stevaroni,
So Dawkins improved his little program by creating an algorithm-generator. And you are madly impressed.
I don't know how much you know about low-level computer languages (assembly language), but it's jaw dropping.
What is "jaw dropping" about low-level computer languages? What does that have to do with evolution?
Flint · 27 February 2007
realpc · 27 February 2007
the routines can organize themselves into amazingly complex and creative programs.
No, nothing described here has been amazingly complex. And I studied artificial intelligence, and never heard of anything convincing. Designers are always required, ultimately, and machine intelligence is always very limited.
I think this is yet one more example of people seeing what they prefer in ambiguous data. AI has always been used as an important argument for scientific materialism. Look at all the disappointing dead ends AI has traveled.
I think AI has turned out to be one more reminder of human limitations and our lack of real understanding. Our technology is great and, at least to us, amazing. But our understanding of nature is minimal. The more we learn about it, the more confusing it all becomes. Look at string theory.
Flint · 27 February 2007
Raging Bee · 27 February 2007
But our understanding of nature is minimal. The more we learn about it, the more confusing it all becomes.
Speak for yourself, pal. Just because you can't handle the complex reality, doesn't mean no one else can either. Also, if you don't understand "nature" yourself, you're not in any position to judge anyone else's understanding of it.
Look at string theory.
Some of us already are. Yes, it's mind-boggling, but one reason for that is that it's a very new theory, and has a long way to go before scientists even agree on what it is, exactly, let alone how to test it. Most new theories begin that way.
KL · 27 February 2007
And NO ONE is asking that we teach string theory in high school.
GuyeFaux · 27 February 2007
GuyeFaux · 27 February 2007
realpc · 27 February 2007
Evolution clearly happens.
Ok, this makes about the billionth time I have said it here -- I BELIEVE IN EVOLUTION.
A couple more times:
I BELIEVE IN EVOLUTION.
I BELIEVE IN EVOLUTION.
try playing your computer in chess.
There are lots of things my computer does better than I can. That is why I have a computer. But it only does work that can be automated (broken down into explicit algorithms and programmed). It has no autonomous intelligence.
A chess program follows the rules it was given. It can out-perform humans because it does certain kinds of operations much faster than we can. Not because it's smarter.
Like any machine, a computer is something we design for a purpose. Machines are superior to us in many ways, but they are still our creations and they have no independent existence or consciousness (and they never will).
And by the way I studied experimental psychology as a branch of cognitive science. I'm a computer scientist, not a shrink.
GuyeFaux · 27 February 2007
GuyeFaux · 27 February 2007
realpc · 27 February 2007
connectionists in cognitive science use the same types of models.
Oh yes, connectionism was supposed to be the answer to how the brain works. AI researchers are worried that soon the machines will be smarter than we are and become our masters. How long have they been worrying about that?
I am not losing sleep over that scenario.
GuyeFaux · 27 February 2007
Flint · 27 February 2007
realpc · 27 February 2007
algorithms entailing positive feedback mechanisms (such as selection) can be creative
I don't know how you define "creative" and I don't want to get tangled in that debate. But certainly, hardly anyone would describe the Dawkins trick as an example of creativity. Or any similar genetic programs. The goal is programmed, and guidance toward the goal is, if not programmed, at least laid out pretty clearly.
As I said, we do not have to start worrying, at least not yet, about these things getting out of control and taking over the world.
And by the way, I am not kidding. Some well-known person in the computer industry expressed that concern very recently. Oh yeah, it was Bill Joy (thank you google).
GuyeFaux · 27 February 2007
David vun Kannon · 27 February 2007
At the risk of running my own little sidebar conversation with stevaroni...
I wouldn't call Weasel generators a good example of the process of evolution. Again, their fitness function doesn't include (the phenotypes of) the other members of the population.
On the Crepeau paper, and the long ramble last year that you cite, the paper isn't a major contribution to the "peer reviewed literature". (3 citations within 1 year, by one author) The author conflates several claims about the use of large, generic instruction sets in GP with claims about the use of memory in GP. He then undercuts his own argument that generic instruction sets are useful by biasing the instruction set in favor of the test problem. There is only one test problem, which makes drawing a conclusion about his thesis difficult, at best.
Further, it wasn't quite apropos of the original question posed by GilDodgen, a point not well appreciated in the subsequent discussion.
It's a pity, because there is a possibility of doing real science in comparing the difficulty of evolving a "Hello, World" generator in machine language with the difficulty of evolving a "Hello, World" generator in C. Crepeau gives us one data point (for Z80 machine language). GilD throws down the gauntlet, but doesn't do the experiment himself. Neither do any evo proponents; they just assume it's possible based on Crepeau's data point.
Perhaps I'm giving GilDodgen too much credit, but to me it would be interesting to know that linear strings of machine instructions take effort X, regular expressions take effort Y, and BNF specified programs take effort Z, to evolve. You could then make predictions like "It is unlikely that DNA has certain features (statement boundaries, block structure, function invocation and return) because these features are very hard to evolve."
ID proponents sometimes yammer about DNA being a language, but if it is any more complex than
S = (G|A|T|C)*
I haven't read about it.
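The one-line "grammar" above describes a regular language, so a regex engine can check membership directly. A trivial sketch (the example strings are mine):

```python
import re

# vun Kannon's S = (G|A|T|C)* : any string over {G, A, T, C}, including empty
dna = re.compile(r"^[GATC]*$")

print(bool(dna.match("GATTACA")))   # every character is a valid base
print(bool(dna.match("HELLO")))     # contains characters outside {G, A, T, C}
```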
Flint · 27 February 2007
realpc · 27 February 2007
They're [the Dawkins programs] examples, at least, of information gain, which you claimed couldn't happen.
What information is gained? Random shuffling of characters, guided by artificial selection towards a pre-determined goal.
It is hard to agree on a definition of "information," though. We generally mean something that "informs" us, tells us some news, makes us know something we did not know before.
Biological evolution has shown a great increase in information. New forms and functions grow out of previously-existing forms and functions. The same with human cultural evolution.
GuyeFaux · 27 February 2007
Flint · 27 February 2007
MarkP · 27 February 2007
GuyeFaux · 27 February 2007
MarkP · 27 February 2007
realpc · 27 February 2007
What are you talking about when you say "information"?
It can be defined in different ways, none of them perfect.
A message contains no information if it is entirely predictable, or if it's entirely unpredictable (random). So one way to define information is as a message where the probability is somewhere between 1 and 0.
But this can be interpreted in various ways. The probability of a coin landing heads is between 1 and 0, but the probability of landing heads OR tails is 1.
Sometimes the set of possibilities is defined, and sometimes it isn't. When I turn on the news I can predict it to some extent -- I know there will be something about Anna Nicole and Britney -- but there is always a chance of something surprising.
The surprises generated by computer programs are, as far as I know, more like coin-tossing than like news. There is a set of expected possibilities -- we know with certainty the result will be from that set, we just don't know which one it will be.
We have never seen human-like creativity in a computer program. They never make jokes, for example.
And life-like programs have been around for half a century. So it doesn't seem like a breakthrough is around the corner.
MarkP · 27 February 2007
GuyeFaux · 27 February 2007
GuyeFaux · 27 February 2007
And if by "creativity" you mean a novel approach to solve problems, there are many examples. The paper I referenced above, "Evolution of lambda-expressions through Genetic Programming", described an experiment that solves a problem Church himself could not solve. Also, the solutions are completely novel.
MarkP · 27 February 2007
Here's a great article about some evolutionary algorithms in robots:
http://scienceblogs.com/loom/2007/02/24/evolving_robotspeak.php
The robots, in the total absence of any programming telling them to do so, not only developed languages, but developed different languages. They are certainly simple languages, granted, but the fact that they created them at all is the death knell of all this "design implies a designer", "unintelligence cannot produce intelligence", "the target is frontloaded" nonsense. It's been refuted, empirically, in the only way creationists have said they would accept: right in front of their primate faces.
It's reality. Deal.
stevaroni · 27 February 2007
stevaroni · 27 February 2007
realpc · 28 February 2007
What's information? Creativity? Give a definition that actually means something.
They are philosophical concepts that cannot be defined precisely. Everyone thinks the meanings are obvious, but they cannot state them logically.
There are things we can do, many others we cannot. Giving precise logical definitions of words like creativity, information, love, evolution, for example, is beyond our ability.
Flint · 28 February 2007
GuyeFaux · 28 February 2007
realpc · 28 February 2007
while still insisting that evolution is not creative and adds no information
I believe that evolution is creative and adds information. I think I have stated this repeatedly.
realpc · 28 February 2007
Ever heard of information theory?
We can define information within a restricted context. But when talking about things like biological or cultural evolution, the context is unrestricted.
Glen Davidson · 28 February 2007
GuyeFaux · 28 February 2007
MarkP · 28 February 2007
realpc · 28 February 2007
As information increases, entropy decreases. But you have to decide what you mean by entropy.
If high entropy means low predictability (high randomness), then high information would mean high predictability. Which of course it doesn't.
In information theory, the quantity of information carried by a message increases as the set of possible states selected from by the message increases.
A coin toss has only two possible outcomes. If the coin evolves so that it has three sides instead of two, then maybe we can say the complexity of the system has increased.
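Shannon's standard measure makes the two- versus three-sided coin comparison concrete: the entropy of a fair source with n outcomes is log2(n) bits, and a perfectly predictable message carries zero bits. A minimal sketch:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))          # fair two-sided coin: 1 bit
print(entropy([1/3, 1/3, 1/3]))     # fair three-sided coin: log2(3) bits
print(entropy([1.0]))               # fully predictable outcome: 0 bits
```

By this measure the three-sided coin does carry more information per toss than the two-sided one, which is the sense in which adding a side "increases complexity."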
realpc · 28 February 2007
But measuring complexity really has to be subjective. If I compare a fruit fly to a zebra, for example, I feel that the zebra is much more complex. But how do you quantify that?
realpc · 28 February 2007
Ignoring the insults, what I am trying to express is the idea (maybe hard to grasp for someone as literal-minded as MarkP) that it can be hard to quantify information in complex contexts. Human language and culture, or biological evolution, for example.
You are claiming simple computer programs can lead to an increase in information, and assuming we can apply the same measures to real life. How do we measure the complexity of a species, so as to compare it to others?
Raging Bee · 28 February 2007
...what I am trying to express is the idea...that it can be hard to quantify information in complex contexts.
You've expressed this idea quite well, and repeatedly, thank you. And we're quite aware of this fact. And we've been trying to express to you that your argument is completely invalid unless, and until, you manage to define and quantify "information." Bleating about how "hard" that is doesn't reinforce any of your arguments; it only makes you sound like one of those talking Barbie dolls.
Just let us know when you manage it, okay? Then -- and only then -- we can do the math and see whether this argument of yours has any substance. Until then, it's nothing but brown air.
MarkP · 28 February 2007
Science is as literal-minded as it gets. Want to be flowery and vague, the poetry room is down the hall.
Glen Davidson · 28 February 2007
realpc · 28 February 2007
Raging Bee · 28 February 2007
Progress in science depends on intuition and philosophical speculation.
Progress in science consists ENTIRELY of, and is measured by, literal, physical, measurable, repeatable results. "Intuition and philosophical speculation" are worthwhile only insofar as they lead to such results. No literal results, no progress.
...In a way, that means I'm an agnostic.
I thought you said you were Catholic. Of course, since you completely ignore the Catholic Church's well-reasoned position on evolution, and science in general, that previous statement was probably meaningless.
I would prefer to say that Darwinism (or neo-Darwinism, whatever you prefer) has not been demonstrated, and can never be demonstrated. It is neither falsifiable nor verifiable.
What you "prefer to say" is irrelevant. Your statement is just plain false, and the fact that you make it proves that you have no clue what you are talking about. Stop pretending that pseudo-mystical know-nothingism is an enlightened position, and go back to bed.
GuyeFaux · 28 February 2007
Sir_Toejam · 28 February 2007
realpc · 28 February 2007
Sir_Toejam · 28 February 2007
MarkP · 28 February 2007
Sir_Toejam · 28 February 2007
Flint · 28 February 2007
MarkP · 28 February 2007
I stand corrected.