Welcome to Weevolution. The applet running here is a demonstration of what I hope you will be able to construct in Weevolution at a later date. For now, there will be a series of applets that demonstrate different principles of learning, behavior, and evolution. Java 1.1 is required to run these models. This means you should be running Netscape 4.05 or better on PC, or Internet Explorer 4.01 or better on Mac or PC. The instructions below are very brief. For more information or for troubleshooting, please email me at email@example.com.
In this model, the weevols start out as different shades of green and blue, a genetic trait encoded by a green variable (0-255) and a blue variable (0-255). They live in a mixed environment of light green (125 on the green scale) and light blue (125 on the blue scale) patches. Weevols are more likely to gain energy (food) on patches that are close to their own color, but more likely to suffer damage (from predators) on patches that differ from their color. How strictly these rules hold is controlled by the two tolerance sliders. As you move a slider toward the max (right) end, weevols must be very close to the patch color to gain food or avoid injury. At the opposite end of the sliders, the probability of finding food or getting injured is more equitable across colors. Weevols have an energy state and a damage state. When their energy rises above 25 they can give birth to a new weevol, passing on their genetic traits with a small probability of mutation. If their energy drops below 0 they die; similarly, if their damage exceeds 10 they die.
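To make the patch rules concrete, here is a minimal sketch in Java (the language the applets run in). The method names and the linear fall-off formula are my own assumptions for illustration; the applet's actual code may compute these probabilities differently.

```java
// Hypothetical sketch of the food/injury rules described above.
// The linear formulas and all names are assumptions, not the applet's code.
public class PatchRules {
    // Distance between the weevol's colour genes and the patch colour,
    // scaled to 0..1. A light green patch is (125, 0); light blue is (0, 125).
    static double colorDistance(int green, int blue, int patchGreen, int patchBlue) {
        double dg = (green - patchGreen) / 255.0;
        double db = (blue - patchBlue) / 255.0;
        return Math.min(1.0, Math.sqrt(dg * dg + db * db));
    }

    // Probability of finding food: 1.0 on a perfect colour match, falling
    // off more steeply as the tolerance slider moves toward the strict
    // (right) end; near tolerance 0, all colours fare about equally well.
    static double foodProbability(double distance, double tolerance) {
        return Math.max(0.0, 1.0 - distance * tolerance);
    }

    // Probability of injury rises with colour distance, scaled the same way.
    static double injuryProbability(double distance, double tolerance) {
        return Math.min(1.0, distance * tolerance);
    }
}
```

Under this sketch, a weevol sitting on a patch that exactly matches its own color always eats and never gets hurt when tolerance is at the maximum, while at low tolerance the outcome probabilities flatten out across colors.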
The weevols constantly rotate 90 degrees clockwise and see only the patch directly in front of them. At each step they must simply decide whether to rotate again or move forward. To help them reach this decision, they are equipped with a form of evolutionary reinforcement learning (see Ackley and Littman). That is, the weevols keep track of their state (energy and damage) from step to step and evaluate whether they are better off at time t+1 than they were at time t. If they are better off, they reinforce (make more probable) the behavior they took during that time step. Their sensory input and behavioral repertoire are quite limited: they can see only whether the patch in front of them is green or blue, and can only elect to move or not move. Consequently, the behaviors they reinforce are "move forward when I see blue" and "move forward when I see green", each of which has an associated probability.
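The learning step above can be sketched as follows. This is a guess at one plausible update rule, not the applet's actual implementation: the fixed learning rate, the clamping, and the way "stay" is reinforced by pushing the move probability the other way are all assumptions. The two weighting constants stand in for the genetically encoded energy and damage values described in the next paragraph.

```java
// Hypothetical sketch of the evolutionary reinforcement learning step.
// LEARNING_RATE and the update rule are assumptions for illustration.
public class LearningSketch {
    double pMoveOnGreen = 0.5; // probability of moving when a green patch is seen
    double pMoveOnBlue = 0.5;  // probability of moving when a blue patch is seen
    static final double LEARNING_RATE = 0.1;

    // How good a state feels; the weights would come from the weevol's
    // genetically encoded energy and damage valuation genes.
    static double stateValue(double energy, double damage,
                             double energyWeight, double damageWeight) {
        return energyWeight * energy - damageWeight * damage;
    }

    // After each step, compare the state value at t+1 with the value at t
    // and nudge the probability of the behavior that was actually taken.
    void reinforce(boolean sawGreen, boolean moved,
                   double valueBefore, double valueAfter) {
        double delta = (valueAfter > valueBefore) ? LEARNING_RATE : -LEARNING_RATE;
        double change = moved ? delta : -delta; // reinforcing "stay" pushes the other way
        if (sawGreen) pMoveOnGreen = clamp(pMoveOnGreen + change);
        else          pMoveOnBlue  = clamp(pMoveOnBlue + change);
    }

    static double clamp(double p) { return Math.min(1.0, Math.max(0.0, p)); }
}
```

For example, a weevol that moved onto a green patch and found its state improved would raise its move-on-green probability slightly; one that moved and got hurt would lower it.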
The catch here is that the weevols need to evolve the ability to know that damage is bad and energy is good. They are given two genetically encoded traits that indicate how much they value energy and damage. The average values of these genes are shown on the two bar graphs labeled energy and damage. The other graph indicates how closely the weevols are matching their environmental colors, with zero indicating a perfect match and 100 indicating random colors and movement.
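Reproduction with mutation, as described above, might look something like this sketch. The mutation rate, the size of the perturbations, and all names here are assumptions; the applet's actual values are unknown to me.

```java
import java.util.Random;

// Hypothetical sketch of a weevol's heritable genes: two colour genes and
// the two valuation genes shown on the energy and damage bar graphs.
public class Genome {
    int green, blue;                   // colour genes, 0-255
    double energyWeight, damageWeight; // how much energy/damage are valued

    Genome(int g, int b, double we, double wd) {
        green = g; blue = b; energyWeight = we; damageWeight = wd;
    }

    // A child inherits each gene, each with a small chance of mutation.
    // Perturbation sizes are arbitrary choices for this sketch.
    Genome reproduce(Random rng, double mutationRate) {
        Genome child = new Genome(green, blue, energyWeight, damageWeight);
        if (rng.nextDouble() < mutationRate)
            child.green = clampColor(green + rng.nextInt(21) - 10);
        if (rng.nextDouble() < mutationRate)
            child.blue = clampColor(blue + rng.nextInt(21) - 10);
        if (rng.nextDouble() < mutationRate)
            child.energyWeight += rng.nextGaussian() * 0.1;
        if (rng.nextDouble() < mutationRate)
            child.damageWeight += rng.nextGaussian() * 0.1;
        return child;
    }

    static int clampColor(int c) { return Math.min(255, Math.max(0, c)); }
}
```

Because the valuation genes are inherited along with the colors, lineages whose weightings lead them to good patches reproduce more often, which is how "energy is good, damage is bad" can evolve without being hard-wired.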
If you would like to track the behavior of an individual weevol, stop the simulation and double-click on a weevol. You will then see that weevol and its environment. The text boxes on the right show the probability of moving onto green and blue patches (upper set of text boxes), the values of the damage and energy weighting genes (middle set of text boxes), and the current values of energy and damage (lower set of text boxes). The Reinforcement box turns green if the weevol believes it has received positive reinforcement, and red if it has received negative reinforcement. Finally, you can double-click on the weevol to zoom back out.
That should be enough to get you started. Please write if you have feedback.