
Nice. Interested to see where this leads.

The network in the article doesn't have explicit layers. It's a graph initialised with a completely random connectivity matrix. The inputs and outputs are also wired randomly in the beginning: an input could be connected to a neuron that is also connected to an output, for example, or to a neuron that has no post-synaptic neurons at all.

It was the job of the optimisation algorithm to figure out the graph topology over training.
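
Schematically, the initialisation might look something like this (a toy Python sketch, not the article's actual code; all names and sizes are made up): a dense weight matrix with a random sparsity mask, and inputs/outputs wired to arbitrary neurons.

    import numpy as np

    rng = np.random.default_rng(0)
    N_NEURONS, N_IN, N_OUT = 64, 8, 4
    DENSITY = 0.1  # fraction of possible synapses present at init

    # Random connectivity: W[i, j] is the weight of synapse j -> i.
    # No layer structure is imposed; any neuron may feed any other.
    mask = rng.random((N_NEURONS, N_NEURONS)) < DENSITY
    W = np.where(mask, rng.normal(0.0, 1.0, (N_NEURONS, N_NEURONS)), 0.0)

    # Inputs and outputs land on arbitrary neurons, so an input can hit
    # a neuron that is also read as an output, or one with no
    # post-synaptic neurons at all.
    input_idx = rng.choice(N_NEURONS, size=N_IN, replace=False)
    output_idx = rng.choice(N_NEURONS, size=N_OUT, replace=False)

    def step(state, x):
        """One synchronous update of the whole graph."""
        state = state.copy()
        state[input_idx] += x            # inject external inputs
        state = np.tanh(W @ state)       # propagate along random edges
        return state, state[output_idx]  # read outputs wherever they are

The optimiser's job is then to reshape the mask and weights, rather than to tune a fixed layered architecture.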



I did a similar project previously and had what I considered "good" results (creatures that effectively controlled their bodies to get food), but not the kind of advanced brains I had naively hoped for.

The networks were really configurable: number of layers, number of "sections" within a layer (a section being a semi-independent chunk), number of neurons and synapses, types of neurons and synapses, amount of recurrence, etc. But I tended to steer the GA stuff in directions that I saw tended to work. These were some of my findings:

1-Feed-forward tended to work better than heavily recurrent. Many times I would see a little recurrence in the best brains, but that might just have been because, given the mutation percentages, it was difficult to get a brain that didn't have any of it.

2-The best brains tended to have between 6 and 10 layers, and the middle layers tended to be small, as if information was being consolidated before fanning out to the motor control neurons.

3-Activation functions: I let it randomly choose per neuron, per section of a layer, per layer, or per brain, etc. I was surprised that binary step frequently won out over things like sigmoid.
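
The random choice can be as simple as sampling from a table of candidates. A simplified sketch (not my actual code, names are illustrative):

    import math
    import random

    rng = random.Random(0)

    # Candidate activations; binary step is the one that kept winning
    # out over smoother choices in my runs.
    ACTIVATIONS = {
        "step":    lambda x: 1.0 if x > 0 else 0.0,
        "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
        "tanh":    math.tanh,
        "relu":    lambda x: max(0.0, x),
    }

    def assign_activations(n_neurons, granularity="neuron"):
        """Pick one activation name per neuron, or one for the whole
        brain. Per-section and per-layer work the same way."""
        names = list(ACTIVATIONS)
        if granularity == "brain":
            return [rng.choice(names)] * n_neurons
        if granularity == "neuron":
            return [rng.choice(names) for _ in range(n_neurons)]
        raise ValueError(granularity)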


Were the brains event-driven? How did you implement the GA? What did individual genes encode?


This was the setup:

1-Creature shape: hexagon with a mouth/proboscis perpendicular to one side of the hexagon

2-Senses:

2.1-Mouth: could detect if it was touching plant food or another creature (which is also food), and would transfer energy from the food source while touching it

2.2-Sight:

Two eyes each sent out 16 lidar-like rays (spread out a bit)

Eye neurons triggered based on the distance and type of the object (wall, plant, hazard, creature); see the ray-casting sketch after this list

2.3-Touch:

Each body segment had 32 positions for touch

Each position had neurons for detecting different types of things: wall, plant, hazard, creature, sound

2.4-Sound:

Each creature emitted sound waves (slower than the light rays but faster than creature movement)

Sound detected by touch senses

3-Motor: multiple neurons controlled forward/backward movement and left/right rotation.

4-Brain:

Layer 1=all sense neurons plus creature state (e.g. energy level)

Layers 2-N=Randomly created, connected, and evolved

Final Layer=motor

5-World:

A simple 2D space with randomly placed items

Walls+obstructions blocked movement

Hazards sucked energy from creatures

Plants added energy to a creature if its mouth touched the plant

Between 20 and 50 creatures
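
To give a flavour of the sight encoding, something roughly like this (a simplified sketch, not my actual code; cast_ray stands in for the world's collision logic):

    import math

    OBJECT_TYPES = ["wall", "plant", "hazard", "creature"]
    N_RAYS, FOV, MAX_DIST = 16, math.radians(60), 100.0

    def eye_activations(eye_pos, heading, cast_ray):
        """One activation per (ray, object type): 16 * 4 values per eye.
        cast_ray(pos, angle) -> (distance, obj_type) is a stand-in for
        the world's collision code."""
        acts = []
        for i in range(N_RAYS):
            # Spread the rays across the eye's field of view.
            angle = heading - FOV / 2 + FOV * i / (N_RAYS - 1)
            dist, obj = cast_ray(eye_pos, angle)
            for t in OBJECT_TYPES:
                # A neuron fires harder the closer "its" object type is.
                acts.append(max(0.0, 1.0 - dist / MAX_DIST) if obj == t else 0.0)
        return acts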

> Were the brains event-driven?

It was a sequential flow, as follows (see the sketch after the steps):

1-Move creatures based on motor neurons and perform collision detection

2-Set sensory neurons in layer 1 based on current state (e.g. is the mouth touching a plant, eye ray detections, creatures' bodies touching, etc.)

3-Calculate the next state of the brain in a feed-forward fashion through the layers. For recurrence this means that, for example, a layer 2 neuron receiving input from a layer 6 neuron uses the value calculated in the previous cycle.

goto step 1
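
In simplified code, one tick looked conceptually like this (world.move, world.sense, and brain.compute_layer are stand-ins for the real physics and propagation code):

    def tick(world, creatures):
        # 1. Apply last cycle's motor outputs, with collision detection.
        for c in creatures:
            world.move(c, c.layers[-1])
        for c in creatures:
            # Keep last cycle's values around for recurrent synapses.
            prev = [layer.copy() for layer in c.layers]
            # 2. Overwrite layer 1 with fresh sensory readings.
            c.layers[0] = world.sense(c)
            # 3. Feed forward; forward synapses see this cycle's values,
            #    while a backward synapse (e.g. layer 6 -> layer 2) reads
            #    the layer 6 value from prev, i.e. the previous cycle.
            for i in range(1, len(c.layers)):
                c.layers[i] = c.brain.compute_layer(i, c.layers, prev)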

> How did you implement the GA? What did individual genes encode?

I did not have DNA or genes that drove the structure of the NN. I played with many ideas for a long time, but nothing seemed to be able to encode a higher-level capability without dependence on a very specific NN circuit structure. I looked at various ideas from other people, like NEAT from U Texas, but I never found anything that I felt worked at the abstraction level I was hoping for when I started. It's a really fun, interesting, and challenging problem; I really wonder how nature does it.

I ended up creating an "evolution control" object that had many parameters at many different levels (entire network, specific layers, specific sections, etc.) that would guide (somewhat control, but mixed with randomness) the initial structure of the brains (layers, sections per layer, connectivity, etc.) and also the extent to which it could change each generation.

Example of config parameters:

"Chance of Changing A Neurons Activation Function=3%"

"Types of Activation Functions Available for Layer 2, Section 3=..."

After each generation, the creatures were ranked, and depending on how well or poorly they did, each was assigned a level of change for the next generation.

The level of change drove how much of the NN was eligible to possibly change (e.g. 10% of layers, sections, neurons, synapses, etc.)

The evolution control object drove how likely different types of changes were (e.g. 3% chance to switch activation function) and the magnitude of changes (e.g. up to 20% change in a synapse weight).
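
Put together, a single mutation pass looked conceptually like this (a simplified sketch with made-up names like brain.neurons and syn.weight, not my actual code):

    import random

    # Illustrative evolution-control parameters, echoing the examples above.
    CONTROL = {
        "p_change_activation": 0.03,  # "3% chance switch activation function"
        "max_weight_delta":    0.20,  # "up to 20% change in synapse weight"
    }

    def mutate(brain, eligibility, control=CONTROL):
        """eligibility is the rank-driven fraction of the network
        (e.g. 0.10) allowed to change this generation."""
        for neuron in brain.neurons:
            if random.random() >= eligibility:
                continue  # this neuron isn't eligible this generation
            if random.random() < control["p_change_activation"]:
                neuron.activation = random.choice(brain.allowed_activations)
            for syn in neuron.synapses:
                delta = random.uniform(-1.0, 1.0) * control["max_weight_delta"]
                syn.weight *= 1.0 + delta  # up to +/-20% weight change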

I'm curious how you handled the GA/DNA/gene stuff?



