Apart from NEAT/HyperNEAT there are also other approaches to neuroevolution (I think in this context it is referred to as "evolutionary neural architecture search" [0]). Evolution in general can be applied in different ways, e.g. optimizing the architecture, or replacing gradient-descent training entirely.
A while ago I co-authored a paper in this space [1] and released some code for interested folks [2].
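To make the "replacing gradient descent" option concrete, here's a minimal sketch (a toy of my own, not taken from any particular paper): a (1+λ) elitist evolution strategy that fits XOR purely by mutating a flat weight vector with Gaussian noise, with no gradients anywhere.

```python
import math
import random

random.seed(0)

# Toy task: XOR with a fixed 2-2-1 tanh net. Evolution replaces gradient
# descent: weights are a flat vector we perturb and select on.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 bias

def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[4])
    h1 = math.tanh(w[2] * x[0] + w[3] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

best = [random.gauss(0, 1) for _ in range(N_WEIGHTS)]
best_loss = loss(best)
init_loss = best_loss

for gen in range(300):
    # (1+lambda) elitist ES: mutate the champion, keep any improvement,
    # so best_loss can only go down.
    for _ in range(20):
        child = [wi + random.gauss(0, 0.3) for wi in best]
        child_loss = loss(child)
        if child_loss < best_loss:
            best, best_loss = child, child_loss

print(init_loss, "->", best_loss)
```

Obviously this brute-force hill-climbing doesn't scale to large nets the way modern ES variants (with antithetic sampling, fitness shaping, etc.) do, but the selection loop is the whole idea.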
I also find the related idea of neuroevolving the weights of a neural network fascinating in its own right.
I've implemented "cooperative coevolution", and it felt like magic once I saw how well it performs (on some tasks, like continuous-control RL problems) relative to known-good methods, i.e. anything involving gradients.
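For anyone unfamiliar, here's a minimal sketch of the cooperative-coevolution idea (in the Potter & De Jong style; the decomposition, objective, and hyperparameters here are my own illustrative choices): split the parameter vector into one subpopulation per coordinate, and score each individual by teaming it up with the current best member of every other subpopulation.

```python
import random

random.seed(1)

DIM, POP, GENS = 5, 10, 60

def fitness(vec):
    # Toy objective: negated sphere function, optimum at the origin.
    # In neuroevolution this would be an episode return for a network
    # assembled from the subcomponents.
    return -sum(v * v for v in vec)

# One subpopulation of scalars per parameter dimension.
subpops = [[random.uniform(-5, 5) for _ in range(POP)] for _ in range(DIM)]
best = [p[0] for p in subpops]  # current representative from each subpop

for _ in range(GENS):
    for d in range(DIM):
        scored = []
        for indiv in subpops[d]:
            team = best[:]      # collaborate with the other subpops' best
            team[d] = indiv
            scored.append((fitness(team), indiv))
        scored.sort(reverse=True)
        best[d] = scored[0][1]
        # Next generation: keep the top half, refill with mutated copies.
        elite = [i for _, i in scored[: POP // 2]]
        subpops[d] = elite + [e + random.gauss(0, 0.5) for e in elite]

print(fitness(best))
```

The appeal is that each subpopulation faces a much smaller search space than the full weight vector, and credit assignment happens implicitly through the shared team evaluation (the same trick behind ESP/CoSyNE-style weight evolution).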
I wish this stuff were explored a bit more. It seems we are leaving the paradigm of evolutionary methods behind...
They seem to be similar to Gene Sher's TWEANNs (Topology and Weight Evolving Artificial Neural Networks), which I learned about in his 2012 book, "Handbook of Neuroevolution Through Erlang" (sure, it didn't catch on because Erlang ;)).
Gene's specific implementation is DXNN (Discover and eXplore Neural Network), implemented in 2010 [1].
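The "topology and weight evolving" part is easy to sketch. Here's a toy illustration (my own simplification, not DXNN's or NEAT's actual genome encoding) of the two canonical structural mutations: adding a connection between existing nodes, and splitting a connection with a new hidden node.

```python
import random

random.seed(2)

# A genome is a node list plus weighted connections. This sketch omits
# innovation numbers, cycle checks, and activation functions entirely.
genome = {
    "nodes": [0, 1, 2],                    # 0, 1 inputs; 2 output
    "conns": [(0, 2, 0.5), (1, 2, -0.3)],  # (src, dst, weight)
}

def add_connection(g):
    # Connect two random existing nodes, if that edge doesn't exist yet.
    src, dst = random.sample(g["nodes"], 2)
    if not any(c[:2] == (src, dst) for c in g["conns"]):
        g["conns"].append((src, dst, random.gauss(0, 1)))

def add_node(g):
    # Split a random connection: route it through a fresh hidden node.
    i = random.randrange(len(g["conns"]))
    src, dst, w = g["conns"].pop(i)
    new = max(g["nodes"]) + 1
    g["nodes"].append(new)
    # src->new gets weight 1.0 and new->dst keeps the old weight, so the
    # mutation initially changes the function as little as possible.
    g["conns"] += [(src, new, 1.0), (new, dst, w)]

add_node(genome)
add_connection(genome)
print(genome)
```

Weight mutation (perturbing the third element of each tuple) plus these two operators is enough to grow arbitrary topologies from a minimal starting network.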
I missed that, thanks! I'll have to take another look.
When I think of neural networks evolving their topology and connection weights, over and over, toward optimizing a solution to a function, I picture the recursive Life-in-Life video [1], where Conway's Game of Life is emulated inside the Game of Life.
I'm by no means an expert in the field, but I do find it exceptionally interesting, so I try to keep tabs on some of the research done by people who originated from the same group as Kenneth Stanley.
Seems like many from this group now pursue open-endedness in AI and view evolution as a way towards this goal (or lack thereof).
A very interesting evolution (ha!) of these ideas was presented in POET [0], which evolves agents alongside the environments they train in.
There is also an interesting paper about accelerating neural architecture search by generating synthetic training data with generative teaching networks [1].
Lastly, a paper that I find very interesting, though it might not be as relevant here, is "First return, then explore" [2].
[0]: https://arxiv.org/pdf/2008.10937.pdf
[1]: https://arxiv.org/abs/1801.00119
[2]: https://gitlab.com/pkoperek/pytorch-dnn-evolution/-/tree/mas...