There's also HyperNEAT, for example. I'm out of the field now. Has there been recent progress with these techniques?


I'm by no means an expert in the field, but I find it exceptionally interesting, so I try to keep tabs on some of the research done by people who came out of the same group as Kenneth Stanley.

Seems like many from this group now pursue open-endedness in AI and view evolution as a path towards that goal (or rather the deliberate absence of a fixed goal, which is the point of open-endedness).

A very interesting evolution (ha!) of these ideas was presented in POET[0], which co-evolves agents together with the environments they are trained in.
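
In case it helps to make the idea concrete, the POET loop is roughly the following. This is a made-up toy version with scalar "agents" and "environments", not the paper's actual setup:

    import random

    # Toy POET-style loop: co-evolve (environment, agent) pairs.
    # An "environment" is just a target difficulty, an "agent" is a scalar
    # skill, and reward is high when skill matches difficulty.

    def reward(agent, env):
        return -abs(agent - env)

    def optimize(agent, env, steps=50, sigma=0.1):
        # Hill-climb the agent inside its paired environment.
        for _ in range(steps):
            candidate = agent + random.gauss(0, sigma)
            if reward(candidate, env) > reward(agent, env):
                agent = candidate
        return agent

    pairs = [(0.0, 0.0)]  # list of (env_difficulty, agent_skill)
    for generation in range(20):
        # 1. Locally optimize every agent in its own environment.
        pairs = [(env, optimize(agent, env)) for env, agent in pairs]
        # 2. Mutate environments of well-solved pairs to create new, harder ones.
        new_pairs = []
        for env, agent in pairs:
            if reward(agent, env) > -0.05 and len(pairs) + len(new_pairs) < 10:
                new_pairs.append((env + random.uniform(0.2, 0.5), agent))
        pairs += new_pairs
        # 3. Transfer: seed each environment with the best agent from any pair.
        pairs = [(env, max((a for _, a in pairs), key=lambda a: reward(a, env)))
                 for env, agent in pairs]

    for env, agent in pairs:
        print(f"difficulty={env:.2f}  skill={agent:.2f}  reward={reward(agent, env):.3f}")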

There is also an interesting paper on accelerating neural architecture search by learning to generate synthetic training data, called Generative Teaching Networks[1].
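
The gist, as I understand it: learn synthetic training data such that learners trained on it do well on real data, then use that data to evaluate candidate architectures quickly. A crude toy version (random search over a few synthetic points instead of the paper's neural generator and meta-gradients) would be:

    import random

    # Evolve a tiny set of synthetic (x, y) points so that a learner trained
    # on them does well on real data. The "learner" is a 1-parameter linear fit.

    real_data = [(x, 3.0 * x) for x in [-2, -1, 0, 1, 2]]  # hidden target: y = 3x

    def train_learner(data):
        # One-shot least-squares fit of y = w * x on the given data.
        num = sum(x * y for x, y in data)
        den = sum(x * x for x, y in data) or 1e-9
        return num / den

    def real_loss(w):
        return sum((w * x - y) ** 2 for x, y in real_data)

    # Start from random synthetic points and hill-climb them.
    synthetic = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(3)]
    for _ in range(500):
        candidate = [(x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
                     for x, y in synthetic]
        if real_loss(train_learner(candidate)) < real_loss(train_learner(synthetic)):
            synthetic = candidate

    print("synthetic points:", [(round(x, 2), round(y, 2)) for x, y in synthetic])
    print("learner trained on them gets w =", round(train_learner(synthetic), 3))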

Lastly, a paper that I find very interesting, though perhaps less directly relevant, is 'First return, then explore'[2].
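
The core loop there (keep an archive of visited states, return to one deterministically, then explore from it) is simple enough to sketch on a toy grid world of my own, which is not the paper's benchmark:

    import random

    # Toy 'first return, then explore' loop: keep an archive of cells,
    # deterministically return to a stored state, then explore from it.

    SIZE = 20
    def step(state, action):
        dx, dy = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}[action]
        x, y = state
        return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

    start = (0, 0)
    archive = {start: []}  # cell -> action sequence that reaches it

    for _ in range(2000):
        # 1. Select a cell from the archive (uniformly, for simplicity).
        cell, trajectory = random.choice(list(archive.items()))
        # 2. "Return": replay the stored trajectory to reach that state exactly.
        state = start
        for action in trajectory:
            state = step(state, action)
        # 3. "Explore": take a few random actions and archive any new cells.
        actions = list(trajectory)
        for _ in range(5):
            action = random.choice("UDLR")
            state = step(state, action)
            actions.append(action)
            if state not in archive:
                archive[state] = list(actions)

    print(f"cells discovered: {len(archive)} / {SIZE * SIZE}")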

[0] : https://eng.uber.com/poet-open-ended-deep-learning/

[1] : http://proceedings.mlr.press/v119/such20a.html

[2] : https://arxiv.org/pdf/2004.12919.pdf



