
This is the million dollar question. I'm not qualified to answer it, and I don't really think anyone out there has the answer yet.

My armchair take would be that watt usage probably isn't a good proxy for computational complexity in biological systems. A good piece of evidence for this is the C. elegans research finding that the configuration of ions within a neuron--not just the electrical charge on the membrane--records computationally relevant information about a stimulus. There are probably many more hacks like this that allow the brain to handle enormous complexity without it showing up in our measurements of its power consumption.



My armchair is equally comfy, and I have an actual paper to point to:

Jaxley: Differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics [1]

They basically created software to simulate real neurons and ran some realistic models on typical AI learning tasks:

"The model had nine different channels in the apical and basal dendrite, the soma, and the axon [39], with a total of 19 free parameters, including maximal channel conductances and dynamics of the calcium pumps."

So yeah, real neurons are a bit more complex than ReLU or Sigmoid.
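
To give a rough sense of the gap, here is a minimal sketch in plain Python (not the Jaxley code: it uses only the three textbook Hodgkin-Huxley channel types and standard textbook parameters, rather than the nine channels in the paper). An artificial "neuron" is one max(), while even a single-compartment biophysical neuron is a set of coupled differential equations for its membrane voltage and channel gating variables:

    # Contrast: artificial unit vs. single-compartment Hodgkin-Huxley neuron.
    # Plain Python / standard library; textbook HH parameters, not the paper's.
    import math

    def relu(x):
        # The entirety of a standard artificial "neuron" nonlinearity.
        return max(0.0, x)

    def hh_step(V, m, h, n, I_ext, dt=0.01):
        """One forward-Euler step of the classic Hodgkin-Huxley equations.
        V in mV, conductances in mS/cm^2, currents in uA/cm^2, dt in ms.
        (Rate functions have removable singularities at V = -40 and -55 mV,
        ignored here for brevity.)"""
        # Voltage-dependent rate functions for the gating variables m, h, n.
        a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
        a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)

        # Ionic currents: sodium, potassium, leak.
        I_Na = 120.0 * m**3 * h * (V - 50.0)
        I_K = 36.0 * n**4 * (V + 77.0)
        I_L = 0.3 * (V + 54.4)

        # Membrane capacitance is 1 uF/cm^2, so dV/dt = total current / C.
        V += dt * (I_ext - I_Na - I_K - I_L)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        return V, m, h, n

    # Inject a constant 10 uA/cm^2 for 50 ms and watch the neuron spike.
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    for step in range(5000):
        V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
        if step % 500 == 0:
            print(f"t = {step * 0.01:5.1f} ms   V = {V:7.2f} mV")

And that is still only one compartment with three channel types; multiply it across dendritic compartments, more channels, and calcium dynamics, and the per-neuron cost dwarfs a dot product plus ReLU.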

[1] https://www.biorxiv.org/content/10.1101/2024.08.21.608979v2....


My whole point is that it may be possible to do perception using a lot of computational power, or alternatively, there could be some other kind of smart idea that allows doing it in a different way with much less computation. It is not clear that it requires that much.


There could definitely be a chance. I was just responding to what sounded like a question in your comment.

That said, I think there is good reason to be skeptical that it is a good chance. The consistent trend of finding higher-than-expected complexity in biological intelligences (like in C. elegans), combined with the fact that digital and biological architectures are physically very different, is a good reason to bet that it will be really hard to emulate with our current computing systems.

Obviously there is a way to do it physically--biological systems are physical after all--but we just don't understand enough to have the grounds to say it is "likely" doable digitally. Stuff like the Universal Approximation Theorem implies that in theory it may be possible, but that says nothing about whether it is feasible. Same with Turing completeness. All these theorems say is that our digital hardware can emulate anything that is a step-by-step process (computation), not how challenging that emulation is or whether it is realistic at all. It could turn out that something like human mind emulation is possible but would take longer than the age of the universe. Far simpler problems turn out to have similar issues (like calculating the optimal Go move without heuristics).
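
For a sense of scale on that Go example, here is a back-of-the-envelope sketch using the usual ballpark figures (~250 legal moves per position, ~150 moves per game) and an assumed exascale machine that evaluates one position per operation:

    # Rough scale of brute-force (no heuristics) Go search vs. what a fast
    # machine could evaluate over the entire age of the universe.
    import math

    branching_factor = 250          # typical legal moves per Go position
    game_length = 150               # typical moves per game
    log10_sequences = game_length * math.log10(branching_factor)     # ~360

    ops_per_second = 1e18           # assumed: one position per op, exascale
    age_of_universe_s = 4.35e17     # ~13.8 billion years in seconds
    log10_evaluable = math.log10(ops_per_second * age_of_universe_s) # ~36

    print(f"move sequences in a full game tree: ~10^{log10_sequences:.0f}")
    print(f"positions evaluable in the universe's lifetime: ~10^{log10_evaluable:.0f}")

Roughly 10^360 sequences versus about 10^36 evaluable positions: "computable in principle" and "feasible" are separated by hundreds of orders of magnitude.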

This is all to say that there could be plenty of smart ideas out there that break our current understandings in all sorts of ways. Which way the cards will land isn't really predictable, so all we can do is point to things that suggest skepticism, in one direction or another.


Following the trend of discovering smaller and smaller phenomena that our brains use for processing, it would not be surprising if we eventually find that our brains are very nearly "room temperature" quantum computers.



