LLMs are increasingly trained on synthetic data. In principle, an LLM could learn logic, and ultimately the rules governing physics and everything else, without further human input.
The first versions, trained on "human knowledge," seem like a proof of concept. Future iterations could be much, much smarter.