
Genuine question: given LLMs' inexorable commoditization of software, how soon before NVDA's CUDA moat is breached too? Is CUDA somehow fundamentally different from other kinds of software or firmware?


Current-gen LLMs are not breaching that moat yet.


Yeah they are. llama.cpp has had good performance on CPU, AMD, and Apple Metal for at least a year now.
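
To illustrate the point: the application-facing llama.cpp C API is the same no matter which backend the library was built against (plain CPU, Metal, CUDA, or HIP/ROCm); the backend is chosen at build time, not in your code. A minimal sketch under those assumptions -- the model path and layer count are placeholders, and exact function names drift between llama.cpp versions:

    #include <stdio.h>
    #include "llama.h"

    int main(void) {
        // Initializes whatever backend this llama.cpp build was compiled with
        // (CPU, Metal, CUDA, HIP/ROCm) -- the calling code does not change.
        llama_backend_init();

        struct llama_model_params mparams = llama_model_default_params();
        // Offload layers if a GPU backend is present; ignored on CPU-only builds.
        mparams.n_gpu_layers = 99;

        // "model.gguf" is a placeholder path to any quantized GGUF model.
        struct llama_model *model = llama_load_model_from_file("model.gguf", mparams);
        if (!model) {
            fprintf(stderr, "failed to load model\n");
            return 1;
        }

        struct llama_context_params cparams = llama_context_default_params();
        struct llama_context *ctx = llama_new_context_with_model(model, cparams);
        printf("model loaded; backend was selected at build time, not here\n");

        llama_free(ctx);
        llama_free_model(model);
        llama_backend_free();
        return 0;
    }

Which is why the CUDA dependence sits below the application layer: swapping NVIDIA for AMD or Apple silicon is a rebuild, not a rewrite.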


The hardware is not the issue. It's the model architectures leading to cascading errors.



