Hacker News

Definitely! ROCm is getting really solid for inference. LM Studio (and thus the underlying llama.cpp) works out of the box already, and AMD is pushing forward rapidly on PyTorch support and other areas.
