> 12GB max is a non-starter for ML work now.

Can you even do ML work with a GPU that isn't CUDA-compatible? (genuine question)

A quick search showed me that the equivalent of CUDA in the Intel world is oneAPI, but in practice, are the major Python ML libraries compatible with oneAPI? (I was also going to ask whether oneAPI can run inside Docker, but apparently it can [1].)

[1] https://hub.docker.com/r/intel/oneapi
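
FWIW, for PyTorch at least the answer seems to be yes: recent releases (2.5+) ship a native XPU backend built on oneAPI/SYCL, and older ones could get the same through the separate intel-extension-for-pytorch package. A minimal sketch, assuming a PyTorch build with XPU support and Intel's compute runtime installed:

    import torch

    # Use the Intel GPU ("xpu") if PyTorch was built with oneAPI/XPU
    # support and a compatible driver is visible; fall back to CPU.
    device = torch.device("xpu" if torch.xpu.is_available() else "cpu")

    # From here on it's ordinary PyTorch: tensors and modules move to
    # the chosen device exactly as they would with "cuda".
    x = torch.randn(1024, 1024, device=device)
    w = torch.randn(1024, 1024, device=device)
    y = x @ w

    print(device, y.shape)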

There are also ROCm and Vulkan compute.

Vulkan is especially appealing because you don't need any special GPGPU drivers; it runs on any card that supports Vulkan.
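
To illustrate the ROCm path: AMD's PyTorch builds map HIP onto the existing "cuda" device namespace, so most CUDA-flavored Python code runs unchanged. A minimal sketch, assuming a ROCm build of PyTorch and a supported AMD GPU:

    import torch

    # ROCm builds of PyTorch expose the AMD GPU through the "cuda"
    # namespace (HIP underneath), so no code changes are needed.
    if torch.cuda.is_available():
        device = torch.device("cuda")
        x = torch.randn(4096, 4096, device=device)
        y = torch.nn.functional.relu(x @ x)
        print(torch.cuda.get_device_name(0), y.sum().item())
    else:
        print("no ROCm/CUDA device visible")

On the Vulkan side, llama.cpp is probably the best-known example: its Vulkan backend runs inference on anything with a working Vulkan driver, no vendor compute stack required.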
