Can you even do ML work with a GPU that isn't compatible with CUDA? (genuine question)
A quick search showed me that the equivalent of CUDA in the Intel world is oneAPI, but in practice, are the major Python libraries used for ML compatible with oneAPI? (I was also going to ask whether oneAPI can run inside Docker, but apparently it can [1])
[1] https://hub.docker.com/r/intel/oneapi
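For what it's worth, recent PyTorch builds (2.4+) expose Intel GPUs through a `torch.xpu` namespace backed by oneAPI/SYCL, mirroring the `torch.cuda` API. A minimal compatibility probe might look like the sketch below; the `xpu_available` helper name is mine, and it assumes a PyTorch build compiled with Intel GPU support:

```python
import importlib.util

def xpu_available() -> bool:
    """Best-effort check for an Intel GPU (XPU) backend in PyTorch.

    Hypothetical helper; assumes PyTorch >= 2.4, which ships a
    torch.xpu namespace when built with Intel GPU (oneAPI) support.
    """
    # Don't hard-require torch: report False if it isn't installed at all.
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    # torch.xpu mirrors the torch.cuda API on oneAPI-capable builds.
    return hasattr(torch, "xpu") and torch.xpu.is_available()

if __name__ == "__main__":
    print("Intel XPU available:", xpu_available())
```

If this returns True, tensors can be moved to the Intel GPU with `.to("xpu")` the same way you'd use `.to("cuda")` on NVIDIA hardware.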