We’re already using vLLM as our inference server for our standard models. For custom deployments, we can run whatever inference server fits.
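For anyone unfamiliar, here is a minimal sketch of vLLM's offline Python API; the model name is a placeholder, not the deployment described above:

    # Minimal vLLM usage sketch; "facebook/opt-125m" is just an
    # illustrative Hugging Face model ID, swap in your own.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # generate() takes a list of prompts and returns one
    # RequestOutput per prompt.
    outputs = llm.generate(["What is an inference server?"], params)
    for out in outputs:
        print(out.outputs[0].text)

vLLM also ships an OpenAI-compatible HTTP server (`vllm serve <model>`), which is the usual way to run it as a standalone inference service.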

