
Ollama swaps models from the local library on the fly based on the request args, so you can test against a bunch of models quickly.
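For example, something along these lines (the model names are just illustrative; you'd use whatever is pulled into your local library) hits the same local endpoint and switches models per request:

    import requests

    # Ollama's default local API; each request names the model to use,
    # and Ollama loads/swaps it from the local library as needed.
    URL = "http://localhost:11434/api/generate"
    prompt = "Summarize the benefits of local inference in one sentence."

    for model in ["llama3", "mistral", "phi3"]:  # example model names
        resp = requests.post(
            URL,
            json={"model": model, "prompt": prompt, "stream": False},
        )
        resp.raise_for_status()
        print(f"--- {model} ---")
        print(resp.json()["response"])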


Once you've tested to your heart's content, you'll deploy your model in production. So it looks like this is really just a dev use case, not a production one.


In production, I'd be more concerned about the possibility of it going off on its own, auto-updating, and causing regressions. FLOSS LLMs are interesting to me because I can precisely control the entire stack.

If Ollama doesn't have a CLI flag that disables auto-updating and networking altogether, I'm not letting it anywhere near my production environments. Period.


If you're serious about production deployments, vLLM is the best open-source product out there. (I'm not affiliated with it.)
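A rough sketch of its offline Python API, assuming a placeholder model name:

    from vllm import LLM, SamplingParams

    # vLLM batches requests and manages KV-cache memory (PagedAttention)
    # for much higher throughput than a dev-oriented local runner.
    llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # placeholder model
    params = SamplingParams(temperature=0.7, max_tokens=128)

    outputs = llm.generate(["Explain paged attention in one paragraph."], params)
    print(outputs[0].outputs[0].text)

In production you'd more likely run its OpenAI-compatible server instead, which also means you pin the model and version yourself rather than relying on anything auto-updating.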



