
> What I'm getting at is, big corps would probably be better off if all the models were limited to specific use cases (e.g. programming) rather than aiming for "catch-all-do-all".

I actually agree with this. What bothers me to some extent is how they obfuscate the instructions given to the model and its (rather extreme) biases, while presenting it as a safe, universal model.

If people want access to this kind of "safe" output, I don't mind, but in my opinion general-purpose LLMs should have some semblance of neutrality in their default configuration (fully realizing that this goal is impossible to achieve perfectly) and should behave like tools that simply do what they're instructed to do, drawing from the source material without further instructions or explicit bias.



Yeah, fair enough. I'm just jaded, as I understand a significant chunk of the data used for training came from Reddit, Twitter, and similar sources. Good luck drawing anything that resembles reality from that source material without explicit alignment and bias.
