Hacker News

I've noticed this as well, though you can sometimes avoid it if you're more explicit and actually say things like "can you write these endpoints in a way that ___, ___, and ____." Or I'll mention context I'm worried the LLM will miss (for example, pointing out that there are already existing functions for doing certain things).

The broader a request is, the more bloat I tend to get. I think this is partly because the LLM always tries to fully solve the problem from your initial prompt. Instead of stopping to ask for clarification, it'll just push forward with something that technically works. I find it's better to break things into smaller steps so you can intervene if it starts to go wrong.
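To make the step-by-step idea concrete, here's a rough Python sketch. Everything in it is made up for illustration: `ask_llm()` is a hypothetical stand-in for whatever model API you use, and the step prompts are just examples of the kind of narrow, explicit instructions described above.

```python
# Sketch of breaking one broad request into small, reviewable steps.
# ask_llm() is a hypothetical placeholder, not a real library call.

def ask_llm(prompt: str) -> str:
    # Placeholder: imagine this sends the prompt to your model of choice
    # and returns its reply.
    return f"[model output for: {prompt}]"

# Narrow, explicit steps instead of one broad "build me the feature" prompt.
steps = [
    "Write only the input-validation helper; reuse validate_email() if it already exists.",
    "Add the POST /users endpoint using that helper.",
    "Add tests for the endpoint; don't introduce new dependencies.",
]

for step in steps:
    draft = ask_llm(step)
    print(draft)
    # Review the draft here and correct course before issuing the next
    # step, rather than accepting one big end-to-end answer.
```

The point isn't the loop itself; it's that each prompt is small enough that you can catch a wrong turn (say, a duplicated helper) before it compounds into the rest of the code.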




