
> Good knowledge of facts

I haven’t seen this demonstrated in GPT-4 or Claude Sonnet when asking about anything beyond the most basic topics.

I consistently get subtly wrong answers, and whenever I follow up with “oh okay, so it works like this?” I always get “Yes! Exactly. You show a deep understanding of…” even though my restatement was wrong, because it was based on the LLM’s wrong info.

It seems useless for knowledge work beyond RAG.

A search engine whose answers I need to double-check is worse than documentation. That’s why so many of us moved beyond Stack Overflow: documentation has gotten so good.
