
I have the opposite experience with Chat and Compose: I use the latter much more. The "intelligence level" of Chat is pretty poor and, like the author says, it starts with pointless code blocks, and you often end up with AI slop after a few minutes of back and forth.

Meanwhile, Compose mode gives code changes a good shot, whether in one file or several, and you can easily direct it towards specific files. I do wish it were a bit smarter about which files it looks at: unless you point it at the file your types live in, it'll happily reimplement types it doesn't know about. The other big issue with Compose mode is that the product just isn't really finished (as you can see from how different the three UXes for applying edits are). It has reverted previous edits for me, even when they were saved to disk and the UI was in a fresh state (and even their "checkout" functionality lost the content).

The Cmd+K "edit these lines" mode has the most reliable behavior, since it's such a self-contained problem that the implementation needs the fewest tricks to keep the LLM fast. But obviously it's also the least powerful.

I think it's great that companies are trying to figure this out, but it's also clear that the problem isn't solved. There is so much to do around how the model gets context about your code, how it learns about your codebase over time (.cursorrules is just a crutch), and a LOT to do about how edits are applied when 95% of the model's output is the old code and you just want the new lines merged in. (On that last one, there are many ways to reduce the LLM's output, but they're all problematic – Anthropic's Fast Edit feature is great here because it can rewrite the file super fast, but if I understand correctly it's way too expensive.)
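To make that last point concrete, here's a minimal sketch of one way to avoid re-emitting the whole file: ask the model for SEARCH/REPLACE blocks and apply them locally. The block format and the apply_edits function are hypothetical, purely to illustrate the trade-off, and not Cursor's or Anthropic's actual mechanism:

    # Sketch: apply model-emitted SEARCH/REPLACE edits instead of rewriting the file.
    # Hypothetical edit format, for illustration only.
    import re

    EDIT_BLOCK = re.compile(
        r"<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE",
        re.DOTALL,
    )

    def apply_edits(source: str, model_output: str) -> str:
        # Replace each SEARCH span in `source` with its REPLACE counterpart.
        for search, replace in EDIT_BLOCK.findall(model_output):
            if search not in source:
                # The model's "old code" doesn't match what's on disk.
                raise ValueError("edit does not apply:\n" + search)
            source = source.replace(search, replace, 1)
        return source

The output stays small, but the failure mode is exactly what I'm complaining about: if the model's SEARCH text drifts even slightly from the file, the edit can't be applied and you're back to re-prompting or full rewrites.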


