
Not necessarily. I have used LLMs to write unit tests based on the intent of the code and had them catch bugs. This was for relatively simple cases, of course, but there's no reason it can't scale up in the future.
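
To make that concrete, here's a minimal, hypothetical sketch of the workflow (the function and tests are invented for illustration, not from the original comment): the tests encode the intent stated in the function's name rather than mirroring the implementation, so the bug surfaces as a test failure.

    import unittest

    # Hypothetical buggy function: the name states the intent (Gregorian
    # leap-year check), but the body misses the divisible-by-400 rule.
    def is_leap_year(year: int) -> bool:
        return year % 4 == 0 and year % 100 != 0  # bug: 2000 is a leap year

    # Intent-based tests, of the kind an LLM might generate from the
    # function's name rather than from its (buggy) body.
    class TestIsLeapYear(unittest.TestCase):
        def test_divisible_by_400_is_leap(self):
            self.assertTrue(is_leap_year(2000))  # fails, exposing the bug

        def test_century_not_divisible_by_400_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_ordinary_leap_year(self):
            self.assertTrue(is_leap_year(2024))

    if __name__ == "__main__":
        unittest.main()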

LLMs absolutely can "detect intent" and correct buggy code, e.g., "this code appears to be trying to foo a bar, but it has a bug..."
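
A toy illustration of that "foo a bar" pattern (both functions are made up for this example): the docstring states the intent, the body contradicts it, and the correction is the kind of fix a model can infer from the stated intent alone.

    # Hypothetical buggy code: docstring says "mean", body computes a sum.
    def average(xs: list[float]) -> float:
        """Return the arithmetic mean of xs."""
        return sum(xs)  # bug: missing division by len(xs)

    # A plausible LLM-suggested correction, inferred from the docstring:
    def average_fixed(xs: list[float]) -> float:
        """Return the arithmetic mean of xs."""
        return sum(xs) / len(xs)

    assert average_fixed([1.0, 2.0, 3.0]) == 2.0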


