Popular HN threads about anything AI-related always attract stories highlighting AI failures. It's such a common pattern that I want to analyze it and get numbers (which might require AI...).
Popular HN threads about anything AI-related always attract stories highlighting AI successes. It's such a common pattern that I want to analyze it and get numbers (which might require using my brain...).
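Something like the rough sketch below is the kind of counting I have in mind: pull a thread's comments from the public HN Algolia API and tally crude keyword mentions. The thread id and the keyword lists are placeholders I made up, not a real methodology.

    # Rough sketch: count "failure" vs "success" mentions in one HN thread
    # via the public Algolia API (https://hn.algolia.com/api/v1/items/<id>).
    import re
    import requests

    THREAD_ID = 12345678  # placeholder: id of some AI-related HN thread

    # Placeholder keyword lists; a real analysis would need something better.
    FAILURE_WORDS = {"hallucinate", "hallucination", "broke", "wrong", "gibberish"}
    SUCCESS_WORDS = {"productive", "faster", "works great", "impressed"}

    def walk(comment):
        """Yield this item and all of its descendant comments."""
        yield comment
        for child in comment.get("children", []):
            yield from walk(child)

    def contains_any(text, words):
        return any(w in text for w in words)

    item = requests.get(f"https://hn.algolia.com/api/v1/items/{THREAD_ID}").json()

    failures = successes = total = 0
    for c in walk(item):
        text = re.sub(r"<[^>]+>", " ", c.get("text") or "").lower()  # strip HTML
        if not text.strip():
            continue
        total += 1
        failures += contains_any(text, FAILURE_WORDS)
        successes += contains_any(text, SUCCESS_WORDS)

    print(f"{total} comments: {failures} failure-ish, {successes} success-ish")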
It's relevant because it shows that models haven't improved as much as the companies delivering them would like you to believe, no matter what mode (or code) they work under. Developers are quickly turning from code writers into code readers, and the good ones feel demoralized: they know they could do it better themselves, but instead they are forced to read gibberish produced by a machine at a rate of dozens of lines per second. Worse, when they review that gibberish and it doesn't make sense, even if they push back with arguments, that same gibberish-producing machine can, within seconds, write counter-arguments that look convincing but have no substance for anyone who actually reads and understands them.
Edit: I am saying this as a developer who uses LLMs for coding, so I feel I can criticize them constructively. Also, sometimes the code actually works when I put in enough effort to describe what I expect; I guess I could just write the code myself, but the problem is that I don't know in advance which approach will result in a quicker delivery.