This is what I keep coming back to. I'm sure I'm not the only one here who frequently writes the code, or at least a PoC, then writes the design doc based on it. Because the code is the most concise and precise way to specify what you really want, and writing it gives you clarity on things you might not have thought about if you'd only described it in a document. Unrolling that into pseudocode/English almost always gets convoluted for anything but very linear pieces of logic, and you're generally not going to get it right if you haven't already done a little exploratory coding beforehand.
So to me, even in an ideal world the dream of AI coding is backwards. It's more verbose, it's harder to conceptualize, it's less precise, and it's going to be more of a pain to get right even if it worked perfectly.
That's not to say it'll never work. But the interface has to change a lot. Instead of a UX where you have to think about and specify all the details up front, a useful assistant would be more conversational: analyze the existing codebase, clarify the change you're asking for, propose some options, ask which layer of the system to touch, which design patterns to use, whether the level of coupling makes sense, and what future extensions of the functionality you have in mind, weigh the pros and cons of each approach, and help point out conflicts or vague requirements. But it seems we've got quite a way to go before we get there.
Agreed, although AIs today with simple project-based rules can do things like check and account for error cases, and write the appropriate unit tests for those error cases.
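For example, a project rule along the lines of "every parsing function must validate its input and ship with a test for the failure path" tends to produce something like this. A minimal sketch; parse_port and the test are made up for illustration, not from any particular tool:

    import pytest

    def parse_port(value: str) -> int:
        """Parse a TCP port number, rejecting anything outside 1-65535."""
        port = int(value)  # raises ValueError on non-numeric input
        if not 1 <= port <= 65535:
            raise ValueError(f"port out of range: {port}")
        return port

    def test_parse_port_rejects_out_of_range():
        # the error case the project rule told the assistant to cover
        with pytest.raises(ValueError):
            parse_port("70000")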
Personally, I've found I can often specify equivalent code in less English than it would take me to type the code itself.
It also works very well where the scope is well defined, like implementing interfaces or porting a library from one language to another.
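An interface pins the target down precisely, so there's little room to wander. A minimal sketch of what "well defined" means here (the Storage protocol is hypothetical):

    from typing import Protocol

    class Storage(Protocol):
        def get(self, key: str) -> bytes | None: ...
        def put(self, key: str, value: bytes) -> None: ...

    class InMemoryStorage:
        """Satisfies Storage structurally; the interface dictates the shape."""
        def __init__(self) -> None:
            self._data: dict[str, bytes] = {}

        def get(self, key: str) -> bytes | None:
            return self._data.get(key)

        def put(self, key: str, value: bytes) -> None:
            self._data[key] = value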
Yeah, I guess it depends on how much you care about the details. Sometimes you just want a thing to get done, and there are billions of acceptable ways to do it, so whatever GPT spits out is within the realm of good enough. Sometimes you want finer control, and in those cases trying to use AI exclusively is going to take longer than writing the code yourself.
Not much different from image generation, really. Sometimes AI is fine, but there's always going to be a need to drop down into Photoshop when you really care about some detail. Even if you could do the same thing with very detailed AI prompts and some trial and error, doing it in Photoshop will be easier.
Another issue I see is the "Machine Stops" problem. When we come to depend on a system that fails to foster the skills and knowledge needed to reproduce it (i.e. if programming becomes so easy for so many people that they don't actually need to know how it works under the hood), you slowly lose the ability to maintain and extend the system as a society.