I agree with this. My boss's boss thinks that AI is going to end up doing 95% of our work for us. From my experience (so far), AI coding follows the 80/20 rule: it can get you 80% of what you want for 20% of the time/effort. Or maybe the ratio is more like: it gets you 80% of what you want IMMEDIATELY, but it can't get you the last 20%, and it needs a human to get it over the finish line.
It's super impressive in my opinion, but if you think it's going to straight up replace humans right now, I think you probably aren't a software developer in the trenches cranking out features.
I'm sort of a Neanderthal when it comes to understanding AI, but I don't think AI in its current form works like a human. Right now, it kind of just cranks out all the code in one fell swoop. A human, on the other hand, works more iteratively. You write a little bit of code, run it, look at the iPhone simulator, look at the Figma designs, and see if you're getting closer to what you want. AI doesn't appear to know how to iterate, run code, look at designs, and debug things. I imagine in 100 years it will know how to do all that stuff. And who knows, maybe in 1 year it will be able to do it. But as of right now, September 28th, 2025, it can't do that yet.
It depends on which application you're using. Applications like "RooCode", a free extension for VSCode, have several "modes": the user creates an outline of the project with an "Architect" LLM, codes the project with a "Code" LLM, and, if there are bugs, debugs it with a "Debug" LLM. There's also a mode that answers questions about the project. Only the code and debug modes actually write code, and you can set it so you have to approve each change it makes.
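To make that concrete, here's a rough conceptual sketch of that architect -> code -> debug flow with a per-change approval gate. This is not RooCode's actual API or configuration; every function name below is a hypothetical stand-in, just to show the shape of the loop.

```python
# Conceptual sketch only -- NOT RooCode's real implementation.
# All function names here are hypothetical stand-ins.

def call_llm(mode: str, prompt: str) -> str:
    """Stand-in for whichever model backs a given mode (architect/code/debug)."""
    return f"[{mode} output for: {prompt[:40]}...]"  # replace with a real model call

def approve(change: str) -> bool:
    """Gate every proposed change on explicit human approval."""
    print(change)
    return input("Apply this change? [y/N] ").strip().lower() == "y"

def apply_changes(diff: str) -> None:
    """Stand-in for writing an approved change to the working tree."""
    pass

def run_tests() -> str:
    """Stand-in for building/testing the project; empty string means success."""
    return ""

def run_project(task: str) -> None:
    # Architect mode: produce a plan/outline, no code changes yet.
    plan = call_llm("architect", f"Outline a plan for: {task}")

    # Code mode: propose concrete changes, applied only if the human approves.
    changes = call_llm("code", f"Implement this plan:\n{plan}")
    if approve(changes):
        apply_changes(changes)

    # Debug mode: invoked only when the build or tests report problems.
    errors = run_tests()
    if errors:
        fix = call_llm("debug", f"Fix these errors:\n{errors}")
        if approve(fix):
            apply_changes(fix)
```

The point of the sketch is that the "agentic" part is mostly orchestration and approval plumbing around the model calls, which is why different tools can vary so much in how well the workflow feels.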
I agree about the 80/20 part. On the workflow front, there’s been enormous progress from Copilot to Cursor to Claude Code just in the last 2 years. A lot of this is down to the plumbing and I/O bits rather than the mysterious linear algebra bits, so it’s relatively tractable to regular software engineering.
This tracks with my AI usage as well. I often use AI to get the first 80% of the work done (kinda like a first draft), and then I finish things off from there.