The first is that the entire global codebase starts to become an unstable shitpile, and eventually critical infrastructure starts collapsing in a kind of self-inflicted Y2k event. Experienced developers will be rehired at astronomical rates to put everything back together, and then everyone will proceed more cautiously. (Perhaps.)
The second is that AI is just about good enough and things muddle along in a not-great-not-terrible way. Dev status and salaries drop slowly, profits increase, reliability and quality are both down, but not enough to cause serious problems.
The third is that the shitpile singularity is avoided because AI gets much better at coding much more quickly, and rapidly becomes smarter than human devs. It gets good enough to create smart specs with a better-than-human understanding of edge cases, strategy, etc., and also good enough to implement clean code from those specs.
If this happens, development as we know it would end, because the concept of a codebase would become obsolete. The entire Internet would become dynamic and adaptive, with code being generated in real time as requirements and conditions evolve.
I'm sure this will happen eventually, but current LLMs are hilariously short of it.
So for now there's a gap between what CEOs believe is happening (option 3) and what is really happening (option 1).
I think a shitpile singularity is quite likely within a couple of years. But if there's any sane management left it may just about be possible to steer into option 2.
I agree with your three scenarios. But I would assign different probabilities. I think the second option is the most likely: things will get shittier and cheaper. The third option might never come to pass.
Just like clothing and textiles: they keep getting cheaper and cheaper, true, but even after centuries of automation they're still getting shittier in the process.
There are many more scenarios, though. One of them is that AI slop is impressive looking to outsiders, but can't produce anything great on its own, and, after the first wave of increased use based on faith, it just gets tossed in the pile of tools somewhere above UML and Web Services. Something that many people use because "it's the standard" but generally despise because it's crap.