"AI is like having a very eager junior developer on your team"
That's a perfect summary, in my opinion. Both junior devs and AI tools tend to write buggy and overly verbose code. In both cases, you have to carefully review their code before merging, which takes time away from all the senior members of the team. But for a dedicated and loyal coworker, I'm willing to sacrifice some of my productivity to help them grow, because I know they'll help me back in the future. Current AI tools, however, cannot learn from feedback. That means with AI, I'll be reviewing the exact same beginner's mistakes every time.
And that means time spent on proofreading AI output is mostly wasted.
A very eager junior developer who is supremely confident, always says yes, and does trivial work in seconds, but makes critical mistakes on the difficult stuff. And just when you thought he was learning and improving, he forgets everything and starts from square one again.
In my experience, if I'm looking up how to do something pretty standard with an API I'm unfamiliar with, it's usually correct, and faster than trawling through bad, auto-generated documentation that would rather explain every possible argument than show a basic example.
And when it's wrong, I'll know pretty quickly and can fall back on the old methods.
> Like economists, who have predicted 7 of the last 3 recessions, AI knows 17 out of 11 API calls!
It's definitely been said before, but the fact that they're calling these non-existent functions in the first place can tell library devs a lot about what new features could be built to fill that namespace.
I love this idea. In the past I stored our coding style guidelines and philosophy in our wiki. Putting it into git brings it closer to where it is used. Also, it makes it more easily accessible to AI tools, which is an added bonus.
Interesting idea. I have been using a SPECIFICATION.md and TODO.md to keep my models on track. What kind of stuff do you put in LESSONS.md that can't just live in the system prompt?
Nothing; it's roughly the same idea, I think. It's just that when I'm using Aider I don't really have a good way to feed in a system prompt, so I just put REPOPROMPT.md in the root folder.
TODO.md and FEATURE_TODO.md are also very valuable for keeping on track.
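For illustration, a hypothetical LESSONS.md might look something like this (the rules below are invented examples, not from any real project):

```
# LESSONS.md: recurring corrections for AI-generated code

- Prefer our wrapper `http_client.get()` over calling `requests` directly.
- All public functions need type hints and a one-line docstring.
- Never add a new dependency without flagging it in the PR description.
- Use the existing `db.transaction()` helper instead of manual commits.
```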
I don't think people (in this context) are suggesting replacing the junior developers with AI, but rather to treat the AI like a junior: to be clear about what you need, and to be defensive about what you accept back from it; to try to be conscious of its limitations when asking it to do something, and to phrase your questions in a way that will get you the best results back.
They might not be, but using language that equates these generative LLMs with junior developers allows a shift of meaning that actually equates juniors with LLMs, implying they are interchangeable, and therefore that generative LLMs can replace juniors.
LLMs are advancing as well, just not from your or my direct input alone; or rather, from our direct input (considering they learn from our own questions) combined with that of the 100k others using them for their work.
Juniors today can learn exponentially faster with LLMs and don't need seniors as much.
Take me for example, I've been programming for 20 years, been through C, C++, C#, Python, JS, and PHP, but recently had to learn Angular 18 and FastAPI. Even though I knew JS and Python beforehand, these frameworks have ways of doing things I'm not used to, so I fumbled with them for the first 100 hours. However, when I finally installed Copilot and had a little faith in it, I boosted my productivity 3-4x. Of course it didn't write everything correctly, and of course it used outdated Angular instead of the latest (which is why I was so reluctant to ask it anything at the start), but it still helped me a lot, because it is much easier (for me) to modify some bad/outdated code and get it to where I want it than to write it from scratch without the muscle memory of the new framework.
So for me it's been a godsend. For stuff that's not as cutting-edge as framework oddities that appeared in the last 12 months, I expect it to be even more helpful, with a much higher percentage of correct output. So for juniors doing, say, Python coding on frameworks that are at least 3-4 years old and stable, the seniors would need to intervene much, much less to correct the junior.
> Juniors today can learn exponentially faster with LLMs and don't need seniors as much. [...] Take me for example, I've been programming for 20 years
You are not a junior, you already rely on 20 years of experience.
The last time I did any sort of web development was 20 years ago, but I thought I'd try some C# (last touched ~10 years ago) + Blazor for an idea I had, and it took me a couple of days to feel comfortable and start making stuff. While I haven't written for the web in a very, very long time, my experience with other tech helped a lot.
His experience is the same as mine; the juniors on our team are super productive in a way that realistically would not have been possible for them before these tools. They just don't get stuck that much anymore, so they don't need the seniors as much. I do think the field will be somewhat commoditized in the coming decade.
The web, especially frontend, feels far more foreign than any backend or "traditional" programming. The errors suck; sometimes you get no error at all and have no idea why it isn't working. So in a sense I feel like a junior.
It's interesting because it actually endangers the junior dev job market in the present.
And in the near future the mid/senior level will have no replacements, as we've under-hired juniors and therefore don't have a pipeline of 5YOE/10YOE/etc. devs who have learned to stop being juniors.
I see it the other way: assuming these tools keep improving, you will only need junior developers, as there's no point in knowing more than the basics of programming to get a job done.
You say this as if only incremental improvement were needed, or as if we could see signs of a major shift in capabilities coming. Yes, people are predicting this. People were predicting personal travel by jet pack at one point as well.
My favorite is when I gave ChatGPT a serializer function that calls a bunch of "is enabled" functions and asked it to implement those according to the spec, then hit enter before adding the actual spec to the prompt.
And it happily wrote something. When I proceeded to add the actual spec, it happily wrote something reasonable that couldn't work, because it assumed all 'is_something' functions can be used in guard statements. Uh oh.
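To illustrate the shape of the problem, here's a rough sketch in Python with invented names (the original was presumably in a language with guard clauses):

```python
# Hypothetical reconstruction of the kind of input described: a
# serializer that calls a bunch of "is enabled" helpers, which the
# model was asked to implement according to a spec that was
# accidentally left out of the prompt.
def serialize(account: dict) -> dict:
    return {
        "id": account["id"],
        "search": is_search_enabled(account),
        "export": is_export_enabled(account),
    }

def is_search_enabled(account: dict) -> bool:
    ...  # to be implemented per the (missing) spec

def is_export_enabled(account: dict) -> bool:
    ...  # to be implemented per the (missing) spec
```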
Funny, I think it's a perfect summary, but in a positive sense. With some of the tools you can modify the prompt, or include a .md file in the context to help direct it. But even without that, I don't find it a waste of time, because I have lower expectations: "This just saved me 15 minutes of typing out HTML+CSS for this form, so I don't mind taking 2 minutes to review and tweak a few things."
My experience is that junior devs write most of the code at a company and senior devs spend most of their time making sure the junior devs don't break anything.
Which seems to work pretty well, in my experience.
Which is something that AI tools could do in principle; learning from feedback is how they were created in the first place.
However, the current generation of models needs a specific form of training set, quite different from what a human would produce through direct interaction with the model.
For one, it needs many more examples than a human would need. But the form of the example is also different: it must be an example of an acceptable answer. This way the model can compute how far it is from the desired outcome.
Further research into how to efficiently fine-tune models will narrow this gap, and perhaps senior devs will eventually be able to give learnable feedback through their normal course of interaction.
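As a concrete sketch of what that looks like today: several fine-tuning APIs accept JSONL records pairing a prompt with an accepted answer, something like this (content invented for illustration):

```python
import json

# Minimal sketch: turning a code-review correction into a supervised
# fine-tuning example. The key point is that the record must contain
# the accepted answer itself, not just a "this was wrong" signal, so
# the model can measure how far its output is from the desired one.
record = {
    "messages": [
        {"role": "user",
         "content": "Add input validation to create_user()."},
        {"role": "assistant",
         "content": "def create_user(name):\n"
                    "    if not name:\n"
                    "        raise ValueError('name is required')\n"
                    "    ..."},
    ]
}

with open("feedback_examples.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```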
Well, the time isn't wasted: you get code! In my experience, even with the added work of checking the AI's output, it is overall faster than working without coding assistants.
I think one of OP's points is that it is more of a boost for juniors and a "tax" for seniors. The senior engineer wouldn't open a PR without cleaning up the code; the junior can't tell the difference.
> But for a dedicated and loyal coworker, I'm willing to sacrifice some of my productivity
Probably the more of our own productivity we sacrifice, the quicker they gain experience (and seniority), right? The only thing that confused me personally in your statement was that they would have to be loyal. Isn't that something one can only hope for, but that must be proven over time? Meaning that at the time you decide to trust that they'll turn out well, you have no way of proving they are "loyal" yet. Loyalty is nigh impossible to request upfront; I mean, you have to deserve it. And a lot can also go wrong along the way.
"AI is like having a very eager junior developer on your team"
I think this also works the other way: AI as an early or intermediate senior engineer on your team.
So in effect it would mean having fewer engineers, probably 1 or 2 senior engineers at best, with the rest guiding the AI senior engineer through the codebase.
I didn't need to hire any senior engs for a while for my SaaS and only needed good juniors for 3 months.
Everyone in the future is going to have access to senior engineers building projects.
I call AIs "aggressive interns". They're fantastic, very fast, and eager... but often go off the rails or get confused. And as you noted, never learn.
I find just the dialog with an AI instructive. Sometimes it suggests things I don't know. Often, after two or three mediocre AI solutions, I'll code up something myself that reuses some of the AI's code but is much better.
Not only that, but one who is infected with terminal Dunning-Kruger syndrome. Of all the things that LLMs are great at, demonstrating a hopeless case of Dunning-Kruger has to be at the very top.
For the most part, engineering interview processes haven't adapted to this yet. I think a lot of engineering orgs have their heads in the sand about this shift.
There is a surprising lack of focus on code reviews as part of that process.
A few months back, I ran into one company (a YC company) that used code reviews as their first technical interview. Review some API code (it was missing validation, error handling, etc.), review some database code (missing indices, bad choices for ID columns, etc.), and more.
I think more companies need to rethink their interview process and focus on code reviews as AI adoption increases.
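To make that concrete, a review exercise in that style might hand the candidate something like this (a made-up FastAPI endpoint, not that company's actual material) and ask what they would flag:

```python
import sqlite3

from fastapi import FastAPI

app = FastAPI()
db = sqlite3.connect("users.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

@app.post("/users")
def create_user(payload: dict):
    # Deliberate flaws for the candidate to catch: no input validation
    # (a missing "name" key becomes a 500 instead of a 400), SQL built
    # by string formatting (injection), and no error handling around
    # the insert.
    name = payload["name"]
    db.execute(f"INSERT INTO users (name) VALUES ('{name}')")
    db.commit()
    return {"status": "created"}
```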