I think the pay is going to skyrocket for senior devs within a few years, as training juniors who can graduate past pure LLM usage becomes more and more difficult.
Day after day the global quality of software and learning resources will degrade as LLM grey goo consumes every single nook and cranny of the Internet. We will soon see the first signs of pure cargo cult design patterns, conventions and schemes that LLMs made up and then regurgitated. Only people who learned before LLMs became popular will know that they are not to be followed.
People who aren't learning to program without LLMs today are getting left behind.
Yeah, all of this. Plus companies have avoided hiring and training juniors for 3 or 4 years now (which is more related to interest rates than AI). Plus existing seniors who deskill themselves by outsourcing their brain to AI. Seniors who actually know what they're doing are going to be in greater demand.
That is assuming LLMs plateau in capability (if they haven't already), which I think is highly likely.
My, aren't you a talkative one? Just click on the links in your web browser and read! It's much more efficient than asking me to repeat what I already wrote and linked to. The ball is in your court now. If you have any questions you can't answer yourself by reading, then I'd be glad to answer them for you, but then you're going to have to read my answers too, so if reading just isn't your thing, you will never understand what I'm talking about, or be able to use it. Good luck with the rest of your life!
Instead you can create multiple WireGuard interfaces and use policy routing / ECMP / BGP / all the layer 3 tricks; that way you can achieve similar things to what VXLAN would give you, but at layer 3.
There's a performance benefit to doing it this way too: in some testing I found a single WireGuard interface can be a bottleneck (there's various offload and multi-core support in Linux, but it still has some overhead).
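If it helps, here's a rough sketch of the routing side using pyroute2 (just an illustration, not a full setup - it assumes wg0/wg1 already exist with their WireGuard peers configured via wg/wg-quick, and the prefixes and table number are made up):

    # Spread traffic to 10.20.0.0/16 across two existing WireGuard tunnels
    # with an ECMP (multipath) route, and pin one source subnet to wg1 via
    # policy routing. Assumes pyroute2 is installed and wg0/wg1 are up with
    # their peers already configured.
    from pyroute2 import IPRoute

    ipr = IPRoute()
    wg0 = ipr.link_lookup(ifname="wg0")[0]
    wg1 = ipr.link_lookup(ifname="wg1")[0]

    # ECMP: one destination, two next hops; the kernel hashes flows across them
    ipr.route("add", dst="10.20.0.0/16",
              multipath=[{"oif": wg0}, {"oif": wg1}])

    # Policy routing: traffic sourced from 192.168.1.0/24 looks up table 100,
    # which sends that prefix over wg1 only
    ipr.route("add", dst="10.20.0.0/16", oif=wg1, table=100)
    ipr.rule("add", src="192.168.1.0", src_len=24, table=100)

It's the scriptable equivalent of a plain "ip route add ... nexthop dev wg0 nexthop dev wg1" plus an "ip rule add" - nothing you couldn't do by hand.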
This is the correct answer; routing between subnets is how it's supposed to work. I think there are some edge cases like DR where stretching L2 might sound like a good idea, but in practice it gets messy fast.
Agreed. They've also been extremely finicky in my experience - had cases where large EVPN deployments just blackholed some arbitrary destination MAC until GARPs were sent out for it.
Also IME EVPN is mostly deployed/pushed when clueless app developers expect to have arbitrary L2 reachability across any two points in a (cross DC!) fabric [1], or when they want IP addresses that can follow them around the DC, or other dumb shit that they just assumed they could do.
[1] "What do you mean I can't just use UDP broadcast as a pub sub in my application? It works in the office, fix your network!" and the like.
The good clouds don't support L2: they use a centralized control plane instead of brittle EVPN, and they virtualize in the hypervisor instead of in the switches. People are being sold EVPN as "we have cloud at home" and it's not really true.
AWS/GCE/Azure's network implementations pre-date EVPN and are proprietary to their clouds. EVPN is for on-premises deployments. You don't exactly have the opportunity to use their implementations unless you are on their cloud, so I am not sure comparing the merits of either is productive.
> Also IME EVPN is mostly deployed/pushed when clueless app developers expect to have arbitrary L2 reachability across any two points in a (cross DC!) fabric [1], or when they want IP addresses that can follow them around the DC or other dumb shit that they just assumed they can do.
Sorry, but that's really reductive and backwards. It's usually pushed by requirements from the lower regions of the stack: operators don't want VMs to have downtime, so they live-migrate them to other places in the DC. It's not a weird requirement to let those VMs keep the same IP once migrated. I never had a developer ask me for L2 reachability.
The full picture of what, exactly? How is that fact even relevant to this post? Do you expect anyone affiliated with AI to mention that every time they talk about AI? That's just ridiculous.
I expect someone writing a blog post about AI agents helping you run your home server to disclose that their job is "helping companies automate operations with AI", which they get paid for.
Why wouldn't you bring it up, or even lead with it?
Doesn't it make sense to want to know this? It's not far-fetched at all that there is a conflict of interest. How can they be unbiased about the validity of the approach if this is exactly the same stuff they sell for money?
But the examples you've posted have nothing to do with communication skills; they're just hacks to get particular tools to work better for you, and those will change whenever the next model/service decides to do things differently.
I'm generally skeptical of Simon's specific line of argument here, but I'm inclined to agree with the point about communication skill.
In particular, the idea of saying something like "use red/green TDD" is an expression of communication skill (and also, of course, awareness of software methodology jargon).
Ehhh, I don't know. "Communication" is for sapients. I'd call that "knowing the right keywords".
And if the hype is right, why would you need to know any of them? I've seen people unironically suggest telling the LLM to "write good code", which seems even easier.
I sympathize with your view on a philosophical level, but the consequence is really a meaningless semantic argument. The point is that prompting the AI with words that you'd actually use when asking a human to perform the task, generally works better than trying to "guess the password" that will magically get optimum performance out of the AI.
Telling an intern to care about code quality might actually cause an intern who hasn't been caring about code quality to care a little bit more. But it isn't going to help the intern understand the intended purpose of the software.
I'm not making a semantic argument, I'm making a practical one.
> prompting the AI with words that you'd actually use when asking a human to perform the task, generally works better
Ok, but why would you assume that would remain true? There's no reason it should.
As AI starts training on code made by AI, you're going to get feedback loops: more and more of the training data will be structured alike while the older handwritten code goes stale.
If you're not writing the code and you don't care about the structure, why would you ever need to learn any of the jargon? You'd just copy and paste prompts out of GitHub until something works, or just say "hey Alexa, make me an app like this other app".
Why do you bother with all this discussion? Like, I get it the first x times for some low x, it's fun to have the discussion. But after a while, aren't you just tired of the people who keep pushing back? You are right, they are wrong. It's obvious to anyone who has put the effort in.
It's also useful for figuring out what I think and how best to express that. Sometimes I get really great replies too - I compared ethical LLM objections to veganism today on Lobste.rs and got a superb reply explaining why the comparison doesn't hold: https://lobste.rs/s/cmsfbu/don_t_fall_into_anti_ai_hype#c_oc...
Yes and no. Knowing the terminology is a short-cut to make the LLM use the correct part of its "brain".
Like when working with video, if you use "timecode" instead of "timestamp", it'll use the video production part of the vector memory more. Video production people always talk about "timecodes", not "timestamps".
You can also explain the idea of red/green testing the long way without mentioning any of the keywords. It might work, but just knowing you can say "use red/green testing" is a magic shortcut to the correct result.
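To make that concrete, here's a made-up example prompt (the task itself is invented; only the "red/green" phrasing matters):

    Add ISO-8601 date parsing to the importer. Use red/green TDD:
    write a failing test first, make it pass, then refactor.

versus spelling out the whole write-a-failing-test-first workflow in your own words every time.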
Thus: working with LLMs is a skill, but also an ever-changing skill.
The 40 minutes of the presentation before the hack give a lot more context: there are two journalists in addition to this anonymous pink Power Ranger, and they investigated the Nazi network, which is international. Martha Root (the pink Power Ranger) was trolling them by creating an account and using an LLM. The LLM didn't work properly and the account was blocked on suspicion of being a bot (and maybe for having "= 1 OR 1" as e.g. its gender). She talked her way out of it, and incredibly the admin who unblocked her asked if she wanted to meet up with him and the site's founder. She said yes, didn't show up, but used that opportunity to covertly follow them and uncover the founder's identity - the journalists found that it's a 57-year-old lady who's never been known in the scene, who was married to a French banker whose parents survived the Holocaust, but in the last decade fell into the rabbit hole of white-victimization theory.
> the journalists found that it's a 57-year-old lady who's never been known in the scene, who was married to a French banker whose parents survived the Holocaust, but in the last decade fell into the rabbit hole of white-victimization theory.
Just to clarify, she fell into the rabbit hole, not him. He divorced her. Your comment can easily be read both ways.