
I think that's mostly just HNers assuming AI like Claude Code is already penetrating the day to day work of the workforce.

"If I use it, then everyone is probably using it."

Yet AI penetration is so low right now that it probably has zero role in the job market.

And it keeps us distracted from talking about the real reasons behind job opening decline.

That said, once AI ubiquity picks up within the next few years, we'll have all of the existing problems we're not talking about... plus AI. And we'll probably be even less capable of talking about the complexities of the market intelligently.





I think parent comment was talking about hype vs reality rather than disagreeing with you.

"We're not hiring but AI is in the news" = "We're not hiring because of AI! Don't sell our stock!" It's independent of actual current or future AI adoption.


Maybe. I am likely not a typical HNer, but my company actually has use of AI in our 2026 goals. I am not guessing. I know the majority of people in this company have those goals baked in. Now, do I know other similar companies do the same? No. But even if they don't, it does not matter, because the companies that don't allow AI have people who use it anyway.

FOMO.

https://www.axios.com/2025/08/21/ai-wall-street-big-tech

Have we forgotten this? It'll find its niche, but it isn't yet a truly transformative one.


<< it isn't yet a << yet

That is a lot of pressure to put on a conjunction. It is right up there with 'it will never be'.

In all seriousness ( and some disclosure ), I like this tech so I am mildly biased in my stance. That said, I almost fully disagree with yours.

As much as I dislike Nadella, his last blog entry is not that far off. Using LLMs for stuff like email summaries is... kinda silly at best. The right use cases may not have emerged yet, but, in a very real sense, it already has been transformative.


"it already has been transformative.."

Yea, at being a search interface. But what else? Not that it can't be, but the failure rate for AI is absurd right now. What happens if it collapses and all it's used for is answering questions on your phone and maybe better search of your emails? That seems to be a real and probably likely outcome. What then? Ironically, I think it will improve the economy, because there are a lot of decisions on hold until we know what LLMs will be used for. Probably isn't going to be good for SEs either way.


<< but the failure rate for AI is absurd right now.

I keep a personal log of specific failures for simple CYA reasons. I do get some, but I can't honestly say the rate seems high to me. A lot likely depends on what is defined as a failure (to me it is typically a clearly wrong result). But those clearly wrong results do not seem to cross 10% of output... so about the same as an average human.


The writing is on the wall for AI. It is coming fast and it is transformative. That your company is still trying to ramp up AI adoption and processes for 2026 supports my point.

But we've been blaming AI for a couple years now, yet I suspect it's still too early in the adoption curve to have a meaningful impact on hiring compared to more boring explanations.


Even if AI wasn't being used for daily tasks by general employees, it's being used by HR and staff sourcing firms to sort through applications, so it already has had a large (negative) impact on hiring.

Maybe we should do an "Ask HN" for those in HR or adjacent roles to poll for experiences there.


"it's being used by HR and staff sourcing firms to sort through applications"

I think you are correct, but is anyone happy about the current situation? I suspect it will change and that change very likely will intentionally not involve AI. I suspect it will be an economic solution, not a technological one.


I hear what you are saying. In a very practical sense, I have no real way to measure either of those factors, and the company I work for is international, so that does not allow for an easy extrapolation. I guess what it really means is: we will find out :P

Really? I see H1B as the tiniest drop in the bucket compared to AI, at least in software. It's not that AI is filling 1 human role with 1 AI, it's that everyone who has a job knows that they need to keep it because the market is insanely cutthroat right now. Everyone has an AI-polished resume, and employers no longer see the value in having talented employees. Even if they did have talented employees, they don't trust them enough to know how to do the work. If your employer says "I need you to start using AI" they may as well be saying "I don't trust you to know what is worth your time." I see even a lot of people who have jobs as acting in a way that's consistent with being on the verge of being fired, which I think is most of the real "value" of AI so far.

Same here, basically word for word.

What industry and roles?

Finance, but tech adjacent. I am not super comfortable going into more detail.

yes, AI isn't penetrating those fields with high job losses at all

AI isn’t penetrating but all the money needed to invest in the economy has moved over. Maybe that’s also part of the problem

Bespoke AI has not gotten everywhere but generic AI absolutely has.

The workforce is happily making themselves more efficient by using AI on their phones for what used to be a multi-step "look it up in the literature, check your supplier's catalog, consult the instructions, read the rules" process when performing cookie-cutter tasks they know but don't remember exact specifications for.


Do you have a source for that? Everyone I know who works outside of tech is complaining about how AI is making their jobs harder because it’s wrong so much of the time that they’re spending more time correcting it than it saves, and it’s been a boon for cheaters looking to remove obvious tells from their attacks.

I'm talking about people who shower after work not people who shower before work.

I have no doubt that people who are having AI foisted upon them by admins at the behest of someone else hate it.

They use AI as basically a leveled-up version of the summaries Google used to provide for certain search types. Saves them a bunch of obnoxious clicking around on the internet, or in software that was never designed for mobile or to make giving up the kind of info they're seeking easy.


That’s usually also followed shortly by learning that you can’t trust the results or you’ll be making customers whole.

These people usually know enough to know when it's "not quite right". Same "don't trust the docs" story that existed in many workplaces long before AI.

An example I saw recently: someone asked for a modern equivalent of a grease that's no longer made/relevant, and it replied back with some weird aviation stuff. The "real" answer wound up being "just use anything; the builder's intent in specifying was to prevent you from using tallow or some other crap 100 years ago."


Sources for this?


