I think I'm in an AI fatigue phase. I'm past all the hype around models, tools and agents and back to a problem-and-solution approach: sometimes code gen with AI, sometimes thinking it through and asking for a piece of code. But not offloading everything to AI, buying all the bs, and waiting for it to do magic with my codebase.
Yeah, at this point I want to see the failure modes. Show me at least as many cases where it breaks. Otherwise, I'll assume it's an advertisement and I'll skip to the next headline. I'm not going to waste my time on it anymore.
It's clear to everyone, far beyond our little tech world, that this is going to collapse our entire economic system, destroy everyone's livelihoods, and put the oligarchic assholes already running everything and turning the world to shit even more firmly in control.
I see it in the news, in commentary, in day-to-day conversation. People get that it's for real this time and that there's a very real chance it ends in something like the Terminator, except far worse.
I hesitate to lump this into the "every new technology" bucket. There are few things that exist today that, similar to what GP said, would have been literal voodoo black magic a few years ago. LLMs are pretty singular in a lot of ways, and you can do powerful things with them that were quite literally impossible a few short years ago. One is free to discount that, but it seems more useful to understand them and their strengths, and use them where appropriate.
Even tools like Claude Code have only been fully released for six months, and they've already had a pretty dramatic impact on how many developers work.
With the exception of GPT-5, which was a significant advance, yet, because it was slightly less sycophantic than GPT-4o, the internet decided it was terrible for the first few days.
Google didn't sit back and watch; they basically built the whole foundation for all of this. They were just not the first ones to release a chatbot interface.
Not trying to challenge you, and I'd sincerely love to read your response. People said similar things about previous gen-AI tool announcements that proved over time to be overstated. Is there some reason to put more weight on "what people on HN said" in this case, compared to previous situations?
The only reasonable thing is to not listen to anyone who seems to be hyping anything, LLMs or otherwise. Wait until the thing gets released, run your private benchmarks against it, get a concrete number, and compare against existing runs you've done before.
I don't see any other way of doing this. People who keep reading and following comments, whether here on HN, on LocalLlama, or elsewhere, will continue to be misinformed by all the FUD and guerrilla marketing happening across all of these places.
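To make that concrete, even something this small is enough to get a private number you can trust more than the launch thread (a minimal sketch, assuming your cases live in a JSONL file of prompt/expected pairs and that ask_model wraps whichever model you're testing; both names are placeholders, not any real API):

    # Minimal private benchmark sketch. "my_cases.jsonl" and ask_model are
    # assumptions for illustration, not part of any real tool.
    import json
    from pathlib import Path

    def run_benchmark(ask_model, cases_path="my_cases.jsonl"):
        """Score the model on private cases; returns fraction answered correctly."""
        cases = [json.loads(l) for l in Path(cases_path).read_text().splitlines() if l.strip()]
        hits = sum(
            1 for c in cases
            # Naive substring scoring; swap in whatever check fits your cases.
            if c["expected"].strip().lower() in ask_model(c["prompt"]).strip().lower()
        )
        return hits / len(cases)

    def compare_to_baseline(score, name, path="baseline_scores.json"):
        """Print the new number next to previous runs, then save it as a new baseline."""
        baselines = json.loads(Path(path).read_text()) if Path(path).exists() else {}
        for old_name, old_score in sorted(baselines.items(), key=lambda kv: -kv[1]):
            print(f"{old_name:20s} {old_score:.3f}")
        print(f"{name:20s} {score:.3f}  <- this run")
        baselines[name] = score
        Path(path).write_text(json.dumps(baselines, indent=2))

The point isn't the scoring function, it's that the cases are yours and the comparison is against numbers you've already produced, so nobody's launch post can move them.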
My test for the state of AI is "Does Microsoft Teams still suck?" If it does still suck, then clearly the AIs were not capable of just fixing the bugs, and we must not be there yet.
It's not AI fatigue; it's that you just need to shift mode and not pay too much attention to the latest and greatest as they all leapfrog each other every month. Just stick to one and ride it through the ups and downs.