
Which companies do you expect to be taken out?

Google and Microsoft will obviously remain. I have a hard time envisioning that OpenAI or Anthropic will go under - especially Anthropic, who are reportedly raking in billions from Claude subscriptions.

Just from my armchair predictions, it's not really any of the juggernauts who have to worry, but rather the many companies springing up to try SaaS offerings with LLMs at the core. A bubble pop there could certainly cause some strife, but I'm just not seeing the mechanism by which these too-big-to-fail tech companies and the heavily invested "frontier AI companies" are going to suddenly cease to exist.

I think the dotcom bubble is a fairly apt metaphor in the key sense that the web didn't go anywhere - just a lot of small players lost their tickets on the gravy train. "Big tech" as it existed at the time of the bubble pop trundled along and continued making gobs of money.


Mozilla is probably doomed in the long term. I think they're in the exact same boat as Microsoft, and wholly lack the self-reflection required to turn the ship around.

Firefox will continue to languish while Mozilla execs receive 8-figure bonuses until there's nothing left to extract.


OpenAI and Anthropic are bleeding money and both need hundreds of billions of dollars in the next couple of years to break even. Oracle is highly overleveraged and I am hoping that the bubble takes them out. You can find the gory details at Ed Zitron's blog. https://www.wheresyoured.at/premium-how-the-ai-bubble-bursts...

Anthropic at least has Amazon’s backing. OpenAI is where the industry is stuffing all of its bad debt and transparently bad deals. It’s the sacrificial company this time around.

I would dance on Oracle’s grave, but they have too much staying power because of their core database and ERP business.


> Google and Microsoft will obviously remain

Microsoft seem to be pushing all kinds of users away in all directions at the moment while focused on the AI bubble. Once it bursts/deflates, will they come back?

Or are we looking at a post-Windows future, where MS just focuses on cloud stuff?

(Or will there be a 'we learned from our mistakes, honest' Windows 12 that wins people back in the same way that Win10 did after Win8?)


Great link, thanks for sharing!

A little different than what you're saying, but you reminded me of an experience I had with Inside - which I enjoyed a lot overall, but -

There were a number of puzzles involving pushing boxes around, and something that really irritated me was that I would understand the solution but then have to go implement it by moving around and doing the pushing with somewhat clunky controls.

It was sort of interesting from a gameplay perspective - that feeling of "eureka" followed by "dammit, now I've gotta do this schlep work".


I found that many times with some of the new Zelda games - ok, what do I have, how do I do this, hmm, aha!

And then I know what I need to do, I know it's doable, and then I get frustrated trying to do it in game.


I completely agree. I let LLMs write a ton of my code, but I do my own writing.

It's actually kind of a weird "of two minds" thing. Why should I care that my writing is my own, but not my code?

The only explanation I have is that, on some level, the code is not the thing that matters. Users don't care how the code looks, they just care that the product works. Writing, on the other hand, is meant to communicate something directly from me, so it feels like there's something lost if I hand that job over to AI.

I often think of this quote from Ted Chiang's excellent story The Truth of Fact, the Truth of Feeling:

> As he practiced his writing, Jijingi came to understand what Moseby had meant: writing was not just a way to record what someone said; it could help you decide what you would say before you said it. And words were not just the pieces of speaking; they were the pieces of thinking. When you wrote them down, you could grasp your thoughts like bricks in your hands and push them into different arrangements. Writing let you look at your thoughts in a way you couldn't if you were just talking, and having seen them, you could improve them, make them stronger and more elaborate.

But there is obviously some kind of tension in letting an LLM write code for me but not prose - because can't the same quote apply to my code?

I can't decide if there really is a difference in kind between prose and code that justifies letting the LLM write my code, or if I'm just ignoring unresolved cognitive dissonance because automating the coding part of my job is convenient.


To me, you are describing a fluency problem. I don't know you or how fluent you are in code, but what you have described is the case where I have no problem with LLMs: translating from a native language to some other language.

If you are using LLMs to precisely translate a set of requirements into code, I don't really see a problem with that. If you are using LLMs to generate code that "does something" and you don't really understand what you were asking for nor how to evaluate whether the code produced matched what you wanted, then I have a very big problem with that for the same reasons you outline around prose: did you actually mean to say what you eventually said?

Of course something will get lost in any translation, but that's also true of translating your intent from brain to language in the first place, so I think affordances can be made.


> what is your calf, how does it do?

... it's a calf, dad, just like yesterday


+1. Termux absolutely rules and makes the dream of a cyberdeck actually viable. I use it at least once a week for various homelab stuff.

And with DeX it's like a +2 :)

> With their "don't put the cat inside the microwave" stickers

not sure what this means, my microwave does not have such a sticker

> "coffee is too hot" lawsuits

I'd encourage you to look into the case you refer to[1] and decide for yourself whether the lawsuit feels frivolous given the facts. My read is that the lawsuit was justified.

[1]: https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau...


If caring that people might burn themselves with hot water is nanny state, then caring that people might burn themselves with McDonald's coffee is also nanny state.

Caring that some restaurant employee is negligent enough to pour coffee hot enough to require an 8-day hospital stay isn't a nanny state; that's basic public safety. If I got in a hot tub expecting it to be hot tub temperature and it burnt my skin off, I'd expect them to get in trouble for endangering me by misleading me into believing it was normal hot tub temperature.

That argument is specious to begin with, because a hot water heater should typically be set such that its maximum temperature would not cause a burn (just as coffee should typically be served at a temperature that is not capable of melting skin). But leaving that aside: the coffee case was a private tort case - a civil suit - and therefore does not and could not by definition support calling the country in which it occurred a "nanny state".

Ok, so an airport is a private business that chose to put "hot water" labels on its taps, and therefore that does not and could not by definition support calling the country in which it occurred a "nanny state".

This definitely seems true to me, from my limited short-form content usage. I try to avoid getting sucked into the feed (YouTube Shorts is the one I have used), but if I do find myself scrolling through the morass of clips from Shark Tank or Family Guy [1], the one guy I'll almost always stop for is FunkFPV, who just does a duet on clips of stupid "hacks" and instances of dumb stuff happening in factory / warehouse / construction settings.

He's just a blue-collar type guy who is mildly funny when critiquing the stupidity of, say, a guy walking up a badly placed ladder with a mini split condenser on his shoulder - but it's a niche that for whatever reason I enjoy, and I don't think I'd remember his handle if it wasn't for his very specific niche.

Interestingly enough [2], I've noticed a number of other creators seem to have sprung up in this niche, and I'll occasionally find a video of some other blue-collar-lookin-dude doing the same schtick. I doubt FunkFPV is the first (in fact he sort of reminds me of an "AvE-lite") to tap this weird market, but he's my touchpoint, at least.

[1]: Yes, it is embarrassing that the algorithm has determined that these are likely to garner my attention

[2]: it's actually not really interesting because almost nothing on the topic of short-form video is actually interesting by any reasonable definition of that word, so this is just a turn of phrase


It definitely bears all the LLM hallmarks we've come to know: the em dash, the "this isn't X. It's Y" structure - and then, to cap it off, a single pithy sentence to end it.


Also bears all the hallmarks of an ordinary post (by someone fairly educated) on the Internet. This would make sense, because LLMs were trained on lots of ordinary posts on the Internet, plus a fair number of textbooks and scientific papers.


The — character is the biggest cause of suspicion. It's difficult to type manually so most people - myself included - substitute the easily typed hyphen.

I know real people do sometimes use it, but it's a smell.


I think some software will automatically substitute "smart quotes" for regular quotes and an em-dash for a double hyphen -- I know MS Word used to do this. Curious if any browsers do. This comment was typed in Brave, which doesn't appear to, but I didn't check if Chrome or IE or Opera does.


The comment was not wrong, though, so I'm not sure that flagging it for the sole reason "it was most likely written by the use of AI" is completely valid.


I've noticed people who are using LLMs more, myself included, are starting to talk like that.

Oops I mean, you're absolutely right, those ARE hallmark signs of an LLM. Let me break down why this isn't just your imagination but actually...


I see scanlines, I upvote. Simple as.

