
I don't know how you can write down those numbers and come to the conclusion they sound reasonable at all. Corporations literally can't give this trash away for free without consumers being unhappy about it (e.g. the Copilot malware infesting every aspect of Windows). ChatGPT had 800M MAU in one report, but that's a chat interface, and free. Do you really believe over half of those users are going to convert from "free" to paying $60/mo for access to the chat interface, when all the potential applications for actually improving their lives are failing badly? I think you are out of touch with the finances of non-tech-industry workers if you think they will.

> I don't know how you can write down those numbers and come to the conclusion they sound reasonable at all.

Half this board is in the most hyped echo chamber I’ve ever seen.


I don't know a single person in my (non-tech!) life that doesn't use AI, shy of toddlers and geriatric people.

The famous MIT study (95% of AI initiatives fail, remember that one?) actually found that pretty much every worker was using AI almost daily, but used their personal accounts (hence the corporate ones not being used).

If you are brand new to the tech world, and this is your first new product cycle, the way it works is that there is a free-cool-we're-awesomely-generous phase, and then when you are hooked and they are entrenched, the real price comes to fruition. See...pretty much every tech start-up burning runway cash.

Right now they are getting us hooked, and like the dumbasses consumers are, they will become totally dependent and think it will stay this cheap.


I use AI frequently. I am frequently let down, occasionally satisfied, and very rarely impressed. My experience seems typical of everyone else I know. It's a free and widely promoted tool that has the potential to be useful, so of course people will use it. The features I find most useful are not about providing me new knowledge: it's formalizing something I wrote, or summarizing some other text that I am going to read anyway, or can at least reference as needed to confirm the output. This is also where the local models excel.

I also often see people post AI-generated advice and answers in Facebook groups that are simply incorrect, and get roasted with 100s of people chiming in on how you can't trust ChatGPT.

I just can't see regular people paying more than (Netflix + HBO + Prime + WM+) for an AI subscription. I think you would see tons of competitors pop up if that were at all viable.
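
Rough math on that bundle, with assumed US prices (illustrative only, these change often, and I'm reading WM+ as Walmart+):

  # Hypothetical monthly prices, not current quotes:
  bundle = {"Netflix": 15.49, "HBO Max": 16.99, "Prime Video": 8.99, "Walmart+": 12.95}
  print(f"${sum(bundle.values()):.2f}/mo")  # -> $54.42/mo, about that $60/mo figure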


"If you are brand new to the tech world, and this is your first new product cycle, the way it works is that there is a free-cool-we're-awesomely-generous phase, and then when you are hooked and they are entrenched, the real price comes to fruition. See...pretty much every tech start-up burning runway cash."

That has indeed been the strategy, but it's not like it always, or even usually, works out. We've seen plenty of companies try to raise their prices only to find people aren't hooked. (Though I am almost certain that in this case at least professionals, if not the general public, will indeed be hooked.)


If you really don't know a single such person, you live in a very odd bubble. I know lots of people who used ChatGPT a lot when it first came out, found it funny and occasionally useful, then changed their mind to just finding it funny occasionally, and then eventually stopped because it wasn't that useful and was no longer funny.

None of them ever considered getting a paid account, nor would they have. I'm not saying nobody will, but if you actually don't know any such people then there is something unusual about the crowd you run with.


> actually found that pretty much every worker was using AI almost daily

What they found is that people search the Internet for things and an AI bot is right there. What they didn't find is people using vibe-coded apps, learning from AI, or buying AI services. They did find companies buying AI services, but as an experiment. Also, blaming AI is easy when someone messes up and costs a customer or a sale. The more that happens, the sooner the company stops experimenting. If that happens in a widespread way, then this bubble collapses.


You sound very naive.

A good way to think about it is that ChatGPT is well on its way to becoming a verb like Google did. Doesn't roll off the tongue as easily but in terms of brand awareness it feels ubiquitous.

> ChatGPT had 800M MAU in one report, but that's a chat interface, and free. Do you really believe over half of those users are going to convert from "free" to paying $60/mo for access to the chat

Even if these things worked great for everyone, the percentage of free users who convert to paid users is in the low single digits. For OpenAI to have any chance of breaking even in the consumer space, they need to develop an ad business that makes around 20-25% of what Google's does. That's a tall order, in that Google doesn't make good dough from search anymore, as SERP clicks are down 80% with AI summaries being good enough for most.
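
For a rough back-of-envelope (a minimal sketch in Python; the conversion rate and subscription price are assumptions of mine, not reported figures):

  mau = 800_000_000       # ChatGPT MAU, per the report cited upthread
  conversion = 0.03       # assumed low-single-digit free-to-paid conversion
  annual_price = 20 * 12  # assumed $20/mo subscription, annualized
  print(f"${mau * conversion * annual_price / 1e9:.1f}B/yr")  # -> $5.8B/yr

On those assumptions, subscriptions alone top out in the single-digit billions per year, which is why the ad business is load-bearing here.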


And let's not forget that for the bubble to sustain itself, people who currently use different LLMs would need to create a separate paid account with each one. There's absolutely no way most people will be paying for more than one LLM unless they have a lot of disposable income.

Good code was approximately never valued in enterprise. How many companies worth billions or even trillions have webpages that take 5+ seconds to load text, and use Electron for their desktop applications? In that regard, nothing has changed.

There is still a market for good code in the world, however. The uses of software are nearly infinite, and while certain big-name software gets a free pass on being shitty due to monopoly and network effects, other types of software will still find people who will pay for them if they are responsive, secure, not wildly buggy, and can add new features without a 6 month turnaround time because the codebase isn't a crime against humanity.

On another note, there have been at least four articles on the front page today about the death of coding. As there are every other day. I know I'm tired of reading them, but don't people get bored of writing them?


> I know I'm tired of reading them, but don't people get bored of writing them?

I understand the sentiment here but it shouldn't be surprising that people are upset that their profession and livelihoods are being drastically changed due to advances in AI.


> I know I'm tired of reading them, but don't people get bored of writing them?

Look, it's either this or a dozen articles a day about Claude Code.


By good code people usually mean extensible (growth), not performant. Performant code has different value, often lower than extensible code in enterprise SaaS.

> Good code was approximately never valued in enterprise.

Nope, the value the code creates was always what was valued.

Now we can refactor more easily than ever. And quite a lot of code was throwaway to begin with... so there's no need to deliver good code, at least not in the first iteration. But if it is going to be improved upon, part of the improvement will be preparing it for further improvement.


Good code is extremely subjective, and most bad code is built on a good-code foundation. And the maintainers of most foundational software (think Linux, ffmpeg, curl, V8, etc.) are pushing back.

Once AI agents actually master all the tools we currently use (profilers, disassemblers, debuggers) this may change, but that won't be for a few years.


So funny when people point at Electron as if it singlehandedly makes every program unusable.

Also, I would assume there are not many frequently used, significant pages at $B/trillion companies that take 5 seconds to load text.

> I know I'm tired of reading them, but don't people get bored of writing them?

People never get tired of reading or commenting on commentary on their hobbies.


New Reddit and Outlook.com are two off the top of my head. It is not uncommon to be looking at a spinner for several seconds. There are other websites that are not primarily for text but are still insane. Twitch.tv, an old favorite of mine, now routinely takes 10+ seconds despite having Amazon money behind it. YouTube routinely takes several seconds to load the page, which is still unacceptable even for a video website. These sites are maintained by FAANG-tier engineers paid mid-to-high 6 figures or 7 figures, who I'm sure are mostly perfectly competent, and yet the sites are completely dysfunctional, because enterprise environments inevitably create structural disincentives to producing good code.

I use Electron applications. They are usable, for some value of the word. I am certainly not happy about it, though. I loathe the fact that I have 32GB of RAM and still run into memory issues on a near-daily basis that should literally never happen with the workloads I'm doing. With communication apps like Slack and Discord, where your choice of software comes down entirely to where the people you're communicating with are, you will use dogshit, because there is no point in communicating into the void on a technically superior platform.


Not to mention that it seems like everything has to go through like 10 redirects just to log in, and then you have to dismiss various pop-ups before you can do whatever you went there to do. Some will say that the last part is just a UX problem, but in my opinion, sluggishness is the king of UX problems.

On the topic of Electron, I'm really torn. I can't help but feel some gratitude for the fact that a few of the work tools I need work on Linux (stuff like Slack, Teams, Zoom).


Aww, then you missed the best part! Who wouldn't be head over heels for the opportunity to follow this financial advice and lose all of their "monopoly money" (funsie term for real cash!)?

  Call To Action  
  This won't just be the big one. This could be the last one. If you've been preparing your whole life, knowing that something's coming, then this could be the thing you've been preparing for. One final opportunity to get the guys who did this. [...] The worst that can happen is you lose your monopoly money, but that's been happening anyway.

Yeah that’s a huge red flag

LLM-generated global financial theories rivalling the best of the GME "due diligence" posting, wonderful.

Doubt that it’s LLM-generated given this is Justine Tunney’s project.

It is most certainly LLM-generated. Nobody but an AI prompted with "connect the unwind of the yen carry trade with Trump's threats to acquire Greenland" would have ever written something like that.

My guess is that she did a lot of research on the topic with AI, then created this article partially with AI-generated text.


I definitely got a strong feeling of LLM output reading it. Not sure if the points themselves have any merit, but I don't think I'll run out and buy JPY.

Yes, and feelings are real.

The better rational counterargument is that "privacy is a human right enshrined in international law". Society has zero business knowing anyone's private communications, whether or not that person is a terrorist. There is nothing natural about being unable to talk to people without your speech being recorded for millions of people to view forever. Moreover, giving society absolute access to private communications is a short road to absolute dystopia, as governments use it to completely wipe out all dissent, execute all the Jews or whatever arbitrary enemy of the state they decide on, etc.

You do not get to dispense with human rights because terrorists use them too. Terrorists use knives, cars, computers, phones, clothes... where will we be if we take away everything because we have a vested interest in denying anything a terrorist might take advantage of?


Who decided absolute privacy in all circumstances is a fundamental human right? I don’t think any government endorses that position. I don’t know what international law you speak of. You’re basing your argument on an axiom that I don’t think everyone would agree with.

This sounds like a Tim Cook aphorism (right before he hands the iCloud keys to the CCP) — not anything with any real legal basis.


Article 12 of the United Nations' Universal Declaration of Human Rights:

> No one shall be subjected to arbitrary interference with his privacy [...]

which has since been affirmed to include digital privacy.

> I don’t think any government endorses that position.

Many governments are in flagrant violation of even their own privacy laws, but that does not make those laws any less real.

The UN's notion of human rights was an "axiom" founded on learned experience and the horrors committed in the years preceding its formation. Discarding it is to discard the wisdom we gained from the loss of tens of millions of people. And while you claim that society has a vested interest in violating a terrorist's privacy, you can only come to that conclusion through short-term thinking that terminates at exactly the step where you violate the terrorist's rights, without considering any consequences beyond that; if you do consider the consequences, it becomes clear that society collectively has a bigger vested interest in protecting the existence of human rights.


> No one shall be subjected to arbitrary interference with his privacy

“Arbitrary” meaning you better have good reasons! Which implies there are or can be good reasons for which your privacy can be violated.

You’re misreading that to mean your privacy is absolute by UN law.


Admittedly "arbitrary" is something of a legal weasel word that leaves a lot of room for interpretation. I lean towards a strong interpretation for two reasons: the first is because it is logically obvious why you must give it a strong interpretation; if the people responsible for enforcing human rights can arbitrarily decide you don't have them, you don't have human rights. The second is because we have seen this play out in the real world and it is abundantly clear that the damage to society is greater than any potential benefits. The US in particular has made an adventure out of arbitrarily suspending human rights, giving us wonderful treats like Guantanamo Bay and the black sites across the Middle East. I don't know what part of that experiment looked remotely convincing to you, but to me they only reinforced how clearly necessary inviolable human rights are for the greater good of society.

>if the people responsible for enforcing human rights can arbitrarily decide you don't have them, you don't have human rights

But the "arbitrary" there is too account for the situation where the democratic application of the law wants to inspect the communications of suspected terrorists, and where a judge agrees there is sufficient evidence to grant a warrant.

Unfortunately, that law does nothing against situations like the USA/Russia regimes, where a ruler dispenses with the rule of law (and democratic legal processes too).

You can't practically have that sort of liberalism, where society just shrugs and chooses not to read terrorists' communications; those who wish to use violence make it unworkable.


But if you want to make it possible for the Feds to break into a terrorist's secure phone, you have to make it impossible for anyone to have a secure phone.

That is arbitrary interference with all our privacy.


Usually such "international laws" are only advisory and not binding on member nations. After decades of member nations flouting UN "laws" I can't see them as reliable or effective support in most arguments. I support the policy behind the privacy "laws" of the UN, but enforcing them seems to fall short.

Enforcement mechanisms are weak, but they still exist to set a cultural norm and an ideal to strive towards. Regardless, I have also laid out an argument at length as to why society would logically want to have this be a human right for its own good, regardless of any appeal to existing authority.

sam altman types like this, so this is what is cool to the agi believers.

Maybe he writes in lower case because he targets "lower ages"?

https://www.bbc.com/news/articles/cz6lq6x2gd9o


this is cultural appropriation, i learned to type like this on irc in the 90s

also i don't want to be mistaken for a phone poster


There are two notable differences between when the AGI-posters do it and when IRC-posters do it. AGI-posters extend their lowercase posting to what would normally be seen as more formal communication. They also tend to stick to using punctuation despite the lowercase. IRC posters usually keep it to informal communications, where it's a sign of casualness. That said, there is overlap, and it's of course not possible to instantly distinguish someone as a Sama devotee because of how they type; but it is clear that a lot of people in that bubble are intentionally adopting the style.

This is a bot account. Last post in 2024, then in the last 25 minutes it has spammed formulaic comments in 5 different threads. If you were not able to instantly recognise this post as LLM-generated, this is a good example to learn from, I think. Even though it clearly has a prompt to write in a more casual manner, there's a certain feel to it that gives it away. I don't know that I can articulate all the nuances, but one of them is this structure of 3 short paragraphs of 1-2 sentences each, which is a favorite of LLMs posting specifically on HN for some reason, together with a kind of stupidly glazy tone ("killer app", "always felt 5 years away", randomly reinforcing "comparison to a human assistant you've never met" as though that's a remotely realistic comparison; how many people in the world have a human assistant they've never met and trust with all of their most sensitive information?).

Thanks, we've banned it and some related accounts.

All: generated comments and bots aren't allowed here. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


for me, it's the bullet points with bold one-word sentences

Have you considered that it is possible for two things to be problems?

No, because the comment is in bad faith. It introduced an unrelated issue (poor sentencing from authorities) as an argument about the initial issue we are discussing (AI nudes), derailing the conversation, and then used that newly introduced issue to legitimize a poor argument, when one has nothing to do with the other and each can be good or bad independently.

I don't accept this as good-faith argumentation, and neither do the HN rules.


You are the only one commenting in bad faith, by refusing to understand/acknowledge that the people using Grok to create such pictures AND Grok itself are both part of the issue. It should not be possible to create nudes of minors via Grok. Full stop.

>You are the only one commenting in bad faith

For disagreeing with the injection of off-topic hypothetical scenarios as an argument derailing the main topic?

>It should not be possible to create nudes of minors via Grok.

I agree with THIS part; I don't agree with the part where the main blame is put on the AI instead of on the people using it. That's not a bad-faith argument, it's just my PoV.

If Grok disappears tomorrow, there will be other AIs from other parts of the world, outside of US/EU jurisdiction, that will do the same, since the cat is out of the bag and the technical barrier to entry is dropping fast.

Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?


> Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?

Both, obviously. For example, you go after drug distributors and drug producers. Both approaches are effective in different ways; I am not sure why you are having such trouble understanding this.


This is textbook whataboutery. The law is perfectly clear on this, and Musk is liable.

Other AIs have guardrails. If Musk chooses not to implement them, that's his personal irresponsibility.


Then log off.

Punishing kids after the fact does not stop the damage from occurring. Nothing can stop the damage that has already occurred, but if you stop the source of the nudes, you can stop future damage from occurring to even more girls.

[flagged]


I'm sorry, did the article or anyone in this subthread suggest banning AI? That seems like quite a non-sequitur. I'm pretty sure the idea is to put a content filter on an online platform for one very specific kind of already-illegal content (modified nude images of real people, especially children), which is a far cry from a ban. Nothing can stop local diffusion or Photoshop, of course, but the hardware and technical barriers are so much higher that curtailing Grok would probably cut off 99% or more of the problem material. I suppose you'll tell me that if any solution is not 100% effective, we should do nothing and embrace anarchy?

Edit for the addition of the line about bullying: "Bullying has always happened, therefore we should allow new forms of even worse bullying to flourish freely, even though I readily acknowledge that it can lead to victims committing suicide" is a bizarre and self-contradictory take. I don't know what point you think you're making.


Child sexual abuse material is literally in the training sets. Saying "banning AI" as though it's all the same thing, and all morally-neutral, is disingenuous. (Yes, a system with both nudity and children in its dataset might still be able to produce such images – and there are important discussions to be had about that – but giving xAI the benefit of equivocation here is an act of malice.)

[flagged]


I'm not defending CP, WTF is wrong with you? You're just hallucinating/making stuff up in bad faith.

Y Combinator supports doing anything that makes money

Nobody wants to ban AI; they want to regulate it. Which is what we do with all new technology.

To paraphrase: "your tech bros were so preoccupied with whether or not they could, they never considered if they should".


Have they actually been a huge success, though? You're one of the most active advocates here, so I want to ask what you make of "the Codex app". More specifically, the fact that it's a shitty Electron app. Is this not a perfect use case for agents? Why can OpenAI, with unlimited agents, not let them loose on the codebase with instructions to replace Electron with an appropriate cross-platform native framework, or even a per-platform native GUI? They said they chose Electron for ease of cross-platform delivery, but they could allocate 1, 10, or 1000 agents to develop native Linux and Windows ports of the macOS codebase they started with. This is not even a particularly serious endeavour. I have coded a cross-platform chat application myself with more advanced features than what Codex offers, and chat GUIs are really among the most basic things you can build; practically every consumer-targeted GUI application eventually reaches the point of shoving a chat box into a significantly more complex framework.

The conclusion that seems readily apparent to me, as it always has been, is that these "agents" are completely incapable of creating production-grade software suitable for shipping, or even meaningfully modifying existing software for a task like a port. Like the one-shot game they demo'd, they can make impressive proofs of concept, but nothing any user would use, nor anything with a suitable foundation for developers to actually build upon.


"Why isn't there better software available?" is the 900 pound gorilla in the LLM room, but I do think there are enough anecdotes now to hypothesize that what agents seem to be good at is writing software that

1. wasn't economical to write in the first place previously, and

2. doesn't need to be sold to anyone else or maintained over time

So, Brad in logistics previously had to collate scanned manifests with purchase requests once a month, but now he can tell Claw to do it for him.

Which is interesting given the talk of The End of Software Development or whatever, because "software that nobody was willing to pay for previously" kind of by definition isn't going to displace a lot of people who make software.


I do agree with this fully. I think LLMs have utility in making the creation of bad software extremely accessible. Bad software that happens to perfectly match some person's super specific need is by no means a bad thing to have in the world. A gap has been filled in creating niche software that previously was not worth paying anyone to create. But every single day we have multiple articles here proclaiming the end of software engineering, and I just don't get how the people hyping this up reconcile their hype with the lack of software being produced by agents that is good enough to replace any of the software people actually pay for.

My experience is that coding agents as of November (GPT-5.2/Opus 4.5) produce high-quality, production-worthy code against both small and large projects.

I base this on my own experience with them plus conversations with many other peers who I respect.

You can argue that OpenAI Codex using Electron disproves this if you like. I think it demonstrates a team making the safer choice in a highly competitive race against Anthropic and Google.

If you're wondering why we aren't seeing seismic results from these new tools yet, I'll point out that November was just over 2 months ago and we had the December holiday period in the middle of that.


I'm not sure I buy the safer choice argument. How much of a risk is it to assign a team of "agents" to independently work on porting the code natively? If they fail, it costs a trivial amount of compute relative to OAI's resources. If they succeed, what a PR coup that would be! It seems like they would have nothing to lose by at least trying, but they either did not try, or they did and it failed, neither of which inspires confidence in their supposedly life-changing, world-changing product.

I will note that you specifically said the agents have shown huge success over "the past 12 months", so it feels like the goalposts are growing legs when you say "actually, only for the last two months with Opus 4.5" now.


Claude Code was released in February, it just had its 1 year birthday a few days ago.

OpenAI Codex CLI and Gemini CLI followed a few months afterwards.

It took a little while for the right set of coding agent features to be developed and for the models to get good enough to use those features effectively.

I think this stuff went from interesting to useful around Sonnet 4, and from useful to "let it write most of my code" with the upgrades in November.


Aider with Gemini 2.5 was way ahead of its time, and with O3 it was best in class until Claude Code with Sonnet 4.

The bottleneck in development now is human attention and the ability to validate (https://sibylline.dev/articles/2026-01-27-stop-orchestrating...). OpenAI could unleash the Kraken, but in order to ensure they're releasing good software that works, they still need the eyeball hours and people who can hold the idea of the thing being built in their head and validate against that ideal.

Agents default to creating big balls of mud, but it's fairly trivial to use prompting/tools to keep things growing in a more factored, organized way, as in the sketch below.
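
As a minimal sketch of the kind of tooling I mean (entirely hypothetical: the tool names and the 400-line cap are illustrative, not a standard), a gate script an agent must pass before its changes count as done:

  #!/usr/bin/env python3
  # Hypothetical quality gate for agent output; tools and thresholds are illustrative.
  import pathlib
  import subprocess
  import sys

  def run(*cmd: str) -> None:
      # Fail the gate if any check exits non-zero.
      if subprocess.run(cmd).returncode != 0:
          sys.exit("gate failed: " + " ".join(cmd))

  run("ruff", "check", ".")  # lint catches the usual agent sloppiness
  run("pytest", "-q")        # tests encode the ideal a human validates against
  big = [str(p) for p in pathlib.Path("src").rglob("*.py")
         if len(p.read_text().splitlines()) > 400]
  if big:
      sys.exit("files need factoring: " + ", ".join(big))
  print("gate passed")

Telling the agent "you are not done until the gate passes" goes a long way toward keeping growth factored instead of mud-shaped.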

