
He claims to be ideologically driven. OpenAI's actions as a company up until now suggest otherwise.


Sam didn't take equity in OpenAI, so I don't see a personal ulterior profit motive as very likely. We could just wait to find out instead of speculating...


CEO of the first company to own the «machine that’s better than all humans at most economically valuable work» is far rarer than getting rich.


Yeah, if you believe in the AI stuff (which I think everyone at OpenAI does, not Microsoft though) there is a huge amount of power in these positions. Much greater power in the future than any amount of wealth could grant you.


Except the machine isn't.


I'd say it is. Not because the machine is so great but because most people suck.

It was described as a "bullshit generator" in a post earlier today. I think that's accurate. I just also think it's an apt description of most people as well.

It can replace a lot of jobs... and then we can turn it off, for a net benefit.


This sort of comment has become a cliché that needs to be answered.

Most people are not good at most things, yes. They're consumers of those things, not producers. For producers there is a much higher standard, one that the latest AI models don't come anywhere close to meeting.

If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.


> If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.

This assumes too much. GPUs may not hold the throne for long, especially given the amount of money being thrown at ASICs and other special-purpose ICs. Besides, as with the Internet, it's likely that AI adoption will benefit industries in an unpredictable manner, leaving little alpha for direct bets like you're suggesting.


I'm not betting on the GPUs. I'm betting that whole categories of labor will disappear. They're preserved because we insist that people work, but we don't actually need the product of that labor.

AI may figure into that, filling in some work that does have to be done. But it need not be for any of those jobs that actually require humans for the foreseeable future -- arts of all sorts and other human connections.

This isn't about predicting the dominance of machines. It's about asking what it is we really want to do as humans.


So you think AI will force a push away from economic growth? I'm really not sure how this makes sense. As you've said, a lot of labor these days is mostly useless, but the reason it's still here is not ideological; it's because our economy can't survive without growth (useless labor can still have some market value, of course). If you think that AI displacing actually useful labor will somehow create a big economic shift (as would be needed), I'd be curious to know what you think that shift would be.


Not at all. Machines can produce as much stuff as we can want. Humans can produce as much intellectual property as is desired. More, because they don't have to do bullshit jobs.

Maybe GDP will suffer, but we've always known that was a mediocre metric at best. We already have doubts about the real value of intellectual property outside of artificial scarcity, which we maintain only because we still trade intellectual work for material goods that used to be scarce. That's only a fraction of the world economy already, and it can be very different in the future.

I have no idea what it'll be like when most people are free to do creative work but the average person doesn't produce anything anybody might want. But if they're happy, I'm happy.


> but the reason it's still here is not ideological but because our economy can't survive without growth

Isn't this ideological though? The economy can definitely survive without growth, if we change from the idea that a human's existence needs to be justified by labor and move away from a capitalist mode of organization.

If your first thought is "gross, commies!" doesn't that just demonstrate that the issue is indeed ideological?


By "our economy" I meant capitalism. I was pointing out that I sincerely doubt that AI replacing existing useful labor (which it is doing and will keep doing, of course) will naturally transition us away from this mode of production.

Of course if you're a gross commie I'm sure you'd agree, since AI, like any other means of production, will remain first and foremost a tool in the hands of the dominant class, and while using AI for emancipation is possible, it won't happen naturally through the free market.


I’d bet it won’t. A lot of people and services are paid and billed by man-hours spent, not by output. Even the values of tangible objects are traced to man-hours spent. Utility of output is a mere modifier.

What I believe will happen is, eventually we’ll be paying and get paid for depressing a do-everything button, and machines will have their own economy that isn’t on USD.


It's not a bullshit generator unless you ask it for bullshit.

It's amazing at troubleshooting technical problems. I use it daily, I cannot understand how anyone dismisses it if they've used it in good faith for anything technical.


In this scenario, the question is not what exists today, but what the CEO thinks will exist before they stop being CEO.


I would urge you to compare the current state of this question to approximately one year ago.


He's already set for life rich


Plus, he succeeded in making HN the most boring forum ever.

8 out of 10 posts are about LLMs.


The other two are written by LLMs.


In terms of impact, LLMs might be the biggest leap forward in computing history, surpassing the internet and mobile computing. And we are just at the dawn of it. Even if not full AGI, computers can now understand humans and reason. The excitement is justified.


Nah. LLMs are hype machines capable of writing their own hype.

Q: What's the difference between a car salesman and an LLM?

A: The car salesman knows they're lying to you.


Who says the LLMs don’t know?

Testing with GPT-4 showed that they were clearly capable of knowingly lying.


This is all devolving into layers of semantics, but, “…capable of knowingly lying,” is not the same as “knows when it’s lying,” and I think the latter is far more problematic.


Nonsense. I was a semi-technical writer who went from only making static websites to building fully interactive JavaScript apps in a few weeks when I first got ChatGPT. I enjoyed it so much I'm now switching careers into software development.

GPT-4 is the best tutor and troubleshooter I've ever had. If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.


> If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

That’s a bold statement coming from someone with (respectfully) not very much experience with programming. I’ve tried using GPT-4 for my work that involves firmware engineering, as well as some design questions regarding backend web services in Go, and it was pretty unhelpful in both cases (and at times dangerous in memory constrained environments). That being said, I’m not willing to write it off completely. I’m sure it’s useful for some like yourself and not useful for others like me. But ultimately the world of programming extends way beyond JavaScript apps. Especially when it comes to things that are new and challenging.


I don't mean new and challenging in some general sense, I mean new and challenging to you personally.

I have no doubt someone with more experience such as yourself will find GPT-4 less useful for your highly specialized work.

The next time you are a beginner again - not necessarily even in technical work - give it a try.


Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary. LLMs are a useful documentation interface, but struggle to take even simple problems to the hole, let alone do something truly novel. There's no reason to believe they'll necessarily lead to AGI. This stuff may seem earth-shattering to the layman or paper pusher, but it doesn't even begin to scratch the surface of what even I (who I would consider to be of little talent or prowess) can do. It mostly just gums up the front page of HN.


>Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary.

I disagree with this characterization, but even if it were true I believe it's still revolutionary.

A mentor that can competently get anyone hundreds of hours of individualized instruction in any new field is nearly priceless.

Do you remember what it feels like to try something completely new and challenging? Many people never even try because it's so daunting. Now you've got a coach that can talk you through it every step of the way, and is incredible at troubleshooting.


>If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

Please quote me where I say it wasn't useful, and respond directly.

Please quote me where I say I had problems using it, or give any indications I was using it wrong, and respond directly.

Please quote me where I state a conservative attitude towards anything new or challenging, and respond directly.

Except I never did or said any of those things. Are you "hallucinating"?


'Understand' and 'reason' are pretty loaded terms.

I think many people would disagree with you that LLMs can truly do either.


There's 'set for life' rich and then there's 'able to start a space company with full control' rich.


I don't understand that mental illness. If I hit low 8 figures, I pack it in and jump off the hamster wheel.


Is he? Loopt only sold for $40m, then he managed YC and then OpenAI on a salary. Where are the riches from?



But if you want that, you need actual control. A voting vs. non-voting shares split.


Is that even certain, or is that his line to mean that one of his holding companies or investment firms he has a stake in holds OpenAI equity, but not him as an individual?


That's no fun though


OpenAI (the brand) has a complex corporate structure with split for-profit and non-profit entities, and AFAIK the details are private. It would appear that the statement “Sam didn’t take equity in OAI” has been PR-engineered based on technicalities related to this shadow structure.


I would suspect this as well...


What do you mean did not take equity? As a CEO he did not get equity comp?


It was supposed to be a non-profit


Worldcoin https://worldcoin.org/ deserves a mention



Hmm, curious: what is this about? I click.

> On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother

...Ok, closing that bullshit, let's try the other link.

> As Kudzanayi strolled through the mall with friends

Jesus fucking Christ I HATE journalists. Like really, really hate them.


I mean, it's Buzzfeed; it shouldn't even be called journalism. That's the outlet that just three days ago sneakily removed an article from their website that lauded a journalist for talking to school kids about his sexuality, after he was recently charged with distributing child pornography.

Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.

Anyway, the short version regarding that project is that they take biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.

*They call it a hash but I think it's technically not.

https://whitepaper.worldcoin.org/technical-implementation
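
To make the footnote concrete, here's a minimal Python sketch of what a true cryptographic hash of biometric data would look like. This is illustrative only and not Worldcoin's actual pipeline; the variable name and sample bytes are made up.

    # Illustrative sketch only -- NOT Worldcoin's actual scheme.
    # A cryptographic hash is one-way: the biometric can't be recovered
    # from the digest, but identical input always yields the same digest.
    import hashlib

    iris_code = b"example-iris-code-bytes"  # hypothetical raw biometric encoding
    digest = hashlib.sha256(iris_code).hexdigest()
    print(digest)  # 64 hex chars; the underlying iris data is not derivable from this

If what actually goes on-chain is a richer encoding of the iris rather than a one-way digest like this, that's why calling it a "hash" feels like a stretch.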


How so? Seems they’re doing a pretty good job of making their stuff accessible while still being profitable.


To be fair, we don't really know if OpenAI is successful because of Altman or despite Altman (or anything in-between).


Do you have reason to believe neither of the two?


Profit? It's a 501(c).


As someone who is the Treasurer/Secretary of a 501(c)(3) non-profit, I can tell you that it is always possible for a non-profit to bring in more revenue than it costs to run the non-profit. You can also pay salaries to people out of your revenue. The IRS has a bunch of educational material for non-profits [1], and a really good guide to maintaining your exemption [2].

[1] https://www.irs.gov/charities-non-profits/publications-for-e...

[2] https://www.irs.gov/pub/irs-pdf/p4221pc.pdf


Yes. Kaiser Permanente is a good example to illustrate your point. Just Google “Kaiser Permanente 501c executive salaries white paper”.


The parent is; OpenAI Global, LLC is a for-profit, non-wholly-owned subsidiary with outside investors. There's also OpenAI LP, a for-profit limited partnership with the nonprofit as general partner, also with outside investors (I thought it was the predecessor of the LLC, but they both seem to have been formed in 2019 and still exist?). OpenAI has for years been a nonprofit shell around a for-profit firm.

EDIT: A somewhat more detailed view of the structure, based on OpenAI’s own description, is at https://news.ycombinator.com/item?id=38312577


Thanks for explaining the basic structure. It seems quite opaque, and probably designed to be. It would be nice if someone could determine which entities he currently still has a position or equity in.

Since this news managed to crash HN's servers, it's definitely a topic of significant interest.


A non-profit can make plenty of profit, there just aren't any shareholders.


Depends if you're talking about "OpenAI, Inc." (non-profit) or "OpenAI Global, LLC" (for profit corporation). They're both under the same umbrella corporation.


The NFL was a non-profit up until 2015 or so.


100%. Man, I was worried he'd be a worse, slimier Elon Musk who'd constantly say one thing while his actions tell another story. People will be fooled again.


Say what you will, but in true hacker spirit he has created a product that automated his job away at scale.



