
Nvidia's $30,000 GPU is a 90%-margin product at scale. They could charge a third of that and still be very profitable. There has rarely been a large corporation this profitable in terms of the combination of profit and margin.

Their last quarter was $35B in sales and $26B in gross profit ($21.8B in operating income, a 62% operating margin on sales).
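
A quick sketch of that arithmetic, using the rounded figures above:

    # Napkin math on the quarter figures quoted above ($ billions, rounded)
    sales = 35.0
    gross_profit = 26.0
    op_income = 21.8

    gross_margin = gross_profit / sales  # ~0.74, i.e. roughly 74%
    op_margin = op_income / sales        # ~0.62, i.e. the 62% figure
    print(f"gross margin: {gross_margin:.0%}, operating margin: {op_margin:.0%}")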

Visa is notorious for its extreme margins (a 66% operating margin on sales) because it is basically a brand plus a transaction network. So the fact that a hardware manufacturer is hitting those levels is truly remarkable.

It's very clear that either AMD or Intel could accept far lower margins to go after them. And indeed that's exactly what will be required for any serious attempt to cut into their monopoly position.



Visa doesn't actually make a ton of money off each transaction, if you divide their revenue by their payment volume (napkin math)...

They processed $12T in payments last year (almost a billion payments per day), with net revenue of $32B. That's a gross transaction margin of about 0.26%, and their GAAP net income was roughly half that, about 0.14%. [1]
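
Spelled out (a rough sketch using the rounded figures above; the exact numbers in [1] land close to 0.26% and 0.14%):

    # Napkin math: Visa's take per dollar of payment volume (rounded figures)
    payment_volume = 12e12          # ~$12T in payments processed
    net_revenue = 32e9              # ~$32B in net revenue
    net_income = 0.5 * net_revenue  # GAAP net income, roughly half of revenue

    print(f"revenue as share of volume:    {net_revenue / payment_volume:.2%}")  # ~0.27%
    print(f"net income as share of volume: {net_income / payment_volume:.2%}")   # ~0.13%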

They're just a transaction network, unlike say Amex which is both an issuer and a network. Being just the network is more operationally efficient.

[1] https://annualreport.visa.com/financials/default.aspx


That’s a weird way to account for their business size. There isn’t a significant marginal cost per transaction. They didn’t sell $12T in products. They facilitated that much in payments. Their profits are fantastic.


If you have no clue how profit margins are calculated then you're better off staying quiet.

It's quite simple. Divide revenue minus costs by revenue. Transaction volume isn't revenue. Visa only gets the transaction fee.

Even if I give you the benefit of the doubt and take a proper interpretation of the number you've arrived at, its meaning is quite different and quite off-topic for this discussion. What you have calculated is the share of costs that Visa represents in that $12 trillion part of the economy. It is like saying Visa's share of GDP is 0.1%.
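
To make the distinction concrete (the cost figure here is purely illustrative, not Visa's actual cost base):

    # Profit margin is computed against revenue, not against payment volume.
    payment_volume = 12e12  # dollars Visa moves on behalf of others
    revenue = 32e9          # what Visa actually keeps as fees
    costs = 10e9            # illustrative only

    profit_margin = (revenue - costs) / revenue  # margin on revenue, ~69% here
    share_of_volume = revenue / payment_volume   # Visa's cut of the flow, ~0.27%
    print(f"profit margin: {profit_margin:.0%}, share of payment volume: {share_of_volume:.2%}")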


I didn't say that was their profit margin; that's their transaction margin.

As my mother used to say, if you have nothing nice to say, you're better off staying quiet ;)


> And indeed that's exactly what will be required for any serious attempt to cut into their monopoly position.

You misunderstand why and how Nvidia is a monopoly. Many companies make GPUs, and all of those GPUs can be used for computation if you develop compute shaders for them. That part is not the problem; you can already go buy cheaper hardware that outperforms Nvidia if price is your only concern.

Software is the issue. That's it: it's CUDA and nothing else. You cannot assail Nvidia's position, and moreover their hardware's value, without a really solid reason for datacenters to own the alternatives. Datacenters do not want to own those GPUs because once the AI bubble pops they'll be bagholders for Intel's and AMD's depreciated hardware. Nvidia hardware can at least mine crypto, or be leased out to industrial customers that have their own remote CUDA applications. The demand for generic GPU compute is basically nonexistent; the reason this market exists at all is that CUDA exists, and you cannot turn over Nvidia's foothold without accepting that fact.

The only way the entire industry can fuck over Nvidia is if they choose to invest in a complete CUDA replacement like OpenCL. That is the only way Nvidia's value can actually be deposed without any path of recourse for their business, and it will never happen, because every single one of Nvidia's competitors hates the others' guts and would rather watch each other die in gladiatorial combat than band together to fight the monster. And Jensen Huang probably revels in it: CUDA is a hedged bet against the industry ever working together for the common good.


I feel people are exaggerating the impossibility of replacing CUDA. Adopting CUDA is convenient right now because, yes, it is difficult to replace, and the barrier to entry for orgs that could do it is very high. But it has been done; Google has the TPU, for example.


The TPU is not a GPU, nor is it commercially available. It is a chip optimized around a limited feature set with a limited software layer on top of it. It's an impressive demonstration on Google's behalf, to be sure, but it's also not a shot across the bow of Nvidia's business. Nvidia has the TSMC relationship, a refined and complex streaming-multiprocessor architecture, and actual software support their customers can go use today. TPUs haven't quite taken over like people anticipated anyway.

I don't personally think CUDA is impossible to replace, but I do think that everyone capable of replacing it has been ignoring it recently. Nvidia's role as the GPGPU compute people is secure for the foreseeable future. Apple wants to design simpler GPUs, AMD wants to design cheaper GPUs, and Intel wants to pretend it can compete with AMD. Every stakeholder with the capacity to turn this ship around is pretending Nvidia doesn't exist and whistling until they go away.


I don’t disagree with what you are saying, but I want to point out that the fact that the TPU is not a GPU is not really relevant. In the end, what matters most is whether or not it can accelerate PyTorch.
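
For what it's worth, here is roughly what that looks like from the framework side (a sketch; it assumes a PyTorch build with a GPU backend and, for TPUs, the separate torch_xla package):

    import torch

    # Pick whichever accelerator this PyTorch build supports.
    # Nvidia (CUDA) and AMD (ROCm) builds both expose it as "cuda";
    # TPUs are reached through the torch_xla package instead.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        try:
            import torch_xla.core.xla_model as xm  # TPU support
            device = xm.xla_device()
        except ImportError:
            device = torch.device("cpu")

    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # the same model code runs wherever the backend can place tensors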


They're not exaggerating it. The more things change, the more they stay the same. Nvidia and AMD had the exact same relationship 15 years ago that they do today: the AMD crowd crowing about their better efficiencies, and the Nvidia crowd having grossly superior drivers/firmware/hardware, including unique PhysX stuff that STILL has not been matched since 2012 (remember Planetside 2 or Borderlands 2 physics? Pepperidge Farm remembers...)

So many billions of dollars, and no one is even 1% close to displacing CUDA in any meaningful way. ZLUDA is dead. ROCm is a meme, SCALE is a meme. Either you use CUDA or you don't do meaningful AI work.


CUDA is not the issue. AMD has already reimplemented like 80% of it, and honestly that part mostly works fine. PyTorch supports it, (almost) all the big frameworks support it, and if you're not doing really arcane things it just works. It's the drivers! The drivers took like two years after the release of their flagship card to stop randomly crashing. Everything geohot has ever said about AMD drivers is 100% true. They just cannot stop shooting themselves in the foot.
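
You can see that from PyTorch itself: on a ROCm build the CUDA-style API is reused as-is, so most code never notices the difference (a sketch; assumes a ROCm build of PyTorch on a supported AMD GPU):

    import torch

    # On ROCm builds of PyTorch, AMD GPUs show up through the existing
    # torch.cuda API; torch.version.hip is set instead of torch.version.cuda.
    print("HIP/ROCm build:", torch.version.hip)
    print("GPU available:", torch.cuda.is_available())

    x = torch.randn(4096, 4096, device="cuda")  # "cuda" here targets the AMD GPU
    print((x @ x).sum())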


What did he say?


Geohot (temporarily) giving up: https://github.com/ROCm/ROCm/issues/2198#issuecomment-157438... Unfortunately, most of the really spicy Twitter messages are gone since he deleted all his content, but there was a really fun one where he went off on a beautifully cryptic commit message in the driver. He also begged AMD to open-source the firmware so he could debug it. Sadly, AMD promised to do it and then nothing happened, as is typical for AMD promises. That's why tinygrad nowadays is aiming to bypass the driver and firmware entirely.


Who is tinygrad?


tinygrad = George Hotz. (It's his company, __tinygrad__ is basically his "work account".)


> The only way the entire industry can fuck over Nvidia is if they choose to invest in a complete CUDA replacement like OpenCL. That is the only way that Nvidia's value can be actually deposed without any path of recourse for their business, and it will never happen because every single one of Nvidia's competitors hate each other's guts and would rather watch each other die

Intel seems to have thrown their weight behind SYCL, which is an open standard intended to compete with CUDA. It's not clear there has been much interest from other hardware vendors, though.


I do not misunderstand why Nvidia has a monopoly. You jumped drastically beyond anything I was discussing and incorrectly assumed ignorance on my part. I never said why I thought they had one; I never brought up performance or software or moats at all. I simply stated, as a matter of fact, that they had a monopoly, and you assumed the rest.

It's impossible to assail their monopoly without using far lower prices, coming up under their extreme-margin products. That's how it is almost always done competitively in tech (see: ARM; Office, which dramatically undercut Lotus with a cheaper, inferior product; Linux; Huawei; Chromebooks; Internet Explorer; just about anything).

Note: I never said lower prices are all you'd need. Who would think that? The implication that I'm ignorant of the entire history of tech is, frankly, a poor approach to a discussion with another person on HN.


Nvidia's monopoly is pretty much detached from price at this point. That's the entire reason they can charge insane margins: nobody cares! There is not a single business squaring up against Nvidia with serious intent to take down CUDA. It's been this way for nearly two decades, with not a single spark of hope to show for it.

In the case of ARM, Office, Linux, Huawei, and ChromeOS, these were all actual alternatives to the incumbent tools people were familiar with. You can directly compare Office and Lotus because they are fundamentally similar products, and ARM had a real chance against x86 because it wasn't a complex ISA to unseat. Nvidia is not analogous to these businesses because they occupy a league of their own as the provider of CUDA. It's not an exaggeration to say that they have completely seceded from the GPU market and can sustain themselves on demand from crypto miners and AI pundits alone.

AMD, Intel and even Apple have bigger things to worry about than hitting an arbitrary price point, if they want Nvidia in their crosshairs. All of them have already solved the "sell consumer tech at attractive prices" problem but not the "make it complex, standardize it and scale it up" problem.


It is cheaper to pay Nvidia than it is to roll your own solution, and no one else is competitive. That is why Nvidia can charge so much per card.


Thank you for laying it out. It's so silly to see people in the comments act like Intel or Nvidia can't EASILY add more VRAM to their cards. Every single argument against it is hogwash.



