Not sure what the downvotes are for, but ExaFLOP != ExaFLOP has been true for a long time.
A big cluster of A100s is already an "ExaFLOP" machine if you count INT8 ops as FLOPs.
Then you have the different flavors of FP8 (E4M3, E5M2), FP16 vs. BF16, and even multiple flavors of "FP32" (TF32 among them).
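
To spell out what those flavors actually are, the bit layouts (sign + exponent + mantissa) of the usual suspects look roughly like this; this skips vendor-specific variants like Tesla's CFP8:

    # Sign / exponent / mantissa bits for the common floating-point formats
    formats = {
        "FP8 E4M3": (1, 4, 3),
        "FP8 E5M2": (1, 5, 2),
        "FP16":     (1, 5, 10),
        "BF16":     (1, 8, 7),
        "TF32":     (1, 8, 10),   # 19 bits used, stored in a 32-bit word
        "FP32":     (1, 8, 23),
        "FP64":     (1, 11, 52),
    }
    for name, (sign, exp, man) in formats.items():
        print(f"{name:>9}: {sign}+{exp}+{man} bits")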
What people traditionally count as an "ExaFLOP" is FP64, and this hardware's headline number isn't even FP32, much less FP64.
So yeah, an ExaFLOP != an ExaFLOP anymore, because these types of announcements never say "An ExaFLOP of what".
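
Quick back-of-envelope with NVIDIA's published A100 peaks (dense numbers, no sparsity, quoted from the datasheet as best I recall), just to show how elastic the unit is:

    # How many A100s does "an ExaFLOP" take, depending on what you count?
    peak_tflops = {
        "FP64":              9.7,
        "FP64 tensor":       19.5,
        "FP32":              19.5,
        "TF32 tensor":       156,
        "FP16/BF16 tensor":  312,
        "INT8 tensor (ops)": 624,
    }
    EXA = 1_000_000  # 1 EFLOP/s expressed in TFLOP/s
    for fmt, tflops in peak_tflops.items():
        print(f"{fmt:>18}: {EXA / tflops:>8.0f} chips for 1 'Exa'")

That's roughly a 60x spread between "an ExaFLOP of FP64" and "an ExaFLOP of INT8" on the exact same silicon.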
Everybody's brain has an infinite throughput of FP0 (a 0-bit FP format). That doesn't mean your brain can compute faster than Dojo, even though infinite ExaFLOPs >>>> 1 ExaFLOP. The reason is that we are counting different things.
So what's the throughput of this hardware in actual FP64 ExaFLOPs? Zero, because it doesn't support FP64 at all. Not a very impressive marketing statement, but it means this hardware would be extremely bad at, e.g., solving large linear systems of equations.
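
To make that last point concrete, here's a toy NumPy example (generic, nothing Dojo-specific) solving the same mildly ill-conditioned system in FP32 and FP64; on hardware with no native FP64 you either live with the FP32-quality answer or emulate FP64 in software at a big slowdown:

    import numpy as np

    n = 200
    rng = np.random.default_rng(0)

    # Build a system with a known solution and condition number ~1e6.
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.logspace(0, 6, n)          # singular values from 1 to 1e6
    A = (U * s) @ V.T
    x_true = rng.standard_normal(n)
    b = A @ x_true

    for dtype in (np.float32, np.float64):
        x = np.linalg.solve(A.astype(dtype), b.astype(dtype))
        err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        print(dtype.__name__, "relative error:", err)

With a condition number around 1e6, the FP32 solve loses most of its digits while the FP64 one is still good to roughly ten decimal places.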
>So yeah, an ExaFLOP != an ExaFLOP anymore, because these types of announcements never say "An ExaFLOP of what".
I thought Tesla was pretty clear about which floating point formats they were talking about. When they first introduced the D1 they put up a slide with FLOPS measurements for FP16 and FP32. The impressive numbers (an exaFLOP in ten cabinets, for example) are based on FP16, of course.
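
If I'm remembering the AI Day slides right, the per-chip peaks were 362 TFLOPS for BF16/CFP8 and 22.6 TFLOPS for FP32, so the same ten-cabinet ExaPOD comes out to very different "ExaFLOP" counts depending on which one you quote (rough numbers, from memory):

    # Same pod, very different "ExaFLOPs" (approximate AI Day figures)
    d1_chips    = 3000    # ExaPOD: 10 cabinets, 120 tiles, 25 D1 chips per tile
    bf16_tflops = 362     # per D1 chip, BF16/CFP8 peak
    fp32_tflops = 22.6    # per D1 chip, FP32 peak

    print("BF16/CFP8:", d1_chips * bf16_tflops / 1e6, "EFLOPS")  # ~1.09
    print("FP32:", d1_chips * fp32_tflops / 1e6, "EFLOPS")       # ~0.068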