Hacker News

For AI inference you definitely have other options, but for low-end graphics? The LPDDR that Apple (and Nvidia in Grace) use would be super expensive to get comparable bandwidth (think $3+/GB, and to get 500 GB/s you need at least 128 GB).

And that 500 GB/s is pretty low for a GPU; it's about a 4070, but the memory alone would add $500+ to the cost of the inputs, not even counting the advanced packaging (getting those bandwidths out of LPDDR needs an organic substrate).
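The arithmetic behind those figures can be sketched as follows. The $/GB and capacity numbers come from the comment itself; the LPDDR5X data rate and bus width are my own illustrative assumptions, not something the comment specifies:

```python
# Back-of-the-envelope sketch of the comment's cost/bandwidth arithmetic.
# Assumed figures from the comment: ~$3/GB for LPDDR, 128 GB minimum
# capacity to reach ~500 GB/s.

LPDDR_PRICE_PER_GB = 3.0      # $/GB, assumed per the comment
MIN_CAPACITY_GB = 128         # GB needed alongside the bandwidth target
TARGET_BANDWIDTH_GBS = 500    # GB/s, roughly RTX 4070 class

memory_cost = LPDDR_PRICE_PER_GB * MIN_CAPACITY_GB
print(f"Memory cost floor: ${memory_cost:.0f}")  # packaging/margin push this past $500

# Why the capacity floor: hitting ~500 GB/s with LPDDR5X (~8533 MT/s,
# an assumption) takes roughly a 512-bit bus, and a wide bus means many
# packages, hence a large minimum total capacity.
bus_width_bits = 512
data_rate_transfers = 8533e6                      # transfers/s per pin (assumed)
bandwidth_gbs = data_rate_transfers * (bus_width_bits / 8) / 1e9
print(f"Peak bandwidth at 512-bit: {bandwidth_gbs:.0f} GB/s")
```

This is only the memory line item; the comment's point is that once you pay for that plus packaging, the product no longer looks like a graphics card.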

It's not that you can't; it's just that once you start doing this, it stops being like a graphics card and becomes like a CPU.


