
This makes sense in some ways technologically, but just having a "centralized compute box" seems like a lot more complexity than many/most would want in their homes.

I mean, everything could already have been working that way for years, right? One big shared compute box in your house and everything else is a dumb screen? But few people roll that way, even nerds, so I don't see it becoming a thing for offloaded AI compute.

I also think that the future of consumer AI is going to be models trained/refined on your own data and habits, not just a box in your basement running stock ollama models. So I have some latency/bandwidth/storage/privacy questions when it comes to wirelessly and transparently offloading it to a magic AI box that sits next to my wireless router or w/e, versus running those same tasks on-device. To say nothing of consumer appetite for AI stuff that only works (or only works best) when you're on your home network.
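To make the trade-off concrete, here's a minimal sketch of what "transparently offloading" a request to a box on the home network might look like, assuming an Ollama-style HTTP generate endpoint; the hostname, port, and model name are placeholders, and an on-device path would replace the network call entirely.

    # Minimal sketch: sending a prompt to a LAN inference box exposing an
    # Ollama-style HTTP API. "ai-box.local" and the model name are
    # placeholders, not a real deployment.
    import json
    import urllib.request

    AI_BOX_URL = "http://ai-box.local:11434/api/generate"  # hypothetical LAN host

    def ask_home_box(prompt: str, model: str = "llama3") -> str:
        """Send one non-streaming generation request to the home AI box."""
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # ask for a single JSON object, not a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            AI_BOX_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.loads(resp.read().decode("utf-8"))
        return body.get("response", "")

    if __name__ == "__main__":
        # Only works while you're on the home network -- which is exactly
        # the availability/latency question raised above.
        print(ask_home_box("Summarize today's calendar."))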



It most likely won't be a separate device. It'll get integrated into something like an Apple TV or a HomePod, which already has an actual function and will be plugged in and networked all the time anyway. The LLM stuff would just be a bonus.

Both are currently used as the hub for HomeKit devices. Making the ATV into a "magic AI box" won't need anything else except "just" upgrading the CPU from the A-series to the M-series. Actually, the A18 Pro would be enough; it's already used for local inference on the iPhone 16 Pro.



