Curious, as I’m of the same mind - what’s your local AI setup? I’m looking to implement a local system that would ideally accommodate voice chat. I know the answer depends on my use case - mostly searching and analysis of personal documents - but I’d love to hear how you’ve implemented yours.
If you are just starting out, you can try 'open-webui' for inspiration.
After that you can just use llama.cpp to build out your own things.
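To make "build out your own things" concrete: llama.cpp ships a llama-server binary that exposes an OpenAI-compatible HTTP API, so a local client can be a few lines of Python. This is just a minimal sketch - the port, model, system prompt, and the ask_local_llm helper are placeholders, assuming you've started something like `llama-server -m <model>.gguf --port 8080` locally.

```python
# Minimal sketch: query a local llama.cpp server (llama-server) via its
# OpenAI-compatible /v1/chat/completions endpoint. Host/port and prompts
# below are placeholder assumptions, not a fixed setup.
import json
import urllib.request


def ask_local_llm(prompt: str, host: str = "http://127.0.0.1:8080") -> str:
    payload = {
        "messages": [
            {"role": "system", "content": "You answer questions about my personal documents."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Response follows the OpenAI chat-completions shape
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_local_llm("Summarize the notes in my projects folder."))
```

From there you can swap the plain prompt for whatever retrieval or voice front end you want, since everything speaks the same local HTTP API.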
Hardware side, I just have a beefy server that acts as a router (a Mellanox card connecting the provider's fiber uplink and the local fiber network), firewall, wifi access point, Zigbee coordinator, host for various services, camera video feed ingestion and processing, and so on...
It has zero added cost since the hardware is already there, and I'm not captive to some remote company.
I can fiddle with and integrate other home sensors / automation as I want.