The core idea is that the AI runs locally on the device, and all data is stored on the device. Therefore, no data will be shared or sold to other companies.
Regarding anonymization --> do you mean, what if I pointed the camera at someone else? That would be filtered out.
>what if I pointed the camera at someone else? That would be filtered out
I'm no expert at this but that sounds a lot harder to implement than you're implying, especially if it's all locally stored and not checked over by a 3rd party. What's to stop me from just doing it anyway?
That’s why the plan is to invert the usual logic: instead of capturing everything and trying to filter later, the system would reject everything by default and only respond to what the user explicitly enables --> similar to how wake word detection works.
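To make that concrete, here's a minimal sketch of the default-deny idea in Python. All of the names (`CapturePolicy`, `process`, the trigger labels) are hypothetical, invented for illustration, not taken from any real codebase:

```python
# Hypothetical sketch of a default-deny capture pipeline: every frame is
# dropped unless it matches a trigger the user has explicitly enabled,
# analogous to a wake word gating an always-off microphone.

from dataclasses import dataclass, field


@dataclass
class CapturePolicy:
    # Triggers the user has opted into, e.g. {"owner_face", "wake_word"}.
    # Empty by default: the system records nothing out of the box.
    enabled_triggers: set[str] = field(default_factory=set)

    def allows(self, detected_triggers: set[str]) -> bool:
        # Default-deny: keep a frame only if at least one detected trigger
        # was explicitly enabled by the user.
        return bool(detected_triggers & self.enabled_triggers)


def process(frame_triggers: list[set[str]], policy: CapturePolicy) -> list[int]:
    # Return indices of frames that survive the policy; everything else is
    # discarded before any storage or further inference happens.
    return [i for i, t in enumerate(frame_triggers) if policy.allows(t)]


policy = CapturePolicy(enabled_triggers={"owner_face"})
frames = [
    {"bystander_face"},                 # rejected: not enabled
    {"owner_face"},                     # kept
    set(),                              # rejected: nothing detected
    {"owner_face", "bystander_face"},   # kept: contains an enabled trigger
]
print(process(frames, policy))  # → [1, 3]
```

Note the design choice this illustrates: the filtering question shifts from "how do we remove bystanders afterwards?" to "what positive condition must hold before anything is retained at all?" It doesn't by itself answer the enforcement worry raised above (a user could still modify the policy), only the default behavior.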
I’ve also thought a lot about trust. Would you feel differently if the system were open source, with the critical parts auditable by the community?
I mean, maybe this is just ignorant of me, but can you really build an app where the AI is completely disabled out of the box, is totally locally controlled by the user, and gives them such granular control over activation that it will only film them and not other people? Is that something one can actually build and expect to be reliable? Can this actually work…?
I mean, generally speaking, yes to open source, but the issue is that if it's open source, people can easily disable the safeguards with a fork, so idk, I feel mixed on it. I'm still leaning towards yes because in general I'm for open source, but I'd have to think about it and hear other people's takes.
Oh, so you'll peep on everything we do, but don't worry, only you and your team will be able to be the voyeurs. lol, lmao even. Do you ppl even hear yourselves talk?