The fact that anyone can be an editor on Wikipedia scares me. I mostly use it for STEM, current affairs, and timelines: stuff that can't easily be fiddled with, unlike topics around history and politics, which are highly biased on Wikipedia.
For instance, look at wikipediocracy.com
> design the cockpit so that the human pilot is naturally aware of their surroundings.
This is an interface design problem. A self-driving car could just as easily ingest the information such a HUD presents.
This is what makes Apple's AI different from other, microservice-like AI. The spell checker, rewrite, and proofread features are so naturally integrated into the UI that they don't feel like AI-powered operations.
This research is oversimplified and kind of vague. It's in the inherent nature of language models, or for that matter any probabilistic model, to compress information for better generalization, since there is a lower bound on the loss they incur when decoding the information. LLMs are indeed lossy compressors.
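To make the compression framing concrete, here's a toy sketch of my own (not from the research being discussed, and the model dictionaries are made up for illustration): a probabilistic model doubles as a compressor, and the average bits it needs per symbol is its cross-entropy on the data, which can never drop below the data's true entropy.

```python
import math

def bits_per_symbol(text, model):
    """Idealized arithmetic-coding cost: mean -log2 p(symbol) under `model`."""
    return sum(-math.log2(model.get(ch, 1e-12)) for ch in text) / len(text)

text = "abababababab"

# A model that knows nothing about the data vs. one that has "learned" it.
uniform = {ch: 1 / 26 for ch in "abcdefghijklmnopqrstuvwxyz"}
fitted = {"a": 0.5, "b": 0.5}

print(bits_per_symbol(text, uniform))  # ~4.70 bits/symbol
print(bits_per_symbol(text, fitted))   # 1.00 bit/symbol, the entropy floor
```

A better model assigns higher probability to the actual data, so it compresses it into fewer bits; the same logic is why lower perplexity and better compression go hand in hand.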