

The main issue I have as an editor is that there is no way to correct an LLM's faulty training as directly or reversibly as the existing method of editing an article's wikicode. Already, much of my time updating Wikipedia is spent parsing puffery and removing phrases like "award-winning" or "renowned", inserted by malicious advertisers trying to use Wikipedia as a free billboard. If a Wikipedia LLM began making subjective claims instead of providing objective facts backed by citations, I would have to teach myself machine learning and get involved with the developers who manage the LLM's training. That raises the bar for editor technical competency, a bar Wikipedia has historically striven to lower (e.g. the Visual Editor).
Much of what privacy-busting biometric databases claim to do could be accomplished with speed-of-light geofencing, a.k.a. a "distance-bounding protocol". If a moderator decides messages from country X are problematic, they can flag or block them for other users. It only requires carefully measuring ping times: a signal cannot travel faster than light, so a sufficiently low round-trip time to a trusted server physically proves a client is nearby. Traffic that cannot achieve pings below a set threshold to certain trusted servers simply gets banned.
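Here is a minimal sketch of the idea in Python, assuming a hypothetical trusted server at trusted-server.example.net and a 1,000 km admission radius (both invented for illustration). It uses TCP connect time as the RTT measurement, which over-estimates distance because of handshake and queuing overhead; that error runs in the safe direction, since a client is only admitted when its RTT genuinely proves proximity.

```python
import socket
import time

# Light in fiber travels at roughly 2/3 of c, about 200,000 km/s.
SPEED_OF_LIGHT_FIBER_KM_S = 200_000

def measure_rtt(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the best (lowest) TCP connect round-trip time in seconds.

    The minimum over several samples is what matters: any single low
    RTT is physical proof of proximity, while slow samples only show
    network congestion.
    """
    best = float("inf")
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=2):
                pass
        except OSError:
            continue  # failed attempt proves nothing either way
        best = min(best, time.perf_counter() - start)
    return best

def max_distance_km(rtt_s: float) -> float:
    """Upper bound on distance implied by an RTT.

    The signal makes a round trip, so distance <= rtt * v / 2.
    """
    return rtt_s * SPEED_OF_LIGHT_FIBER_KM_S / 2

# Hypothetical policy: admit only clients provably within 1,000 km
# of the trusted server; flag or block everything else.
rtt = measure_rtt("trusted-server.example.net")  # hypothetical host
if max_distance_km(rtt) <= 1_000:
    print(f"RTT {rtt * 1000:.1f} ms: provably within 1,000 km, admit")
else:
    print(f"RTT {rtt * 1000:.1f} ms: cannot prove proximity, flag/block")
```

Note the asymmetry: a low RTT proves the client is close, but a high RTT proves nothing (the client may be nearby on a slow link). The scheme can only admit provably-nearby traffic, never locate anyone, which is exactly why it is less privacy-invasive than a biometric database.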