

People have the capacity to track genres and whatnot. What’s so different about this?
I think people could probably understand if it were explained, but unfortunately journalists rarely dive deep enough to do that. It really doesn’t need to get too involved:
- machine learning - tell an algorithm what it’s allowed to change and what a “good” output is, and it’ll handle the rest to find the best solution
- Bayesian networks - the probability of an event given a previous event; this is the underpinning of LLMs
- LLM - similar to Bayesian networks, but with a lot more data
And so on. If people can associate a technology with common applications, it’ll work a lot more like genres and people will start to intuit limitations of various technologies.
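To make “probability of an event given a previous event” concrete, here’s a toy bigram model in Python. The corpus and names are made up purely for illustration, and real LLMs are vastly more sophisticated than this, but the core idea is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def next_word_probs(prev):
    """Estimate P(next word | previous word) from the counts."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```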
Sure, so bake in a set of default “mods” whose influence fades as people interact with the moderation system. Start with a CSAM bot, for example (fairly common on Reddit, so there’s plenty of prior art here), and allow users to manually opt in to make those moderators permanent.
I don’t think anyone wants a pure web of trust, since that relies on absolute trust of peers, and in a system like a message board, you won’t have that trust.
Instead, build it with transitive trust: weight peers based on how much you align with them, trust those they trust a bit less, and so on.
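As a rough illustration of what I mean (the graph, the names, and the decay factor below are all made up):

```python
DECAY = 0.5  # trust inherited through a peer counts half as much per hop

# direct_trust[a][b]: how much user a trusts user b (0..1), e.g. from vote alignment
direct_trust = {
    "me":    {"alice": 0.9, "bob": 0.6},
    "alice": {"carol": 0.8},
    "bob":   {"carol": 0.3, "dave": 0.7},
}

def transitive_trust(source, max_hops=3):
    """Walk outward from `source`, discounting trust at each hop."""
    scores = {}
    frontier = [(source, 1.0, 0)]
    while frontier:
        user, weight, hops = frontier.pop()
        if hops >= max_hops:
            continue
        for peer, t in direct_trust.get(user, {}).items():
            score = weight * t * (DECAY ** hops)
            if score > scores.get(peer, 0.0):
                scores[peer] = score
                frontier.append((peer, score, hops + 1))
    return scores

print(transitive_trust("me"))
# roughly {'alice': 0.9, 'bob': 0.6, 'carol': 0.36, 'dave': 0.21}
```

Note how carol ends up trusted through alice (the stronger path) rather than bob, and nobody downstream ever gets more trust than the peer they’re reached through.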
Maybe? That really depends on how you design it. If you require a lot of samples before trusting someone (e.g. samples where you align on votes), the bots would need to be pretty long-lived to build clout. And at some point, someone is bound to notice bot-like behaviour and report it, which would limit how much it influences visible content.
That can happen with any P2P system, yet it’s not that common of a problem.
I don’t see why it would. All you need is a distinction between agree/disagree votes and relevant/irrelevant votes; Reddit/Lemmy already has everything else. People tend to use votes as agree/disagree regardless, so making the distinction explicit could lead to better moderation.
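In schema terms it’s tiny; something like this (the field names are hypothetical):

```python
from dataclasses import dataclass

# Hypothetical two-axis vote record; field names are made up.
@dataclass
class Vote:
    post_id: str
    voter_id: str
    agree: int     # -1 disagree, 0 neutral, +1 agree
    relevant: int  # -1 off-topic/spam, 0 unsure, +1 on-topic

# “I disagree with this, but it belongs in the discussion.”
v = Vote(post_id="abc123", voter_id="me", agree=-1, relevant=+1)
```

Agreement would feed the alignment weighting between users; relevance would feed moderation and visibility.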
You’d need to tweak the weights, but the core algorithm doesn’t need to be super complex: keep track of the N most aligned users plus some number of “runners-up” so you have a pool to promote into the top group when you start aligning more with someone else. Keep all of that local and drop posts/comments that don’t meet some threshold.
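Here’s a rough sketch of that in Python; the pool sizes, scoring rule, and names are all placeholders I made up, not a real protocol:

```python
import heapq

N_TOP = 50          # most-aligned peers whose votes we weight
N_RUNNERS_UP = 200  # pool to promote from as alignments shift
THRESHOLD = 0.0     # hide posts/comments scoring below this

alignment = {}  # peer_id -> running alignment score, stored locally

def record_shared_vote(peer_id, same_direction):
    """Nudge a peer's score up when we voted the same way, down otherwise."""
    delta = 1.0 if same_direction else -1.0
    alignment[peer_id] = alignment.get(peer_id, 0.0) + delta
    # Keep the top group plus runners-up; forget everyone else.
    keep = set(heapq.nlargest(N_TOP + N_RUNNERS_UP, alignment, key=alignment.get))
    for pid in list(alignment):
        if pid not in keep:
            del alignment[pid]

def post_score(votes):
    """votes: iterable of (peer_id, +1/-1). Weight each vote by alignment."""
    top = set(heapq.nlargest(N_TOP, alignment, key=alignment.get))
    return sum(v * alignment[p] for p, v in votes if p in top)

def visible(votes):
    return post_score(votes) >= THRESHOLD
```

The swapping happens naturally: as scores shift, a runner-up overtakes someone in the top N and starts counting toward what you see.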
It’s way more complex than centralized moderation and will need lots of iteration to tune properly, but I think it can work reasonably well at scale since everything is local.