What is the plan for the Digg Ledger to achieve transparency?
In his AMA earlier this week, Alexis Ohanian mentioned that the goal of the Ledger is to create transparency and trust in the moderation system. I’m curious what the Digg team’s plan is to achieve that goal.
Currently, a community’s Ledger only contains messages like “@digg removed a post because it is spam or misleading about an hour ago”. That shows who did the moderating and when, but not what was removed, and the reasons are limited to broad categories: “off topic”, “spam/misleading”, and “violates Digg's community guidelines”.
I’ve done a little digging, and I can see that in the API these entries are called “moderationEvents” and they carry the author and the original post ID of the removed post - the UI just doesn’t use them. The content is also no longer accessible by its ID, which makes sense given that the point was to remove it.
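To make that concrete, here’s roughly the shape I can infer from the responses. The field names are my guesses from poking at the API, not anything documented:

```typescript
// Hypothetical shape of a Ledger "moderationEvent", reconstructed from
// what I can see in API responses. Field names are my guesses, not docs.
interface ModerationEvent {
  moderator: string;       // e.g. "@digg" - this is what the UI shows
  reason: "off topic" | "spam/misleading" | "violates community guidelines";
  createdAt: string;       // rendered in the UI as "about an hour ago"
  author: string;          // present in the API, but not surfaced in the UI
  originalPostId: string;  // also present, but the post 404s once removed
}
```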
It’s a solid MVP: we get the idea, the developers can confirm it works as intended, and it has claimed its screen real estate in the community “about” box. But it doesn’t yet tackle the sticky problems of building a transparent moderation system - problems the Digg team must be thinking about.
The big three that come to mind are:
Transparency vs Harm: If you show people removed content, you haven’t removed it.
On the one hand, if people don’t know what was removed, they have no way to judge whether that removal was appropriate. On the other, you can’t be out here hosting illegal content. You can quarantine some content and completely remove the rest - but that just moves the curtain; there’s still a curtain. This is the fundamental challenge for any adjudication system: maintaining its legitimacy while doing its job.
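For what it’s worth, here is one hypothetical middle ground: a tiered disposition instead of a binary remove/keep. This is entirely my own sketch, not anything Digg has described:

```typescript
// Hypothetical tiered-visibility model: each removal gets a disposition
// controlling how much the Ledger can show. My sketch, not Digg's design.
type Disposition =
  | { kind: "quarantined"; snapshotUrl: string } // viewable behind a warning
  | { kind: "summarized"; summary: string }      // paraphrase only, original gone
  | { kind: "purged" };                          // illegal/harmful: show nothing

function ledgerLine(d: Disposition): string {
  switch (d.kind) {
    case "quarantined":
      return `Removed post viewable (quarantined): ${d.snapshotUrl}`;
    case "summarized":
      return `Removed post, moderator summary: ${d.summary}`;
    case "purged":
      return "Removed post withheld for legal/safety reasons";
  }
}
```

The curtain is still there for “purged”, but at least readers could see how often that tier is used relative to the others.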
Transparency vs Privacy: You have privileged information that helps determine whether a post is spam but would be irresponsible to share.
In the position of designing a moderation system, I’d want to use IP addresses, device IDs, location information, interaction timing, etc. to identify spammers. If I share this information, I make it easier to dox people (see the sketch after the third point below).
Transparency vs Efficacy: You weaken the system by exposing its mechanisms.
If nefarious actors are aware of the exact signals the system uses to flag their activity, they will avoid those signals, and your job will become harder.
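One pattern that speaks to both the privacy and the efficacy points: let the classifier consume privileged signals, but give the Ledger-publishing path a type that structurally can’t carry them. Again a hypothetical sketch, with every name invented by me:

```typescript
// Hypothetical: the spam classifier sees privileged signals; the public
// Ledger entry is rebuilt field-by-field so they can't leak. All invented.
interface PrivilegedSignals {
  ipAddress: string;
  deviceId: string;
  coarseLocation: string;
  actionGapsMs: number[]; // interaction timing, a common bot tell
}

interface ModerationDecision {
  moderator: string;
  reasonCategory: string; // broad category only
  removedAt: string;
  signals: PrivilegedSignals; // used to decide, never to publish
}

interface PublicLedgerEntry {
  moderator: string;
  reasonCategory: string;
  removedAt: string;
}

function toPublicEntry(d: ModerationDecision): PublicLedgerEntry {
  // Rebuild explicitly rather than spread, so privileged fields
  // can't slip into the public record by accident.
  return {
    moderator: d.moderator,
    reasonCategory: d.reasonCategory,
    removedAt: d.removedAt,
  };
}
```

Publishing only the broad category also keeps the exact flagging signals opaque to spammers, which is the efficacy trade-off in a nutshell: coarser reasons are safer but less satisfying to readers.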
How are you guys planning to thread these needles? Is this how you’ve framed the challenges for yourselves, or have you framed them differently?