
Moderation Guidelines

Guiding principles

Protection of our users, especially those who are marginalized and dealing with systemic oppression, takes priority over federation reach or the preservation of existing follower/followee networks. As a result, these are the guiding principles for all moderation work at eldritch.cafe:

  • Power dynamics within the team that echo systems of oppression should be prevented; if prevention fails, they must be addressed and corrected immediately.
  • Moderation should be as transparent as possible.
  • Moderation should be as proactive as possible to avoid putting the burden of moderation on users and therefore unnecessarily exposing them to harmful content.

Conduct

  • Moderators are expected to uphold all server rules, without exception.
  • Moderators are expected to work actively against not only structural, but also personal biases, especially with regard to any form of institutional discrimination and systems of oppression.
  • Moderators should acknowledge and value expertise coming from lived experiences with oppression, and prioritize that expertise when making decisions regarding specific harmful content, users, and instances.
  • Concerns raised by other moderators should always be taken seriously and given the utmost priority, as they affect the integrity of the moderation team and the instance as a whole.
  • When discussing issues, individual moderators should disclose when they feel that their judgement may be impaired, for reasons such as:
    • Personal history with involved users
    • Traumatic or otherwise negative experience with a topic
    • Lack of direct knowledge of culture, events, or other context critical to understanding an issue
    • etc.
  • Moderators should exercise caution and restraint when posting publicly about moderation work, to ensure that sensitive details, such as report contents, are treated with all necessary safety and privacy considerations. That said, publicly addressing grievances when a moderator feels other means are exhausted or not viable is expressly neither prohibited nor discouraged.

Communication

  • For the sake of transparency, all public and private communication must be made through the official account for moderation and administration: Barmaid.

  • Take care never to fully mention (@user@domain) any user who shouldn’t be added to a private conversation (for example, when notifying a reporter about their report of another user).

  • Unless it is a joint statement made with the approval of the full team, messages should be signed at the end of a post/thread with your username and role.

    -- username, moderator
    -- username, administrator
  • Public communications should be reviewed by at least one other moderator.

  • Personal opinions about moderation should come from individual accounts, never from Barmaid.

Approving accounts

Requests that are empty, that obviously violate server rules or are suspected to do so, or that come from known bad actors should be rejected. If you are unsure, ask the team. If you opt to give the benefit of the doubt and approve, log any suspicious content in a note on the account.

Hashtags

Reject any and all hashtags promoting content that violates the rules of the server. If unsure about the nature of a particular trending hashtag, ask the rest of the team.

Posts

Don’t allow trending posts or their authors.

Don’t allow trending links or their publishers.

Reports and appeals

  • You should assign yourself to a report before interacting with it.
  • You can always participate in a discussion, but you must not make any decisions on:
    • reports about yourself
    • your own appeals
    • appeals of reports you resolved yourself
    • any case where your judgement may be impaired
  • Decisions must be explained in a note on the report. Any relevant context or related information that isn’t contained within the report itself (such as related reports) should also be added as notes.
  • You should inform reporters of the decision made; in the case of a forwarded report, inform the originating instance’s contact users in their place.
  • When a decision is made against a remote moderator or administrator, the issue must be escalated to federation level and discussed for a possible equivalent measure against the instance.

Measures

The measures described below are to be used as guidelines to help moderators decide what action to take in case of a rule violation. They are not intended as universal rules, but rather as a foundation to build upon. Moderators should always examine the specific circumstances surrounding a rule violation and are the ultimate arbiters of the best course of action.

Rules violations

  1. Hateful conduct: If the reported user seems to be consciously engaging in hateful conduct or spreading bigoted ideas, they should be immediately suspended. If the user seems to be genuinely uninformed and parroting harmful but commonplace beliefs, it might be worth adopting a more nuanced approach: silence them if they are on a remote instance; talk to them if they are a local user; etc. In any case, the primary goal is to stop the harmful behaviour as soon as possible and to protect our users from it.
  2. Harassment: Once notified, the moderator should take whatever action is necessary to make the harassment stop immediately. This includes suspending accounts and instances. If appropriate, these measures can be lifted later, but you can’t undo harassment.
  3. Safeguarding: The reported content (e.g. doxxing) should be immediately removed. If the user behind it shared it intentionally, they should be suspended as well.
  4. Sexual content: When sexual content is shared locally without a content warning, it is often enough to flag it as sensitive and remind the user of our rules. When it appears on the public local timeline without a content warning, deleting it before contacting the user is usually the best course of action. Remote accounts that repeatedly share sexual content without a content warning on the public timeline should usually be silenced.
  5. CSAM/Abuse: The reported user should be immediately banned. If the content came from a remote instance, investigate whether the instance allows that kind of content; if it does, the instance should be suspended as well.
  6. Misinformation: If a local user posts misinformation or conspiracy theories, consider approaching first to discuss with them and having the post(s) amended or deleted if harmful or dangerous. Remote users may be silenced or suspended without intervention.
  7. Spam: Users reported for engaging in spam can be warned, silenced, or suspended depending on the frequency and severity.
  8. Illegal content: The actions to be taken are usually context-dependent. It’s advised to contact the administration team.

Users found repeatedly violating rules (having recorded multiple “strikes”) may face suspension, with or without notice, depending on the severity of the violations and the potential spread of harm caused.

If moderators are unsure which course of action is best in a given situation, they should ask for advice or help from other members of the administration and moderation teams. Asking for help is never wrong.

Federation

  • Federation decisions must be immediately logged in the dedicated team channel with a message containing the instance, the threat level, and the measure applied by the moderator (see the example below).
  • The internal note should reference a ticket number if one exists.
  • Federation decisions should be communicated publicly and in a timely manner with the #fediblock hashtag, announced from the Barmaid account.
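
A log message might look like the following; the instance name and ticket number here are only hypothetical placeholders:

    instance: example.social
    threat level: immediate
    measure: suspended (ticket #42)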

Measures

The measures described below cover defined cases where consensus is neither required nor the appropriate solution.

Immediate threat

  • fascist/nazi/alt-right instances
  • instances with proximity to known bad actors, such as endorsing, knowingly tolerating, sharing moderation staff with, or propagating significant amounts of content from harmful instances.
  • instances engaging in non-isolated or systematic harassment or doxxing (such as kiwifarms)
  • non-moderated “free speech” instances
  • instances that engage in or support archiving, non-isolated spam, or other disregard for the expected federation of content (ignoring visibility settings, etc.) that threatens users’ safety or privacy

Examples of “proximity to bad actors” include an instance deciding to limit but not suspend a harmful instance, listing one via the “local bubble” or similar features, or its staff following users on or sharing content from such an instance.

These instances should be defederated as fast as possible. Moderators can suspend without prior peer approval.

Non-immediate threat

  • missing rules regarding the enforcement of basic protection for marginalized users
  • slow or lacking response to forwarded reports

These instances should also be defederated. However, moderators can silence them immediately and then, without peer approval, upgrade to a suspension after two weeks.

This silenced period should be used to:

  • connect with the instance administrator to have them fix the source(s) of the threat
  • warn our users about possible defederation, if we share followers/followees

Early suspension instead of silence or before the end of the silence period is possible with at least two moderators’ approval.

For all other non-immediate issues, such as a mismatch in server rules (for example, another instance allowing sexual content without a content warning), discussion, marking all media as sensitive, or silencing remain open alternatives.

Internal management

In addition to the code of conduct that governs interactions between moderators:

  • Reports, issues, or complaints against a moderator will be discussed by the entire team and actions decided accordingly.
  • Moderators can request the temporary and immediate demotion of any moderator on suspicion of rogue behavior or violation of these guidelines. Probation periods can last anywhere from a week to 30 days, during which the behavior in question will be investigated and redressed, or until action has been taken and the moderation team considers the problem resolved.

Conflict resolution

If a moderator needs to raise any concerns or complaints over the operation of the moderation team, they can initiate, either directly or through the administration team, a collective discussion of the issue on a dedicated communication channel.

Moderator addition

  • If the current moderation team becomes unable to handle moderation duties, additional moderators will be recruited in a timely manner.
  • Moderator applications, along with any and all reports from the applicant’s time as a user, will be reviewed closely by the whole team.

Moderator removal

  • The moderation team may decide by consensus to ask a moderator to step down.

Additional notes

We give our thanks to the moderation and admin teams who have helped shape these guidelines and shared their own with us, such as weirder.earth and rage.love.

These guidelines are an internal document meant to be used by moderators. They provide both a reference point for people doing moderation work and a framework to help make decisions. They are not hard-and-fast rules and shouldn’t be regarded as such.

This is a living document that may be expanded as online culture, technology, and eldritch.cafe users grow and change.