Moderator Guidelines

Thank you for helping keep ThreadChain a welcoming place for everyone. These guidelines will help you moderate effectively and fairly.

Core Principles

As a moderator, you represent ThreadChain.io. Your actions should always reflect these core principles:

Fairness

Apply rules consistently regardless of who is involved. Personal feelings should never influence moderation decisions.

Transparency

When taking action, provide clear reasons. Users should understand why their content was removed or flagged.

Empathy

Remember there's a real person behind every username. Approach situations with understanding, especially for first-time offenders.

Efficiency

Address issues promptly. Quick action prevents problems from escalating and shows users the community is well-maintained.

Your Role

Moderators are trusted community members who help maintain a positive environment. Your responsibilities include:

  • Reviewing reported content and taking appropriate action
  • Removing content that violates our Content Policy
  • Issuing warnings and strikes to users who break rules
  • Answering user questions about community guidelines
  • Escalating serious issues to administrators
  • Providing feedback on policy improvements

Note: Moderators are volunteers who help the community. You are not expected to be available 24/7. Moderate when you can, and don't hesitate to step back if you need a break.

The Moderation Queue

The moderation queue shows content that needs review. Items enter the queue through:

  • User reports - Community members flagging content
  • AutoMod flags - Automated detection of potential violations
  • Keyword triggers - Content matching monitored terms
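The three entry paths above could be modeled as a small data structure. This is only an illustrative sketch; the names (`QueueSource`, `QueueItem`) are hypothetical and not part of any real ThreadChain API.

```python
from dataclasses import dataclass
from enum import Enum

class QueueSource(Enum):
    """How an item entered the moderation queue (hypothetical labels)."""
    USER_REPORT = "user_report"  # flagged by a community member
    AUTOMOD = "automod"          # automated detection of a potential violation
    KEYWORD = "keyword"          # content matched a monitored term

@dataclass
class QueueItem:
    content_id: str
    source: QueueSource
    reason: str

item = QueueItem("c-123", QueueSource.USER_REPORT, "possible harassment")
```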

Access your moderation queue from the Mod link in the navigation when logged in as a moderator.

Reviewing Content

When reviewing flagged content, consider these factors:

Context Matters

  • Read the full thread, not just the flagged comment
  • Consider if the content is part of a larger discussion
  • Check if the user was responding to provocation
  • Look at the user's history - is this a pattern or one-time issue?

Intent vs. Impact

Sometimes harmful content comes from ignorance rather than malice. Consider:

  • Did the user likely intend to cause harm?
  • Is this a misunderstanding that could be resolved with education?
  • Regardless of intent, is the impact harmful to the community?

While intent matters for choosing consequences, impact matters for whether action is needed. Harmful content should be addressed even if unintentional.

Taking Action

After reviewing a queue item, you have several options:

Approve

The content is acceptable. Use this when flagged content doesn't actually violate any rules. This clears the item from the queue.

Warn

Send a warning without removing content. Use for borderline cases or when educating the user is more appropriate than punishment. The content remains visible.

Remove

Delete the content and issue a strike. Use for clear policy violations. The user will be notified of the removal and reason.

Escalate

Send to administrators for review. Use for serious violations, ban decisions, or cases where you're unsure of the appropriate action.

Strike Guidelines

Refer to our Content Policy for the strike system details. Here's how to apply strikes:

Violation Type | Strikes      | Examples
Severe         | 2 strikes    | Threats, doxxing, hate speech, CSAM
Moderate       | 1 strike     | Personal attacks, harassment, spam
Minor          | Warning only | Off-topic posts, mild incivility

Important: For severe violations (threats, CSAM, doxxing), remove the content immediately and escalate to administrators. These may warrant immediate bans and possible law enforcement reporting.
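The strike table above can be read as a simple lookup from severity to strike count. A minimal sketch, assuming the three severity labels from the table (the function name and dictionary are hypothetical, not a real platform API):

```python
# Severity -> strikes issued, per the strike guidelines table.
STRIKES = {
    "severe": 2,    # threats, doxxing, hate speech, CSAM
    "moderate": 1,  # personal attacks, harassment, spam
    "minor": 0,     # warning only, no strike
}

def strikes_for(severity: str) -> int:
    """Return the number of strikes to issue for a violation severity."""
    if severity not in STRIKES:
        raise ValueError(f"unknown severity: {severity}")
    return STRIKES[severity]
```

Note that a count of 0 still means a warning is sent; it just carries no strike.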

Working with AutoMod

AutoMod is our automated moderation system. It pre-screens content and flags potential violations. Understanding how it works helps you moderate effectively:

How AutoMod Works

  • Scans all posts and comments in real-time
  • Uses pattern matching and AI to detect violations
  • Assigns severity levels (severe, moderate, minor)
  • Blocks severe content immediately
  • Warns users about moderate content before posting
  • Silently flags minor concerns for review
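The severity routing described above can be sketched as a single dispatch function. This is a hedged illustration of the described behavior, not AutoMod's actual implementation; all names are hypothetical.

```python
def automod_action(severity: str) -> str:
    """Route content by AutoMod severity level, per the behavior listed above."""
    if severity == "severe":
        return "block"        # blocked immediately, never posted
    if severity == "moderate":
        return "warn_user"    # user warned before the content is posted
    if severity == "minor":
        return "flag_silent"  # posted, but silently queued for moderator review
    return "allow"            # no violation detected
```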

Overriding AutoMod

AutoMod isn't perfect. You can override its decisions when:

  • Context makes the content acceptable (e.g., discussing a news story)
  • The flagged term has a legitimate use in context
  • AutoMod misidentified harmless content as problematic

When you override AutoMod, your decision is logged for quality review. This helps improve the system over time.

Handling Appeals

Users may appeal moderation decisions. When reviewing appeals:

  • Review the original content and context objectively
  • Consider any new information the user provides
  • Check if policies were applied correctly
  • It's okay to reverse a decision if warranted
  • If you made the original decision, consider having another mod review

Respond to appeals within 48-72 hours when possible. Even if upholding the decision, explain your reasoning clearly and respectfully.

Conflicts of Interest

To maintain fairness, recuse yourself from moderating when:

  • You're directly involved in the discussion
  • The user is someone you know personally
  • You have a personal dispute with the user
  • You feel you cannot be objective

In these cases, leave the item for another moderator or escalate to an administrator.

Moderator Conduct

As a moderator, you're held to a higher standard:

Do

  • Remain professional in all interactions
  • Keep moderation discussions private
  • Ask for help when unsure
  • Document unusual situations
  • Take breaks when feeling burnt out

Don't

  • Use moderator powers for personal disputes
  • Share private user information
  • Discuss pending moderation decisions publicly
  • Make threats or use intimidating language
  • Promise outcomes you can't guarantee

Response Time Goals

While moderators volunteer their time, we aim for these response times:

Priority | Target     | Examples
Critical | < 1 hour   | Threats, illegal content, doxxing
High     | < 4 hours  | Harassment, hate speech, spam waves
Normal   | < 24 hours | General reports, appeals, inquiries
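The targets above amount to a per-priority deadline on each queue item. A small sketch of how such a check might look, assuming the three priority labels from the table (the `TARGETS` mapping and `is_overdue` helper are hypothetical):

```python
from datetime import timedelta

# Response-time targets from the table above.
TARGETS = {
    "critical": timedelta(hours=1),
    "high": timedelta(hours=4),
    "normal": timedelta(hours=24),
}

def is_overdue(priority: str, age: timedelta) -> bool:
    """True if an item of the given priority has exceeded its target."""
    return age > TARGETS[priority]
```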

Need Help?

Questions about moderation? Unsure about a decision? Reach out to the admin team.

Contact Administrators