💛 Always On

Built-In, Always-On Kindness Moderation

Every post, message, and interaction on Kind Social is protected by multiple layers of automated moderation — keeping the platform safe and respectful for everyone.

Most social media platforms only moderate after harm is done — removing content after it's been reported, after the damage is already felt. Kind Social is fundamentally different. Our moderation is proactive, real-time, and always on. Here's how each layer works.

🎯 Tone Scan™ — Real-Time Kindness Scoring

What It Does

The Tone Scan™ is a live meter that appears as you compose each post and message. It evaluates your words with a balanced scoring algorithm, measuring kindness, politeness, and warmth as you type.

How It Scores

Enforcement Levels

Score  | Status     | What Happens
75–100 | 😊 Kind    | Content is shared freely
50–74  | 😐 Neutral | Content is shared freely
30–49  | ⚠️ Caution | Posts can be shared with a warning; messages are blocked
0–29   | 🚫 Unkind  | Content is blocked from being sent or shared
The Tone Scan™ doesn't just punish unkindness — it rewards kindness, encouraging more positive interactions across the platform.
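The enforcement table above can be sketched as a simple threshold check. This is a minimal illustration of the documented tiers, not Kind Social's actual implementation; the function name, parameter, and return labels are assumptions.

```python
def classify_tone(score: int, is_direct_message: bool) -> str:
    """Map a Tone Scan score (0-100) to the documented enforcement action.

    Thresholds match the enforcement table; names are illustrative only.
    """
    if score >= 50:
        # 75-100 (Kind) and 50-74 (Neutral) are both shared freely
        return "shared"
    if score >= 30:
        # Caution tier: posts go out with a warning, messages are blocked
        return "blocked" if is_direct_message else "shared_with_warning"
    # Unkind tier: blocked everywhere
    return "blocked"
```

Note that posts and direct messages diverge only in the Caution tier, which is why the message/post distinction enters the sketch as a single flag.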

📊 Kindness Meter™ — Long-Term Behavior Tracking

What It Does

Every member has a personal Kindness Meter™ score ranging from 0 to 100. It starts at 100 and adjusts over time based on your behavior on the platform.
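A score that starts at 100 and adjusts over time implies clamping at both ends of the 0–100 range. The sketch below shows only that clamping behavior; the specific delta values a real event would produce are not documented here and are assumptions.

```python
KINDNESS_METER_START = 100  # every member begins at the maximum score

def adjust_kindness_meter(current: int, delta: int) -> int:
    """Apply a behavior-based adjustment, keeping the score in 0-100.

    `delta` is positive for kind behavior, negative for unkind behavior;
    actual magnitudes are illustrative assumptions.
    """
    return max(0, min(100, current + delta))
```

Clamping means sustained good behavior cannot bank a score above 100, and a score of 0 cannot go further negative.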

How Scores Change

Visibility

Your Kindness Meter™ score is visible only to you on your personal profile page. Parents and guardians can also view scores for managed child accounts. It is never shown on public profiles, in search results, or in headers.

Bully Detection

If a user accumulates 10 or more downvotes within a 7-day period, the system automatically flags them for review as a potential bully — no human reporting required.
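The rule above is a rolling-window count. A minimal sketch of that check, assuming downvotes are stored as timestamps (the function name and signature are illustrative, not Kind Social's API):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)   # trailing window from the rule above
THRESHOLD = 10               # downvotes needed to trigger a flag

def is_potential_bully(downvote_times: list[datetime], now: datetime) -> bool:
    """Return True when downvotes in the trailing 7-day window reach 10,
    which would trigger an automatic review flag with no human report."""
    recent = [t for t in downvote_times if timedelta(0) <= now - t <= WINDOW]
    return len(recent) >= THRESHOLD
```

Because the window trails the current time, old downvotes age out automatically: a user flagged last month is not re-flagged unless they accumulate ten fresh downvotes.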

🚫 Banned Word Filters — Automatic Content Redaction

What It Does

Kind Social maintains a curated, regularly updated database of banned words and phrases. These are automatically redacted across all content on the platform.

How It Works

When a banned word is detected, it is replaced with asterisks (e.g., "****") in real time, before the content reaches any other user. Each banned-word match also incurs a heavy penalty on the sender's Tone Scan™ score, so additional unkind content in the same message is more likely to fall below the sending threshold and be blocked.

Words are categorized by severity, allowing the system to distinguish between mildly inappropriate language and severely harmful content.
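The redact-and-penalize behavior can be sketched as a single pass over the text. The word list, severity tiers, and penalty amounts below are placeholder assumptions; the real database is curated server-side.

```python
import re

# Hypothetical entries: word -> severity tier (real list is curated)
BANNED = {"jerk": "mild", "hateword": "severe"}
# Hypothetical Tone Scan penalty per tier
PENALTY = {"mild": 10, "severe": 30}

def redact(text: str) -> tuple[str, int]:
    """Replace each banned word with asterisks of equal length and
    total the Tone Scan penalty the matches incur."""
    penalty = 0

    def repl(match: re.Match) -> str:
        nonlocal penalty
        penalty += PENALTY[BANNED[match.group(0).lower()]]
        return "*" * len(match.group(0))

    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, BANNED)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(repl, text), penalty
```

Severity tiers let the same mechanism apply a light penalty for mildly inappropriate language and a heavy one for severely harmful content, as the paragraph above describes.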

🚩 Flagging & Reporting — Community-Powered Safety

What It Does

Every member can flag or report inappropriate behavior, content, or safety concerns. The Report a Problem tool is available in the Safety & Parental Controls menu.

What Happens Next

Every report is reviewed by the Kind Social team. Reports are processed with context — including the flagged user's Kindness Meter™ history, recent activity, and any prior reports — to ensure fair and informed decisions.

🤖 AI-Powered Post Evaluation

What It Does

Every post on Kind Social is automatically evaluated by AI across three critical dimensions: kindness, bias, and veracity.

Posts with low kindness scores, high bias, or low veracity automatically reduce the author's Kindness Meter™ score over time, creating long-term accountability.
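One way to sketch how the three dimensions feed the Kindness Meter™: each dimension contributes a small decrement when it crosses a threshold in the harmful direction. The thresholds and decrement sizes here are illustrative assumptions; only the directions (low kindness, high bias, low veracity hurt the score) come from the description above.

```python
def meter_adjustment(kindness: float, bias: float, veracity: float) -> int:
    """Translate AI post scores (each assumed 0.0-1.0) into a
    Kindness Meter delta. Thresholds and weights are assumptions."""
    delta = 0
    if kindness < 0.4:   # low kindness hurts the score
        delta -= 2
    if bias > 0.6:       # high bias hurts the score
        delta -= 2
    if veracity < 0.4:   # low veracity hurts the score
        delta -= 2
    return delta
```

Applied post after post, small per-dimension decrements produce the gradual, long-term accountability the paragraph describes, rather than a one-shot penalty.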

Have ideas to make moderation even better?

We're always improving. Tell us what features would make Kind Social even safer for your family.

Suggest Features to Enhance Moderation and Safety