Every post, message, and interaction on Kind Social is protected by multiple layers of automated moderation — keeping the platform safe and respectful for everyone.
Most social media platforms only moderate after harm is done — removing content after it's been reported, after the damage is already felt. Kind Social is fundamentally different. Our moderation is proactive, real-time, and always on. Here's how each layer works.
The Tone Scan™ is a meter that appears in real time as you compose posts and messages. It evaluates your words with a balanced scoring algorithm that measures kindness, politeness, and warmth.
| Score | Status | What Happens |
|---|---|---|
| 75–100 | 😊 Kind | Content is shared freely |
| 50–74 | 😐 Neutral | Content is shared freely |
| 30–49 | ⚠️ Caution | Posts can be shared with a warning; messages are blocked |
| 0–29 | 🚫 Unkind | Content is blocked from being sent or shared |
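The thresholds in the table can be sketched as a simple classifier. This is an illustration only (the function names and return values are ours, not Kind Social's actual API), assuming a score on the 0–100 scale described above:

```python
def tone_status(score: int) -> str:
    """Map a Tone Scan score (0-100) to its status band.

    Thresholds match the table above; names are illustrative.
    """
    if score >= 75:
        return "kind"        # shared freely
    if score >= 50:
        return "neutral"     # shared freely
    if score >= 30:
        return "caution"     # posts warned, messages blocked
    return "unkind"          # blocked entirely


def can_send(score: int, content_type: str) -> bool:
    """Decide whether content may go out, per the table.

    `content_type` is "post" or "message".
    """
    status = tone_status(score)
    if status in ("kind", "neutral"):
        return True
    if status == "caution":
        return content_type == "post"  # posts ship with a warning
    return False
```

Note that the "Caution" band is the only one where posts and messages diverge: a post at score 40 goes out with a warning, while a message at the same score is blocked.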
The Tone Scan™ doesn't just punish unkindness — it rewards kindness, encouraging more positive interactions across the platform.
Every member has a personal Kindness Meter™ score ranging from 0 to 100. It starts at 100 and adjusts over time based on your behavior on the platform.
Your Kindness Meter™ score is visible only to you on your personal profile page. Parents and guardians can also view scores for managed child accounts. It is never shown on public profiles, in search results, or in headers.
If a user accumulates 10 or more downvotes within a 7-day period, the system automatically flags them for review as a potential bully — no human reporting required.
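The flagging rule is a sliding-window count. A minimal sketch, assuming downvotes are stored as timestamps (Kind Social's real pipeline is not public; the function name and defaults mirror the rule above):

```python
from datetime import datetime, timedelta


def should_flag(downvote_times: list[datetime],
                now: datetime,
                threshold: int = 10,
                window: timedelta = timedelta(days=7)) -> bool:
    """Return True when `threshold` or more downvotes fall
    within the trailing `window` ending at `now`."""
    cutoff = now - window
    recent = [t for t in downvote_times if t >= cutoff]
    return len(recent) >= threshold
```

Downvotes older than seven days age out of the window, so a member is only flagged for a concentrated burst of negative feedback, not a slow trickle.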
Kind Social maintains a curated, regularly updated database of banned words and phrases, which are automatically redacted across all content on the platform.
When a banned word is detected, it is replaced with asterisks (e.g., "****") in real time before the content reaches any other user. Each banned word match also incurs a heavy penalty on the sender's Tone Scan score, making it harder to send additional unkind content in the same message.
Words are categorized by severity, allowing the system to distinguish between mildly inappropriate language and severely harmful content.
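Redaction and severity-weighted penalties could be combined as below. This is a sketch under stated assumptions: the example entries and penalty values are invented placeholders, not entries from Kind Social's actual database:

```python
import re

# Hypothetical entries: word -> Tone Scan penalty by severity.
# The real database and its penalty values are not public.
BANNED = {
    "meanword": 10,   # mildly inappropriate -> small penalty
    "cruelword": 40,  # severely harmful     -> heavy penalty
}

_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, BANNED)) + r")\b",
    re.IGNORECASE,
)


def redact(text: str) -> tuple[str, int]:
    """Replace each banned word with asterisks of the same length
    and return the redacted text plus the total Tone Scan penalty."""
    penalty = 0

    def _mask(match: re.Match) -> str:
        nonlocal penalty
        penalty += BANNED[match.group(0).lower()]
        return "*" * len(match.group(0))

    return _PATTERN.sub(_mask, text), penalty
```

Because every match adds to the penalty, a message with several banned words accumulates a large Tone Scan deduction, which is what makes further unkind content in the same message progressively harder to send.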
Every member can flag or report inappropriate behavior, content, or safety concerns through the Report a Problem tool, available in the Safety & Parental Controls menu.
Every report is reviewed by the Kind Social team. Reports are processed with context — including the flagged user's Kindness Meter™ history, recent activity, and any prior reports — to ensure fair and informed decisions.
Every post on Kind Social is automatically evaluated by AI across three critical dimensions: kindness, bias, and veracity.
Posts with low kindness scores, high bias, or low veracity automatically reduce the author's Kindness Meter™ score over time, creating long-term accountability.
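One way this could feed into the Kindness Meter™ is a per-post delta. The linear form and weights below are assumptions for illustration; only the direction of each effect (low kindness, high bias, and low veracity all reduce the score) comes from the description above:

```python
def meter_delta(kindness: float, bias: float, veracity: float) -> float:
    """Compute a hypothetical Kindness Meter adjustment from the
    three AI scores, each on a 0.0-1.0 scale.

    A post is only penalized on the dimensions where it crosses
    the (assumed) 0.5 midpoint in the harmful direction.
    """
    delta = 0.0
    if kindness < 0.5:
        delta -= (0.5 - kindness) * 4   # low kindness penalizes
    if bias > 0.5:
        delta -= (bias - 0.5) * 4       # high bias penalizes
    if veracity < 0.5:
        delta -= (0.5 - veracity) * 4   # low veracity penalizes
    return delta
```

Because each post contributes its own (usually small) delta, accountability accrues gradually rather than from any single post, matching the "over time" framing above.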
We're always improving. Tell us what features would make Kind Social even safer for your family.
Suggest Features to Enhance Moderation and Safety