Content Policy
Last updated: April 2026 · Effective: April 2026
NamSeva allows users to share photos in chat and upload profile/portfolio photos. This policy describes what content is permitted, how we detect and remove illicit content, and how users can report violations.
1. Prohibited Content
The following content is strictly prohibited on NamSeva:
- CSAM (Child Sexual Abuse Material) — any sexual content involving minors. This is illegal under Indian law (POCSO Act, IT Act) and triggers an immediate report to authorities.
- Non-consensual intimate imagery — sharing intimate photos of another person without their consent.
- Hate speech or graphic violence — images intended to threaten, harass, or incite violence against any individual or group.
- Spam or fraudulent imagery — fake identity photos, doctored credentials, or misleading service representations.
- Nudity or sexually explicit content in profile photos, portfolio images, or chat.
2. Technical Controls
Upload Restrictions
- Only image file types are accepted for upload (JPEG, PNG, WebP). Non-image files are rejected at the API level.
- Maximum file size: 5 MB per image. Oversized files are rejected before storage.
- All uploads go directly to Firebase Storage with server-enforced security rules — unauthenticated uploads are blocked.
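As a concrete illustration of these checks, here is a minimal TypeScript sketch of the validation an API layer might run before accepting an upload. The `rejectUpload` helper and its constants are illustrative, not NamSeva's production code.

```typescript
// Illustrative sketch of the API-level upload checks described above.
// Names and structure are hypothetical, not NamSeva's actual code.

const ALLOWED_MIME_TYPES = new Set(["image/jpeg", "image/png", "image/webp"]);
const MAX_SIZE_BYTES = 5 * 1024 * 1024; // 5 MB limit per image

interface UploadCandidate {
  contentType: string;
  sizeBytes: number;
}

/** Returns null if the upload is acceptable, otherwise a rejection reason. */
function rejectUpload(upload: UploadCandidate): string | null {
  if (!ALLOWED_MIME_TYPES.has(upload.contentType)) {
    return `unsupported content type: ${upload.contentType}`;
  }
  if (upload.sizeBytes > MAX_SIZE_BYTES) {
    return `file too large: ${upload.sizeBytes} bytes (limit ${MAX_SIZE_BYTES})`;
  }
  return null;
}

// Example: a 6 MB PNG is rejected before it ever reaches Storage.
console.log(rejectUpload({ contentType: "image/png", sizeBytes: 6 * 1024 * 1024 }));
```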
CSAM Detection
- Profile photos and portfolio images uploaded to Firebase Storage are scanned with the SafeSearch feature of Google's Cloud Vision API to detect explicit, violent, or CSAM-related content.
- Any image rated VERY_LIKELY for explicit content, or suspected of containing CSAM, is automatically deleted from Storage, and the associated user account is immediately suspended pending review.
- CSAM detections are reported to NCMEC (National Center for Missing & Exploited Children) as required by US law (18 U.S.C. § 2258A) and to CERT-In / local law enforcement as applicable under Indian IT law.
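As a rough illustration of this pipeline, the sketch below assumes a 1st-gen Cloud Functions for Firebase trigger and the `@google-cloud/vision` client. SafeSearch returns a likelihood rating per category (adult, violence, racy, and so on), so the thresholds and the `moderationFlags` collection shown here are policy assumptions, not our exact production logic.

```typescript
// Sketch of scan-on-upload: run SafeSearch on every finalized image and
// delete anything rated VERY_LIKELY. Collection names are hypothetical.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import vision from "@google-cloud/vision";

admin.initializeApp();
const visionClient = new vision.ImageAnnotatorClient();

export const scanUploadedImage = functions.storage
  .object()
  .onFinalize(async (object) => {
    if (!object.name || !object.contentType?.startsWith("image/")) return;

    // Scan the object in place via its gs:// URI (no download needed).
    const [result] = await visionClient.safeSearchDetection(
      `gs://${object.bucket}/${object.name}`
    );
    const annotation = result.safeSearchAnnotation;
    if (!annotation) return;

    const flagged =
      annotation.adult === "VERY_LIKELY" ||
      annotation.violence === "VERY_LIKELY";

    if (flagged) {
      // Delete the image and leave a record for admin account review.
      await admin.storage().bucket(object.bucket).file(object.name).delete();
      await admin.firestore().collection("moderationFlags").add({
        path: object.name,
        annotation,
        flaggedAt: admin.firestore.FieldValue.serverTimestamp(),
      });
    }
  });
```

Passing the `gs://` URI lets Cloud Vision read the object directly from Storage, so the function never has to download the image itself.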
Chat Image Sharing
- Images shared in chat are stored in Firebase Storage under access-controlled paths — only the two chat participants can view them.
- Chat images are subject to the same SafeSearch scan on upload.
- Images flagged as prohibited are deleted and both participants are notified; the sender's account is reviewed for suspension.
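A minimal sketch of how such a participant check might look, assuming chat membership is stored in a Firestore `chats` collection with a `participants` array; the collection layout and the `getChatImageUrl` callable are assumptions for illustration.

```typescript
// Sketch of an access check for chat images. The "chats" collection, its
// "participants" field, and this callable's name are hypothetical.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

export const getChatImageUrl = functions.https.onCall(
  async (data: { chatId: string; imagePath: string }, context) => {
    const uid = context.auth?.uid;
    if (!uid) {
      throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
    }

    // Only the two chat participants may view images under this chat's path.
    const chat = await admin.firestore().doc(`chats/${data.chatId}`).get();
    const participants: string[] = chat.get("participants") ?? [];
    if (!participants.includes(uid)) {
      throw new functions.https.HttpsError(
        "permission-denied",
        "You are not a participant in this chat."
      );
    }

    // Issue a short-lived signed URL instead of making the object public.
    const [url] = await admin
      .storage()
      .bucket()
      .file(data.imagePath)
      .getSignedUrl({ action: "read", expires: Date.now() + 15 * 60 * 1000 });
    return { url };
  }
);
```

Short-lived signed URLs keep the bucket fully private while still letting an authorized participant fetch the image over plain HTTPS.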
Note: SafeSearch scanning is currently our primary automated check. We are evaluating PhotoDNA hash-matching integration for more comprehensive CSAM detection as the platform scales.
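PhotoDNA itself is a proprietary perceptual-hash system available only to vetted organizations, so the sketch below illustrates only the general idea of hash-set matching, using an exact SHA-256 lookup as a stand-in; a production integration would use a perceptual hash that survives resizing and re-encoding, matched against a vetted hash list.

```typescript
// Rough illustration of hash-set matching only. This uses an exact SHA-256
// digest as a stand-in; PhotoDNA is a proprietary perceptual hash, robust to
// resizing and re-encoding, and its hash lists come from vetted sources.
import { createHash } from "crypto";

// Hypothetical known-bad hash set, e.g. synced from an industry provider.
const knownBadHashes: Set<string> = new Set();

function matchesKnownHash(imageBytes: Buffer): boolean {
  const digest = createHash("sha256").update(imageBytes).digest("hex");
  return knownBadHashes.has(digest);
}
```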
3. Human Review
| Trigger | Action | Timeline |
| --- | --- | --- |
| User report submitted | Admin reviews reported content; account suspended if confirmed | Within 48 hours |
| SafeSearch flag (explicit/violent) | Image deleted automatically; admin notified for account review | Immediate (automated) |
| SafeSearch CSAM flag | Image deleted, account suspended, report filed with authorities | Immediate (automated) |
| Repeated violations | Permanent account ban | Upon admin review |
4. How to Report Illicit Content
If you encounter any content that violates this policy:
- In-app: Use the "Report" option on any profile or in any chat (feature in development; see the note below).
- Email: Send a report to abuse@namseva.in with a description and, if possible, a screenshot. Include the username or phone number of the offending account.
We acknowledge all abuse reports within 24 hours and take action within 48 hours of receipt.
In-app reporting (tap-to-report button on profiles and chat) is currently under development and will be added in a future release.
5. Consequences for Violations
- First offence (non-CSAM): Content removed, warning issued, 24-hour suspension.
- Repeated offence: Permanent account ban; phone number blocked from re-registration.
- CSAM / illegal content: Immediate permanent ban, content preserved as evidence, report filed with law enforcement and NCMEC.
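To illustrate how these consequences map onto Firebase primitives, the sketch below disables the offender's Auth account and records the phone number in a hypothetical `blockedPhones` collection; the collection name and helper functions are assumptions, not our exact implementation.

```typescript
// Sketch of the enforcement ladder using the Firebase Admin SDK. The
// "blockedPhones" collection and helper names are hypothetical.
import * as admin from "firebase-admin";

admin.initializeApp();

// Suspension: disabling the Auth account blocks all future sign-ins.
async function suspendUser(uid: string): Promise<void> {
  await admin.auth().updateUser(uid, { disabled: true });
}

// Permanent ban: also record the phone number so it cannot re-register.
async function permanentlyBanUser(uid: string): Promise<void> {
  const user = await admin.auth().getUser(uid);
  if (user.phoneNumber) {
    await admin
      .firestore()
      .doc(`blockedPhones/${user.phoneNumber}`)
      .set({ bannedAt: admin.firestore.FieldValue.serverTimestamp() });
  }
  await suspendUser(uid);
}
```

Under these assumptions, the sign-up flow would consult `blockedPhones` before sending an OTP, and a 24-hour first-offence suspension would additionally need a scheduled job to re-enable the account afterward.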
6. Appeals
If you believe your content was removed or your account suspended in error, you may appeal by emailing support@namseva.in with the subject "Content Appeal". We will review and respond within 5 business days.