IT Rules 2021 Amended: 3-Hour Takedown, Mandatory Labels for AI Deepfakes
India has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to specifically regulate Synthetically Generated Information (SGI), including deepfakes, with effect from 20 February 2026, imposing strict takedown deadlines and mandatory labelling to curb non-consensual and unlawful synthetic content.
Why this matters
AI-generated deepfakes can spread rapidly and cause irreversible harm to dignity, privacy, and public order before victims or authorities can respond.
The new framework signals a shift from reactive moderation to time-bound compliance obligations, increasing the operational and legal pressure on intermediaries to act quickly.
What the amendments do
1) Definition of SGI (Synthetic Media)
The rules formally define “Synthetically Generated Information (SGI)” as audio/visual/audio-visual content artificially created, generated, modified, or altered using computer resources in a way that makes it appear real or indistinguishable from authentic events or persons.
They also carve out exclusions for routine or good-faith activities (editing/formatting/transcription/translation/accessibility improvements, educational/training materials, and research outputs) so long as these do not produce false or misleading electronic records.
2) New takedown timelines (compressed)
The amendments drastically shorten the time platforms have to acknowledge and act on complaints and orders, aiming to prevent "virality-first" harm.
Key time limits (as reported):
| Compliance trigger | New deadline |
|---|---|
| Court/Government-declared illegal content | Within 3 hours |
| Non-consensual intimate imagery/deepfakes | Within 2 hours |
| Grievance acknowledgement | Within 2 hours |
| Final grievance resolution | Within 7 days |
A 3-hour window is also flagged for emergency compliance linked to national security or public order.
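The deadlines above amount to a simple compliance clock running from the moment a valid trigger is received. A minimal sketch of that calculation follows; the category names are hypothetical shorthand for illustration, not statutory terms from the Rules:

```python
from datetime import datetime, timedelta, timezone

# Reported deadlines per compliance trigger (illustrative mapping;
# the keys are hypothetical labels, not terms used in the Rules).
DEADLINES = {
    "court_or_govt_illegal": timedelta(hours=3),
    "non_consensual_deepfake": timedelta(hours=2),
    "grievance_acknowledgement": timedelta(hours=2),
    "grievance_resolution": timedelta(days=7),
}

def compliance_deadline(received_at: datetime, trigger: str) -> datetime:
    """Latest time by which the platform must act on the trigger."""
    return received_at + DEADLINES[trigger]

# Example: a non-consensual deepfake complaint received at 09:00 UTC
received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(compliance_deadline(received, "non_consensual_deepfake"))
```

The point of the sketch is how little slack the clock leaves: a 2-hour window must absorb intake, legality assessment, and removal, which is why the trade-off flagged below matters.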
Exam angle: These timelines create a constitutional and governance trade-off—faster protection of dignity/privacy (Article 21) versus higher risk of hasty or excessive takedowns that may chill speech (Article 19(1)(a)).
Labelling, provenance, and traceability
1) Mandatory prominent labelling
Intermediaries that enable creation or dissemination of AI-generated content must ensure such content is clearly and prominently labelled as synthetically generated.
Compared to earlier proposals, the specific “10% of image space” labelling requirement was diluted, while the core “prominent labelling” obligation remains.
2) Provenance markers (where feasible)
Where technically feasible, platforms must embed permanent metadata/provenance mechanisms (including a unique identifier) to help identify the computer resource used to generate/modify synthetic content.
Platforms are prohibited from enabling removal/suppression/alteration of these labels or metadata, to prevent laundering of deepfakes into “authentic-looking” media.
3) User disclosure + platform verification
Platforms must seek user disclosure for AI-generated content and may be expected to proactively label content if disclosure is absent, alongside acting against non-consensual deepfakes.
For Significant Social Media Intermediaries (SSMIs), the amendments emphasize stronger technical measures and user notices to improve compliance and deterrence.
Safe harbour, liability, and legal alignment
Section 79 of the IT Act, 2000 provides “safe harbour” protection to intermediaries for third-party content, subject to due diligence.
The amendments clarify that intermediaries can remove/disable access to unlawful or synthetic content (including via automated tools) without losing safe harbour, as long as due diligence obligations are met.
They also update statutory references by replacing the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023, aligning the Rules with India’s newer criminal law framework.
Challenges and way forward (UPSC perspective)
Key implementation challenges include determining “illegality” within 2–3 hours, risk of over-censorship/defensive removals, and the compliance burden of real-time detection plus human review (especially for smaller platforms).
Way forward measures suggested in policy discussions include clearer standards for illegality, structured takedown protocols, oversight/review mechanisms to prevent arbitrary action, and investments in robust detection and provenance tooling consistent with privacy safeguards.
Prelims quick facts (for revision)
- Effective date: 20 February 2026.
- Core additions: formal SGI definition, prominent labelling, provenance/metadata where feasible, and sharply reduced takedown timelines (3 hours; 2 hours for non-consensual deepfakes).
- Safe harbour: continues under Section 79 if due diligence is met; takedown actions (even automated) are clarified as compatible with safe harbour.
Mains practice questions
- “Compressed takedown timelines under the amended IT Rules, 2021 reflect a tension between digital dignity and free speech.” Discuss with constitutional provisions and safeguards.
- Evaluate the effectiveness and limitations of mandatory labelling and provenance markers in curbing deepfakes in India.
- How do the amendments reshape intermediary liability and safe harbour under Section 79 of the IT Act, 2000?
Conclusion
The IT Rules, 2021 amendments on Synthetic Media (SGI) mark India’s move toward faster, victim-centric action against deepfakes through strict takedown timelines and mandatory labelling/provenance. These measures strengthen platform due diligence and can preserve safe-harbour benefits if complied with, but they also raise implementation challenges around speed, accuracy, and speech safeguards.
FAQs (IT Rules 2021 Amendments – Synthetic Media/Deepfakes)
Q1. What are the new IT Rules amendments about?
They introduce specific obligations for platforms to regulate Synthetically Generated Information (SGI), including AI-generated deepfakes, through faster takedowns and mandatory transparency measures.
Q2. When do these SGI/deepfake rules come into effect?
They take effect from 20 February 2026.
Q3. What is “Synthetically Generated Information (SGI)” in the amended rules?
SGI refers to content artificially created/altered using computer resources to appear real or indistinguishable from reality, including deepfakes.
Q4. Are normal photo edits and educational content also treated as SGI?
Routine editing and certain accessibility/academic uses are generally excluded, provided they don’t create misleading “real-looking” fake representations.
Q5. What are the new takedown timelines for unlawful content?
Platforms must comply with lawful government/court directions within 3 hours (as reported for these amendments).
Q6. What is the deadline for removing non-consensual deepfakes?
Non-consensual deepfakes must be acted upon within 2 hours.
Q7. What is the timeline for grievance handling under these changes?
Reported timelines include grievance acknowledgement within 2 hours and final resolution within 7 days.
Q8. What does “mandatory prominent labelling” mean?
Photorealistic AI-generated/AI-altered SGI must carry a clear, prominent label so users can easily identify it as synthetic.
Q9. What are provenance markers/metadata requirements?
Where feasible, platforms should embed non-removable identifiers/metadata to help trace the source or generation pathway of synthetic content.
Q10. Can users or uploaders remove the AI label/metadata after posting?
No—labels and attached metadata/provenance information are intended to be non-removable/non-modifiable to prevent “deepfake laundering.”
Q11. What extra duties apply to Significant Social Media Intermediaries (SSMIs)?
SSMIs face higher accountability measures such as stronger user notices and technical steps to improve compliance and prevent prohibited synthetic content.
Q12. Will platforms lose safe harbour protection because of these rules?
Safe harbour under Section 79 continues if intermediaries follow the required due diligence; compliance with takedown and transparency duties is central to retaining protection.
Q13. How do these amendments align with new criminal laws?
They update legal references by replacing IPC citations with the Bharatiya Nyaya Sanhita, 2023 where applicable.