The Central government has notified amendments to the IT intermediary rules that formally bring AI-generated content—including deepfake videos, synthetic audio and algorithmically altered visuals—under a structured regulatory framework for the first time. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, will come into effect from February 20.

Under the updated framework, social media platforms and digital intermediaries must ensure all synthetically generated information (SGI) is clearly and prominently labelled so users can immediately tell it apart from real content. Platforms are also required to embed persistent metadata and unique identifiers to make such content traceable back to the source. Crucially, intermediaries cannot allow the removal or suppression of these labels or metadata once applied.
Bigger platforms, bigger responsibilities
Significant social media intermediaries—think Instagram, YouTube, Facebook—face tighter obligations. Before any upload, they must obtain a user declaration on whether the content is synthetically generated and deploy automated tools to verify those claims. If content is flagged as AI-made, it must carry a visible disclosure before it goes live.

Notably, the government dropped an earlier proposal from the October 2025 draft that would have required visible watermarks covering at least 10% of screen space on AI-generated visuals. Industry groups, including IAMAI, had pushed back hard, calling the rule too rigid and technically impractical across formats. The final version still requires clear labelling—just not a fixed-size watermark plastered across the content.

The amendments also sharply compress takedown timelines. In specific cases, platforms now have just three hours to act on lawful orders, down from 36 hours earlier. Other response windows have been cut from 15 days to seven, and from 24 hours to 12.

The rules also mandate that platforms warn users at least once every three months about penalties for violating the new provisions, including misuse of AI-generated content.
