Deepfakes have been around for a while, but as they have become easier to create, cyber crime and the spread of misinformation have risen in tandem. For instance, deepfakes of celebrities and political figures are being used to scam individuals and influence elections. Given the relative lack of awareness of deepfakes, combined with increasingly realistic outputs, many have fallen prey. This has justifiably led to an intense fear of deepfakes, prompting attempts at policy interventions across countries.
India’s approach to curbing deepfakes has so far been through advisories, which, by their nature, are not enforceable. This changed in October 2025, when the Ministry of Electronics and Information Technology (MeitY) published the draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 for public consultation. In addition to the now-familiar approach of mandating the labelling of synthetically generated information and metadata traceability, there are three new additions that are a cause for concern:
- The definition of ‘synthetically generated information’: “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true;” [Rule 2(1)(wa)]. This definition is broad, vague, and subjective. Including “modified or altered” in the definition sweeps in much of the media published today, which may be edited with filters, graphic design tools, and the like. Moreover, “reasonably” is a subjective standard, and whether something appears to be real or true lies in the eye of the beholder.
- The labelling obligation on SSMIs: The Rules also require Significant Social Media Intermediaries (SSMIs) to ensure that synthetically generated information is not published without labelling, failing which the intermediary will be deemed non-compliant. Since the definition of such information is already broad, this burden may push intermediaries towards over-censorship to avoid non-compliance [Rule 4(1A)].
- User declarations and technical measures: SSMIs will also have to obtain a declaration from users on whether uploaded information is synthetically generated, and deploy reasonable technical measures to verify such declarations. Here again, it is unclear what amounts to “reasonable”, and compliance efforts may result in intrusive features [Rule 4(1A)].

The draft Rules also raise safe harbour concerns, which are not covered in this blog.
Clearly, there is a need for an alternative approach to regulating deepfakes, if regulation is warranted at all. One interesting approach comes from Denmark, where an amendment to its copyright law has been proposed to extend copyright-like protection to a person’s likeness. Broadly, the proposed amendment would protect individuals from others sharing realistic digital recreations of their physical traits, artistic performances, and mimicry of artists, for up to 50 years after their death. While this has been lauded as a game-changer for deepfake regulation, its implementability remains uncertain: such an effort would require significant state capacity and, likely, a systematic way to delineate and register each protected physical trait.
Stepping back, we must question whether additional regulation is necessary at all. While the risks of deepfakes are undoubtedly significant, existing laws on cyber crime and online content already penalise fraud and misinformation, regardless of whether deepfakes are involved. This raises the question: would strengthening state capacity and enforcing existing regulations be a better and less resource-intensive approach to tackling the same harms?