Do deepfakes need separate regulations?


Deepfakes have been around for a while, but as they have become easier to create, cyber crime and misinformation involving them have risen in tandem. For instance, deepfakes of celebrities and political actors are being used to scam individuals and influence elections. Given the comparatively low awareness of deepfakes and their increasingly realistic outputs, many people have fallen prey to such scams. This has justifiably led to an intense fear of deepfakes, prompting attempts at policy intervention across countries.

India’s approach to curbing deepfakes has so far relied on advisories, which, by their nature, are not enforceable. This changed in October 2025, when the Ministry of Electronics and Information Technology (MeitY) published the draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 for public consultation. In addition to the now-common approach of mandating the labelling of synthetically generated information and metadata traceability, the draft introduces three new additions that are a cause for concern:

The draft Rules also raise safe harbour concerns, which are not covered in this blog.

Clearly, there is a need for an alternative approach to regulating deepfakes, if they are to be regulated at all. One interesting approach comes from Denmark, which has proposed amending its Copyright Law to allow individuals to copyright their own likeness. Broadly, the proposed amendment would protect individuals against others sharing realistic digital recreations of their physical traits, artistic performances, and mimicry of artists, for up to 50 years after their death. While this has been lauded as a game changer in the deepfake regulation space, its implementability remains uncertain: such an effort would require significant state capacity and, likely, a quantitative approach to copyrighting each physical trait.

Stepping back, we must question whether additional regulations are necessary at all. While the risks of deepfakes are significant, existing regulations governing cyber crime and online content already penalise criminal activity and misinformation, regardless of whether deepfakes are involved. This raises the question: would strengthening state capacity and the implementation of existing regulations be a better and less resource-intensive way to tackle the same issues?