The issue of deepfakes took centre stage in the Indian media recently, when a deepfake video of a celebrity went viral on social media. The video was created without the celebrity's consent: the celebrity's face was morphed to replace that of the person who originally featured in it. The deepfake video was met with outrage on the internet and prompted an advisory from the Ministry of Electronics and Information Technology (MEITY) to social media platforms, on 7 November, requiring them to take decisive action against such content.
In this article, we discuss what one needs to know about deepfakes and the regulatory landscape around them.
What are deepfakes?
Deepfakes, also known as synthetic media, are AI-generated, hyper-realistic videos and images which convincingly manipulate and superimpose the likeness, voice or facial expressions of individuals onto other content. This technology has several positive use cases: in education, by making lessons more engaging and creating content featuring historical figures; in art and film production, by making VFX work more cost-efficient; and in amplifying social messages, by modifying videos to create translations.
However, deepfakes have garnered a notorious reputation due to their potential to mislead, defame and manipulate viewers. There have been many incidents globally in which deepfake videos of political speeches, celebrity endorsements, obscene content etc. have been created.
What is the prevailing legal framework to regulate deepfakes?
The Information Technology Act, 2000 (IT Act) governs activities over the internet or those conducted through computer resources. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, formulated under the IT Act, provide specific compliance requirements for digital and social media platforms as well as remedies to aggrieved users.
Some safeguards under the current legal framework are as follows:
Intermediaries, such as social media platforms, are required to inform users, and make reasonable efforts to ensure, that they do not post content that is harmful to children; deceives, misleads or spreads misinformation; is obscene, pornographic or paedophilic; invades a person's privacy; or impersonates another person.
Messaging platforms are required to enable identification of the first originator of content, if required pursuant to a judicial order. This allows authorities to trace the user who may have first uploaded the deepfake content.
The MEITY advisory specifically requires social media platforms to take down any content reported as a deepfake within 36 hours of such reporting. Failure to take down content within this timeline would render the platform liable for the deepfake as well.
The IT Act penalises identity theft, impersonation and the violation of a person's privacy.
Civil remedies are available to celebrities in the form of claims for infringement of personality rights, which protect against the unauthorised use of their name and likeness.
The Indian government has also commenced consultations on new legislation, the 'Digital India Act', which is anticipated to regulate deepfakes and, more generally, to regulate artificial intelligence through the lens of user harm and safety.
What are the practical challenges in regulating deepfakes?
The biggest challenge in regulating deepfakes is detection. Tools to detect synthetic media are not easily available, and in certain instances it may not be possible to conclude with certainty whether content is authentic or a deepfake. Artificial intelligence tools can be used to detect minute anomalies in images and sounds that are characteristic of deepfakes, and social media platforms could deploy such tools to monitor and flag suspected deepfakes, as sketched below.
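To illustrate the idea, the following is a minimal, hypothetical sketch in Python of how a platform might score a single video frame with a learned classifier. The model architecture (ResNet-18), the two-class head and the file name are assumptions made for demonstration; a real detector would be trained on a large labelled corpus of authentic and synthetic media, and would typically aggregate scores across many frames and the audio track.

# Illustrative sketch only: the ResNet-18 backbone and binary head are
# assumptions, and the untrained head below would need fine-tuning on
# labelled real/fake data before its scores mean anything.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a single video frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Backbone CNN with a two-class head: [authentic, deepfake].
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()  # in practice, load fine-tuned detector weights here

def score_frame(path: str) -> float:
    """Return the model's estimated probability that a frame is synthetic."""
    frame = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(frame)
    # Softmax over the two classes; index 1 is the 'deepfake' class.
    return torch.softmax(logits, dim=1)[0, 1].item()

print(f"deepfake probability: {score_frame('frame.jpg'):.2f}")  # 'frame.jpg' is a placeholder

In practice, a platform would run such a classifier over sampled frames of every uploaded video and escalate high-scoring content for human review, rather than relying on a single automated score.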
Similar issues are anticipated in the presentation of evidence before courts, where the authenticity of audio-visual material may need to be established. Proving that a piece of evidence is a deepfake could be a technical and complex process, and the legal system may find it challenging to adapt to these new requirements. Procedural reforms may therefore be required to create streamlined, specialised processes for authenticating digital evidence in cases involving deepfakes or artificial intelligence. Legislatures and courts could also consider establishing forensic standards and training legal professionals to handle cases that centre on the authenticity of digital content.
The Road Ahead…
The recent incidents have brought to the fore the need for swift adoption of procedural, legal and technological measures to tackle deepfakes. Collaboration, through consultations among the government, Big Tech, the AI industry and the public at large, is crucial to combat the negative impacts of deepfakes without impeding the development of artificial intelligence and legitimate synthetic media.