AI deepfakes are here to stay. Can India’s laws keep up?

India’s existing laws and regulatory frameworks make little to no provisions to address the unique challenges posed by deepfake technology. So what can be done?
Actor Rashmika Mandanna
“This is terrifying,” an alarmed X user wrote in response to a video of a smiling woman wearing a black bodysuit entering an elevator, her face uncannily similar to that of actor Rashmika Mandanna. Only, the woman in the video was not Rashmika, as journalist Abhishek Kumar pointed out while sharing it. “The original video is of Zara Patel, a British-Indian girl with 415k followers on Instagram. She uploaded this video on Instagram on 9 October,” he wrote, attaching Zara’s original video alongside. This video, digitally altered using deepfake technology to look like Rashmika, has been doing the rounds on social media for several days now, and it has left netizens anxious about the far-reaching implications of this rapidly advancing artificial intelligence (AI) technology.

“This is honestly, extremely scary,” Rashmika said, reacting to the video, “not only for me but also for each one of us who today is vulnerable to so much harm because of how technology is being misused.” Calling the deepfake “identity theft,” she also stated the need to address this issue “as a community and with urgency” before more people are affected. The video soon triggered widespread calls for strong legal action against its creator, from netizens as well as veteran Bollywood actor Amitabh Bachchan and political parties such as the Congress.

So what exactly is a deepfake?

Deepfakes use a form of deep learning AI to manufacture realistic but fake images, videos, audio, or even text — giving the technique its portmanteau name. The technology is far from new, and is believed to have been around since the 1990s, but the name ‘deepfake’ was first coined by a Reddit user as recently as 2017.

Multiple deepfake videos have gone viral in the past six years, as the technology quickly emerged from the murky depths of the dark web to be used in a diverse range of applications across the internet. Soon, people were watching deepfake Barack Obama using an expletive to refer to then US President Donald Trump, deepfake Mark Zuckerberg ‘confessing’ to having total control of billions of people’s stolen data, and even deepfake David Beckham speaking nine languages including Hindi seamlessly for a campaign to end malaria.

ITV, a UK-based television network, launched a ‘deepfake comedy’ titled Deep Fake Neighbour Wars in January this year, featuring synthesised versions of A-list celebrities such as Nicki Minaj, Tom Holland, Rihanna, and Adele shouting each other down.

Closer home in India, we saw popular Malayalam actors Mohanlal, Mammootty, and Fahadh Faasil donning the roles of Michael Corleone, Moe Greene, and Fredo Corleone in the Hollywood classic The Godfather.

Actor Simran’s face replaced Tamannaah Bhatia’s in an Instagram reel of Tamannaah dancing with two others, to the ‘Kaavaalaa’ song from the Rajinikanth film Jailer. Simran responded by calling the video “the magic of AI,” thanking its creator Senthil Nayagam for the edit.

Most of these examples are upfront about their parodic or synthetic nature, and are meant simply for entertainment purposes. But there is a more sinister side to deepfakes and what they might be used for. The Rashmika Mandanna deepfake, for instance, was made with the explicit intent to sexualise the actor without her consent, as has been apparent in the vulgar responses the altered video has received on various social media platforms. “From a deepfake POV, the viral video is perfect enough for ordinary social media users to fall for it,” as Abhishek wrote while explaining the falsified nature of the video.

Countering deepfakes with law

As things stand, despite the concerns surrounding the technology, creating a deepfake is not, in and of itself, a crime in India, nor in most parts of the world. India’s existing laws and regulatory frameworks also make little to no provisions to address the unique challenges posed by deepfake technology. As Prateek Waghre, policy director of the Internet Freedom Foundation (IFF), puts it, it would be virtually impossible to outlaw deepfakes altogether. “We can strengthen our defences against it at an institutional level, but we cannot simply outlaw synthetic content,” he says.

In the wake of rising calls for a better legal framework to deal with deepfakes in India, specifically in the context of the Rashmika video, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said on November 6 that such “dangerous and damaging form of misinformation” should be dealt with by the online platforms on which such content is shared. “Under the IT rules notified in April, 2023, it is a legal obligation for platforms to ensure no misinformation is posted by any user, and ensure that when reported by any user or govt, misinformation is removed in 36 hrs. If platforms do not comply with this, rule 7 will apply and platforms can be taken to court by aggrieved person under provisions of IPC (sic),” he wrote.

Online platforms, of course, bear an ethical responsibility to establish and uphold standards of information dissemination within their user base. But is it enough to simply hand over the onus to them? The concern, unfortunately, is much more extensive than that. 

Prateek says that under India’s existing laws, the criminal nature of a deepfake depends entirely on its context, with potential violations ranging from defamation to invasion of privacy. If a person affected by a deepfake wants to take legal action against its creator, they would have to individually patch together a case using an assortment of existing laws designed to protect their other legal rights. So at least for the foreseeable future, deepfakes will have to be dealt with on a case-by-case basis.

Let’s compare the Zara-Rashmika deepfake to that of Tamannaah and Simran, for example. Both were digitally altered videos, edited without the consent of the celebrity whose likeness was used to create the deepfake. If it was made without her knowledge and consent, the Simran deepfake is also inherently illegal, says Swaroop Mamidipudi, a Chennai-based advocate who practises copyright law. “But the breach in Simran’s case is benign. It hasn’t harmed Simran’s reputation in any way, and hence she may not find reason to raise an issue with it. This is the kind of thing celebrities will let go because it feeds into their personality, the legality doesn’t matter here. On the other hand, if somebody commits an illegality which harms her, like by putting out an unsavoury deepfake video, she can choose to sue and it will be legally valid for her to do so. Especially if you are a celebrity, you have the right to dictate how your face is used, when it is used, and so on. She can invoke her personality rights for this,” he adds.

In September this year, the Delhi High Court delivered a crucial order asserting Bollywood actor Anil Kapoor’s personality rights. The court granted an ex-parte omnibus injunction against the use of Kapoor’s name, likeness, and image using technological tools such as AI, face morphing, and even GIFs for commercial purposes. An omnibus injunction covers even unauthorised uses that are not explicitly mentioned in the plea.

Renowned celebrities including Amitabh Bachchan and Kollywood superstar Rajinikanth have previously sought to exercise their personality rights. “These rights are an underdeveloped aspect of the privacy laws in India, and are still at a nascent stage of being recognised,” says Swaroop. 

In addition, copyright protection for creative works, such as music, films, and other media, can at times also be used to counter deepfakes. If someone uses copyrighted material to create deepfakes without authorisation, the copyright owner can file a lawsuit against them. Section 51 of the Indian Copyright Act, 1957 defines when copyright is infringed: in essence, the unauthorised use of any work over which another person holds the exclusive rights. But the application of copyright law to deepfakes too may not always be effective, as most high-profile deepfake cases usually seem to qualify under the “fair use” exception — whether in terms of the amount and substantiality of the portion used relative to the copyrighted work as a whole, or the effect of the use on the potential market for, or value of, the copyrighted work.

Deepfake beyond 'celebs'

As generative AI tools become more advanced and accessible, non-consensual deepfakes are no longer limited to celebrity likenesses. Let’s take a look at a WhatsApp deepfake scam, as experienced by a 72-year-old man named Radhakrishnan in Kerala this year. 

On July 9, when Radhakrishnan first received a WhatsApp call from a ‘former colleague’ via a new number, he had little reason to suspect foul play. The man on the call sounded exactly like his old friend Venukumar, and had even mentioned the names of a few of their common friends. ‘Venukumar’ told the Kozhikode resident that his mother-in-law was hospitalised, and that he was in urgent need of Rs 40,000. Smartly, Radhakrishnan asked him to call on video, so that he could confirm his identity. “The call lasted 30 seconds and I could only see a close-up of the face. It was perfect, I didn’t suspect a thing,” he told TNM.

Relieved of his concerns and wanting to help his friend, Radhakrishnan soon sent the money via UPI. He grew suspicious only when the caller requested another Rs 35,000 shortly after, prompting him to call Venukumar on the old number he had for him. “He told me he was completely unaware of any such thing, and I quickly alerted the police,” said Radhakrishnan. According to the Cyber Cell of the Kerala Police, this is the first reported cheating case in the state where scammers have used AI to fake videos.

Since India doesn’t have laws to directly address the ‘deepfake’ aspect of such a scam, a case was filed under Section 420 (cheating) of the Indian Penal Code and Sections 66C and 66D (identity theft and cheating by impersonation) of the Information Technology (IT) Act.

Swaroop points out that Venukumar could file a civil suit for violation of his privacy or for the use of his image without his consent, but that would only have taken him so far. “Privacy and personality rights are merely civil rights. The decision as to what charges are to be filed depends on the context of individual cases. Here, it is a criminal offence. So IPC Section 420 and sections of the IT Act are their best option. What the complainant would have wanted must have been to put pressure on the perpetrator and get back his money. So a criminal case would obviously work better than filing a civil suit, no matter the nature of the scam,” he explains.

With the current pace of technological advancements, the detection of deepfakes is only going to get increasingly challenging. As of now, it would seem that the best way to counter deepfakes is to learn to be sceptical of everything one sees, hears, or reads online.
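To see why detection is so hard, consider the crudest possible forensic check: a perceptual fingerprint that can tell two images apart. The sketch below is a toy illustration only — the `average_hash` and `hamming` helpers and the synthetic “frames” are made up for this example, and real deepfake forensics rely on far subtler signals (compression artifacts, facial landmarks, lighting inconsistencies) precisely because a convincing deepfake is designed to look unaltered.

```python
# Toy sketch: a crude perceptual "average hash" can flag that an image was
# altered, but says nothing about how -- and a good deepfake is built to
# minimise exactly these kinds of differences.
import numpy as np

def average_hash(img, size=8):
    """Reduce a grayscale image to a size x size grid of block means,
    then threshold each block at the global mean to get a bit fingerprint."""
    h, w = img.shape
    blocks = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
original = rng.random((64, 64))   # stand-in for a video frame
untouched = original.copy()       # a faithful re-upload of the same frame
tampered = original.copy()
tampered[16:48, 16:48] = 1.0      # a face-sized region crudely replaced

print(hamming(average_hash(original), average_hash(untouched)))  # 0
print(hamming(average_hash(original), average_hash(tampered)))   # many bits differ
```

An exact copy hashes identically, while the crudely tampered frame lights up dozens of bits; a well-made deepfake, by contrast, preserves the overall statistics of the frame, which is why such simple fingerprints are of little help against it.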

The News Minute
www.thenewsminute.com