In a significant breakthrough, Delhi Police have apprehended the main conspirator behind the Rashmika Mandanna deepfake video, according to ANI reports on Saturday. The investigation had been ongoing since November 2023, during which the primary accused remained on the run.
Earlier, the police had identified suspects associated with uploading the video online, but the mastermind remained elusive. The arrest follows two months of intensive effort after vital clues emerged in November, with technical analysis focused on tracing IP addresses linked to the video’s upload.
The Intelligence Fusion and Strategic Operations (IFSO) unit registered an FIR on November 11 after the Delhi Commission for Women issued a notice, prompting legal action. The deepfake video, which surfaced last year, showed the face of a British-Indian influencer digitally replaced with that of the Bollywood actress.
Expressing her concern, Rashmika Mandanna called the incident “extremely scary,” highlighting the misuse of technology. Delhi Police subsequently filed a case under Sections 465 and 469 of the Indian Penal Code, along with Sections 66C and 66E of the IT Act, 2000, which address forgery, harming reputation, identity theft, and privacy violations.
Other deepfake incidents:
Deepfake videos are created using machine learning (ML) technologies, most notably Generative Adversarial Networks (GANs), in which a generator and a discriminator are trained against each other to produce increasingly convincing forgeries. The manipulated output can imitate a face, body, voice, speech, environment, or almost any other kind of information.
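To illustrate the adversarial interplay mentioned above, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. It uses random tensors in place of real images and tiny fully connected networks; an actual deepfake pipeline involves far larger models, face-alignment steps, and large training datasets.

```python
# Minimal GAN sketch (illustrative only; assumes PyTorch is installed).
import torch
import torch.nn as nn

# Generator: maps random noise to a fake "image" (flattened 64x64 here).
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Discriminator: scores whether an input looks real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, 64 * 64)  # stand-in for a real training batch

for step in range(3):  # a few steps just to show the alternation
    # 1) Train the discriminator to separate real from generated samples.
    fake_images = generator(torch.randn(32, 100)).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(32, 100))),
                     torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The generator improves only by learning to fool the discriminator, which is simultaneously learning to spot fakes; after enough rounds of this contest, the generated output can become very hard to distinguish from genuine footage.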
The term “deepfake” was coined in 2018 by a Reddit user who created the forum of the same name and began swapping the faces of female celebrities into pornographic videos, making women the primary victims of the technology. More recently, deepfake videos of other prominent personalities, including Alia Bhatt and Sachin Tendulkar, have surfaced.
In March 2022, a deepfake video of Ukrainian President Zelenskyy surfaced, marking a turning point in information operations during armed conflict and highlighting the political risks associated with deepfakes. Deepfakes extend beyond imagery: AI tools can also clone voices for financial scams. Approximately 47% of Indian adults have experienced, or know someone affected by, AI voice scams, leading to significant financial losses.
The Ministry of Electronics and Information Technology, in its advisory dated November 7, 2023, instructed major social media platforms to:
- Exercise due diligence to identify and act against misinformation and deepfakes violating rules and user agreements.
- Take prompt action within the timelines set by the IT Rules 2021 against reported cases.
Users are urged not to share such deceptive content, and platforms must:
- Remove reported content within 36 hours of notification.
- Act swiftly, following the IT Rules 2021, and restrict access to the problematic content.
The Information Technology Act, 2000:
Failure to adhere to the IT Act and the IT Rules puts organisations at risk of losing the protection offered by Section 79(1) of the IT Act.
Section 79(1) exempts online platforms from liability for third-party content.
Rule 7 of the IT Rules empowers individuals to take platforms to court under the Indian Penal Code.
Section 66E prescribes punishment for privacy violations of up to three years' imprisonment and a fine of up to INR 2 lakh.
Sections 67, 67A, and 67B prohibit and penalise electronic transmission of obscene and explicit material.
Social media companies are advised to act within 24 hours on complaints related to impersonation, with Section 66D imposing imprisonment of up to three years and a fine of up to one lakh rupees for cheating by impersonation.