The Indian government is taking robust steps in the battle against deepfakes. It is changing the law so that companies must be more proactive in taking action against AI-generated material, and it is dramatically shortening the deadlines for social media companies to take down offending content.
2021 IT rules
The last time India updated its laws regulating social media companies was back in 2021. Those rules required companies such as Facebook, Twitter (now X) and Google to act within certain deadlines. For takedown requests concerning unlawful, misleading or violent content, the companies had to acknowledge receipt within 24 hours and then had 15 days to resolve the complaint. However, for some categories of complaint, such as those involving explicit sexual content, the companies had to take down the offending content within 24 hours.
Reduced deadlines
India has now published amendments to the 2021 rules. For the first time, deepfakes are expressly regulated under the law. The new rules also shorten the time limits for companies to act. Official takedown orders must be complied with within just three hours. In addition, some urgent complaints from social media users must be dealt with in only two hours.
Deepfakes
The new rules in India mean that social media companies must enforce new requirements on anyone who uploads or shares audio-visual content. Users will have to disclose whether the content is synthetically generated, and the social media companies must run checks to ensure content is correctly declared. In addition, deepfakes must be clearly labelled and carry embedded information, such as metadata, so that the origin of the files can be traced.
Banned content
Some types of deepfake content are banned outright. These include deceptive impersonations and non-consensual intimate imagery. Content linked to serious crimes is also prohibited. The new laws require companies to use automated tools to check content for compliance and to ensure it is correctly labelled where required.
Additionally, companies that fail to comply with the new requirements might lose their “safe harbor” protection. Safe harbor protections under Indian law grant conditional immunity to social media companies and ISPs for third-party content uploaded by users.
What we think
Concerns about the prevalence of deepfakes have been growing around the world. The sharing of explicit content without permission is also an increasing problem. While many governments have been slow to act, India is amending its laws to tackle the problem head on. In many ways this is a good thing, as some social media companies have been reluctant to take responsibility for the content shared on their platforms. However, the very short deadlines imposed by the new Indian rules will require more automated checking. It would be difficult for human moderators to review all content that is the subject of complaints within the permitted time limits. This could lead to overly cautious content policies on social media that impinge on free speech. One thing is for sure: the battle against deepfakes is far from over.
