Deepfake Regulation Policy Drafted

India proposes policy to curb misuse of AI in media
India is drafting its first comprehensive legal framework to regulate deepfakes and AI-generated media. Spearheaded by the Ministry of Electronics and Information Technology (MeitY) under IT Minister Ashwini Vaishnaw, the initiative follows a 2024 Delhi High Court directive and stakeholder consultations with companies including Google, Meta, and X.
Why It Matters
Deepfakes threaten personal privacy, artistic integrity, and democratic trust. The Kanchan Nagar petition in the Delhi High Court highlighted unauthorized misuse of a person's image in advertising, prompting calls for explicit consent requirements and mandatory watermarking.
Core Proposals
Mandatory Consent & Disclosure: Individuals must approve any AI use of their likeness, with clear indication of AI involvement.

Platform Accountability: Platforms must adopt watermarking and detection tools, flag misuse, and establish grievance mechanisms.

Enforcement Mechanisms: Proposals include fines, takedown orders, and possible criminal charges for deepfake creators or disseminators, all scaled to the harm caused.

Consultation & Global Context
Tech giants have offered policy input: Google and Meta already require labeling of AI-generated content, while X argues the focus should be on malicious misuse rather than all AI media. The regulatory subcommittee is weighing these suggestions before submitting final recommendations to the High Court in the coming months.
Challenges Ahead
Balancing innovation against censorship remains complex. Effective misinformation detection requires robust AI tooling, public awareness programs, and strong oversight. Budget constraints and jurisdictional overlap with the IT Act and the Bharatiya Nyaya Sanhita (BNS) also present legislative hurdles.
Outlook
Draft regulations are expected later in 2025. They may set a global precedent for balancing AI-driven creativity with personal rights, particularly ahead of India's upcoming election cycles.