DEEPFAKES

Why in the news?

The Centre recently issued an advisory to social media intermediaries asking them to identify deepfakes and misinformation.

About Deepfakes

  • Deepfakes are synthetic media, typically videos, created using artificial intelligence techniques such as deep learning. They involve superimposing or replacing existing content in videos with digitally manipulated content, often depicting people doing or saying things they never actually did or said.
  • Deepfakes have raised concerns about misinformation, privacy, and the potential for malicious use in spreading false information or manipulating public perception.
Deep learning is a subset of machine learning that involves the use of artificial neural networks with multiple layers (hence “deep”) to learn and extract features from data. It mimics the structure and function of the human brain’s interconnected neurons to process complex information and make predictions or decisions. Deep learning has revolutionized various fields, including computer vision, natural language processing, speech recognition, and robotics, by enabling computers to learn and perform tasks that were previously challenging or impossible for traditional algorithms.
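The "multiple layers" idea above can be illustrated with a minimal sketch: a small feedforward network in plain numpy (a hypothetical toy, not any production model), where each hidden layer transforms the previous layer's output, which is what makes the network "deep".

```python
import numpy as np

def relu(x):
    # Common non-linearity: keeps positive values, zeroes out the rest.
    return np.maximum(0.0, x)

rng = np.random.default_rng(42)

# Three stacked transformations: 10 inputs -> 32 -> 16 -> 4 outputs.
# The stacking of layers is what "deep" refers to.
layer_sizes = [10, 32, 16, 4]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    h = x
    for W in weights[:-1]:
        h = relu(h @ W)        # hidden layers extract intermediate features
    return h @ weights[-1]     # final layer produces the prediction

x = rng.normal(size=(5, 10))   # a batch of 5 input vectors
out = forward(x)
print(out.shape)               # prints (5, 4)
```

In real systems the weights are learned from data rather than randomly initialised and left fixed, but the layered forward pass is the same.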

How does Deepfake work?

  • Deepfakes are created using image editing ("photoshopping"), AI, and deep learning technology.
  • To make the videos, a class of machine learning models called Generative Adversarial Networks (GANs) is combined with other technologies.
  • A GAN consists of two networks: a generator and a discriminator.
    • The generator produces new images based on the original data collection.
    • The discriminator then assesses the content for realism, and its feedback drives further refinement.
  • Variational auto-encoders, a kind of artificial neural network typically employed for facial recognition, are another deep-learning network used in deepfakes. Auto-encoders identify facial traits while suppressing “non-face” aspects and visual noise. By learning the shared qualities of a person’s images, they enable a flexible “face swap” paradigm.
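The generator/discriminator split described above can be sketched structurally in numpy. This is a hypothetical illustration of the two roles only (the network shapes, `noise_dim` and `img_dim`, are invented for the example); real deepfake systems use large trained convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W, b):
    # Maps random noise vectors to synthetic samples (here, flat vectors
    # standing in for images).
    return np.tanh(z @ W + b)

def discriminator(x, V, c):
    # Scores each sample with a probability of being "real" (sigmoid output).
    logits = x @ V + c
    return 1.0 / (1.0 + np.exp(-logits))

noise_dim, img_dim = 8, 16          # illustrative sizes, not from any real model
W = rng.normal(scale=0.1, size=(noise_dim, img_dim))
b = np.zeros(img_dim)
V = rng.normal(scale=0.1, size=(img_dim, 1))
c = np.zeros(1)

z = rng.normal(size=(4, noise_dim))   # a batch of 4 noise vectors
fake = generator(z, W, b)             # generator proposes new content
scores = discriminator(fake, V, c)    # discriminator judges its realism

# In training, the two are updated adversarially: the discriminator is pushed
# to score fakes near 0 and real data near 1, while the generator is pushed
# to make the discriminator score its fakes near 1 (i.e., to fool it).
```

The adversarial loop is what drives realism: each improvement in the discriminator forces the generator to produce more convincing output.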


Opportunities with Deepfake technology

  • Entertainment and Creative Industries: Deepfake technology can be used to create realistic special effects in movies, TV shows, and video games, reducing production costs and enhancing visual effects.
  • Advertising and Marketing: Marketers can leverage deepfake technology to create personalized and engaging advertisements and promotional content that resonates with target audiences.
  • Education and Training: Deepfake technology can be utilized in educational videos and simulations to enhance learning experiences, such as creating realistic scenarios for medical training or language learning.
  • Virtual Influencers and Brand Ambassadors: Brands can create virtual influencers or brand ambassadors using deepfake technology, allowing them to tailor their messaging and appearance to specific demographics.
  • Accessibility and Inclusivity: Deepfake technology has the potential to improve accessibility by creating personalized content for individuals with disabilities, such as generating sign language interpreters or audio descriptions for visually impaired individuals.
  • Historical and Cultural Preservation: Deepfake technology can be used to recreate historical events, speeches, or cultural performances, providing immersive experiences for future generations to learn from and appreciate.
Issues associated with Deepfakes
  • Misinformation and Fake News: Deepfakes can be used to create convincing yet false content, leading to the spread of misinformation and fake news. This can undermine trust in media and institutions and exacerbate societal divisions.
  • Privacy Concerns: Deepfake technology can be used to create non-consensual or revenge pornographic content, violating individuals’ privacy and causing emotional distress and reputational harm.
  • Identity Theft and Fraud: Deepfakes can be used to impersonate individuals, leading to identity theft, fraud, and blackmail. This poses significant risks to personal and financial security.
  • Erosion of Trust: The proliferation of deepfakes can erode trust in visual and audio evidence, making it increasingly challenging to discern truth from falsehood. This can have far-reaching consequences for journalism, law enforcement, and public discourse.
  • Ethical Implications: The creation and dissemination of deepfakes raise ethical concerns regarding consent, authenticity, and the potential for harm. This requires careful consideration of the ethical implications of using deepfake technology.
  • National Security Risks: Deepfakes have the potential to be used for malicious purposes, such as spreading disinformation, manipulating elections, or impersonating government officials. This poses significant national security risks and challenges.
  • Gender inequality persists, with approximately 90% of victims of crimes like revenge porn, non-consensual porn, and various forms of harassment being women. Deepfake technology exacerbates this issue, further limiting the online presence and safety of women.
Regulatory measures applicable to deepfakes
  • Legal provisions in India: Deepfake technology is not specifically prohibited by law in India, but some laws address it indirectly:
    • Section 66E of the IT Act, 2000 penalises capturing, publishing, or transmitting images of a person in violation of their privacy, and thus indirectly addresses deepfakes.
    • Section 66D of the IT Act, 2000 stipulates that maliciously using computer resources or communication devices to cheat or impersonate someone will result in prosecution.
    • Indian Copyright Act, 1957: Provides penalties for copyright infringement.
  • International action to combat deepfakes:
    • Bletchley Declaration: More than 25 major nations, including the US, China, Japan, the UK, and India, called for addressing the potential threats of AI.
    • The EU’s Digital Services Act requires social media companies to follow labeling regulations, increasing transparency and assisting consumers in verifying the legitimacy of media.
    • Google announced the use of a watermark to identify artificially generated content.
Way ahead
  • Strengthening the legal framework: Develop and update laws and regulations to address the creation, distribution, and harmful use of deepfakes and related content.
  • Encourage Responsible AI Development: Deep learning technology should be used responsibly, and ethical practices in AI development should be encouraged.
  • The Asilomar AI Principles can serve as a roadmap for secure and advantageous AI development.
  • Social media platforms’ accountability and responsibility: A common, universal norm that all platforms can follow internationally is needed. YouTube, for instance, has implemented policies requiring content creators to disclose whether content was produced using AI techniques.
  • International Cooperation: Create uniform guidelines and procedures to stop the use of deepfakes internationally.
  • Invest in R&D: Allocate funds to support ongoing research into deepfake technology, detection techniques, and defence strategies.

In conclusion, while deepfake technology offers innovative possibilities, its proliferation raises significant ethical and security concerns. Safeguarding against misuse requires a multi-faceted approach, including regulation, technological advancements, education, and collaboration. Balancing innovation with responsibility is essential to mitigate risks and ensure a trustworthy digital environment.