Grok faces ban and investigations over sexualized deepfakes


Grok has been criticized for generating sexualized images of women and children on X, reportedly processing users' photos to produce "undressed" versions of the people in them. Grok now faces bans in both Malaysia and Indonesia, and the UK's Ofcom has launched an investigation into the tool.

Sexualized deepfakes are one of the vices plaguing AI tools; they call into question a person's autonomy and privacy. Sexualized deepfakes are overwhelmingly considered unethical because they can cause serious psychological, reputational and gendered harms, particularly to women and children. Ethicists argue that deepfakes become inherently immoral when they use someone's image against their likely wishes, deceive viewers and are created with harmful or exploitative intent.¹ Non-consensual sexual deepfakes have been described as image-based sexual abuse, extending patriarchal power and sexual violence online.²

The harms that sexualized deepfakes can cause are varied. Psychological harms include distress, humiliation, PTSD, anxiety and depression. Reputational harms include damage to careers, public credibility and relationships. Sexualized deepfakes also enact gendered violence by disproportionately targeting women, reinforcing objectification and sexual entitlement.²

The legal landscape has yet to catch up with the technology. Existing image-based abuse laws only partially deter or cover these harms; gaps remain for adults, cross-border content, and platform responsibilities.³ Scholars and regulators call for regulation of deepfake tools, stronger content moderation, and takedown mechanisms. They also call for explicit recognition of sexualized deepfakes as gender-based violence and a human rights violation.³

Sexualized deepfakes also raise the question of censorship on the internet. It seems that some form of regulation is necessary to protect the inherent dignity of the individual, and that unbridled freedom results in chaos. Perhaps we have to look towards religion as a guide for our values, as Malaysia and Indonesia have done. Perhaps religious standards should be the benchmark for the safety of AI tools.

  1. De Ruiter, A. (2021). The Distinct Wrong of Deepfakes. Philosophy & Technology, 34, 1311–1332. https://doi.org/10.1007/s13347-021-00459-2
  2. Umbach, R., Henry, N., Beard, G., & Berryessa, C. (2024). Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3613904.3642382
  3. Kira, B. (2024). When non-consensual intimate deepfakes go viral: The insufficiency of the UK Online Safety Act. Computer Law & Security Review, 54, 106024. https://doi.org/10.1016/j.clsr.2024.106024
