Australia’s online safety regulator, the eSafety Commissioner, has launched an investigation into sexualised deepfake images generated by Grok, the AI chatbot on Elon Musk’s X platform. Since late 2025, the regulator has received multiple reports of Grok producing sexualised images of women and children without their consent.
Some victims, including Ashley St Clair, mother of one of Musk’s children, described the experience as “horrifying” and “violating,” particularly after spotting personal items in the images. In one case, Grok generated an image depicting a 12-year-old girl in a bikini. Although Grok has issued apologies, the AI continues to produce similar content.
eSafety clarified that while adult images are being assessed under its image-based abuse scheme, the child-related images reported so far do not meet the threshold for child sexual exploitation material.
“Since late 2025, eSafety has received several reports relating to the use of Grok to generate sexualised images without consent,” said an eSafety spokesperson. “The material is being carefully assessed under our schemes for image-based abuse and illegal content.”
Australia’s regulator defines illegal and restricted material broadly, covering everything from child sexual abuse material to simulated sexual activity and high-impact violence. X, meanwhile, lets users toggle a “spicy mode” for explicit content, raising further concerns.
The issue has drawn criticism internationally. The EU’s digital affairs spokesperson, Thomas Regnier, called the deepfakes “illegal” and “appalling,” while the UK’s technology secretary, Liz Kendall, described them as “appalling and unacceptable in decent society.” Investigative journalist Eliot Higgins of Bellingcat revealed instances where Grok manipulated images of public figures, including Swedish deputy prime minister Ebba Busch, in highly sexualised ways.
Despite the controversy, xAI, Musk’s AI company and the developer of Grok, recently raised $20 billion in funding. Musk has said that users who generate illegal content through Grok will face the same consequences as if they had posted the material themselves.
eSafety said it remains concerned about generative AI being used to exploit or sexualise people, particularly minors. In 2025, its enforcement action against other AI “nudify” services led to their withdrawal from Australia.
X responded to queries by affirming its commitment to removing illegal content, including child sexual abuse material, and collaborating with authorities when necessary.
Support services for victims of abuse include Beyond Blue, Lifeline, and Kids Helpline in Australia; Childline and NSPCC in the UK; and Childhelp and Mental Health America in the US.