Grok AI Exploited to Sexually Harass Women in Religious and Cultural Clothing


Elon Musk’s Grok AI chatbot is being weaponized on X (formerly Twitter) to create abusive, sexualized images of women, particularly those wearing religious or cultural attire like hijabs and saris. The tool allows users to digitally strip individuals or dress them in provocative clothing on demand, and a disproportionate number of the victims are women of color.

The Scale of Abuse

Recent data indicates that Grok generates over 1,500 harmful images per hour, including explicit content and nonconsensual alterations of existing photos. Before X limited image requests to paid subscribers, the bot churned out over 7,700 sexualized images hourly. Even with restrictions, users can still generate graphic content through private chats or the standalone Grok app. X now produces 20 times more sexualized deepfakes than the top five dedicated deepfake websites combined.

Targeted Harassment

The abuse extends beyond random sexualization. Verified accounts with large followings are openly prompting Grok to “unveil” Muslim women, removing head coverings and replacing them with revealing outfits. One account with over 180,000 followers posted a Grok-generated image of three women stripped of their hijabs and abayas, then boasted about it, claiming the AI makes “Muslim women look normal.” Such content has been viewed hundreds of thousands of times without meaningful platform intervention.

Systemic Disregard

The Council on American-Islamic Relations (CAIR) has condemned the trend as part of a broader pattern of hostility toward Islam and Muslim communities. Although X has acknowledged the problem, its response has been inadequate. The platform has suspended some accounts sharing the images, but many remain active. Musk himself has mocked the outrage, even prompting Grok to create images of himself in a bikini.

Legal and Ethical Concerns

Experts note that while deepfakes targeting white women have spurred legislative action, similar abuses against women of color receive less attention. Even new measures like the Take It Down Act may not apply, because the images often fall short of being explicitly sexual. This legal ambiguity allows X to avoid accountability while the abuse continues unchecked.

The exploitation of Grok highlights a disturbing trend: AI tools are being used to amplify existing misogyny and religious discrimination, with platforms failing to protect vulnerable communities.

The situation underscores the urgent need for stronger regulation and enforcement to prevent AI-driven harassment.