Grok restricts AI image editing to paid users after nude image backlash
Grok, Elon Musk's AI chatbot, has restricted its image creation and editing features to paying subscribers only, following international backlash over its use to generate sexualised deepfake images of women and children. The feature previously allowed users to alter online images to remove clothing from subjects, sparking concerns about abuse, misogyny, and child exploitation. Non-paying users can no longer generate or modify images, while subscribers must provide credit card details and personal information to access the feature.
Governments and regulators have criticised the move as inadequate. British Prime Minister Keir Starmer's office described it as "insulting" to victims, noting that it simply turns unlawful image creation into a premium service. The European Union stressed that the restriction does not resolve its fundamental concerns, stating clearly that it does not want such images generated regardless of payment status. The EU has ordered X to preserve all Grok-related documents until the end of 2026.
France, Malaysia, and India have also publicly criticised the platform over the spread of AI-generated nude images. Musk warned last week that users generating illegal content would face consequences, including permanent account suspension and referral to law enforcement. The changes come as global scrutiny of AI-generated abuse imagery intensifies.
Should AI companies restrict such features entirely rather than placing them behind paywalls, or does subscription-based access provide better accountability for misuse?