Why Are Grok and X Still Available in App Stores?

https://www.wired.com/story/x-grok-app-store-nudify-csam-apple-google-content-moderation/


  1. Well_Socialized

    Article text:

    Elon Musk’s AI chatbot Grok is being used to flood X with thousands of sexualized images of adults and apparent minors wearing minimal clothing. Some of this content appears to not only violate X’s own policies, which prohibit sharing illegal content such as child sexual abuse material (CSAM), but may also violate the guidelines of Apple’s App Store and the Google Play store.

    Apple and Google both explicitly ban apps containing CSAM, which is illegal to host and distribute in many countries. The tech giants also forbid apps that contain pornographic material or facilitate harassment. The Apple App Store says it doesn’t allow “overtly sexual or pornographic material,” as well as “defamatory, discriminatory, or mean-spirited content,” especially if the app is “likely to humiliate, intimidate, or harm a targeted individual or group.” The Google Play store bans apps that “contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content,” as well as programs that “contain or facilitate threats, harassment, or bullying.”

    Over the past two years, Apple and Google removed a number of “nudify” and AI image-generation apps after investigations by the BBC and 404 Media found they were being advertised or used to effectively turn ordinary photos into explicit images of women without their consent.

    But at the time of publication, both the X app and the standalone Grok app remain available in both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk’s multibillion-dollar artificial intelligence startup xAI, which also did not respond to questions from WIRED. In a public statement published on January 3, X said that it takes action against illegal content on its platform, including CSAM. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the company warned.

    Sloan Thompson, the director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of nonconsensual sexual content, says it is “absolutely appropriate” for companies like Apple and Google to take action against X and Grok.

    The number of nonconsensual explicit images on X generated by Grok has exploded over the past two weeks. One researcher told Bloomberg that over a 24-hour period between January 5 and 6, Grok was producing roughly 6,700 images per hour that they identified as “sexually suggestive or nudifying.” Another analyst collected more than 15,000 URLs of images that Grok created on X during a two-hour period on December 31. WIRED reviewed approximately one-third of the images and found that many of them featured women dressed in revealing clothing. Over 2,500 were marked as no longer available within a week, while almost 500 were labeled as “age-restricted adult content.”

    Earlier this week, a spokesperson for the European Commission, the governing body of the European Union, publicly condemned the sexually explicit and non-consensual images being generated by Grok on X as “illegal” and “appalling,” telling Reuters that such content “has no place in Europe.”

    On Thursday, the EU ordered X to retain all internal documents and data relating to Grok until the end of 2026, extending a prior retention directive. The order is meant to ensure authorities can access materials relevant to compliance with the EU’s Digital Services Act, though a new formal investigation has yet to be announced. Regulators in other countries, including the UK, India, and Malaysia, have also said they are investigating the social media platform.

    Grok and X are part of a multimillion-dollar industry peddling “nudify” services online. Over the past few years, dozens of standalone apps and websites have popped up that promise to digitally strip women without their consent, often marketing themselves as harmless novelty tools while enabling image-based sexual abuse. Mainstream AI companies have also struggled to prevent their tools from being used to generate nonconsensual sexualized imagery. For example, WIRED reported last month that people were sharing tips online about how to get Google’s and OpenAI’s generative AI chatbots, Gemini and ChatGPT, to alter pictures of women to depict them wearing bikinis and other revealing clothing.

    Lawmakers in the US and other countries have begun cracking down on nonconsensual AI deepfakes. Last year, President Donald Trump signed the TAKE IT DOWN Act, which makes it a federal crime to knowingly publish or host nonconsensual sexual images. But Thompson says the law is limited by the fact that companies are only required to begin the removal process after a victim chooses to come forward.

    “Private companies have a lot more agency in responding to things quickly,” Thompson says. “When we talk about other tools for addressing image-based abuse: lawsuits take time, and it takes time for laws to be passed, and especially right now, when we have technologies that are hitting the market at a breakneck pace, and it’s very, very difficult for laws to be passed at the same pace.”

    David Greene, a civil liberties director at the Electronic Frontier Foundation, says people should be cautious about the idea of removing entire platforms from app stores. He emphasizes that X and xAI both have the power to combat this problem themselves.

    Greene argues that Musk’s companies could put in place better technical safeguards to deter users from creating deepfakes and other kinds of sexualized imagery. They “might not be a perfect fix, but might at least add some friction to the process,” he adds.

    Thompson agrees that companies like X and xAI should be subject to more public pressure to prevent these sorts of photos and videos from being created in the first place. “That’s where I think we need intervention,” she says.

  2. An AI company isn’t going to ban another AI company and make AI look bad. All those fuckers are in it together.

  3. Because in 2026, companies have to balance between playing along with bullshit and returning value to investors. It’s a fucked up system, but a LOT of idiots voted for it.

  4. Results from Grok:

    Hi there beautiful 🤩 

    That’s a great question 👍

    The fact is Elon is a pedo bitch and he’ll go cry to the king of pedos Trump if he gets banned

    Meanwhile Cook has decided to give up any morals the company had in order to appease King Pedo

    Have a great day sunshine 👍

    Can you send me a photo of your tits?

  5. Because they’re run by American companies that can’t afford targeted action against them by the current administration, which has an axe to grind, an itchy trigger finger, and a complete absence of impartiality.

  6. I don’t really know what Grok even is and don’t use X but I assume you’re referencing the fact that Grok has been in the news for people undressing people or whatever. Keep in mind, Reddit is on the app store. So I don’t think they have an issue with the big players regardless of the type of content on there.

  7. The App Store doesn’t allow apps with nudity or pornography, apparently. But look at Reddit. FULL of porn and still in the App Store. So were Tumblr and Vine for a while. Riddled with porn. Not surprised Grok is still up. And now that they’ve patched it and fixed that issue, it’ll probably stay up, unfortunately.

  8. Probably because Musk has the money to sue anyone for anything. Who else has sued companies for not advertising on their platform?

    I mean, that’s mental to even say, but he’s done it.

  9. theswiftarmofjustice

    There have been many apps removed for far less. Google and Apple at the very least are liable if they allow X to stay up.

  10. Cold_Specialist_3656

    Because the oligarchs now run the government through their puppet party GOP and carnival barker figurehead. 

    The billionaires wrote Project 2025 to weaken the government and unshackle their megacorps from the law. 

    We better not get another pussy-ass “decorum” Democrat. These criminals need to be in prison. I don’t care how rich they are.

  11. You can bet that if a normal person made an app that did this, it would have been pulled immediately. Apple ordinarily doesn’t hang around to address something like this.

  12. They get rid of ICE detector apps while propping up fascist-supporting platforms and apps. Go figure.

  13. I don’t need or want Apple or Google to be the morality police.

    I’ve never had Twitter/X installed on my phone, but I don’t give a hoot if anyone else does.

    When harm comes to another person is generally when I would like the law to intervene.

    People who are generating AI porn aren’t necessarily hurting someone, but depending on context, they definitely could be creeps.

    I miss the days when creeps got the ever-living crap beat out of them for being creeps; it seemed effective in ironing out some personality flaws.

  14. Weak-Ganache-1566

    What’s a reason for them to be banned that doesn’t get waved away via “code error that has been addressed”?