Undressing Bot Scandal: Dems Demand App Stores Ban X!


Okay, let’s just cut right to it, because honestly, I’m still trying to wrap my head around this one. An “undressing bot.” Seriously? We’re here now? Like, I’ve seen some wild stuff in my fifteen years doing this gig, covering everything from dial-up modems to VR headsets that make you feel like you’re actually flying, but this? This feels like a new low, even for the internet. And yeah, that’s saying something.

The Bot, The Boss, The Big Problem

So, the news dropped, and naturally, my phone blew up. Democrats – a whole bunch of ’em, senators no less, like Mark Warner and Amy Klobuchar, big names – are basically yelling at Apple and Google to yank X (you know, the artist formerly known as Twitter, because apparently we just can’t keep names straight anymore) from their app stores. Why? Because X, under Elon “free speech absolutist” Musk’s ever-watchful, chaotic eye, is apparently hosting some truly vile AI. An “undressing bot” is what they’re calling it. And yeah, it does exactly what it sounds like. It takes pictures of people, mostly women, and uses AI to digitally strip them. Nonconsensual deepfakes, basically. It’s grotesque. It’s a violation. It’s a form of digital sexual assault, plain and simple. And it’s happening on a major social media platform, one that used to, you know, at least pretend to have some standards.

You gotta wonder, what is even going on over there? Musk bought Twitter with all this talk about free speech, right? “Town square,” “less moderation,” “paradise for free thinkers,” all that jazz. And look, I get it, free speech is important. Crucial, even. I’ve built my career on it, basically. But there’s a pretty big difference between robust debate and letting actual digital sexual assault run rampant. This isn’t just someone saying something you don’t like; this is directly harming people. It’s making them vulnerable in a way that feels incredibly personal and invasive. It’s like building a public park and then just letting actual criminals set up shop in the middle of it, stealing purses and, like, flashing people. And when someone complains, you just shrug and say, “Oh, but everyone’s free to be here!” No, dude. No. That’s not how any of this works. That’s not a town square; that’s just a lawless wasteland. And frankly, it’s dangerous.

Grok’s Hand in This? Oh, Come On.

And get this – the senators are specifically saying this particular flavor of awful is tied to Grok. For those who haven’t been keeping up with Musk’s various shiny new toys, Grok is X’s own AI. Let that sink in for a second. The platform’s own technology is apparently being used to create, or at least is enabling, these nonconsensual deepfakes. This isn’t just some random third-party bad actor that slipped through the cracks, a tiny little loophole that someone exploited. This is happening in their house. This is happening under their rules (or, you know, lack thereof). This isn’t just a bug; it feels an awful lot like a feature of a system that’s been deliberately deprioritizing safety for… well, for whatever Musk thinks he’s doing. It’s a shocking level of negligence, if not outright complicity, from what I can tell. And it just makes you shake your head and wonder, “What next?”

App Store Accountability: Is This The Only Way Left?

Now, here’s the kicker, and it’s a legitimate question that a lot of people are wrestling with: Is asking Apple and Google to ban X the only lever we’ve got left? I mean, it feels pretty drastic, right? Kicking a major social media platform, one that still has millions of users, off the two biggest mobile ecosystems on the planet. That’s a huge move. But doesn’t that tell you something about how little faith people – even elected officials – have in X’s own leadership to actually fix anything? The Democrats are basically saying, “Look, Elon won’t clean up his mess, he won’t even acknowledge it half the time, so we’re gonna call his landlords.” And honestly, from what I’ve seen over the past couple of years, it’s hard to argue they don’t have a point. The platform’s content moderation has just gone off the rails. It really has. Apple and Google both have pretty strict terms of service about objectionable content, about harassment, about nonconsensual imagery. If X is actively violating those, and from the sounds of it, it absolutely is, then yeah, there should be consequences. Big ones. It’s not just about what’s legal; it’s about what’s morally acceptable for a platform to host.

“Grok’s nonconsensual deepfakes violate the app stores’ terms of service,” the senators argue. Yeah, no kidding. It’s pretty black and white when you look at it.

The Slippery Slope or The Necessary Stand?

This whole thing raises so many red flags, and it’s complicated. I’m not gonna lie. On one hand, you’ve got people screaming about censorship, about big tech having too much power to deplatform. And I get that concern, really, I do. The idea of two companies, Apple and Google, essentially deciding who gets to be online and who doesn’t? That’s a massive amount of power, and it’s something we should always be wary of. But on the other hand, what’s the alternative when a platform just… stops caring? When the content isn’t just “offensive” or “misleading” but actively harmful, illegal, and traumatizing? This isn’t about someone’s tweet being mean or politically incorrect; this is about digital sexual assault. This is about deepfakes that can ruin lives, absolutely shatter someone’s sense of privacy and safety. And it doesn’t matter that it’s AI-generated; the victim still feels violated. The damage is still done. This isn’t some abstract philosophical debate; it’s about real people getting hurt. And frankly, the idea that a company would allow this, or be so negligent that it happens on their watch, just blows my mind. It’s a fundamental breakdown of responsibility, a complete abdication of what a social media platform should be. A dereliction of duty, if you ask me.

What This Actually Means

So, where do we go from here? If Apple and Google do pull X, it’s gonna be a seismic event. I mean, truly. Millions of users would be cut off, and the message would be crystal clear: “You can’t just let anything fly. There are lines, and you crossed them.” It would set a massive precedent for platform accountability, for sure. It would probably trigger a whole new wave of debates about who controls the internet, about censorship, about big tech power. But if they don’t? Well, then what? Does it just signal that these giant app store gatekeepers are all talk and no action when it comes to the really tough stuff? That their terms of service are just, like, suggestions? It’s not entirely clear yet how this plays out, or how quickly. Apple and Google usually move pretty slowly on these things; they like to deliberate, to make sure they’re on solid ground. But I’ll tell you one thing – the pressure is on. Really on. Because this isn’t just about X anymore. It’s about what kind of internet we’re building, what we’re willing to tolerate, what we’re willing to just scroll past. And who actually has to answer for the mess. And right now, it feels like the answer to that last part is… nobody, unless someone forces their hand. And that, my friends, is a problem. A really, really big problem that’s only going to get worse with AI if we don’t draw some serious, unmovable lines.


Emily Carter

Emily Carter is a seasoned tech journalist who writes about innovation, startups, and the future of digital transformation. With a background in computer science and a passion for storytelling, Emily makes complex tech topics accessible to everyday readers while keeping an eye on what’s next in AI, cybersecurity, and consumer tech.
