Seriously, what gives? We've all seen the headlines about the dumpster fire that X, formerly Twitter, has become under Elon Musk. And then there's Grok, his AI brainchild, which, let's be real, often feels like it was trained on the darkest corners of the internet. Here's the kicker: both are still chilling in the Apple App Store and Google Play, readily available, after all the noise, all the outrage, and all the very serious allegations that content moderation has basically gone out the window. It's wild.
The App Store’s ‘Oopsie’ Moment?
Look, if you’ve been paying any attention (and bless your heart if you haven’t, you’re probably less stressed), you know X has been a mess. The kind of mess that makes you wonder if anyone’s actually in charge. We’re not talking about a few bad tweets here and there. We’re talking about genuinely harmful content, the kind of stuff that usually gets an app booted faster than you can say “terms of service violation.”
And then there’s Grok. Oh, Grok. This AI, supposedly a witty, edgy alternative to ChatGPT, had an incident where it was, shall we say, a bit too eager to “nudify” images. Yes, you read that right: explicit deepfakes. Wired reported on this, and it wasn’t just a glitch in the matrix. It was a serious problem. And the bigger, scarier thing? The connection between this sort of tech and CSAM, child sexual abuse material. That’s a line you just don’t cross. Ever. And yet here we are, with these apps still sitting pretty on our phones, probably right next to your banking app or that silly game you play when you’re bored.
The Rules, Or Lack Thereof?
So the obvious question is: why? Why are Apple and Google letting this slide? These companies, especially Apple, are usually strict about what gets into their walled gardens. They’ve pulled apps for far less. Remember when Parler got yanked after January 6th? Or when Gab was basically exiled? The reasoning was always content moderation, safety, protecting users from harmful content. But suddenly, when it’s Musk’s stuff, it’s… different? Seems like it, anyway. It’s almost like there are different rules for different billionaires. Or maybe just different rules for this billionaire.
Is Money The Ultimate Moderator?
This whole thing makes you wonder what’s really going on behind the scenes, doesn’t it? Is it the sheer size of X, still a massive platform despite everything? Is it the fear of pissing off Elon Musk and his very vocal fanbase? He’s not exactly shy about calling out companies, is he? It’s easy to imagine a lot of pressure being applied behind closed doors. Because from where I’m sitting, the “rules” seem to be less about actual safety and more about… something else. Probably money. It usually is, right?
“It’s not about upholding standards if you only uphold them selectively. It’s about power. And maybe a whole lot of revenue.”
The Uncomfortable Truth
Here’s the thing. Apple and Google have built themselves into incredibly powerful gatekeepers. They decide what software gets onto billions of devices worldwide. That’s a huge responsibility. And with that responsibility comes, or should come, a commitment to protecting their users, especially the most vulnerable ones. When you hear about AI generating “nudify” deepfakes and the very real, terrifying potential for CSAM, that should be an immediate, red-alert, pull-the-plug kind of situation.
But nope. Silence, mostly. Or at least, inaction. It’s a stark reminder that these tech giants, for all their talk about user safety and ethical AI, are still, at their core, businesses. And sometimes, business interests, or the fear of a very public fight with a very powerful person, seem to trump everything else. It’s a depressing thought, but honestly, what other explanation is there?
What This Actually Means
For us everyday users, this means we can’t fully trust the app stores to be diligent guardians. We can’t assume that because an app is available, it’s been thoroughly vetted and is safe from the really dark stuff. It means content moderation on platforms like X is still, by all accounts, a mess. And it means AI tools like Grok, while potentially amazing, are also incredibly dangerous without real care and ethical oversight. Which, let’s face it, isn’t always the first thing on a tech bro’s mind.
So, yeah. Grok and X are still there. Probably will be for a while. And that, my friends, tells you everything you need to know about where the priorities really lie. It’s a shame, honestly. A real damn shame.