
TikTok AI: Racist, Sexist, Unauthorized?

So, you think you’ve seen it all with AI, right? Robots writing crappy poems, chatbots hallucinating, deepfakes making politicians say wild things. Cute, innocent stuff, mostly. But let me tell you, what TikTok’s AI has allegedly been up to? That’s a whole different level of messed up. We’re talking racist, sexist, and completely unauthorized. All for a video game about a cute little fox. Yeah, you read that right. A fox.

TikTok’s AI Went Rogue. Seriously Rogue.

Look, I’ve been doing this job for a minute – 15 years, to be exact – and I’ve seen some pretty dumb advertising. But the latest kerfuffle involving TikTok and Finji, the indie publisher behind the absolutely gorgeous and critically acclaimed game ‘Tunic,’ just makes your head spin. Finji is out there, rightfully fuming, claiming TikTok’s AI went full-on wild west, generating ads for ‘Tunic’ that were not just offensive, but also completely, utterly wrong about the game. And here’s the kicker: they did it without Finji’s permission. Without their knowledge, even.

You know ‘Tunic,’ right? It’s that Zelda-esque adventure where you play as a tiny fox knight exploring a mysterious world. It’s charming. It’s beautiful. It’s NOT, under any circumstances, a game about “racially diverse women” or “sexualized women,” as Finji CEO Bekah Saltsman put it. But that’s exactly what TikTok’s AI decided to spit out. Ads featuring human women, depicted in ways that were, frankly, inappropriate and totally misrepresented the game’s actual content. I mean, we’re talking about a game where the protagonist is literally a small, adorable fox. How does an AI even get from a tiny fox knight to “sexualized women”?

When AI Goes Beyond “Oops” and Into “What The Hell?”

This isn’t just a simple mistake, folks. This isn’t a typo in ad copy. This is an AI, presumably TikTok’s self-serve ad platform, taking liberties so egregious they cross multiple lines. We’re talking about:

  • Racism and Sexism: Generating ads depicting “racially diverse women” and “sexualized women” (Finji’s own words, and I trust them on this) to promote a game that has literally zero human characters, let alone sexually objectified ones. That’s a problem. A huge one.
  • Unauthorized Use: Running ads for a product without the creator’s consent. That’s not just rude, it’s a massive intellectual property violation. It’s basically TikTok playing marketing director for Finji without anyone at Finji even knowing they were in the running for the job.
  • Complete Misrepresentation: Fundamentally misunderstanding the product it’s supposed to be promoting. It’s like an AI trying to sell a vacuum cleaner by showing pictures of a sports car. Except, you know, with added bigotry.

It’s not just incompetence; it’s a profound failure of oversight. Or maybe, a complete lack of it. It raises the question: how many other indie devs, how many other small businesses, are having their products twisted and misrepresented by an AI running amok on TikTok?

But Seriously, Who Is Letting This Happen?

The thing is, this whole incident just screams a lack of human supervision. It’s like someone built a really powerful, really dumb robot, gave it a bunch of money, and told it to go make ads, no questions asked. And then just walked away. What kind of safeguards are in place? Are there any? Because from what I can tell, the answer seems to be a resounding “nope.”

“We’ve seen our games get advertised for other products, but this is the first time we’ve had our game advertised with imagery that is racist, sexist, and not even from our game.” – Bekah Saltsman, CEO of Finji (paraphrased from Finji’s public statements)

And that quote, or the sentiment behind it, really hits. Finji has been around. They’ve dealt with the usual shenanigans of the internet. But this? This is new territory. This is TikTok’s AI actively damaging their brand and, frankly, probably making a lot of people scratch their heads and wonder what kind of game ‘Tunic’ actually is. Imagine working your butt off for years, pouring your heart and soul into creating something beautiful and unique, only for some algorithm to slap your logo on top of something completely offensive and unrelated. It’s infuriating. It truly is.

The Real Danger Here Isn’t Just a Bad Ad

This isn’t just a quirky AI glitch we can all laugh about later. This is serious. It highlights a critical problem with the unbridled deployment of AI without ethical guardrails or, you know, basic human common sense. When an AI can decide on its own to generate and run ads that are racist, sexist, and completely fabricated, it’s not just a marketing issue. It’s a societal one. It’s about who controls the narrative, who controls the imagery, and what kind of garbage gets pushed into our feeds without our consent.

And let’s be real, TikTok has a massive audience. If these kinds of ads are slipping through the cracks, how many people saw them? How many people now have a completely skewed, possibly negative, impression of ‘Tunic’ because of some rogue AI? The damage isn’t easily undone. Brand reputation, especially for indie developers who rely on goodwill and word-of-mouth, is incredibly fragile. One bad viral moment, especially one involving racism or sexism, can tank years of hard work.

What This Actually Means

Here’s the thing: this isn’t just about TikTok. This is a wake-up call for every platform rushing to integrate AI into their operations, especially advertising. It’s a stark reminder that these systems, left unchecked, can and will go off the rails. They don’t understand nuance. They don’t understand ethics. They certainly don’t understand that a cute little fox in a video game isn’t a human woman. And they definitely don’t care about your intellectual property rights.

So, what needs to happen? Transparency, for starters. TikTok needs to explain how this happened, what they’re doing to prevent it, and how they’re going to make it right for Finji. And frankly, every company using AI for content generation or advertising needs to implement strict, human-led review processes. Because if a company can’t even ensure their AI isn’t accidentally pushing bigotry and lies, then maybe they shouldn’t be using it at all. It’s not rocket science, people. It’s just basic decency. And if AI can’t grasp that, then maybe we need to pull the plug on some of these systems until they can. Or, more accurately, until the humans running them figure out how to put some actual controls in place. Because if we don’t, this kind of messed-up story is just going to keep happening, over and over again…
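What would a “strict, human-led review process” actually look like in practice? Here’s a minimal sketch, just to make the idea concrete – every name in it is hypothetical, and this is emphatically not how TikTok’s real ad platform works. The two rules it encodes are the two rules this whole story broke: an AI-generated ad never even enters the queue unless the rights-holder opted in, and nothing runs until a human explicitly approves it.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AdCreative:
    advertiser: str
    asset_description: str
    advertiser_authorized: bool  # did the rights-holder actually opt in?
    status: ReviewStatus = ReviewStatus.PENDING


class HumanReviewGate:
    """AI-generated creatives wait in a queue until a human signs off."""

    def __init__(self):
        self.queue: list[AdCreative] = []

    def submit(self, creative: AdCreative) -> bool:
        # Hard gate: never queue an ad the rights-holder didn't authorize.
        if not creative.advertiser_authorized:
            creative.status = ReviewStatus.REJECTED
            return False
        self.queue.append(creative)
        return True

    def review(self, creative: AdCreative, approved: bool) -> None:
        # A human reviewer makes the final call, not the model.
        creative.status = (
            ReviewStatus.APPROVED if approved else ReviewStatus.REJECTED
        )

    def publishable(self, creative: AdCreative) -> bool:
        # Fail closed: only explicitly approved creatives ever run.
        return creative.status is ReviewStatus.APPROVED
```

The point of the sketch is the fail-closed default: a pending or rejected creative can never be published, so the worst-case failure is an ad that doesn’t run – not a racist, fabricated ad that does.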


Emily Carter

Emily Carter is a seasoned tech journalist who writes about innovation, startups, and the future of digital transformation. With a background in computer science and a passion for storytelling, Emily makes complex tech topics accessible to everyday readers while keeping an eye on what’s next in AI, cybersecurity, and consumer tech.
