The “Just Deal With It” Doctrine
It’s everywhere, isn’t it? This narrative. You can’t scroll through LinkedIn or catch five minutes of the evening news without hearing some variation of “AI is coming for your job, better reskill!” or “AI will change everything, for better or worse, so just buckle up.” It’s spoken with this air of weary resignation, like we’re talking about the weather or taxes. Like it’s gravity, for crying out loud. And that, my friends, is exactly the point.
This whole “inevitable” thing? It’s a brilliant piece of PR, a masterclass in narrative control by the very people who stand to make unfathomable amounts of money from this technology. They want you to feel powerless. They want you to think it’s too big, too complex, too far gone to do anything about. Because if you feel that way, you won’t ask tough questions. You won’t demand regulation. You won’t push back on the ethical nightmares bubbling just under the surface. You’ll just… deal with it.
We’ve Seen This Movie Before
I’ve been around long enough to remember the dot-com bubble, the early days of social media, the whole blockchain/crypto craze. Every single one had its cheerleaders, its prophets of a new, unavoidable future. Remember when we were all supposed to be living in the metaverse by now? Yeah, me too. Things that seem “inevitable” often aren’t. They’re built on choices, on investments, on human decisions and, sometimes, on pure, unadulterated hype. And what gets built on choices can be unbuilt by them, too.
Who Benefits From This “Inevitable” Talk?
Here’s the thing, and this is probably the most important question to ask: who benefits when we collectively shrug our shoulders and accept AI as a done deal? Is it the millions of workers whose jobs are suddenly on the chopping block? Is it the artists and writers whose work is being scraped without consent to train these models? Is it you, the average person, who’s about to be inundated with even more AI-generated junk, deepfakes, and privacy invasions?
I don’t think so.
“The narrative of inevitability is often spun by those who have the most to gain from its acceptance, shielding them from accountability and critical examination.” – Some smart person, probably me right now.
No, the ones who benefit are the behemoth tech companies. The ones pouring billions into development, buying up every scrap of data they can get their digital hands on. They want to move fast, break things (and people, apparently), and establish market dominance before anyone can really figure out what the heck is going on. And saying it’s “inevitable” is just another way of saying, “Don’t bother trying to stop us.” It’s a power play, plain and simple.
It’s Not Magic, It’s Code (and Power)
Let’s be clear: AI isn’t some alien intelligence that landed on Earth and started rewriting our destiny. It’s software. It’s algorithms. It’s massive amounts of data, processed by incredibly powerful (and energy-hungry) computers. It’s built by humans, funded by humans, and deployed by humans. And guess what? Humans can make different choices.
We can choose to regulate it. We can demand transparency. We can enforce ethical guidelines for how data is collected and used. We can push for safeguards against bias, misinformation, and job displacement. We can insist on fair compensation for the creators whose work feeds these systems. This isn’t some abstract philosophical debate; this is about concrete policy decisions, about legislative action, and about consumer pressure.
The idea that we can’t do anything about it is a cop-out. It’s an excuse for inaction. It’s a way to avoid the hard conversations we need to be having right now about who controls this stuff, who profits, and who pays the price. Because if we don’t, we’re essentially handing over the keys to our future to a handful of unelected tech executives. And honestly, I don’t trust them with my Spotify playlist, let alone society’s operating system.
What This Actually Means
So, what does this actually mean for us? It means we need to get loud. We need to stop buying into the hype and start asking tough questions. We need to remind ourselves that technology isn’t a force of nature; it’s a tool. And like any tool, it can be used for good or ill, depending on the intentions and the controls put in place by the people wielding it.
Don’t let anyone tell you AI is inevitable. That’s a lie. It’s a choice. Our choice, if we decide to take it. We have the power to shape its development, to demand accountability, and to ensure that whatever future AI helps build actually serves humanity, not just corporate bottom lines. Otherwise, we’re just letting someone else write our story, and frankly, I think we deserve better than that. Don’t you?