“Basically zero, garbage!”
Yeah, that’s not some angry dude yelling at his smart speaker. That’s Joel David Hamkins, a legitimately renowned mathematician, dropping truth bombs about AI models and their ability to solve math problems. And if I’m being honest, it’s exactly what I’ve been thinking, and probably what a lot of you have been too, if you’ve spent five minutes actually testing these things beyond asking them to write a sonnet about a toaster. Math, it turns out, is the AI’s kryptonite. Who knew?
So, They’re Not Geniuses After All? Shocking!
Look, for months now, we’ve been swimming in this sea of AI hype. Every other headline screams about how AI is gonna solve world hunger, cure cancer, and probably do my taxes while making me a perfect cup of coffee. (Still waiting on that last one, by the way.) And then you hear stuff like this from someone who actually understands the deep mechanics of logical reasoning and abstract thought. It kinda makes you pump the brakes on the whole “Skynet is coming” panic, doesn’t it?
Hamkins isn’t just some cranky old fuddy-duddy, either. The guy’s a professor at Oxford; he knows his stuff. He’s not saying AI is useless for everything. He’s specifically calling out its fundamental inability to actually reason in the way math demands. He’s saying these models, bless their data-gobbling hearts, are just not built for it. They’re pattern matchers, really good ones, sure. But they’re not thinking.
The thing is, we keep falling for the parlor trick. An AI spits out a plausible-sounding answer, and we’re all like, “Whoa, it’s a genius!” But what Hamkins is pointing out is that when it comes to math, especially anything beyond rote calculation or regurgitating a known theorem, the AI isn’t understanding the underlying principles. It’s just predicting the most likely sequence of tokens based on the gazillion examples it’s seen. It’s like a really, really good mimic, not an actual original thinker.
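If you’ve never seen what “predicting the most likely sequence of tokens” boils down to, here’s a deliberately tiny sketch in Python. Everything in it is invented for illustration (the table, the names, all of it; real models are enormous neural networks, not lookup tables), but the loop is the honest part: pick whatever’s statistically likely to come next, with zero checking of whether it’s actually true.

```python
# Toy illustration of next-token prediction. The table and function names
# are made up for this example -- nobody's real model works off a dict --
# but the core move is the same: choose the likeliest continuation.
NEXT_TOKEN = {
    "the": {"square": 0.40, "proof": 0.35, "answer": 0.25},
    "square": {"root": 0.90, "of": 0.10},
    "root": {"of": 0.95, "is": 0.05},
}

def complete(prompt_tokens, steps=3):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        candidates = NEXT_TOKEN.get(tokens[-1], {})
        if not candidates:
            break
        # Pick the statistically most likely next token. Notice what's
        # missing: any notion of whether the continuation is *correct*.
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(complete(["the"]))  # -> "the square root of"
```

That’s the mimicry Hamkins is talking about, just with the scale turned way, way down.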
It’s Not Just About 2+2=4
And let’s be clear, we’re not talking about whether ChatGPT can add two plus two. It can. Mostly. Sometimes. (I’ve seen it stumble on that too, not gonna lie, especially if you try to trick it.) We’re talking about higher-level stuff. Proving theorems. Deriving new mathematical concepts. Understanding the why behind a solution, not just spitting out the what.
That’s where the “garbage” comes in. Because if an AI can’t reliably perform the logical steps required for a proof, or if it hallucinates facts (which these models do, constantly, even in math!), then the output isn’t just wrong. It’s fundamentally flawed. It’s not just a small error; it’s a house built on sand. And in math, a house built on sand collapses. Fast.
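If you want to feel why one bad step is fatal, here’s the oldest trick in the book: a “proof” that 2 = 1. Every line looks like perfectly respectable algebra, and the whole thing hinges on a single illegal move, dividing by a − b, which is zero because we started from a = b.

```latex
\begin{align*}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b && \text{(dividing both sides by } a-b \text{, which is } 0\text{)} \\
2b &= b \\
2 &= 1
\end{align*}
```

One flawed step, and everything downstream is worthless. That’s the standard a proof has to meet, and it’s exactly the standard a confident-sounding hallucination fails.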
So, Are We All Just Believing the Hype Because It Sounds Cool?
Honestly, I think a big part of it is exactly that. There’s a powerful narrative at play here: AI is the future, it’s going to change everything, it’s practically magic. And that’s a sexy story. It sells subscriptions, it drives investment, it makes for great headlines. But the reality, as always, is a lot messier, and a lot less magical. When an actual expert like Hamkins steps up and says, “Hold on, the emperor’s got no clothes when it comes to prime numbers,” you gotta listen. Or at least, I do. Because I’ve seen this pattern before, with every new tech fad that promises to revolutionize everything and then delivers… well, something a bit more mundane.
“Basically zero, garbage.” – Joel David Hamkins, on AI models solving math.
It’s like thinking a parrot understands Shakespeare just because it can perfectly recite Hamlet’s soliloquy. The parrot can reproduce the words, sure, but it doesn’t grasp the existential dread or the poetic beauty. AI can often reproduce mathematical steps, or even generate text that looks like a proof, but it doesn’t get the math. It doesn’t have insight. And that’s a huge, gaping hole if you’re trying to push the boundaries of knowledge.
The Real Takeaway Here, If You Ask Me
This isn’t about shitting on AI entirely. It’s an incredible tool for so many things. Data analysis, content generation (within limits, obviously), automating repetitive tasks – absolutely. But we’ve got to stop treating it like some omniscient oracle that can just think its way through any problem. Especially not one as fundamentally logical and abstract as advanced mathematics.
What Hamkins is saying, and what I wholeheartedly agree with, is that there are fundamental limitations to these models. They excel at pattern recognition and statistical prediction. They don’t do logical inference in the human sense. They don’t have consciousness. They don’t have intuition. And those are pretty damn important for, you know, actually doing math. The kind of math that truly advances our understanding of the universe.
So, next time you see a headline about AI solving some impossible math problem, maybe take a beat. Ask yourself: Is it actually solving it, or is it just finding a very convincing pattern in its training data? Because from what I can tell, and from what folks like Hamkins are shouting from the rooftops, the distinction matters. A lot. It’s the difference between a glorified calculator and genuine intelligence. And we’re still a long, long way from the latter, especially when it comes to the elegant, brutal truth of numbers.
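And if you want a concrete taste of the difference, here’s my favorite cautionary tale, Euler’s polynomial n² + n + 41. It churns out primes for n = 0 through 39, forty confirming examples in a row. A pattern matcher would call it settled. It isn’t, and a few lines of Python (just a brute-force primality check, nothing fancy) will show you where the pattern quietly dies.

```python
def is_prime(k):
    """Brute-force primality check -- fine for numbers this small."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

# Euler's polynomial n^2 + n + 41: prime for n = 0..39, then it breaks.
for n in range(45):
    value = n * n + n + 41
    if not is_prime(value):
        print(f"pattern breaks at n = {n}: {value} = 41 x {value // 41}")
        break
```

Forty cases in a row look perfect, and the forty-first is 41 squared. That’s the whole point: a convincing pattern is not a proof, and a machine that only ever finds patterns is never going to know the difference.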