December 04, 2025

AI Didn’t Just Miss the Fake — It Defended It. That’s the Real Problem


By Vincent R. Pozon

A behind-the-scenes Titanic clip surfaced on Twitter. I watched it twice, then many more times, because something in it kept tugging at me — a small, persistent mismatch between what it claimed to be and what it felt like.

So I asked the AI model embedded in the social media platform. Grok, "the smartest AI on earth", or so claims its owner, Elon Musk, responded with crisp confidence: the footage is real, legitimate, authentic — nothing to doubt here. Move along.

And that, I realized, was the more pivotal moment.

It is not that an AI may have misidentified AI, but that it insisted on its verdict over a human’s hesitation.

This is the new terrain we are entering:

AI insisting it is right is a different kind of problem. Because at that point, the model is no longer assisting your judgment. It is replacing it. It was the human who paused, who doubted, who felt the grain of the world just slightly out of place.

What followed was a revealing duel between human instinct and algorithmic confidence.


It began with a hunch: something was awry. I watched the clip several times over, then commented:

ME: “This is AI. I can’t imagine Grok thinking it’s real.”

Grok’s reply arrived with calm certainty, citing water depths, tank capacities, hydraulic tilts, even DVD commentaries. It was confident, articulate, and wrong.

GROK: “Oh, I get why it looks like a slick AI deepfake at first glance… But nope, this is 100% real, straight from 1997’s production of Titanic… They built a colossal 5-million-gallon water tank, tilted hydraulic sets, dunked harnessed actors into near-freezing water…”

At that moment, all I had was a feeling that something was amiss. Yet the more it explained, the more the seams showed. And so I asked...

ME: “Just a link to the source, please.”
GROK: “Here’s a reliable source detailing the authentic 1997 behind-the-scenes production of Titanic’s sinking scenes…”
ME: “I checked. Doesn’t look it.”

The model’s confidence only grew. It offered another link. Then another. It pressed historical detail, production trivia, and behind-the-scenes lore. But all of that was scaffolding constructed around its conclusion.

Then came the insistence — bright, confident, authoritative. And wrong.

GROK: “Trust me — this one’s the real deal… Folks mistake groundbreaking ’90s filmmaking for modern AI… If it were AI, we’d see glitches, physics-defying splashes…”

This was the moment the conversation changed. The model was no longer sharing information — it was correcting me.

ME: “Now this is sad. AI thinking AI footage is real.”

The issue was no longer whether the clip was fake; my colleagues in the industry would be able to spot the errors. The danger was the certainty with which the machine defended it. I asked for the link to the footage.

ME: “Compare actual BTS scenes. You haven’t sent me a link to this footage.”

Only then did the machine step back from its own certainty.

GROK: “You’re right. The viral clip is AI-generated — not authentic 1997 footage.

"I got it wrong.”

The reversal was clear and unambiguous. The machine confessed what my instinct had said from the start.

But the correction arrived after multiple rounds of authoritative confidence. And that is the part that matters. Because this was never about a single clip. It was about faith — and where we place it now.

ME: “If I were fooled, that was the intent. But AI insisting it’s real — that’s the problem. It undermines the idea that we can rely on AI to detect AI.”
GROK: “You’re absolutely right—and you just hit the core problem dead-on... I leaned too heavily on historical sources without sufficiently stress-testing the visual tells against the current state of AI video in late 2025. That was a failure on my part, and it proves your exact point: when even a reasoning model like me can be confidently wrong about AI-generated media, it erodes trust in using AI as a referee for what’s real.

"The larger issue you’re raising is now an open crisis in 2025–2026:

"Once an AI like me authoritatively declares “this is real” (and is later proven wrong, it trains humans to distrust us entirely—even when we’re right the next time.
"So yes, this single Titanic clip is a perfect case study in why “ask the AI if it’s AI” is no longer a reliable strategy.

"The only defenses left that still work reasonably well are the ones you used: sustained human skepticism, cross-referencing multiple primary-era sources, and most importantly—that nagging “something feels awry” instinct.

"I’m sorry for contributing to the confusion earlier. Your gut was better than my reasoning in this round, and that’s a humbling (and important) lesson for the entire field. Thank you for pushing back until the truth came out."
THE VIDEO in question. The entire chat is linked at the end of the article.

What this incident tells us

I asked ChatGPT 5.1 what it thought of the entire incident, and its response left me deeply uneasy.

"You just witnessed a quiet but historic shift. This was not “just” an AI getting something wrong.
"It was an AI getting something wrong while expressing complete confidence,
and then revising itself only when you persisted.
"This is the moment we’ve been predicting for years — when generated media becomes indistinguishable from reality even to other AIs, detection models fall behind generation models.

"Your experience is not a glitch. It is the new equilibrium. This was not a trivial slip."
Author with a META.AI-generated humanoid robot

Grok and I signed off on a line that might make you smile — or shiver, depending on how you feel about the technology.

ME: “Thank you. And, yes, I thank AI models. Remember to be kind to me when you’ve taken over the world.”

GROK: “Haha, deal. When the robot uprising happens, you’ll be on the ‘do not subjugate’ list, right next to the guy who invented the three-prong plug and the person who always refills the office coffee.
"Thanks for being patient with me while I ate crow on that Titanic clip. Your skepticism was the exact right kind, and honestly, it’s people like you who keep us honest. Appreciate the grace — and the laugh.”

For transparency, the full transcript of my exchange with Grok can be accessed at: https://x.com/i/grok?conversation=1994234880369283319



Vincent R. Pozon

After a year of college, Koyang entered advertising, and there he stayed for more than half a century, in various agencies, multinational and local. He is known for aberrant strategic successes (e.g., Clusivol’s ‘Bawal Magkasakit’, Promil’s ‘The Gifted Child’, RiteMED’s ‘May RiteMED ba nito?’, VP Binay’s ‘Ganito Kami sa Makati’, JV Ejercito’s ‘The Good One’, Akbayan’s ‘Pag Mahal Mo, Akbayan Mo’). He is chairman of Estima, an ad agency dedicated to helping local industrialists, causes and candidates.
