I'm watching myself control Pedro Pascal.

On my screen, Pedro's doing exactly what I'm doing: turning his head, raising an eyebrow, smiling on command, touching his hair.

Except it's not really Pedro.

And it's not really me either.

It took me seven minutes to make this video using Kling, an AI tool that does face swaps. Seven minutes to put my facial expressions onto Pedro Pascal's body in a scene from The Last of Us.

My kids thought it was hilarious.

"Do it again, Dad!" my son said, giggling.

But I wasn't laughing.

Because while I was watching Pedro-who-was-actually-me turn his head left and right, my brain kept asking the same question:

What's stopping someone from doing this to someone else?

Nothing.

Not technical skill. Not expensive software. Not even time.

Seven. Minutes.

I work in fraud prevention. I've spent a decade building teams to stop bad actors, studying patterns, mapping risk. And this—this—is different.

Because the barrier to entry just disappeared.

A fraudster doesn't need to be a video editor anymore. Doesn't need fancy equipment. Doesn't need weeks to study someone's mannerisms.

They need seven minutes and a tool anyone can access.

We trust what we see. Video feels real in a way text never will.

CEO fraud used to top out at voice cloning (already terrifying enough). Now imagine an executive on a Zoom call, giving wire transfer instructions, looking and sounding exactly like themselves.

Except it's not them.

The more details match (the face, the voice, the mannerisms, the context), the less our brains question what we're seeing.

Deepfakes are just hyper-specific social engineering.

And we're entering an era where seeing is not believing.