
‘New wave of cybersecurity attack:’ Deepfake AI poses threat on social media

Chris Hadnagy, founder of Social-Engineer, LLC, discusses deepfake AI videos

ORLANDO, Fla. – With the presidential election year upon us, the impact of audio and video deepfake AI has become the hot conversation in 2024.

AI technology has vastly improved from just six months ago, according to Chris Hadnagy, also known as "the human hacker," founder of Social-Engineer, LLC.

“I found some websites that for pennies, I was able to make some pretty realistic deepfakes,” Hadnagy told News 6. “I was shocked at how simple it was.”

Hadnagy put one website to the test, creating deepfake AI videos of Taylor Swift and business powerhouse Elon Musk.

For the Musk video, he used legitimate video interviews and altered the words so it appeared as if Musk was saying Chris Hadnagy was one of his best friends.

The Musk deepfake also claimed he had met Mike Holfeld, "one of his favorite reporters."

Of course, Musk has never met Hadnagy or Holfeld.

“I would love it if Elon Musk was my best friend and he thought you were the best reporter on Earth,” Hadnagy said. “Sadly, I don’t know the guy.”

The video was clearly marked as a deepfake, but in the world of social media, intentional AI deceptions offer no warnings or cautions.

“Imagine if video was released of the president of the U.S. claiming that there was going to be a war or he was going to bomb a certain country,” Hadnagy said. “These things could happen.”

A recent deepfake video of Coinbase CEO Brian Armstrong posted on YouTube appeared to offer bitcoin investments that would "double your money." Of course, it was a scheme to steal money from investors across the country.

“It looked like him, sounded like him and they even got a Coinbase-esque office in the background,” IdentifAI’s Paul Vann recently told News 6.

Vann and cofounder Justine Marciano exposed the Armstrong deepfake using their company’s software and alerted Coinbase executives.

The deepfake commercial was pulled a short time later.

“I think it’s a new wave of cybersecurity attack,” Marciano told News 6.

Vann agreed, cautioning that many times the ads "are not fully vetted."

“Companies spend money to put (legitimate) ads out there but hacker groups can do the exact same thing.”

Earlier this month, the Florida Legislature passed a bill requiring that any political advertisement using images, video or audio created with generative AI to depict a real person doing something that never actually happened, and made to injure a candidate or to deceive voters about a ballot issue, carry a prominently placed disclosure that it was created using AI.

Gov. DeSantis still has to sign it into law. If he does, it will go into effect on July 1.

If you have a consumer or investment issue, contact us at makeendsmeet@wkmg.com or text the words "make ends meet" along with your issue and contact information to 407-676-7428.

