AI Deepfakes Are a Threat to Businesses Too—Here's Why

As tech giants compete to bring artificial intelligence to the masses and own the burgeoning market, the AI arms race is fueling an increase in “deepfake” videos and audio—content that often looks or sounds convincingly legitimate but is actually a fraudulent misrepresentation. And deepfakes are impacting businesses too, according to a new report.

Deepfakes are AI-generated or AI-manipulated media, such as images, videos, and audio, designed to deceive. Scammers use them for fraud, extortion, or reputational damage, and the proliferation of generative AI tools has made such fake content easier than ever to create.

Celebrities and other public figures are being inserted into artificial, and sometimes explicit, footage without their consent, in ways that can go viral or spark panic on social media. In a report provided to Decrypt, global accounting firm KPMG wrote that the business sector is not immune to the deepfake threat, either.

Deepfake content can be used in social engineering attacks and other cyberattacks targeting companies, KPMG wrote, and can also damage the reputations of businesses and their leaders. Impersonations of company representatives can likewise be used in schemes to scam customers, or to convince employees to hand over information or transfer money to illicit actors.

KPMG cited a 2020 example of a Hong Kong company branch manager who was duped into transferring $35 million in company funds to scammers after a phone call he believed came from his boss, ordering him to do so. In fact, the voice was an AI-cloned recreation of his supervisor's—all part of an elaborate scheme to swindle money from the firm.

“The consequences of cyberattacks that utilize synthetic content—digital content that has been generated or manipulated for nefarious purposes—can be vast, costly, and have a variety of socioeconomic impacts, including financial, reputational, service, and geopolitical, among others,” the report reads.

Deepfake footage of Donald Trump being arrested, Pope Francis wearing Balenciaga luxury apparel, and Elon Musk promoting crypto scams has gone viral over the last year as deepfake technology has improved thanks to evolving AI tools.

“Whether it’s cheap fakes or deepfakes, the economic model has shifted significantly because of generative AI models,” KPMG wrote.

Beyond the public figures mentioned above, prominent celebrities whose likenesses have been stolen and applied to faked footage include actress Emma Watson and musician Taylor Swift. But it's the potential impact on businesses and their sometimes prominent leaders that KPMG is raising the alarm about.

“As a risk factor, deepfake content is not merely a concern for social media, dating sites, and the entertainment industry—it is now a boardroom issue,” KPMG said. “Case in point, nearly all respondents (92%) of a recent KPMG generative AI survey of 300 executives across multiple industries and geographies say their concerns about the risks of implementing generative AI are moderately to highly significant.”

It’s not just businesses and celebrities that are dealing with the rise in both the quality and number of deepfake videos on the internet. Governments and regulators are also reckoning with the potential impact on society and elections, while the creators of such AI tools are weighing their potential negative impact.

Last week, the U.S. Federal Election Commission, anticipating the use of deepfakes in the 2024 election, moved forward with a petition to prohibit using artificial intelligence in campaign ads.

If the petition is approved, the agency would amend current regulations regarding fraudulent misrepresentation of “campaign authority,” and clarify that the prohibition applies to deliberately deceptive AI campaign advertisements.

“Regulators should continue to understand and consider the impact of evolving threats on existing regulations,” Matthew Miller, Principal of Cyber Security Services at KPMG, told Decrypt. “Proposed requirements for labeling and watermarking AI generated content could have a positive impact.”
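
Miller’s suggestion of labeling and watermarking doesn’t name a specific scheme, so the following is only a toy illustration of the idea, assuming nothing about what KPMG or regulators would actually mandate: a classic least-significant-bit watermark that hides an “AI-GENERATED” marker in an image’s pixels. The helper names `embed_flag` and `read_flag` are hypothetical.

```python
import numpy as np

def embed_flag(pixels: np.ndarray, flag: bytes = b"AI-GENERATED") -> np.ndarray:
    """Hide a marker string in the least significant bit of the first
    len(flag) * 8 values of a uint8 image array (a toy LSB watermark)."""
    bits = np.unpackbits(np.frombuffer(flag, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, safe to modify
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the low bit
    return flat.reshape(pixels.shape)

def read_flag(pixels: np.ndarray, length: int = 12) -> bytes:
    """Recover the hidden marker by reading the low bits back out."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Example: marked = embed_flag(image_array); read_flag(marked) -> b"AI-GENERATED"
```

A naive LSB mark like this is destroyed by routine compression or resizing, which is one reason real proposals lean toward signed provenance metadata and model-level watermarks.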

In July, researchers at MIT proposed a way to mitigate the risk of deepfakes: adding tiny, hard-to-see perturbations that disrupt how large diffusion models process an image, causing them to generate results that don't look real. Meta, meanwhile, recently held back an AI tool due to its deepfake potential.
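
Decrypt’s summary doesn’t include the MIT code, but the underlying idea of adversarially perturbing an image so that generative models mishandle it can be sketched in a few lines of PyTorch. This is a minimal sketch under stated assumptions: `encoder` stands in for any differentiable image-to-latent module in an editing pipeline, and the function name `immunize`, the step count, and the perturbation budget are illustrative choices, not the researchers’ actual method.

```python
import torch

def immunize(image: torch.Tensor, encoder: torch.nn.Module,
             steps: int = 40, eps: float = 8 / 255, alpha: float = 2 / 255) -> torch.Tensor:
    """PGD-style "immunization" sketch: add an imperceptible perturbation
    (L-infinity norm <= eps) that pushes the image's latent representation
    away from its clean value, so downstream generative edits degrade.

    `image` is a float tensor in [0, 1]; `encoder` is a stand-in for any
    differentiable image-to-latent module, not the actual MIT code.
    """
    original = image.detach().clone()
    clean_latent = encoder(original).detach()  # the latent we want to escape
    perturbed = original.clone()

    for _ in range(steps):
        perturbed.requires_grad_(True)
        # Distance from the clean latent; the loop *ascends* this objective.
        loss = torch.nn.functional.mse_loss(encoder(perturbed), clean_latent)
        (grad,) = torch.autograd.grad(loss, perturbed)
        with torch.no_grad():
            perturbed = perturbed + alpha * grad.sign()                      # ascent step
            perturbed = original + (perturbed - original).clamp(-eps, eps)   # project into eps-ball
            perturbed = perturbed.clamp(0.0, 1.0)                            # keep a valid image
    return perturbed.detach()
```

The loop is plain projected gradient ascent: it pushes the image’s latent representation away from its clean value while clamping the pixel-space change to a budget small enough to be invisible.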

The excitement around the creative potential of generative AI tools has been tempered by the realization that they can also create opportunities for malicious content. Ultimately, amid rapid improvements to such tools, Miller urged everyone to be vigilant about the potential for fake content.

“The public needs to maintain continuous vigilance when interacting through digital channels,” he told Decrypt. “Situational awareness and common sense go a long way to prevent an incident from occurring. If something does not ‘feel right,’ it has a high probability of being malicious activity.”
