  • EY has warned companies to be on the alert for "synthetic media", as criminals fake online audio and video.
  • One UK company executive transferred millions to a criminal's bank account after receiving a call that mimicked his CEO's voice.
  • New technology may help to verify content, but companies also need to educate their staff about so-called deepfakes. 

Last year, the CEO of a UK-based energy firm transferred more than R4 million to a bank account after receiving what he believed was a telephone instruction from his boss at the company’s German parent group.

But, as The Wall Street Journal reported, the call came from a fraudster using AI voice technology to mimic the German chief executive's voice. According to the WSJ, the UK CEO said the caller had the same “subtle” German accent and the same “melody” as his boss.

This is an example of the rise of “synthetic media” in the corporate sphere, according to a new report by the accounting firm and consultancy group EY.

“It’s now easier than ever to fabricate realistic graphical, audio, video and text-based media of events that never occurred, making synthetic, or fake, media one of the biggest new cyber threats to business,” says Ashwin Goolab, a consulting partner at EY Africa, India and the Middle East.

Goolab says these fakes can make companies vulnerable to fraud, defamation, extortion, and market manipulation.

“A well-timed, sophisticated deepfake video of a CEO saying their company won’t meet targets could send the share price plummeting. Phoney audio of an executive admitting to bribing officials is prime fodder for extortion.

"If released, these could cause serious reputational damage, alienate customers, impact revenue, and contribute to the volatility of financial markets."

Cybersecurity companies, startups, universities and government agencies are exploring how to authenticate videos, photos and text on the internet, and EY’s analysis shows patents filed in this space jumped by 276% between 2007 and 2017.

Some of the possible solutions include:

Digital forensics. Analysts look for inconsistencies in lighting, shadows and eye-blinking patterns in an image or video, including pixel-level incongruities that the human eye would miss.
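
To make the pixel-level idea concrete, here is a minimal sketch of one such check: splitting a grayscale image into blocks and flagging blocks whose local noise level is a statistical outlier, since spliced or generated regions often carry a different noise signature than the rest of the frame. All function names and thresholds here are illustrative assumptions, not a production forensics tool.

```python
# Sketch: flag image blocks whose noise level deviates from the frame.
import numpy as np

def block_noise_scores(image: np.ndarray, block: int = 16) -> np.ndarray:
    """Estimate per-block noise as the std-dev of a simple high-pass residual."""
    # High-pass residual: each pixel minus the mean of its horizontal neighbours.
    residual = image[:, 1:-1] - 0.5 * (image[:, :-2] + image[:, 2:])
    h, w = residual.shape
    h, w = h - h % block, w - w % block
    blocks = residual[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3))  # one noise estimate per block

def flag_inconsistent_blocks(image: np.ndarray, z_threshold: float = 3.0):
    """Return (row, col) indices of blocks whose noise is a statistical outlier."""
    scores = block_noise_scores(image.astype(np.float64))
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    return list(zip(*np.where(np.abs(z) > z_threshold)))

# Example: a synthetic noisy frame with one unusually "clean" (denoised)
# patch, as a pasted-in face region might be.
rng = np.random.default_rng(0)
frame = rng.normal(128, 10, size=(128, 128))
frame[32:64, 32:64] = 128  # the noiseless patch stands out forensically
print(flag_inconsistent_blocks(frame))
```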

Digital watermarking. This helps identify bogus content by placing hidden marks in images or videos. Such signatures could also be built into software for cameras, speakers and other content-creation devices to automatically tag images, videos or audio at the moment they are created. 
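
The round trip behind this idea can be shown in a few lines: hide a short signature in the least significant bits of an image's pixel values at creation time, then read it back to verify. This toy example is only a sketch; real watermarking schemes are far more robust, surviving compression, resizing and cropping.

```python
# Sketch: embed and verify a hidden signature in pixel data (LSB method).

def bits_of(data: bytes):
    """Yield the bits of `data`, most significant bit first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Write each bit of `mark` into the lowest bit of successive pixels."""
    out = bytearray(pixels)
    for i, bit in enumerate(bits_of(mark)):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the pixels' lowest bits."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

# Round trip: tag a stand-in "image" and verify the signature survives.
image = bytearray(range(256)) * 4          # stand-in for raw pixel data
tagged = embed_watermark(image, b"EY-2020")
assert extract_watermark(tagged, 7) == b"EY-2020"
print("watermark verified")
```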

Hashing and blockchain. This technique takes digital watermarking one step further, EY says. The content is tagged with date, time, location and device-level information that identifies how it was generated, and a fingerprint is sent to a public blockchain, creating an immutable record directly from the source.
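
A minimal sketch of the hashing step might look like the following: bundle the content with capture metadata (date, time, location, device) and compute a SHA-256 fingerprint. Anchoring that fingerprint on a public blockchain is a separate step, not shown here; the chain would store only the hash, so any later edit to the content is detectable. The function and field names are illustrative assumptions.

```python
# Sketch: a SHA-256 provenance fingerprint over content plus capture metadata.
import hashlib
import json
from datetime import datetime, timezone

def provenance_fingerprint(content: bytes, device_id: str, location: str) -> str:
    """Hash the content together with when, where and how it was created."""
    metadata = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "device_id": device_id,
        "location": location,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    # Canonical JSON so the same inputs always produce the same digest.
    record = json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.sha256(record).hexdigest()

# Any later edit to the video changes content_sha256, so the published
# fingerprint no longer matches and the copy is provably not original.
video = b"...raw video bytes..."
print(provenance_fingerprint(video, device_id="cam-001", location="-26.2,28.0"))
```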

Apart from these emerging solutions, companies will have to equip their staff with media-literacy and critical-thinking training to help them detect phoney information, Goolab says.

“PR and marketing departments will need to be judicious about how much media of senior leadership is exposed to the public and what technologies are used to establish the provenance and integrity of digital content being shared across the internet.”
