
Hong Kong authorities have initiated an inquiry after an employee at an undisclosed corporation alleged that she fell victim to a sophisticated scam involving a deepfake video conference call, which resulted in the transfer of HK$200 million (£20 million) of her company's funds to fraudsters.

The Hong Kong Police Force confirmed receiving a report from the employee, who stated that she was misled into executing the transfer by individuals impersonating senior company officials.

In an official statement, the police disclosed: "Police received a report from a staff member of a company on 29 January that her company was deceived of some HK$200m after she received video conference calls from someone posing as senior officers of the company requesting to transfer money to designated bank accounts."

Following preliminary investigations, the case has been categorised as "obtaining property by deception" and has been assigned to the cybercrime unit for further scrutiny. As of now, no arrests have been made, and inquiries are ongoing.

According to reports by Hong Kong's public broadcaster, RTHK, the affected individual was a clerk at a multinational corporation; the company's name has not been disclosed. Acting Senior Superintendent Baron Chan, as quoted by RTHK, speculated that the perpetrator employed artificial intelligence (AI) to execute the fraudulent scheme.

"[The fraudster] invited the informant [clerk] to a video conference that would have many participants. Because the people in the video conference looked like the real people, the informant ... made 15 transactions as instructed to five local bank accounts, which came to a total of HK$200m," Chan remarked.

He further added: "I believe the fraudster downloaded videos in advance and then used artificial intelligence to add fake voices to use in the video conference."

The broadcaster also stated that the employee received a directive from the company's chief financial officer emphasising the confidentiality of the transactions. Only after the money transfer and a conversation with the company's headquarters did the employee realise the call was fraudulent.

Commenting on the incident, Chan cautioned: "We can see from this case that fraudsters are able to use AI technology in online meetings, so people must be vigilant even in meetings with lots of participants."

The fraudulent scheme reportedly began with a phishing attempt: an employee in the finance department of the company's Hong Kong branch received a message purportedly from the company's UK-based chief financial officer.

The message instructed the employee to execute a secret transaction, which initially raised suspicions. However, during a subsequent group video call that appeared to feature the CFO and other company officials, the employee was persuaded to proceed with the transfers.

Despite those initial doubts, the employee executed 15 transfers totalling HK$200 million to five different Hong Kong bank accounts. It wasn't until approximately a week later that company officials discovered the fraud, prompting a police investigation into the matter.

This incident joins a growing list of deepfake-related concerns globally, with the misuse of the technology in other contexts also drawing significant reactions.

One instance that garnered widespread attention involved the unauthorised creation and dissemination of AI-generated deepfake pornography featuring singer Taylor Swift. The incident drew considerable backlash and underscored the need for proactive measures to address the misuse of such technology.

In response to the escalating concerns surrounding deepfake content, various measures have been proposed and implemented. Technology companies like Microsoft have introduced updates to their software to prevent the generation of inappropriate content using deepfake technology.

Additionally, platforms such as X (formerly known as Twitter) have taken steps to curb the spread of deepfake content, including shutting down accounts responsible for disseminating such material.

Furthermore, policymakers have started to take action to address the challenges posed by deepfake technology. Initiatives such as the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act have been introduced to combat the dissemination of manipulated content.

In the political arena, concerns about deepfake technology have become particularly pronounced, especially in the lead-up to major elections worldwide.

Meta, the parent company of Facebook, has come under scrutiny following the circulation of manipulated videos and deepfakes of prominent political figures, including US President Joe Biden. These incidents have prompted calls for updates to the policies governing deepfake content on social media platforms.