Information Security Tips (May 2025) – How to Prevent AI Deepfake Scams?

To: All Users
Deepfakes refer to audio or video content that has been digitally manipulated using AI (Artificial Intelligence) deep learning to convincingly imitate a person’s face, voice, or behavior. While this technology has legitimate applications, it is increasingly being exploited by scammers to impersonate individuals—such as corporate executives during video calls—to deceive others and commit fraud. These scams can lead to financial loss, reputational damage, or misinformation, and similar cases have already occurred in Macau.
5 Tips to Protect Yourself from Deepfake Scams:
- Stay Alert:
- Always be cautious of videos and audio you receive online. Seeing is not believing. Treat unexpected or suspicious content with skepticism.
- Verify Through Multiple Sources:
- Check if the information is reported by other reliable sources, such as official channels or reputable news outlets. If it comes from only one source, it may be fabricated.
- Watch for Irregularities:
- Even though deepfakes are becoming more realistic, there are still telltale signs.
- Facial expressions or lip-syncing that don’t match the audio
- Unnatural movements or gestures
- Unusual voice pitch or inconsistent tone
- Skin tone or lighting that looks off or unnatural
These can all indicate manipulated media.
- Be Extra Cautious with Money Requests:
- If you receive urgent instructions to transfer money—especially from a supposed boss or family member—pause and verify their identity directly, preferably through another channel like a phone call.
- Hang Up on Unknown Video/Voice Invitations:
- If you receive a call or video conference invitation from an unknown number, especially if the caller claims an identity but you can’t immediately verify it, hang up immediately to avoid falling into a Deepfake scam. If the person continues to harass you or makes you feel uncomfortable, consider blocking that number/user account to protect yourself.
Deepfake technology poses real threats, but with awareness and careful verification, you can protect yourself. If you suspect you’ve encountered a deepfake scam, report it to the police or seek assistance immediately.
Should you have any enquiries, please feel free to contact ICTO Help Desk.
Reference:
- How can I identify a phishing, fake email and websites?
- Do not let yourself be the next victim of phishing scams
- Beware of Ransomware Attacks
- Beware of Spear Phishing
- Beware of Scammers Exploiting AI Techniques
ICTO Help Desk
Location : Room 2085, 2/F, Central Teaching Building (E5) (eMap)
Telephone : 8822 8600
Email : icto.helpdesk@um.edu.mo
Information and Communication Technology Office
To: All Users
Deepfake refers to convincingly realistic video or audio fabricated by scammers using AI (Artificial Intelligence) deep-learning techniques to imitate a person's facial expressions, voice characteristics, and so on. Although this technology has legitimate applications, it is increasingly abused by fraudsters to impersonate others, for example posing as company executives in video conferences to instruct staff to transfer money, or producing fake news footage to mislead the public. As Deepfake technology matures, such scams have already appeared in Macau, so everyone should stay vigilant.
5 Tips to Protect Yourself from Deepfake Scams:
- Stay Alert:
- Be skeptical of any video or audio received online; seeing is not necessarily believing. Be especially careful when money or sensitive information is involved.
- Verify Through Multiple Sources:
- After receiving information, search online to see whether other reliable media or official sources corroborate it. If there is only a single source, its credibility is questionable.
- Watch for Irregularities:
- Although Deepfakes are becoming increasingly realistic, flaws may still appear, for example:
- Facial expressions or lip movements out of sync with the audio
- Unnatural movements
- Strange voice pitch or erratic tone
- Inconsistent skin tone and lighting
These are all clues for judging whether a video has been forged.
- Be Extra Cautious with Money Requests:
- If you receive an urgent money-transfer request of unknown origin claiming to be from a supervisor, relative, or friend, verify it through multiple channels, including contacting the person directly; never transfer money hastily.
- Hang Up on Unknown Video/Voice Invitations:
- If you receive a call or video conference invitation from a stranger, especially when the caller claims an identity that cannot be verified immediately, hang up at once to avoid falling into a Deepfake scam. If the other party keeps harassing you or makes you feel offended, block that number/account to protect yourself.
Deepfake technology may be powerful, but by staying alert, using common sense, and verifying through multiple sources, you can still effectively avoid being deceived. If you suspect you have encountered a scam, report it to the police or seek assistance immediately.
Should you have any enquiries, please feel free to contact the ICTO Help Desk.
ICTO Help Desk
Location : Room 2085, 2/F, Central Teaching Building (E5) (eMap)
Telephone : 8822 8600
Email : icto.helpdesk@um.edu.mo
Information and Communication Technology Office