‘100 Video Calls Per Day’: Models Are Applying to Be the Face of AI Scams

Summary

Dozens of Telegram job posts and application videos show people applying to be “AI face models”: largely young women who sit in front of a camera while their faces are swapped or filtered with AI to run deepfake video calls. WIRED’s reporting links these ads to scam hubs in Southeast Asia, where industrialised operations run romance and investment frauds. The ads promise high pay for long hours (some list up to 100–150 video calls per day) and demand frequent photo and video submissions; many listings contain clues, such as locations, language requirements, and references to crypto or “clients”, that tie them to pig-butchering and other organised scams. Researchers and anti-trafficking groups warn the roles can overlap with, or enable, human trafficking and exploitation.

Key Points

  • Recruitment videos on Telegram advertise roles as “AI face models” or “real face” models in known scamming hubs, especially in Cambodia and Southeast Asia.
  • Job ads demand extreme output: some list roughly 100 video calls per day, others up to 150, plus daily photos and voice messages.
  • Applicants are mostly young women; requirements often include photos, short intro videos, marital and vaccination status, and language skills (notably Chinese).
  • The advertised work is tied to pig-butchering and romance/investment scams where deepfakes and face-swapping are used to convince victims on video calls.
  • Red flags include unusually high salaries for the region, vague recruiters, passport retention, and locations known for organised scam compounds — indicators of potential trafficking or coercion.
  • Telegram says scamming is forbidden on its platform, yet many recruitment channels remain live; flagged channels are reviewed case by case.
  • Anti-fraud investigators have observed the same models circulating between operations, suggesting a market for people who provide live likenesses for scammers.

Author (punchy)

This piece matters. It exposes how AI deepfakes aren’t just code: they’re being married to human labour in ways that amplify fraud and risk serious harm. Read the details to see how recruitment signs map to criminal hubs, and why surface-level job offers are often much darker.

Why should I read this?

Look, it sounds mad, but it’s real. If you use dating apps, get unsolicited investment pitches, or just want to understand how AI is being weaponised, this story shows the human side of deepfake scams and the red flags to watch for. It’s short and sharp, and it could save someone from losing money, or worse.

Source

Source: https://www.wired.com/story/models-are-applying-to-be-the-face-of-ai-scams/