Amazon blocked 1,800 suspected North Korean scammers seeking jobs
Summary
Amazon says it has stopped more than 1,800 suspected Democratic People’s Republic of Korea (DPRK) scammers from joining its workforce since April 2024, with applications tied to DPRK activity rising about 27% quarter-on-quarter this year, according to Amazon Chief Security Officer Steve Schmidt.
The scam typically involves real developers’ identities being faked or stolen, AI-generated résumés and social profiles, and sometimes deepfakes during video interviews. Once hired, the attackers remit wages to the North Korean regime and may use insider access to steal source code or sensitive data, or to run extortion schemes. Researchers also warn of a new, heavily obfuscated BeaverTail infostealer and loader used by Lazarus-linked subgroups to deliver backdoors and harvest credentials across Windows, macOS and Linux.
Amazon combines AI screening — checking connections to nearly 200 high-risk institutions and spotting geographic or application anomalies — with human verification such as background checks and structured interviews. Despite this, attackers are evolving: they now hijack dormant LinkedIn accounts, work with “laptop farmers” to make devices appear US-based, and target sectors beyond pure IT (finance, healthcare, public administration, professional services).
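The anomaly checks described above can be sketched minimally. The function name, the geo-mismatch signal and the keystroke-lag threshold below are illustrative assumptions for this sketch, not details of Amazon's actual screening system:

```python
# Illustrative sketch, assuming two of the signals mentioned in the text:
# a mismatch between an applicant's claimed location and where their
# session traffic resolves to, and unusually high keystroke latency
# (a possible sign of a remotely operated "laptop farm" machine).
# All names and thresholds here are assumptions, not published figures.

KEYSTROKE_LAG_MS_LIMIT = 250  # assumed threshold for illustration only

def screen_session(claimed_country, observed_country, median_keystroke_lag_ms):
    """Return a list of anomaly labels for one interview or work session."""
    anomalies = []
    if claimed_country != observed_country:
        anomalies.append("geo_mismatch")
    if median_keystroke_lag_ms > KEYSTROKE_LAG_MS_LIMIT:
        anomalies.append("keystroke_lag")
    return anomalies

print(screen_session("US", "US", 40))   # []
print(screen_session("US", "KP", 320))  # ['geo_mismatch', 'keystroke_lag']
```

In practice a real system would weight many such signals together rather than hard-flagging on any single one; this sketch only shows the shape of the check.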
Key Points
- Amazon blocked over 1,800 suspected DPRK-linked job applicants since April 2024.
- Applications tied to DPRK activity rose ~27% quarter-on-quarter in 2025, per Amazon’s CSO.
- Scam techniques include fake/stolen identities, AI-crafted résumés, deepfakes and hijacked LinkedIn accounts.
- “Laptop farmers” are used to make remote workers appear to be in the US, aiding deception.
- The Lazarus Group has a revamped BeaverTail infostealer/loader with heavy obfuscation and signature evasion.
- Consequences include stolen source code, extortion, data theft and funding for DPRK weapons and crypto theft programmes.
- Amazon uses AI screening plus human checks (backgrounds, credential verification, interviews) and behavioural signals like keystroke lag to spot fraud.
- Recommended defences: multi-stage identity verification, database queries for resume/email/phone patterns, and monitoring anomalous technical behaviour and unauthorised hardware.
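The “database queries for resume/email/phone patterns” defence amounts to checking new applications against details already seen. A minimal sketch, assuming hypothetical field names and sample data (this is not Amazon's tooling):

```python
# Hypothetical sketch of the pattern-matching defence: flag applications
# that reuse an email address or phone number already submitted under a
# different identity, a common sign of identity recycling. Record fields
# and sample data are illustrative assumptions.
from collections import defaultdict

def flag_reused_contacts(applications):
    """Return sorted application IDs whose email or phone number
    also appears in another application."""
    by_email = defaultdict(list)
    by_phone = defaultdict(list)
    for app in applications:
        by_email[app["email"].lower()].append(app["id"])
        by_phone[app["phone"]].append(app["id"])
    flagged = set()
    for ids in list(by_email.values()) + list(by_phone.values()):
        if len(ids) > 1:  # same contact detail reused across applications
            flagged.update(ids)
    return sorted(flagged)

apps = [
    {"id": "A1", "email": "dev1@example.com", "phone": "+1-555-0100"},
    {"id": "A2", "email": "dev2@example.com", "phone": "+1-555-0100"},
    {"id": "A3", "email": "dev3@example.com", "phone": "+1-555-0199"},
]
print(flag_reused_contacts(apps))  # A1 and A2 share a phone number
```

The same grouping approach extends to résumé text hashes or recycled employment histories; exact matching on contact details is just the simplest case.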
Why should I read this?
Short and blunt: if you hire remote devs or rely on contractor access to code or systems, this affects you. These scams are getting clever — AI, deepfakes and stolen profiles — so a quick read will save you time and probably a headache later.
Context and relevance
This story sits at the intersection of remote hiring trends, nation-state cybercrime and AI-enabled deception. As more organisations hire remotely and use automated screening, adversaries increasingly monetise access by laundering wages back to sanctioned regimes and stealing IP. The rise of advanced infostealers like BeaverTail and techniques such as account hijacking and laptop farming mean security teams, hiring managers and CISOs should treat recruitment and onboarding as a security control: strengthen identity verification, instrument behavioural monitoring and assume insider compromise is possible.
