I was fortunate enough to spend several days last week at the Aspen Institute’s Crosscurrent summit on AI and national security in San Francisco. My first takeaway: I very much recommend being in sunny (at the moment, at least) San Francisco rather than slushy, raw New York in early March. The second took a little longer to form.
The conference was filled with former national security officials, cybersecurity executives, and AI leaders. The conversations often gravitated toward anticipated issues like the Anthropic-Pentagon fight, AI’s role in the Iran conflict, and the potential for autonomous weapons. However, one panel resonated deeply; it focused on what might seem almost old-fashioned, but is now supercharged by AI: scams.
During the discussion, Todd Hemmen, a deputy assistant director in the FBI’s Cyber Division, described how North Korean operatives are leveraging AI-generated face overlays to successfully pass remote job interviews with Western tech companies. These operatives subsequently juggle multiple remote jobs, funneling both their salaries and intelligence back to the regime in Pyongyang. “They fabricate résumés with AI, prep for interviews with AI, and use AI to wear the ‘face of someone who’s not the person behind the camera,’” Hemmen explained. This represents a new level of sophistication, where individuals can hold down several full-time jobs simultaneously, all under the guise of fake identities.
The detail about operatives juggling multiple jobs stuck with me, and it points to a deeper concern about our current moment. Speculation about AI's risks tends to fixate on cinematic scenarios: killer robots, omnipresent surveillance. The more immediate threat is a foreign agent wearing a synthetic face on a Zoom call, quietly collecting a paycheck from your company. Alarmingly, few seem to recognize the urgency of this problem.
How Cybercrime Got Worse Than Ever
Cybercrime has plagued the internet since its inception, but the scale we now face is staggering. According to the FBI, the U.S. suffered $16.6 billion in reported cybercrime losses in 2024, representing a 33 percent increase over just one year and more than double the losses recorded three years earlier. Seniors alone accounted for nearly $5 billion of these losses. These figures are merely the tip of the iceberg; research by Alice Marwick, director of research at Data & Society, revealed that only about one in five victims ever report scams, leaving the true extent largely unquantified.
As generative AI continues to evolve, it has accelerated the ease and effectiveness of cybercrime. Today’s phishing emails no longer suffer from typos or questionable syntax; large language models (LLMs) can create fluent, regionally tailored communication. AI image generators can fabricate entire synthetic personas—complete with vacation photos and fashionable accessories.
The rise of voice cloning has sparked financial heists that once seemed like the stuff of science fiction. For instance, in early 2024, an employee at the Hong Kong branch of U.K. engineering firm Arup unwittingly transferred $25 million following a deepfake video call depicting the company’s CFO and other colleagues. It turned out that all of them were fabricated. According to CrowdStrike’s 2026 Global Threat Report, AI-enhanced attacks surged by 89 percent year-over-year, while the average time from initial breach to effective spread across a network plummeted to just 29 minutes, with the fastest observed breakout occurring in a mere 27 seconds.
Will AI Cyberoffense Beat AI Cyberdefense?
Why does such a pressing issue get so little attention? Partly because we have become desensitized to it. Cybercrime has been escalating for years, fueled by the professionalization of criminal syndicates, the emergence of cryptocurrencies, the rise of remote work, and the industrialization of scams in Southeast Asia. (My Vox colleague Josh Keating wrote an insightful piece on these so-called “pig butchering” scams a while ago.)
Each new year’s rising losses are now deemed the price of conducting business online. However, the troubling reality is intensifying: Deloitte forecasts that losses from generative AI-enabled fraud in the U.S. could reach $40 billion by 2027. “Just as legitimate businesses are integrating automation, so are organized crime groups,” Marwick noted.
The fact that these alarming trends often go unreported compounds the damage. Marwick’s research focuses specifically on romance scams, where victims—often during times of vulnerability—slowly lose their savings to someone they believe is genuine. Surprisingly, many victims resist acknowledging that they are being scammed, even when presented with irrefutable evidence. AI makes the emotional manipulation even more effective, and no spam filter can protect someone who willingly transfers money.
Can cyber defense keep pace? Marwick posited an optimistic comparison to spam emails, which nearly overwhelmed email systems in the 1990s before a mix of technical strategies, legislation, and social shifts substantially mitigated the problem. Financial institutions are now deploying AI to combat AI-enabled fraud. The FBI froze hundreds of millions of dollars in stolen assets last year.
Nonetheless, the consensus among conference attendees was mainly bleak. “We’re entering a time where the offensive capabilities are significantly outpacing defensive measures,” stated Rob Joyce, a former director of cybersecurity at the National Security Agency. Marwick bluntly assessed the situation: “Overall, I would say I’m quite pessimistic.”
That sentiment resonates with me. While drafting this article, I received what appeared to be a Paperless Post invitation from a friend. The language felt slightly off, so I hesitated. When I contacted my friend to verify it, he assured me it was indeed real.
Relieved, I got sidetracked and never clicked the next step on the invitation. Fortunately so: a few minutes later, my friend let me know that, yes, he had been hacked.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!