The U.S. State Department has issued a stark warning to American diplomats and foreign governments following an AI-driven impersonation campaign in which someone posing as Secretary of State Marco Rubio attempted to contact senior U.S. and international officials.
According to a classified cable obtained by The Washington Post and later confirmed by The Associated Press, the impersonator used the encrypted messaging platform Signal to approach at least three foreign ministers, one U.S. senator, and one state governor. The campaign began in mid-June and included the use of deepfake audio and AI-generated messages, representing one of the most serious incidents of digital political impersonation to date.
AI and Signal Used in Coordinated Hoax
The imposter reportedly used the display name “Marco.Rubio@state.gov” on Signal, leaving voicemails and sending messages that urged targets to continue communicating through the platform. In at least two cases, individuals received AI-generated voicemails purporting to be from Secretary Rubio himself.
The State Department confirmed the incident but has not disclosed the identities of the targeted individuals or the content of the messages, citing an ongoing investigation.
“This is precisely why you shouldn’t use Signal or other insecure channels for official communications,” warned Hany Farid, a digital forensics expert at the University of California, Berkeley. He pointed to the growing number of incidents involving high-ranking U.S. officials and unsecured messaging platforms.
High-Profile AI Scams Becoming More Common
This event follows a string of AI-enabled impersonation hoaxes. In May, President Trump’s Chief of Staff, Susie Wiles, was the target of a similar operation in which her phone contacts received messages and calls that appeared to come from her. In some cases, recipients said they heard her voice, likely synthesized with advanced voice cloning technology.
Earlier this year, a deepfake video of Marco Rubio surfaced in which he appeared to call for cutting off Ukraine’s access to Elon Musk’s Starlink service. The Ukrainian government denied the claim, and the footage was confirmed to be fabricated.
Experts warn that such deepfake campaigns are not only growing in frequency but also in quality.
An Escalating Digital Arms Race
Siwei Lyu, a computer scientist at the University at Buffalo, called the ongoing battle between deepfake creators and digital security experts an “arms race.”
“Just a few years ago, AI-generated media had obvious flaws—robotic voices, extra fingers, unnatural movements,” said Lyu. “Now, those errors are vanishing. It’s becoming nearly impossible for the average person to distinguish a fake from reality.”
Deepfakes and AI-cloned voices are now often indistinguishable from genuine communications. As the technology improves, officials warn, malicious actors will have an expanding toolbox with which to manipulate targets, deceive the public, and access sensitive information.
State Department Responds
Tammy Bruce, a spokesperson for the State Department, confirmed the department is conducting a full investigation. “The department takes seriously its responsibility to safeguard its information and continuously improve its cybersecurity posture to prevent future incidents,” she said, declining to elaborate, citing security reasons.
While officials believe the recent Rubio impersonation was “not very sophisticated” and ultimately unsuccessful, they emphasize the increasing risks. One senior official noted, “Even if nothing was breached this time, these efforts will only get more convincing as the technology improves.”
FBI and Intelligence Community on Alert
The FBI warned earlier this year about the misuse of AI to impersonate senior U.S. officials in a broader campaign to deceive their contacts, gain access to sensitive information, or spread disinformation. Intelligence officials now say the Rubio case fits this pattern.
“There’s no direct cyber threat from this incident,” said a senior official speaking anonymously, “but the exposure risk to sensitive information is real if anyone falls for these impersonations.”
Next Steps: Regulation and Countermeasures
With AI-driven impersonations rising, calls are growing for stronger oversight and technical safeguards. Proposals on the table include:
- Criminal penalties for deepfake impersonation;
- Federal guidelines on communication tools for government officials;
- AI-detection tools integrated into encrypted apps.
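Part of what made the hoax possible is that a display name like “Marco.Rubio@state.gov” carries no cryptographic proof of identity. One class of technical safeguard binds each message to a secret key rather than to a name. A minimal sketch of that idea, using Python’s standard-library `hmac` module (the key and messages here are hypothetical, not any real protocol used by Signal or the State Department):

```python
import hmac
import hashlib

# Hypothetical pre-shared key, distributed out of band between two offices.
SHARED_KEY = b"example-key-distributed-securely"

def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Produce an authentication tag only a key holder can generate."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Accept a message only if its tag checks out, whatever the display name says."""
    return hmac.compare_digest(sign(message, key), tag)

msg = b"Please continue this conversation on Signal."
tag = sign(msg)

print(verify(msg, tag))                        # genuine message passes
print(verify(b"Tampered text.", tag))          # altered content fails
print(verify(msg, sign(msg, b"impostor-key")))  # sender without the key fails
```

An impersonator can copy a name or clone a voice, but without the key cannot produce a valid tag, which is why proposals for official channels center on verified identities rather than on how convincing a message sounds.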
Despite these efforts, the impersonator behind the Rubio deepfake has not yet been identified.
“The more high-level the target, the greater the threat,” said Farid. “This won’t be the last time a deepfake walks through the door pretending to be someone it’s not.”
The impersonation of Secretary Marco Rubio marks a troubling milestone in AI-fueled political deception. As deepfake technology advances, so too does the need for governments, platforms, and the public to adapt to an evolving digital threat landscape.