A recent report by the Tech Transparency Project (TTP) has revealed a troubling trend on Meta’s platforms, including Facebook and Instagram, where scammers have emerged as some of the largest spenders on political advertisements. These bad actors are leveraging artificial intelligence (AI) technologies, such as deepfake videos featuring prominent U.S. politicians like President Donald Trump, to promote fraudulent government benefits schemes. The scams primarily target vulnerable populations, particularly senior citizens, with promises of fictitious stimulus checks, spending cards, and healthcare payments. According to TTP, these deceptive campaigns have reached tens of thousands of users by exploiting public confusion and inadequate content moderation.
The TTP report identified 63 scam advertisers who collectively spent a staggering $49 million on Meta’s platforms to run more than 150,600 political ads. These ads often prey on older Americans, enticing them with offers of fake financial relief programs. The findings underscore how scammers are capitalizing on technological advances, such as AI-generated deepfake content, to create highly convincing but entirely fraudulent advertisements. TTP noted that Meta’s content moderation policies, which are intended to prevent scams and protect users, have been insufficient to curb this wave of deceptive activity. Despite Meta’s stated commitment to scam prevention, the report suggests that the company’s efforts have fallen short, allowing scammers to operate with relative ease.
Under Meta’s advertising policies, political ads in the U.S. are subject to a stringent authorization process. Advertisers must provide official identification and a valid U.S. mailing address to run such campaigns. However, TTP’s investigation revealed that all 63 identified scam advertisers violated Meta’s policies, resulting in the removal of their ads over the past year. Alarmingly, nearly half of these advertisers were still active on the platforms as of the week of the report’s release, indicating significant gaps in Meta’s enforcement mechanisms. The report highlighted that Meta disabled 35 ad accounts, but in many cases, this action was taken only after the accounts had run dozens or even hundreds of fraudulent ads. Six accounts were particularly egregious, each spending over $1 million before Meta intervened.
One notable example cited in the report involved an advertiser called the Relief Eligibility Center, which ran a deepfake video featuring President Trump in April and May. The video, designed to resemble an official Rose Garden speech, falsely promised Americans “FREE $5,000 checks.” This campaign specifically targeted men and women over the age of 65 across more than 20 U.S. states, exploiting the trust that many seniors place in political figures and government programs. The use of deepfake technology, which allows scammers to create highly realistic but fabricated videos, marks a significant escalation in the sophistication of online fraud. These AI-generated ads are difficult for users to identify as fraudulent, making them particularly dangerous for vulnerable populations.
The rise of such scams is part of a broader surge in online fraud, as evidenced by data from the Federal Trade Commission (FTC). In August, the FTC reported a fourfold increase since 2020 in complaints from older adults who lost $10,000 or more to fraudsters. In some cases, victims lost their entire life savings to scams impersonating government agencies or trusted businesses. The TTP report aligns with these findings, emphasizing that scammers are exploiting public confusion surrounding social safety net programs, such as stimulus checks and healthcare benefits, to deceive users. The combination of advanced AI tools and lax platform oversight has created a perfect storm for fraudulent activity.
Meta has acknowledged the challenge of combating scams, stating in a response quoted by TTP that it is investing in “new technical defenses” to counter the evolving tactics of scammers. However, the company did not immediately respond to requests for comment from Agence France-Presse (AFP) regarding the TTP report. Meta’s public statements suggest an ongoing effort to improve scam detection and prevention, but the persistence of these fraudulent campaigns raises questions about the effectiveness of its current measures. The fact that nearly half of the identified scam advertisers remained active on Meta’s platforms at the time of the report indicates that significant improvements are needed to protect users.
The implications of TTP’s findings extend beyond Meta’s platforms, highlighting broader challenges in regulating online advertising and combating digital fraud. The use of deepfake technology in political ads represents a new frontier in scams, allowing fraudsters to co-opt trusted figures and institutions to lend credibility to their schemes. For seniors, who may be less experienced at spotting online scams, these ads are particularly insidious. The targeting of older Americans also reflects a calculated strategy, as this demographic is more likely to rely on social safety net programs and may be more susceptible to offers promising financial relief.
Fact-checkers have long warned about the prevalence of fake stimulus check offers on social media, but the scale and sophistication of the campaigns uncovered by TTP mark a significant escalation. The report calls for stronger action from Meta to address these violations, including more proactive monitoring of political ads and faster response times to disable fraudulent accounts. It also underscores the need for greater public awareness about online scams, particularly those that exploit trust in government programs or political figures. As AI technology continues to advance, the potential for misuse in fraudulent schemes is likely to grow, making it imperative for platforms like Meta to stay ahead of these evolving threats.
The TTP report serves as a wake-up call for both social media platforms and regulators. While Meta has taken steps to remove fraudulent ads and disable offending accounts, the fact that scammers can spend millions of dollars and reach thousands of users before being stopped suggests that current safeguards are inadequate. The report also highlights the broader societal impact of these scams, particularly on older adults who may lose significant sums of money to fraudsters. As the FTC’s data shows, the financial and emotional toll of these scams can be devastating, with victims often losing their life savings to schemes that exploit their trust.
To address this growing problem, experts suggest a multi-faceted approach. Platforms like Meta must invest in more robust AI-driven detection systems to identify and remove fraudulent ads before they reach large audiences. Additionally, regulators could impose stricter penalties for platforms that fail to adequately police scam activity. Public education campaigns are also critical to help users, especially seniors, recognize and avoid fraudulent offers. By combining technological innovation, regulatory oversight, and user education, it may be possible to curb the rise of AI-driven scams on social media.
In conclusion, the TTP’s findings reveal a sophisticated and alarming trend in online fraud, with scammers using deepfake technology and political ads to exploit vulnerable populations on Meta’s platforms. The $49 million spent by 63 scam advertisers underscores the scale of the problem, while the continued activity of nearly half of these advertisers highlights the challenges in enforcing platform policies. As AI technology continues to evolve, so too must the strategies to combat its misuse. For now, the burden remains on platforms like Meta to strengthen their defenses and protect users from the growing threat of digital scams.

