
A patient with chronic granulomatous disease, a rare inherited immune disorder, displays a package of pills he must take daily at the National Institutes of Health in Bethesda, Md., Sept. 4, 2025. A new report found hundreds of thousands of scam ads for medical products, some of which were illegal or had been deemed dangerous. [Alyssa Schukar/The New York Times]
Two months ago, a sponsored post for a supplement popped up on Facebook with a miraculous claim: “The doctors have been hiding this! Diabetes will disappear in 2 days,” the advertisement boasted in German. “Finally, I was able to abandon Metformin which I had been taking for five years.” At the bottom of the post, a button directed users to “shop now” for the pills.
This kind of advertisement is not unusual on Facebook, according to a new report published by Reset Tech, a global public policy nonprofit. Researchers documented hundreds of thousands of similar sponsored posts selling unregulated health products to EU users on the social media platform between 2023 and 2026. Many of the products, like the one advertised for diabetes, were illegal or had been deemed dangerous by health authorities.
Meta has policies that are designed to stop advertisements like these from propagating on the platform. For example, it forbids ads that claim to “cure, heal or eliminate” incurable diseases such as diabetes. But the researchers found that enforcement of those policies was “inconsistent, retroactive and delayed.”
The report found that the accounts publishing those ads were frequently allowed to stay online, pumping out thousands of ads for unproven treatments for serious conditions like cancer and hypertension, said Aleksandra Atanasova, an open-source intelligence researcher at Reset Tech who led the project.
“They are cunningly exploiting pain points in very vulnerable people,” she said.
Atanasova and her team sifted through Meta’s ad library, a public database of all the ads running on Meta’s platforms, and uncovered more than 350,000 Facebook ads hawking 390 unregulated health supplements.
Daniel Roberts, a Meta spokesperson, said the company had removed the ads and pages included in the report that Meta determined violated its policies.
“We don’t allow misleading ads that claim to cure incurable diseases, prohibit the promotion of unsafe products and supplements, and have strict restrictions on how prescription medicine can be advertised on our services,” he said.
While the report focused on campaigns targeting European users, Atanasova said the problem is global.
“It is happening everywhere,” she said. “The problem is systemic.”
In recent years, Meta has faced intense backlash for its handling of misleading content on its platforms. In 2025, Meta dismantled Facebook’s long-standing fact-checking program in the United States and announced that it would rely on users to flag falsehoods.
The Consumer Federation of America, a consumer advocacy nonprofit, and the U.S. Virgin Islands attorney general’s office have sued Meta for allegedly allowing scam ads to proliferate on Facebook and Instagram. Roberts said the allegations “misrepresent the reality of our work, and we will fight them.”
It’s unclear how many people saw or clicked on the health ads documented by Reset Tech. The report found that the ads collectively reached 878 million EU users, though that figure may count overlapping groups of people multiple times.
Even if only a small fraction of users purchased the products, the stakes are high.
One of the products advertised for weight loss contained a drug that was taken off the market in the United States because it raised the risk of heart attack and stroke in some people. The capsules also contained a chemical that studies have suggested may cause cancer.
The researchers found tens of thousands of ads focused on chronic illnesses like diabetes (“Diabetes disappears! Blood sugar drops in 2 days!”) and psoriasis (“How can you get rid of this deadly autoimmune disease? Solve the problem at home without visiting the doctor!”). And they identified ads for weight loss supplements that pictured before-and-after transformations and women pinching fat on their stomachs, all of which appear to violate Meta’s advertising rules.
Roughly a third of the advertisements that Atanasova and her team detected had not been taken down by the time they published the report. When the company did crack down, it was often by taking down individual ads rather than banning the account that published them.
That’s a problem, she explained, because her group found evidence that many of these ads were not created by individual bad actors but by networks of accounts that had similar user names and pumped out ads with identical slogans. Leaving those networks intact makes it easy for the accounts to quickly replace any ads that are taken down.
“The next day and the next day, they just launch 100 new ads,” said Cornelius Hirsch, head of research products at Reset Tech.
Facebook is not the only platform with this problem. The researchers found a few thousand similar ads on Google. Erica Walsh, a spokesperson for Google, said the company takes “extensive measures” to keep scam ads off the platform, including suspending accounts.
But the scale of the problem appears to be orders of magnitude larger on Facebook, which, the researchers hypothesized, could be because the platform’s ads cost less and can be customized to target specific groups of people. They also noted that Meta’s public database of paid advertisements is more transparent than Google’s, which made it easier for them to find these ads.
The researchers found evidence of scammers’ attempts to evade Meta’s detection; they sometimes shrunk their ads and nestled them in the corner of a larger, innocuous-looking picture, like a photo of an animal, or sneaked them in at the end of a long carousel of harmless ads for furniture and clothing.
Hirsch said he found it difficult to believe that one of the wealthiest companies in the world could be outmaneuvered by such tactics. He noted that a small team at Reset Tech, working with relatively few resources, was able to uncover hundreds of thousands of these ads.
Luca Luceri, a researcher who studies influence campaigns on social media platforms at the University of Southern California, agreed that it would not be technically challenging for a company like Meta to crack down on these ad campaigns. But he added that the companies may have incentives to leave them up. A Reuters investigation last year found that Meta projected that 10% of its annual revenue would come from running advertising for scams and banned goods. A spokesperson told Reuters at the time that the documents reviewed “present a selective view that distorts Meta’s approach to fraud and scams.”
Meta’s handling of such ads is “a matter of priorities,” Luceri said. “Since the beginning of 2025, it looks like Meta’s priorities are not going toward moderation.”
This article originally appeared in The New York Times