How social media companies like X, Meta, and YouTube profit from misinformation has come under intense scrutiny. A new investigation reveals how these platforms profit from lies during real-world disasters. When floods devastated central Texas in July 2025, killing more than 130 people, many turned to social media for urgent updates. However, instead of verified emergency alerts, users were overwhelmed by conspiracy theories about weather manipulation, government sabotage, and other baseless claims.
According to a 55-page report by the Center for Countering Digital Hate (CCDH), these misleading posts were not just random. They were amplified, monetised, and often created by verified users. The study found that 88 percent of misleading posts on X came from verified accounts, compared to 73 percent on YouTube and 64 percent on Meta. These verified accounts, marked by blue check marks, gained wide reach through algorithms and earned money through monetisation tools. Shockingly, discredited conspiracy theorist Alex Jones reached more users during the Los Angeles wildfires than FEMA and 10 major news outlets combined.
Jones, who spread false claims about the fires being part of a government land grab and FEMA stealing food, amassed more than 408 million views. His content, shared without warnings or corrections, illustrates how platforms profit from fear and chaos. The report criticises these platforms for undermining trust in first responders while generating revenue from content that fuels panic.
Despite public promises to fight misinformation, the study found that social media companies are failing to enforce fact-checking. On X, Community Notes appeared on only 1 percent of false posts. Meta failed to label 98 percent of misinformation across Facebook and Instagram. YouTube applied zero fact-check labels to misleading videos examined in the report. These companies have even scaled back their efforts. Meta shut down its third-party fact-checking program in the U.S. and replaced it with a crowd-annotated model that often delays or prevents corrections.
The CCDH report also outlines how Meta has weakened transparency. In early 2025, Meta discontinued CrowdTangle, a vital tool used by journalists and researchers to monitor viral content. Its replacement, Meta Content Library, offers limited access and functionality. At the same time, Meta expanded its paid verification scheme. For $14.99 a month, users can now purchase a blue checkmark through Meta Verified, which boosts visibility and enables ad revenue sharing. Alarmingly, some of the accounts spreading false information during Hurricane Helene belonged to users enrolled in Meta’s monetisation program.
X, under Elon Musk, has adopted a business model that critics describe as “pay-to-misinform.” Users pay between $3 and $40 monthly for verified status and monetisation perks. The CCDH identified five major accounts spreading weather misinformation through X’s paid system. Combined, these accounts had over 14 million followers. Despite having access to Community Notes, the flagged posts remained uncorrected. According to the report, “X is not just failing to stop lies, it is actively promoting and profiting from them.”
The real-world impact is devastating. After the Texas floods, conspiracy content linking cloud seeding to the disaster gained over eight million views. One influencer suggested meteorologists were behind the floods, leading to threats against weather agencies. In Oklahoma City, a man destroyed a weather radar system he believed was controlling the storm. Similarly, during Hurricane Helene, false claims about FEMA sparked public confusion and threats against aid workers. Some disaster victims even avoided seeking help after believing online claims that aid had been redirected to undocumented migrants.
The CCDH links this trend to a wider movement called the “New Climate Denial.” Unlike traditional denial, which rejects human-caused climate change, this version focuses on discrediting climate solutions, blaming disasters on sabotage or policy failures. These narratives are more emotionally charged, often blaming wildfires on land grabs or hurricanes on secret technology. Such content spreads quickly, outperforming scientific information in reach and engagement due to its dramatic nature.
Ultimately, the report calls on platforms to take urgent action. CCDH recommends reinstating expert fact-checkers, increasing transparency for researchers, auditing algorithms that reward viral lies, and blocking monetisation of harmful content. “We cannot wait for the next disaster to demand accountability,” warns CCDH CEO Imran Ahmed. “The stakes are too high, the cost too real.”
For countries like Uganda, this warning is timely. As climate disasters like floods and droughts increase, misinformation will follow. Already, Ugandan users rely heavily on global platforms like Facebook, X, and YouTube for updates. If these platforms continue to prioritise profits over truth, local misinformation will become a dangerous part of the climate crisis. Fake news will not just mislead; it will erode trust, delay emergency response, and put vulnerable lives at risk.