Meta, the parent company of Facebook and Instagram, is ending its third-party fact-checking program in the U.S., claiming the change will promote free speech. Mark Zuckerberg says the move is meant to reduce censorship and errors, but calling it a “relaxation” of content moderation is misleading. This isn’t about loosening the reins; it’s about purposefully allowing harmful and hateful content to flourish under the guise of free expression.
As someone who worked as a content moderator for Google’s YouTube, I’ve seen the real impact of these policies. In the CSAM (child sexual abuse material) department, we regularly encountered content so violent, racist, and hateful that it could only be described as soul-crushing. Social media platforms are anything but free-speech havens: across presidential administrations, governments have set the guidelines these companies follow, so no platform truly operates without oversight.
Facebook has a history of using its platform in ethically questionable ways. In 2012, the company ran an experiment that manipulated users’ emotions by adjusting their feeds to show more negative or more positive posts. This was done without their consent, to study how it would affect engagement. The result? People who saw more negative posts were emotionally harmed, but they stayed on the platform longer, arguing with what they saw. That’s the real motivation here: keeping people scrolling, no matter the cost to their mental health or society at large.
Now, by doing away with fact-checking, Meta is opening the floodgates for harmful misinformation. We’ve already seen what happens when disinformation goes unchecked: tensions rise, divisions deepen, and hateful rhetoric spreads like wildfire. For example, people can now claim to have “seen studies” suggesting certain groups are smarter than others. These false and harmful ideas have been debunked repeatedly, yet they will now circulate freely, just as books once pushed narratives about the inferiority of entire groups. This isn’t just careless; it’s dangerous.
The timing is also suspicious. Meta’s CMO recently pointed to “changing vibes in America” and the influence of Trump as reasons for the shift, but that’s not the full story. Allowing more harmful content on its platform aligns with a broader strategy to increase engagement and profits, even if it means fueling division and perpetuating hate.
To make matters worse, Meta is rolling out an AI-powered chatbot described as a “Black queer momma of 2 & truth-teller.” While the intention might seem progressive, the execution raises concerns. This bot, created and controlled by individuals far removed from the identity it represents, risks perpetuating stereotypes and trivializing the real experiences of Black women. Black women already face disproportionate harassment online, and this move feels like another way to exploit their identities for corporate gain while ignoring the harm it could cause.
We live in a world where social media platforms are the primary source of news for many people. What happens when those platforms deliberately allow harmful content to spread unchecked? It’s clear Meta’s decision isn’t about protecting free speech. It’s about profit, even if that means amplifying hate, division, and harm.
When I think back to the violent and hateful content I encountered as a content moderator, I can’t help but feel this decision will only add fuel to an already out-of-control fire. Social media has the power to connect and inform, but used irresponsibly, it becomes a weapon. That’s exactly what Meta is doing: weaponizing its platform while pretending it’s about freedom.