Such questions are unavoidable after Frances Haugen’s Congressional testimony this week. The former Facebook employee, now the pre-eminent whistleblower against the social media giant, says the company prioritizes profits over the mental health of users, especially teen girls, and leans on algorithms that amplify fake or misleading news.
Facebook CEO Mark Zuckerberg fired back, asserting that Haugen’s charges make no sense and that advertisers would desert in droves if the company were peddling hate speech and deceit.
Granted, a trip through most Facebook feeds reveals a lot of innocuous content. Currently, my feed shows my mother-in-law congratulating her granddaughter on a volleyball win, images of old comic books, and a meme about Kermit the Frog’s weight gain. Hardly the stuff of Congressional hearings or parental nightmares.
But I don’t have to scroll much further to find posts by “news” organizations that lean far right and far left, where content is so biased it makes Fox News or MSNBC appear fair and balanced. And the more I click on this material, the more like-minded content Facebook will send me, as its faceless bots determine how to keep me there longer.
Even this would be no problem if:
- Users were consuming Facebook material as one part of a balanced news diet that included newspapers, magazines, professional journals, network and cable news, reliable websites, podcasts and blogs.
- Facebook were not blending its personalization and amplification algorithms to promote engagement over all else.
This isn’t a Facebook-only problem. A Pew Research Center study last month noted that 19 percent of respondents said they “often” got news from social media (which includes, but is not limited to, Facebook) in 2021, and 29 percent said they “sometimes” did.
These numbers are down slightly from the year before, but by such small margins that it’s hard to draw much encouragement from the decrease.
One eyebrow-raising exception is TikTok: 29 percent of its users access news through the service, up from 22 percent in 2020. TikTok’s demographics skew young, and even a casual scroll through its largely user-created content demonstrates that much of this information is served with unhealthy dollops of sass and cynicism.
How to stem this tide of social media snark and misinformation has been the subject of strong debate. Dissenting voices should not be stifled, lest we run afoul of the First Amendment and the protections it affords.
But a certain percentage of Facebook and other social media content, designed by multiple bad actors specifically to sow division, and then amplified wittingly or unwittingly by the companies’ own policies of engagement, would seem to fall outside the First Amendment purview, broad as that may be.
Roddy Lindsay, a former data scientist at Facebook, writing in The New York Times, suggests changes to Section 230, the 1996 law that shields social media corporations from lawsuits over the content their users post. He believes that if Facebook and other companies could be sued for libel and illegal content, they would drop so-called “engagement-based ranking,” pushing the trash back to the bottom.
Lindsay’s idea is more feasible than my plan: allowing negative publicity and the free market to sink Facebook and other engagement-based platforms without government intervention. If more users abandoned these sites voluntarily, their influence would wane, perhaps inspiring them to reinvent themselves in more prosocial ways.
But if I’m being honest, my last attempt to leave Facebook lasted all of three days before I came crawling back. I doubt I’m alone in such failed attempts. Expecting people to voluntarily walk away from the very drug that has addicted them is not realistic.
If we can’t quit social media on our own, and if companies won’t provide more transparency and demonstrate a sincere willingness to stop spewing flat-out lies, then the government needs to get involved.
Opening Facebook to litigation is one way to hold the company that holds the megaphone responsible for the shouting it amplifies.