
Facebook, the Company You Love to Hate

Frances Haugen, the Facebook whistleblower, has leaked thousands of pages of internal company documents showing that Facebook, in order to increase its profits, has continually operated against the public interest. It has spread disinformation on crucial subjects like Covid and election integrity, undermined the self-images of vulnerable teens, further polarized the American electorate and supported violent extremists like the ones who attacked the Capitol on January 6.

The “Facebook Papers” describe the company’s control of what each of us sees in our Facebook feed: sophisticated algorithms and machine learning infer from our past behavior who we are, how we behave and what we like and don’t like. Facebook then shows us the messages its research suggests we are most likely to want to see and respond to.

Facebook says it just wants to help individuals make the personal connections we want to make. But the leaked documents show that its real goal is to maximize the volume of user engagement of any kind because that’s what maximizes the volume of ads it can show and the revenue it makes from those ads.

It’s all about money, not your personal connections.

Facebook did not set out to become an evil empire. Who could criticize a creative new way to encourage and facilitate friendships? I could live with Facebook exploring my purchasing history in order to target ads for me if that meant that I could widen and deepen connections that I wanted to have. Facebook needed some way of making money and, in the beginning, a few ads seemed harmless enough, at least to me.

But once Facebook figured out the almost limitless income streams it could generate from online ads, simply exploring the purchasing history of its users to target its ads became, shall we say, less interesting. The company began spending millions to figure out how to bore much deeper into our private lives than our past purchases. It began scanning our social media behavior and using sophisticated machine-learning techniques to generate secret “scores” that ranked which messages to put on our individual newsfeeds so as to maximize our Facebook activity and hence the company’s ad revenues.

Facebook’s use of emojis was a brilliant way to increase traffic. Posts with lots of reaction emojis tend to get more attention, and keeping users engaged is the key to Facebook’s business model.

Facebook’s real evil genius, however, was figuring out that not all emoji had the same power to increase traffic. Emojis signaling anger did a far better job of attracting people than did happy faces or teary ones.

So Facebook’s ranking algorithms counted reaction emojis, anger included, as five times more valuable than an ordinary like in deciding how much push the system would give the posts that attracted them.
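
To make the mechanism concrete, here is a minimal sketch of how a reaction-weighted engagement score of this kind might be computed. The weights, names and function below are illustrative assumptions loosely based on the public reporting, not Facebook’s actual ranking code.

```python
# Illustrative sketch only: hypothetical reaction weights, loosely based on
# reporting that emoji reactions (anger included) counted about five times
# as much as a plain like. Nothing here is Facebook's real code.
REACTION_WEIGHTS = {
    "like": 1,
    "love": 5,
    "haha": 5,
    "wow": 5,
    "sad": 5,
    "angry": 5,
}

def engagement_score(reaction_counts):
    """Weighted sum of a post's reactions.

    A post that draws many angry reactions outscores one with the same
    number of plain likes, so a ranking system built on this score would
    give it far more push.
    """
    return sum(REACTION_WEIGHTS.get(name, 1) * count
               for name, count in reaction_counts.items())

# A post with 100 angry reactions scores five times higher than one with 100 likes.
print(engagement_score({"like": 100}))   # 100
print(engagement_score({"angry": 100}))  # 500
```

The toy example makes the incentive plain: once an angry reaction counts for more than a like, the posts most likely to provoke anger are the posts most likely to be amplified.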

But Facebook’s own researchers quickly found that posts that sparked angry reactions were also much more likely to include misinformation, toxicity and propaganda, all of which were being weaponized by political figures and extremist groups.

Whistleblower Haugen described how some employees, alarmed at Facebook’s lack of a “moral compass,” pleaded with company leaders to address its use of anger emojis, but “again and again they were ignored.” The practice ended only last September, when it finally began to generate public scrutiny.

Facebook’s use of anger emojis is just one of the many levers the company has used to manipulate the flow of information and conversation in order to maximize profit.

The Wall Street Journal ran a series last fall showing that Facebook knew that its Instagram division was toxic for teens, pushing an ideal body image and glamorized versions of people’s lives, leaving others—particularly vulnerable youngsters—to feel that they don't measure up. All this angst generated plenty of engagement—and ads for products to “fix” the image problems Instagram had helped generate.

Other critics have flagged the use of Facebook by drug smugglers, human traffickers, and other criminals.

Evidence given before the House Committee investigating the January 6 insurrection strongly suggests that Facebook did little, if anything, to keep Proud Boys, QAnon followers and other extremists from spreading misinformation about the election, calling for violence, and using Facebook to plan and carry out their attack.

What to do?

We’ve been here before. While the technology Facebook represents is new, the concerns it raises echo public discussion of late nineteenth century industrialization, which was also the product of new technologies. At stake then was whether the concentration of economic power in a few hands would destroy our democracy by giving some rich men far more power than anyone else. Americans eventually solved that problem by reining in the Wild West mentality of the early industrialists, protecting the basic rights of workers, and regulating business practices.

The leaked Facebook documents suggest there are places where Facebook’s excesses could be reined in as the overreaches of industrialization eventually were.

We need tough, binding national legislation—a new set of rules for a new threat.

We can start by breaking up Facebook and other Internet behemoths just as Standard Oil was broken up over a century ago.

We can curb the power of tech companies by altering Section 230 of the Communications Decency Act, the current law that foolishly gives sites like Facebook broad immunity from liability for damage caused by their users. The challenge here is constitutional. Current proposals by Democrats call on tech companies to delete or demote false content in order to retain their immunity. Today’s right-leaning Supreme Court might well strike down such legislation as a violation of the First Amendment.

We can demand transparency. Facebook should be required to disclose the safeguards it has put in place, or intends to put in place, so that Congress and the public can evaluate and oversee its practices. “Sunlight is the best disinfectant,” and real transparency should not have to depend on the courage of whistleblowers.

We can force Facebook to end its amoral, predatory practices by hitting the company where it hurts: organizing boycotts of advertisers that refuse to pull their ads until Facebook cleans up its act.

Education may be the most powerful response to Facebook’s excesses. Americans have got to get smarter about identifying, and not falling for, fake news and inflammatory content. How dumb is it to rely for information on Facebook, a company whose single-minded goal is to maximize profits by telling us what it thinks we want to hear, then rattling our cages to increase engagement?

Training in evaluating information from social media has got to start when people are young. From elementary school onward, for example, Finnish children practice sorting online content into fact and fiction, assessing media bias and deciphering how clickbait preys on users’ emotions. We need a similar program in every American elementary school.

Even if the answers are complex, the questions are simple:

Why are Americans still spending so much of their lives on a platform that leaves them unhappy and divided?

Will we let social media giants with no concern except for their profits destroy our democracy by spreading disinformation and sowing division? Or will we find the patience and courage to critically evaluate the information swirling around us, separating truth from lies, facts from opinions, science from emotions and reasonable discussions from violent rants?

The future—your future, my future—depends on it.