By Gilad Edelman via Wired
The company’s new approach to political content acknowledges that engagement isn’t always the best way to measure what users value.
Back in February, Facebook announced a little experiment. It would reduce the amount of political content shown to a subset of users in a few countries, including the US, and then ask them about the experience. “Our goal is to preserve the ability for people to find and interact with political content on Facebook, while respecting each person’s appetite for it at the top of their News Feed,” Aastha Gupta, a product management director, explained in a blog post.
On Tuesday morning, the company provided an update. The survey results are in, and they suggest that users appreciate seeing political stuff less often in their feeds. Now Facebook intends to repeat the experiment in more countries and is teasing “further expansions in the coming months.” Depoliticizing people’s feeds makes sense for a company that is perpetually in hot water for its alleged impact on politics. The move, after all, was first announced just a month after Donald Trump supporters stormed the US Capitol, an episode that some people, including elected officials, sought to blame Facebook for. The change could end up having major ripple effects for political groups and media organizations that have gotten used to relying on Facebook for distribution.
The most significant part of Facebook’s announcement, however, has nothing to do with politics at all.
The basic premise of any AI-driven social media feed—think Facebook, Instagram, Twitter, TikTok, YouTube—is that you don’t need to tell it what you want to see. Just by observing what you like, share, comment on, or simply linger over, the algorithm learns what kind of material catches your interest and keeps you on the platform. Then it shows you more stuff like that.
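In skeletal form, the logic of an engagement-optimized feed looks something like the following sketch. The signal names, weights, and posts are hypothetical illustrations, not any platform's actual system: the model predicts how likely each action is, and the ranker simply rewards whatever it predicts you will engage with.

```python
# Hypothetical sketch of an engagement-optimized feed ranker.
# Signal names, weights, and posts are illustrative, not Facebook's system.

def engagement_score(signals, weights):
    """Score a post as a weighted sum of predicted engagement signals."""
    return sum(weights[name] * prob for name, prob in signals.items())

def rank_feed(posts, weights):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts,
                  key=lambda p: engagement_score(p["signals"], weights),
                  reverse=True)

# Each post carries model-predicted probabilities of user actions.
posts = [
    {"id": "calm_update",  "signals": {"like": 0.30, "comment": 0.05, "share": 0.02}},
    {"id": "outrage_bait", "signals": {"like": 0.10, "comment": 0.40, "share": 0.25}},
]
weights = {"like": 1.0, "comment": 2.0, "share": 3.0}  # engagement-first weighting

print([p["id"] for p in rank_feed(posts, weights)])
```

Note that nothing in this loop asks whether the user is glad to have seen the post; a provocative item that reliably draws comments and shares outranks a quieter one the user might actually prefer.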
In one sense, this design feature gives social media companies and their apologists a convenient defense against critique: If certain stuff is going big on a platform, that’s because it’s what users like. If you have a problem with that, perhaps your problem is with the users.
And yet, at the same time, optimizing for engagement is at the heart of many of the criticisms of social platforms. An algorithm that’s too focused on engagement might push users toward content that is highly engaging but of low social value. It might feed them a diet of posts that are ever more engaging because they are ever more extreme. And it might encourage the viral proliferation of material that’s false or harmful, because the system is selecting first for what will trigger engagement, rather than what ought to be seen. The list of ills associated with engagement-first design helps explain why neither Mark Zuckerberg, Jack Dorsey, nor Sundar Pichai would admit during a March congressional hearing that the platforms under their control are built that way at all. Zuckerberg insisted that “meaningful social interactions” are Facebook’s true goal. “Engagement,” he said, “is only a sign that if we deliver that value, then it will be natural that people use our services more.”
In a different context, however, Zuckerberg has acknowledged that things might not be so simple. In a 2018 post, explaining why Facebook suppresses “borderline” posts that try to push up to the edge of the platform’s rules without breaking them, he wrote, “no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average—even when they tell us afterward they don't like the content.” But that observation seems to have been confined to the issue of how to implement Facebook’s policies around banned content, rather than rethinking the design of its ranking algorithm more broadly.
That’s why the company’s latest announcement is quietly such a big deal. It marks perhaps the most explicit recognition to date by a major platform that “what people engage with” is not always synonymous with “what people value,” and that this phenomenon is not limited to stuff that threatens to violate a platform’s rules, like pornography or hate speech.
The new blog post, as with all Facebook announcements, is pretty vague, but it’s possible to read between the lines. “We’ve also learned that some engagement signals can better indicate what posts people find more valuable than others,” Gupta writes. “Based on that feedback, we’re gradually expanding some tests to put less emphasis on signals such as how likely someone is to comment on or share political content.” Translation: Just because someone comments on something, or even shares it, doesn’t mean it’s what they would prefer to see in their timeline. “At the same time, we’re putting more emphasis on new signals such as how likely people are to provide us with negative feedback on posts about political topics and current events when we rank those types of posts in their News Feed.” Translation: If you want to know what people like, ask them. The answers may differ from what a machine learning algorithm learns by silently monitoring their behavior.
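The reweighting Gupta describes can be pictured as a small change to the scoring formula: shrink the weights on predicted comments and shares, and add a penalty term for predicted negative feedback. This is a minimal sketch under those assumptions; the signal names and numbers are invented for illustration and are not Facebook's actual values.

```python
# Illustrative reweighting: de-emphasize comment/share predictions and
# subtract a penalty for predicted negative feedback (e.g. a "hide" click).
# All names and numbers are hypothetical, not Facebook's actual values.

def score(signals, weights):
    """Weighted sum of predicted signals; unknown signals count as zero."""
    return sum(weights.get(name, 0.0) * prob for name, prob in signals.items())

# A provocative political post: likely to be commented on and shared,
# but also likely to draw negative feedback when the user is asked.
post = {
    "like": 0.10,
    "comment": 0.40,
    "share": 0.25,
    "negative_feedback": 0.30,
}

engagement_first = {"like": 1.0, "comment": 2.0, "share": 3.0}
rebalanced = {"like": 1.0, "comment": 0.5, "share": 0.5, "negative_feedback": -4.0}

print(score(post, engagement_first))  # the old weighting rewards this post
print(score(post, rebalanced))        # the same post now scores much lower
```

The same predicted behavior produces opposite ranking outcomes depending on which signals the formula trusts, which is the substance of the change Facebook is describing.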
This is pretty obvious to anyone who has ever used social media. When I scroll Facebook and see the latest rant by my one anti-vaccine contact, I can’t help but read in horror. Facebook registers that fact and makes sure to push that guy’s next post to the top of my News Feed the next time I open the app. What the AI doesn't understand is that I feel worse after reading those posts and would much prefer to not see them in the first place. (I finally, belatedly, muted the account in question.) The same goes for Twitter, where I routinely allow myself to be enraged by tweets before recognizing that I’m wasting time doing something that makes me miserable. It’s a bit like food, actually: Place a bowl of Doritos in front of me, and I will eat them, then regret doing so. Ask me what I want to eat first, and I’ll probably request something I can feel better about. Impulsive, addictive behavior doesn’t necessarily reflect our “true” preferences.
As with any policy announcement from Facebook, the real question is how it will be implemented, and given the company’s lackluster track record on transparency, we may never get answers. (Very basic question: What counts as “political”?) It would be good, in theory, if social media companies began taking the divide between engagement and what users value more seriously, and not just for political content. Perhaps Facebook’s latest announcement will mark a shift in that direction. But it’s also possible that Facebook is behaving opportunistically—using some vague research findings as an excuse to lower its own political risk profile, rather than to improve users’ experience—and will refuse to apply the lesson more broadly. Nicole Bonoff, a researcher at Twitter, suggested as much, and argued that Facebook’s data may not be reliable. “User surveys, which tend to ask ungrounded hypotheticals about ‘politics,’ elicit negative responses,” she tweeted. “This is due to a combination of social desirability bias, differing definitions of politics & stereotypes about politics on social media.”
So the effects of the new policy remain to be determined. There’s a difference, after all, between what someone says and what they do. At least Facebook appears to have learned that lesson.