Facebook quietly makes a big admission


Back in February, Facebook announced a small experiment: it would reduce the amount of political content shown to a subset of users in a few countries, including the United States, and then ask them about the experience. “Our goal is to preserve the ability for people to find and interact with political content on Facebook, while respecting everyone’s appetite for it at the top of their news feed,” said Aastha Gupta, a product management director, in a blog post.

On Tuesday morning, the company provided an update. The survey results are in, and they suggest that users want to see less political content in their feeds. Facebook now intends to repeat the experiment in more countries and teases “more expansions in the coming months.” Depoliticizing people’s feeds makes sense for a company that is perpetually in hot water over its alleged impact on politics. After all, the change was first announced just a month after supporters of Donald Trump stormed the United States Capitol, an episode for which some people, including elected officials, sought to blame Facebook. The shift could end up having major ripple effects for political groups and media outlets that have grown accustomed to relying on Facebook for distribution.

The most important part of Facebook’s announcement, however, has nothing to do with politics.

The basic premise of any AI-powered social media feed – think Facebook, Instagram, Twitter, TikTok, YouTube – is that you don’t have to tell it what you want to see. Just by observing what you like, share, comment on, or simply linger over, the algorithm learns what kind of material catches your interest and keeps you on the platform. Then it shows you more stuff like that.
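To make that feedback loop concrete, here is a minimal, purely illustrative sketch (not any platform’s actual system) of engagement-driven ranking: every observed interaction nudges a user’s affinity toward similar content, and the feed simply sorts candidate posts by predicted engagement. The signal names, weights, and topic labels are assumptions for the example.

```python
# Hypothetical sketch of an engagement-driven feed ranker.
# Names, weights, and topics are illustrative assumptions, not any real platform's code.
from collections import defaultdict

# Assumed relative value of each engagement signal.
SIGNAL_WEIGHTS = {"like": 1.0, "comment": 2.0, "share": 3.0, "dwell_seconds": 0.05}

class EngagementRanker:
    def __init__(self):
        # Learned affinity of the user for each topic, starting neutral.
        self.topic_affinity = defaultdict(float)

    def observe(self, post_topics, signal, value=1.0):
        """Update affinities from an observed interaction (like, share, dwell...)."""
        weight = SIGNAL_WEIGHTS.get(signal, 0.0) * value
        for topic in post_topics:
            self.topic_affinity[topic] += weight

    def score(self, post_topics):
        """Predicted engagement: the user's summed affinity for the post's topics."""
        return sum(self.topic_affinity[t] for t in post_topics)

    def rank_feed(self, candidate_posts):
        """Sort candidate posts by predicted engagement, highest first."""
        return sorted(candidate_posts, key=lambda p: self.score(p["topics"]), reverse=True)

# Usage: what the user engaged with yesterday shapes what ranks highest today.
ranker = EngagementRanker()
ranker.observe(["politics"], "share")             # user shares a political post
ranker.observe(["cooking"], "dwell_seconds", 12)  # user lingers briefly on a recipe
feed = ranker.rank_feed([
    {"id": 1, "topics": ["politics"]},
    {"id": 2, "topics": ["cooking"]},
])
# The political post ranks first: a share outweighs a short dwell time.
```

The point of the toy example is the loop itself: nothing in it asks what the user values, only what the user reacts to, which is exactly the property the rest of the article examines.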

In a sense, this design feature gives social media companies and their apologists a convenient defense against criticism: if certain things are gaining traction on a platform, that’s because it’s what users like. If you have a problem with that, maybe your problem is with the users.

And yet, at the same time, maximizing engagement is at the heart of many critiques of social platforms. An algorithm too focused on engagement can push users toward content that is highly engaging but of low social value. It can feed them a diet of posts that are ever more engaging because they are ever more extreme. And it can encourage the viral spread of false or harmful material, because the system selects first for what will trigger engagement rather than for what ought to be seen. The list of ills associated with engagement-driven design helps explain why neither Mark Zuckerberg, Jack Dorsey, nor Sundar Pichai would admit at a congressional hearing in March that the platforms under their control are built this way. Zuckerberg insisted that “meaningful social interactions” is Facebook’s real goal. “Engagement,” he said, “is just a sign that if we deliver this value, it will be natural for people to use our services more.”

In a different context, however, Zuckerberg has acknowledged that things may not be that simple. In a 2018 post explaining why Facebook demotes “borderline” posts that push up against the platform’s rules without quite breaking them, he wrote: “No matter where we draw the lines for what’s allowed, as content gets closer to that line, people will engage with it more on average, even when they tell us afterwards that they don’t like the content.” But that observation seems to have been confined to the question of how Facebook should enforce its policies on banned content, rather than prompting a broader rethink of how its ranking algorithm is designed.

