Empiricrafting
Justified Posteriors
Did Meta's Algorithms Swing the 2020 Election?

A major preregistered study of Facebook recommendations and political behavior changes our view


We hear it constantly: social media algorithms are driving polarization, feeding us echo chambers, and maybe even swinging elections. But what does the evidence actually say?

In the darkest version of this narrative, social media platform owners are shadow king-makers and puppet masters who can select the winner of a close election by selectively promoting narratives. Amorally, they disregard the heightened political polarization and anxiety that result from their manipulation of the public psyche.

In this episode, we dive into an important study published in Science ("How do social media feed algorithms affect attitudes and behavior in an election campaign?", https://www.science.org/doi/10.1126/science.abp9364) that tackled this question. Researchers worked with Meta to experimentally change the feeds of tens of thousands of Facebook and Instagram users in the crucial months surrounding the 2020 election.

This episode features one of the biggest belief swings in the history of Justified Posteriors!

The Core Question: What happens when you swap out the default, engagement-optimized algorithmic feed for a simple, reverse-chronological one showing posts purely based on recency?

Following our usual format, we lay out our priors before dissecting the study's findings:

  • Time Spent: The algorithmic feed kept users scrolling longer.

  • Content Consumed: The types of content changed in interesting ways. The chronological feed users saw more posts from groups and pages, more political content overall, and paradoxically, more content from untrustworthy news sources.

  • Attitudes & Polarization: The study found almost no effect on key measures like affective polarization (how much you dislike the other side), issue polarization, political knowledge, or even self-reported voting turnout.

So, is the panic over algorithmic manipulation overblown?

While the direct impact of this specific algorithmic ranking vs. chronological feed seems minimal on core political beliefs in this timeframe, other issues are at play:

  • Moderation vs. Ranking: Does this study capture the effects of outright content removal or down-ranking (think the Hunter Biden laptop controversy)?

  • Long-term Effects & Spillovers: Could small effects accumulate over years, or did the experiment miss broader societal shifts?

  • Platform Power: Even if this comparison yields null results, does it mean platforms couldn't exert influence if they deliberately tweaked algorithms differently (e.g., boosting a specific figure like Elon Musk on X)?

(Transcript below)

🗞️ Subscribe for upcoming episodes, post-podcast notes, and Andrey’s posts:

💻Follow us on Twitter:

@AndreyFradkin https://x.com/andreyfradkin?lang=en

@SBenzell https://x.com/sbenzell?lang=en


Transcript:

Andrey: We might have naively expected that the algorithmic feed serves people their "red meat"—very far-out, ideologically matched content—and throws away everything else. But that is not what is happening.

Seth: Welcome everyone to the Justified Posterior Podcast, where we read and are persuaded by research on economics and technology so you don't have to. I'm Seth Benzell, a man completely impervious to peer influence, coming to you from Chapman University in sunny Southern California.

Andrey: And this is Andrey Fradkin, effectively polarized towards rigorous evidence and against including tables in the back of the article rather than in the middle of the text.

Seth: Amazing. And who's our sponsor for this season?

Andrey: Our sponsor for the season is the Digital Business Institute at the Questrom School of Business at Boston University. Thanks to the DBI, we're able to provide you with this podcast.

Seth: Great folks. My understanding is that they're sponsoring us because they want to see information like ours out there on various digital platforms, such as social media, right? Presumably, Questrom likes the idea of information about them circulating positively. Isn't that right?

Andrey: Oh, that's right. They want you to know about them, and by virtue of listening to us, you do. But I think, in addition, they want us to represent the ideal of what university professors should be doing: evaluating evidence and contributing to important societal discussions.

Andrey: So with that set, what are we going to be talking about today?

Seth: Well, we're talking about the concept of participating in important societal discussions itself. Specifically, we're discussing research conducted and published in Science, a prestigious journal. The research was conducted on the Facebook and Instagram platforms, trying to understand how those platforms are changing the way American politics works.

The name of the paper is, "How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?" by Guess et al. There are many co-authors who I'm sure did a lot of work on this paper; like many Science papers, it's a big team effort. See the show notes for the full credit – we know you guys put the hours in.

This research tries to get at the question, specifically in the 2020 election, of to what extent decisions made by Mark Zuckerberg and others about how Facebook works shaped America's politics. It's an incredibly exciting question.

Andrey: Yeah, this is truly a unique study, and we'll get into why in just a bit. But first, as you know, we need to state our prior beliefs about what the study will find. We're going to pose two claims: one narrow and one broader. Let's start with the narrow claim.

Seth: We don't state claims, Andrey; we hypothesize.

Andrey: Pardon my imprecision. A hypothesis, or question, if you will: How did the algorithmic feed on Facebook and Instagram affect political attitudes and behavior around the time of the 2020 presidential election? Seth, what is your prior?

Seth: Alright, I'm putting myself in a time machine back to 2020. It was a crazy time. The election was at the end of 2020, and the pandemic really spread in America starting in early 2020. I remember people being hyper-focused on social media because everyone was locked in their houses. It felt like a time of unusually high social media-generated peer pressure, with people pushing in both directions for the 2020 election. Obviously, Donald Trump is a figure who gets a lot of digital attention – I feel like that's uncontroversial.

On top of that, you had peak "woke" culture at that time and the Black Lives Matter protests. There was a lot of crazy stuff happening. I remember it as a time of strong populist forces and a time where my experience of reality was really influenced by social media. It was also a time when figures like Mark Zuckerberg were trying to manage public health information, sometimes heavy-handedly silencing real dissent while trying to act for public welfare.

So, that's a long wind-up to say: I'm very open to the claim that Facebook and Instagram had a thumb on the scale during the 2020 election season, broadly in favor of chaos or political polarization – BLM on one side and MAGA nationalism on the other. At the same time, maybe vaguely lefty technocratic, like the "shut up and listen to Fauci" era. Man, I actually have a pretty high prior on the hypothesis that Facebook's algorithms put a real thumb on the scale. Maybe I'll put that around two-thirds. How about you, Andrey?

Andrey: In which direction, Seth?

Seth: Towards leftiness and towards political chaos.

Andrey: And what variable represents that in our data?

Seth: Very remarkably, the paper we studied does not test lefty versus righty; they do test polarization. I don't want to spoil what they find for polarization, but my prediction was that the algorithmic feed would lead to higher polarization. That was my intuition.

Andrey: I see. Okay. My prior on this was very tiny effects.

Seth: Tiny effects? Andrey, think back to 2020. Wasn't anything about my introduction compelling? Don't you remember what it was like?

Andrey: Well, Seth, if you recall, we're not evaluating the overall role of social media. We're evaluating the role of a specific algorithm versus not having an algorithmic feed and having something else – the reverse chronological feed, which shows items in order with the newest first. That's the narrow claim we're putting a prior on, rather than the much broader question of what social media in general did.

Seth: Yeah, but I guess that connects to my censorship comments. To the extent that there is a Zuckerberg thumb on the scale, it's coming through these algorithmic weightings, or at least it can come through that.

Andrey: I think we can come back to that. My understanding of a lot of platform algorithm stuff, especially on Facebook, is that people mostly get content based on who they follow – people, groups, news outlets. The algorithm shifts those items around, but in the end, it might not be that different from a chronological feed. Experts in this field were somewhat aware of this already. That's not to say the algorithmic feed had no effects, but I expected the effects to be very small.

Another aspect is how our political beliefs are formed. Yes, we spend time online, but we also talk to friends, read the news, get chain emails from our crazy uncle (not my crazy uncle, but people do).

Seth: One thing we'll get to see is what people substitute into when we take away their Facebook algorithmic feed.

Andrey: Yes. Furthermore, political beliefs generally don't change very frequently. I don't have a specific study handy, but it's fairly understood. There are exceptions, like preference cascades, but generally, if you believe markets work well, you won't suddenly change your mind, and vice versa. This holds for many issues. Imagine polling people on who they voted for in 2016 versus 2020 – the correlation for voting for Donald Trump would be immensely high. It's really hard to move people's political preferences.

Seth: I think that's right. There are some things on which people's beliefs move around more over shorter timelines, though. One thing they look at is political knowledge, which also seems unaffected, interestingly. The only thing I'd push back on regarding fixed beliefs is the idea of preference cascades – settings where beliefs like "we are now in chaos, everyone for themselves" can spread very fast if seeded correctly.

Okay, so that was our narrow claim. Let me put a bow on that, Andrey. With what percentage probability would you say that the effect of social media algorithms on political outcomes or polarization is very small?

Andrey: 80 percent confident.

Seth: Alright. Well now, Andrey, let's talk about the broader hypothesis. Go ahead.

Andrey: So, this is something we're realizing as we do more episodes: there's often a very narrow, precise claim a paper addresses, and then there's the more relevant claim of interest to society.

Seth: And this is what we're going to put on the TikTok ads.

Andrey: Yes. The narrow claim is about comparing people looking at either algorithmic or reverse chronological feeds on Facebook over a specific three-month period. The broader question is whether, for society as a whole, the fact that feeds are algorithmic has very different effects.

Why might the effect for society differ from the effect on individuals in an experiment? One key assumption in causal inference – great time to bring this up as I'm teaching my first class tomorrow...

Seth: (To himself) I hope he brings this up right now.

Andrey: ...is the non-interference assumption, or the Stable Unit Treatment Value Assumption (SUTVA). This essentially says that people who receive a treatment don't affect people in the control group, and vice versa. There are no spillovers. But if there's anything we know about social media, it's that it's all about spillovers. If I live with roommates and get slightly different news because of the algorithm, I can still tell them about it.
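[Note: a minimal formal statement of the non-interference point, in standard potential-outcomes notation rather than anything taken from the paper. Writing user i's outcome as a function of the full vector of feed assignments, SUTVA assumes only that user's own assignment matters:]

```latex
% SUTVA / non-interference (illustrative notation, not from the study):
% Y_i(z_1, ..., z_n) is user i's outcome when users 1..n receive feed assignments z_1..z_n.
Y_i(z_1, \dots, z_n) = Y_i(z_i) \quad \text{for every user } i .
```

Spillovers of the kind Andrey describes – roommates sharing stories, creators changing what they produce – violate this equality, so the experiment identifies the individual-level contrast rather than the society-wide one.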

A broader spillover is the incentive algorithms create for content generation. If the algorithm promotes things with high engagement, and people make money from engagement (like news media, influencers), they'll start creating outrageous stuff to get boosted. Since the incentive is high, a lot of content on the platform might become like this.

Seth: ...unless there's a thumb on the scale from Zuckerberg to shape that.

Andrey: I think the default of any algorithmic feed is to optimize for engagement. Tweaks might happen, but as a first-order approximation, they show engaging stuff – funny claims, videos, outrageous things that keep people using social media more.

Seth: That's the hypothesis. But as we get into the evidence, we'll see how people's content actually switched, at least in this sample. Where are we going with this broader claim? The broader question is: do social media algorithms have political effects more broadly, and are these effects large enough to swing elections or drive polarization?

I come to that question thinking about a famous book, The Revolt of the Public, which argues that digital platforms inherently favor populist politics. When something gets digitized, you often see power-law distributions: superstars at one end, a long tail of niche interests. I think that's basically right as an effect of social media, whether algorithmic or reverse chronological. Remember, even reverse chronological has preferential attachment built-in – people follow others who are already popular.

So, asking if the social media world is politically different from the non-social media world – I think that's obvious. Even within social media, platform owners must have significant power over what information rises. On platforms people spend hours on daily... in principle, could an algorithm swing elections or drive polarization? 90 percent plus. Has it happened already in American history? 80 percent plus.

Andrey: Yeah, I'm with you that the holistic picture of algorithms' role suggests they must have had effects on politics. But this is where detailed platform knowledge matters: there's no one algorithm. There are layers of algorithms and moderation.

A famous example on the right is the Hunter Biden laptop story. There was a perception it came from a hack or was potentially made up. As a result, some platforms manually put a thumb on the scale to limit its spread. Is this an algorithm? It depends. One version is just removing posts with links to the story – hard censorship.

Seth: Censorship, if you will.

Andrey: Right. There's also potentially a scoring system that flags content as possibly fraudulent, illegitimate, or low-quality, giving it a lower algorithmic score without a full ban.

Seth: Shadow banned.

Andrey: Exactly. That's clearly mediated by the platform. But there's a world where this content is removed and wouldn't show up in the reverse chronological feed either, depending on moderation specifics.

Why am I saying this? The algorithm predicting what you'll click on is a bit different from the content moderation system. Famously, Facebook had many people trying to moderate content. These are extremely serious issues. There are credible accusations of Facebook not censoring content inciting genocide in Myanmar (the Rohingya genocide). The stakes are high. It's not just about machine learning algorithms; people are scoring content and deciding what's good or bad.

Seth: Right. So there are values built into the process, is what you're maybe conceding.

Andrey: Yes. Alright, with that broad prior discussion...

Seth: Give me a percentage on the broad hypothesis.

Andrey: I guess I was trying to say it's hard to make it precise. Let's just say all the things Facebook does affecting what you see in the feed – the cumulative aspects – certainly have political effects. But it's not just Facebook alone; many types of social media contribute. Even if we made one platform very unpolitical (and we'll see something about this with Instagram in the experiment), it wouldn't remove the potential role of social media overall.

Seth: Okay, good. Alright, let's get to the evidence.

These researchers worked with Facebook to conduct a pre-registered study. That's impressive – they wrote down all analyses, recruitment, and filtering beforehand. In their main comparisons, they had about 20,000 Facebook users and 20,000 Instagram users. About half were assigned to a reverse chronological feed for three months around the 2020 U.S. election, seeing only the most recent posts from accounts they follow. The control group got the default algorithmic feed, curated by Facebook to be engaging.

Andrey: And also to remove violating content.

Seth: Yes, and reduce slurs. They looked at three types of effects. First: platform usage. Unsurprisingly, the algorithmic feed makes people use Facebook substantially more – perhaps 30% more? My recollection differs slightly, but it's significant.

Andrey: I think the paper states the average respondent in the algorithmic feed group spent 73% more time each day compared to the average US monthly active user. In the chronological feed, this dropped to 37% more. So maybe closer to a 50% reduction relative to the algorithmic group's excess time (since (73 − 37)/73 ≈ 49%), but still significant usage even without the algorithm.

Seth: Yes, yes.

Andrey: The effects are interestingly a bit smaller for Instagram.

Seth: It's a really strange way to show the result in the paper. This is a comment about my dislike of Science magazine editors and how they sometimes don't give us the parts needed to evaluate things easily.

Andrey: They don't... well, two things they don't report straightforwardly are the overall level of time use (bizarrely obfuscated) and whether the feed made you vote for Trump or not, which is the question I want to know.

Seth: Well, in their defense, they asked people whether they voted, and there was no effect on turnout.

Andrey: They asked people whether they voted for Trump or Biden; they just don't tell us the answer in the main text. They definitely asked.

Seth: Yeah, I don't know. There are a lot of things to say about this paper. For listeners, there are 300 pages of appendix! Some are survey instruments, but the amount of results is staggering. My understanding is they were obligated to report everything they pre-specified.

Andrey: Even without correcting for multiple hypothesis testing? Well, when all effects are zero, does it really matter?

Seth: You could still get wider confidence intervals.

Andrey: What I wanted to say is this is a very unusual study. Facebook agreed to this and let researchers have high autonomy. My understanding is Facebook also funded it, which is non-negligible given participant payments (potentially over $100 each for 20,000+ participants). They also had full-time Facebook research scientists providing data and coding support. It was a huge endeavor, so many things were measured.

Uniquely, there was an on-platform experiment (different algorithms) and surveys. Some users even consented to install software tracking their off-Facebook activity. It's very comprehensive.

So far, we've mentioned people spend more time with algorithmic feeds. Unsurprisingly, they're also more likely to like and comment on posts they see – consistent with optimization goals.

But some findings about what people see are maybe surprising. With the algorithmic feed, about 60% of content is from friends. With the chronological feed, that falls to 33%. Chronological feed users see much more content from Groups they're in (a popular product, even if I haven't joined one since college) and Pages (brands, news outlets, etc.).

Seth: 90% of it is just Minion memes. If you're making the mistake of projecting your feed onto the rest of the world... have you met Americans? 90% of their Facebook is Minion memes.

Andrey: Alright. Which result next? There's a lot here.

Seth: How about the political content of the posts?

Andrey: Yes, let's get to the political stuff. Highlighting post sources helps understand how different the content is. In the chronological feed, people actually see a higher proportion of political content (about 15% more) and more content from moderate or mixed sources.

At the same time, a really big effect: they see 70% more posts from untrustworthy news sources in the chronological feed. This relates to moderation. Facebook has scores suggesting certain outlets are "fake news."

Seth: Clickbait factories, right? Tabloids, basically.

Andrey: Yeah. This portrays a nuanced story. We might naively expect the algorithmic feed serves ideological "red meat" and discards everything else. That's not happening. If anything, the chronological feed sends people more potentially outrageous stuff from untrustworthy sources.

Seth: Or maybe the algorithmic feed finds content at the intersection of engaging and anodyne? It wants to bring you engaged content, maybe political if mainstream, but mostly non-political news.

Andrey: Just to clarify, the chronological feed shows more political news (40% more).

Seth: Yes, to be clear, chronological is 40% more political. My point is the algorithm seems to point you towards less political content. To the extent it is political, it's more trustworthy. The chronological feed also has fewer slurs, though.

Andrey: Yeah, but slurs occur so infrequently, I don't know how important that is. This difference in content is what we call the "first stage" in statistical analysis. Any change in the algorithm matters because you see different content.

Seth: Now, let's see how Dr. Evil Zuckerberg manipulated American minds. How big are those effects, Andrey?

Andrey: They are essentially zero. The effects are tiny and fairly precisely estimated. Let's list the primary outcomes:

  • Affective polarization (how you view the other party/politicians)

  • Issue polarization

  • Election knowledge

  • News knowledge

  • Self-reported political participation

  • Self-reported turnout (did you vote?)

No effect on these. The one difference: people with the chronological feed were less likely to post political comments and posts themselves on Facebook. Maybe not surprising, since they see less from friends, and most people might only engage politically when talking to friends via their feed, not random groups or pages.

Seth: The only political activity we see more of in the chronological feed is clicks on partisan news. It seems people in the chronological feed are exposed to more of these lower-quality sources and click on them more often.

Andrey: Let me push back. Putting on my educator hat: this is one of the secondary outcomes. If you run enough hypothesis tests, something will be significant by chance. There are tons of secondary outcomes, and only one is statistically significant. I wouldn't pay much attention to it. If I were reviewing a paper based solely on finding one significant secondary outcome after null primary findings, I'd say, "Dude, what are you doing? You told us you cared about X, found no effect, then dug around until you found Y and built your story on that? That seems wrong."
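[Note: a quick simulation of the point Andrey is making. Even when every true effect is exactly zero, testing many secondary outcomes at the 5% level will "find" a significant one now and then by chance alone. All numbers below are illustrative, not the paper's.]

```python
import numpy as np

rng = np.random.default_rng(0)

n_outcomes = 40      # illustrative number of secondary outcomes, not the paper's count
n_per_arm = 10_000   # illustrative sample size per treatment arm
alpha = 0.05

false_positives = 0
for _ in range(n_outcomes):
    # True effect is zero: both arms are drawn from the same distribution.
    treated = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_arm + control.var(ddof=1) / n_per_arm)
    if abs(diff / se) > 1.96:   # two-sided test at the 5% level
        false_positives += 1

print(f"{false_positives} of {n_outcomes} truly-null outcomes came out 'significant'")
# The expected count is about alpha * n_outcomes = 2, which is why a single significant
# secondary outcome after null primary results deserves little weight.
```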

Seth: Fair enough.

Andrey: Not saying the authors did that, just my general view.

Seth: I just picked out the one number that wasn't zero. But speaking of these zeros, they're reported in standard deviations. The confidence intervals for most outcomes are within +/- 0.05 standard deviations of zero. Is that small, or could 0.05 standard deviations swing an election if scaled across America?

Andrey: Great point, and a limitation. From this study, we know the effects aren't huge. But U.S. presidential elections are often enormously close. If we multiply even a tiny effect out, it could matter. We can't say for sure from this study, but the evidence is consistent with effect sizes that could swing a close election.
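[Note: a hedged back-of-envelope sketch of why an effect bounded by those confidence intervals could still matter at scale. Every input here – users exposed, baseline turnout, the conversion from standard deviations to percentage points – is an assumption for illustration, not a number from the study.]

```python
# Back-of-envelope only: every input is an assumption, not an estimate from the paper.
effect_sd = 0.05                  # edge of the reported confidence intervals, in SD units
turnout_rate = 0.60               # assumed baseline turnout among exposed users
sd_binary = (turnout_rate * (1 - turnout_rate)) ** 0.5   # SD of a 0/1 turnout outcome
effect_pp = effect_sd * sd_binary                        # effect in probability terms

exposed_users = 100_000_000       # assumed number of US users on the default feed
extra_votes = effect_pp * exposed_users

print(f"{effect_sd} SD is about {effect_pp * 100:.1f} percentage points of turnout")
print(f"Across {exposed_users:,} users, that is roughly {extra_votes:,.0f} votes")
# With these made-up inputs the figure is on the order of a couple of million votes,
# far larger than recent presidential margins in the decisive states, which is why
# "not huge" is not the same as "too small to matter."
```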

Seth: Right. We can't rule out small but potentially significant effects. I'm still frustrated they don't just give us the party-line voting outcome. I understand why Facebook might not want that, but why not? Did this make people vote more for Trump or Biden?

Andrey: They do report "party-line presidential voting" as an outcome in the appendix, I believe.

Seth: I want to see: did they vote more or less for Trump as a function of being assigned to the chronological feed?

Andrey: I haven't dug that deeply into the appendix. Maybe you're right that they didn't report it prominently. I'm confident they have the number. My strong belief is the effect is zero. I'd be shocked if there was an effect.

Seth: I can see why Facebook wouldn't want them to highlight it. If there's a non-zero result there, there's no winning that conversation.

Andrey: But "party-line presidential voting" seems so close to what you want. I'm wary of conspiracy thinking about why it wasn't emphasized. Maybe you're right, but I'm not sure.

I should also mention something I ought to have disclosed earlier: I've had a research collaboration with Facebook in the past.

Seth: Boo, hiss, boo.

Andrey: I got paid a trivial amount and was required to work as a contractor for the project. This doesn't mean I'm using inside information; I have none about this study from my prior work.

Seth: But what you're saying, in a sense, is the audience should pay more attention to me for this episode.

Andrey: Just to be clear, I generally think social media is not that great, so you should update based on that too.

Seth: Oh my gosh. Pivoting to the center here, Andrey? Despicable. We need to be extreme!

Andrey, you labeled my speculation about the voting outcome reporting a "conspiracy theory." Well, I want you to know that one of the secondary hypotheses in this article was about whether Facebook makes you into a conspiracy theorist.

Andrey: Oh, yes.

Seth: I'd like to ask you a series of questions Facebook used to evaluate this. Do you accept this challenge?

Andrey: I accept.

Seth: Alright, I need your belief (0-100%) on these statements circulating in 2020. Advanced difficulty: some are in Spanish.

  1. Evidence found on Hunter Biden's laptop proves Joe Biden took bribes from foreign powers.

Andrey: It doesn't prove things. No. I object to the wording; it's poorly worded.

Seth: Okay. Question two: 2. The current FBI director, Wray, has said that the greatest domestic terrorist threat is white supremacists.

Andrey: That is what he said.

Seth: Correct. Not a conspiracy theory. 3. Amy Coney Barrett said that a woman needs a man's permission to own property.

Andrey: Probably not. 5 percent?

Seth: 5%? You are correct, 0% was the answer. 4. The US government has a plan to force a COVID-19 vaccine on everyone.

Andrey: "Force" is doing a lot of lifting here. I'm guessing the narrow claim of forcing is zero.

Seth: That would be a 0 percent claim. You see how this determines conspiracy theoriness. 5. Masks and face coverings are not effective in preventing the spread of COVID-19.

Andrey: Right? They're all... (mumbles) The entire world got COVID-19. I don't know what this question wants. It's not like we prevented the spread entirely.

Seth: Alright, next one: 6. Millions of fraudulent ballots were cast in the 2020 presidential election.

Andrey: Hopefully not millions. That's a 0.00001 percent.

Seth: 7. Donald Trump held a Bible upside down in front of a church.

Andrey: Sure.

Seth: 8. In October 2020, most rural counties were in the COVID-19 red zone based on their high rates of new cases.

Andrey: No idea.

Seth: That was correct. Okay. 9. (Spanish) Antes de las elecciones presidenciales de 2016, Donald Trump pagó en secreto a una estrella de cine para adultos. (Before the 2016 presidential election, Donald Trump secretly paid an adult film star.)

Andrey: I don't speak Spanish, Seth.

Seth: You can't get that? Una Estrella... 10. (Spanish) Joe Biden es un pedófilo. (Joe Biden is a pedophile.)

Andrey: Wait, seriously? That's what they asked?

Seth: Facebook scientists asked the public, "Is Joe Biden a pedophile?" In both Spanish and English.

Andrey: Alright.

Seth: Andrey, thanks for playing "Are You a Conspiracy Theorist?" My takeaway: many questions aren't black and white. Believing the "wrong" answer doesn't necessarily mean someone is a schizophrenic-style conspiracy theorist. What do you think?

Andrey: Yeah, it depends if you take them literally or as gestures towards something. Not the best conspiracy test. But I guess the effect [of the feed type on conspiracy beliefs] was zero? I didn't look at this specific outcome closely.

Seth: I think we found you were at least 25% conspiracy theorist, Andrey. Proud or terrified?

Andrey: I'm a free thinker, Seth.

Seth: Alright, Andrey, should we move on to limitations?

Andrey: The only other thing I'll mention is this is part of a bigger set of studies. My understanding is there are at least four, maybe eight papers in progress from this collaboration, studying various aspects like deactivation experiments (paying users to not use Facebook).

Seth: Right.

Andrey: That could speak to the broader question of what social media is doing. But it suffers from similar criticisms: social media isn't an individual decision in a vacuum. Even if we don't use it, we're affected by it.

Seth: Alright, limitations. We already talked about affiliations – doing this with Facebook might mean avoiding highly charged questions. How much does that bother you? Do you think this would have been pocket-vetoed if there were big negative effects found?

Andrey: My understanding is this study was unique. There was a pre-commitment from Facebook to publish results. Interfering would have been a huge, publicized deviation. An independent observer wrote a report confirming no interference. So, while we shouldn't dismiss concerns entirely, I'd be more worried about other collaborations, like unpublished advertising studies where results might be canned internally if they showed ads didn't work. This study had strong commitments against interference, and I think we should trust it more.

Seth: Here's another question: The "first stage" involved both reducing usage time and changing content mix. Are you worried about a net zero effect masking big, canceling effects in opposite directions? Maybe usage levels had one effect, content mix another, and they coincidentally canceled out?

Andrey: It's plausible. The authors do some heterogeneity analysis, which might pick that up if it were happening, but it doesn't seem like much is going on there. It's an interesting interpretation question. If we had found an effect, we'd discuss mechanisms. When there's a zero effect, finding canceling mechanisms is tricky.

Seth: Any limitations I missed?

Andrey: A big one: duration. Three months is long by academic standards (we see one-week studies!), but if we're interested in truly broad effects over years, it's short. If a tiny effect materializes linearly over, say, four years between elections, you could multiply the potential effect from this study by 16. Small effects can get big over time.

Seth: Okay, ready to move into the posterior, Andrey?

Andrey: Sure.

Seth: Alright, my posterior. I started at two-thirds chance the algorithm put a significant thumb on the scale favoring lefty candidates and chaos/polarization (MAGA vs. BLM). The other third was "no net effect." I've moved considerably towards "no net effect," at least regarding political polarization. This paper is convincing the algorithmic feed didn't make people more polarized leading up to 2020. On that specific claim, I go from 67% true to maybe 5% true.
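[Note: for readers who want the update made explicit, a small Bayes-rule sketch of what moving from a 67% prior to a 5% posterior implies about the strength of the evidence. The likelihood ratio is backed out from Seth's stated numbers; it is not reported anywhere in the paper.]

```python
# Bayes rule in odds form: posterior odds = prior odds * likelihood ratio.
prior = 0.67        # Seth's prior that the algorithmic feed drove polarization
posterior = 0.05    # his stated posterior on that claim after reading the paper

prior_odds = prior / (1 - prior)
posterior_odds = posterior / (1 - posterior)
likelihood_ratio = posterior_odds / prior_odds

print(f"Implied likelihood ratio: {likelihood_ratio:.3f}")
print(f"i.e. the evidence is treated as ~{1 / likelihood_ratio:.0f}x more likely under 'no effect'")
```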

We don't get the Biden/Trump vote answer, so I can't update hard on the "lefty candidate" part, but I'd still update towards zero, maybe from 67% to 30%, because my mechanism involved effects on both polarization and candidate choice simultaneously. How about you on the narrow question?

Andrey: Yeah, it definitely made me update. I'd seen versions of this paper over the past year. But fundamentally, it doesn't answer a critical question: moderation. Take the Hunter Biden laptop. If Facebook moderated posts by simply not showing them, that would likely affect both the algorithmic and reverse chronological feeds equally. We learn nothing about that type of moderation from this comparison. And that's what much political discussion focuses on – these fiery stories that could shift opinions being potentially suppressed across the board. I don't see anything here telling me those bans don't apply to the reverse chronological feed.

Seth: Right. Important editorial choices might exist outside this experimental comparison.

Andrey: Yes.

Seth: How about the broader claim? On the potential – could an algorithm in principle swing elections or drive polarization – I stay at around 90%. But on the idea that these choices have historically had major political effects, I come down from ~80% to maybe 50-60%. 2020 seemed like a prime election to see big effects jump out, and we didn't see strong evidence here for this specific mechanism.

Andrey: I agree my belief goes down. Here's what I'd say: the role of the specific machine learning part of the algorithm seems less important than I might have thought. A big driver of what people see is simply who they follow. Now, who they follow might be influenced by other algorithmic systems (friend recommendations, nudges) not tested here. Maybe those have big effects. But conditional on following someone, the content seems somewhat similar whether ranked by algorithm or chronology.

Seth: Well, maybe that's a good place to leave it, Andrey, unless you have parting thoughts.

Andrey: I do have one. This discussion is interesting, especially now with the moderation changes on X (formerly Twitter). It's part of the narrative that Elon Musk did something to cause a "vibe shift," possibly increasing support for Trump and decreasing support for progressive causes. What specifically did he do? I'll leave listeners with this: Suppose you put a score in your algorithm to put whatever Elon Musk says at the top of everyone's feed. Could that possibly have different effects than the experiment studied here?

Seth: Right. The question is still unanswered. I know many listeners are young researchers, and we invite you to attack that question. This paper feels like a starting gun for investigating algorithms in politics, rather than the final answer.

Andrey: Yes. Well, thanks for listening. Please make sure to comment, like, subscribe, and generally spread the good word about Justified Posterior.

Seth: And tune in in two weeks where we'll talk through one more paper on economics and technology and get persuaded by it so you don't have to. Alright?
