X Polarization
By Victor Jiang
January 2025
There are two main factors causing massive polarization on X/Twitter.
In the past, Twitter’s algorithm would promote content to users based on a combination of factors relating to popularity—volume of tweets, audience engagement, hashtags, etc. However, since 2018, Twitter has shifted its algorithmic approach to be much more user-centric: what appears at the top of the feed is now much more tied to past search history, browsing time, clicks, and geographic location.
This means Twitter has become much less about what is globally trending and much more about what individual users want to see. The shift makes business sense for Twitter: under contemporary ad-revenue models, income scales with the amount of time users spend on the platform.
Notably, this significantly encourages echo chambers: groups where people consistently hear opinions closely aligned with their own. A pro-life individual who consistently likes and comments on pro-life content is shown more of the same material to engage with, or suggested right-wing pundits to follow who hold similar views. Over time, this biased sampling reinforces beliefs in a cycle that would ordinarily be challenged by the more diverse perspectives surfaced under the original, popularity-based algorithm. If everyone on Twitter thinks it's true, then it must be.
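The feedback loop above can be sketched as a toy simulation. Everything here is an assumption for illustration: the `amplification` parameter stands in for a personalized feed that over-serves content slightly more extreme than the user's current position, and the update rule stands in for engagement nudging belief toward what is seen. None of this reflects X's actual ranking code.

```python
import random


def feed_sample(belief: float, amplification: float) -> float:
    """Hypothetical content slant in [-1, 1] served to the user.

    amplification > 1 models a personalized feed that over-serves
    content a bit more extreme than the user's current position;
    amplification = 0 models a neutral, non-personalized feed.
    """
    slant = random.gauss(amplification * belief, 0.3)
    return max(-1.0, min(1.0, slant))


def simulate(amplification: float, steps: int = 2000, lr: float = 0.05) -> float:
    belief = 0.1  # slight initial lean
    for _ in range(steps):
        seen = feed_sample(belief, amplification)
        belief += lr * (seen - belief)  # engagement pulls belief toward content
        belief = max(-1.0, min(1.0, belief))
    return belief


random.seed(42)
neutral = simulate(amplification=0.0)  # diverse feed: belief stays moderate
echo = simulate(amplification=1.2)     # biased feed: belief drifts to an extreme
print(f"neutral feed: {neutral:+.2f}, personalized feed: {echo:+.2f}")
```

Under a neutral feed the belief hovers near zero; with even mild amplification, the same update rule drives it toward one pole, which is the "biased sampling" cycle described above.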
Twitter’s algorithm also automatically pushes the most sensational material to the top. Tweets that stir strong emotions such as outrage or gratification tend to be the most engaged with on the platform, because of the charge they deliver to users otherwise passively scrolling by. Moreover, the radical nature of these tweets sets them apart from the saturated mass of far more moderate and ‘safe’ tweets.
For instance, while a funny cat video can receive a few hundred comments saying “KAWAII,” there are so many of these videos, and they are so ‘normal’, that the average user scrolls right by. A trans-hate tweet, on the other hand, likely garners massive support or backlash from individuals across the political spectrum, notably in the form of thousands of comments, likes, and retweets. By Twitter’s basic algorithmic calculations, which prioritize user engagement, that tweet far more efficiently converts users from passive viewers into active tweeters who send it to their own friends or reply with sensational comments of their own. At the very least, it generates more ad revenue from time spent on the platform.
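The cat-video-versus-outrage comparison can be made concrete with a minimal engagement-scoring sketch. The real ranking weights are not public; the weights below are pure assumptions, chosen only to reflect the intuition above that replies and retweets (active engagement) count for more than likes (passive approval).

```python
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    likes: int
    replies: int
    retweets: int


def engagement_score(t: Tweet) -> float:
    # Hypothetical weights: replies and retweets signal the
    # passive-viewer-to-active-tweeter conversion described above,
    # so they are weighted far more heavily than likes.
    return 1.0 * t.likes + 13.5 * t.replies + 20.0 * t.retweets


cat_video = Tweet("funny cat video", likes=900, replies=300, retweets=50)
outrage = Tweet("inflammatory hot take", likes=700, replies=4000, retweets=1200)

ranked = sorted([cat_video, outrage], key=engagement_score, reverse=True)
print([t.text for t in ranked])  # the inflammatory tweet ranks first
```

Even though the cat video has more likes, the flood of replies and retweets on the inflammatory tweet dominates any reply-weighted score, so it is the one pushed to the top of the feed.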
It’s also worth noting the new revenue models introduced in recent years. Tip Jar and Super Follows provide donation and subscription mechanisms for influencers, and X has even begun a revenue-sharing model in which premium users are paid 97% of ad revenue from other premium users, with amounts varying by impressions. These further incentivize contemporary personalities to engage in sensationalist behaviour.
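As a back-of-the-envelope sketch of that incentive, the payout model described above reduces to a share of premium-impression ad revenue. The 97% figure is taken from the essay's description; the impression count and CPM below are made-up numbers for illustration only.

```python
def creator_payout(premium_ad_revenue: float, creator_share: float = 0.97) -> float:
    """Hypothetical payout: a premium creator receives a fixed share of the
    ad revenue generated by impressions from other premium users."""
    return premium_ad_revenue * creator_share


# Illustrative month: 2M premium impressions at an assumed $4 CPM.
premium_impressions = 2_000_000
cpm = 4.00  # dollars per 1,000 impressions (assumption)
revenue = premium_impressions / 1000 * cpm  # $8,000
print(f"${creator_payout(revenue):,.2f}")   # about $7,760
```

Since every extra impression feeds directly into the payout, a creator's income rises with exactly the sensational, high-engagement posting that the ranking algorithm already rewards.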
What then is the impact of increased polarization?
Increased propagation of radical views
I think this is fairly intuitive: insofar as both people and Twitter are incentivized to prioritize tweets or comments closer to the radical ends of the spectrum, such views become more commonplace in public discourse. While radical views are not harmful in all cases (and can often be good, as with many elements of the 1960s civil rights movement), I think many of them are promoted specifically to stoke public sentiment with nefarious intent.
Mis/disinformation
The best example of this is probably the 2020 US presidential election. After Trump’s loss, Twitter was flooded with mis- and disinformation, from the Stop the Steal movement to conspiracies about dead voters and Trump-to-Biden ballot-switching machines. This type of content is promoted precisely because of its inflammatory nature and the public engagement it drives. It is quite sad that, in an age where humanity knows more than ever before, we cannot figure out what is true.
There is probably also decreased trust in genuine media and reporting institutions, owing to narratives spread through platforms like X. Broadly speaking, the term ‘fake news’ is thrown around significantly more than is warranted.
Censorship and harassment
Increased polarization of discourse probably also translates into more toxic and aggressive online interactions. Echo chambers produce heavily reinforced perspectives that are difficult to engage with constructively. Individuals who attempt to push back against the groupthink of a circle that disagrees with them run a far greater risk of harassment or cancellation. This makes people less willing to engage with those holding opposing opinions, worsening the cycle of polarization.
In essence, while X's algorithm has likely achieved an experience more relevant to individual user preferences, it has also inadvertently fostered environments where ill-intent, misinformation, and harassment thrive.
It’s worth noting, however, that even if X is one of the worst examples of public discourse becoming polarized, the same dynamics exist in many spheres of contemporary society. Heavily left-leaning universities won’t even let right-wing speakers talk to students, while the MAGA movement on the far right pushes ever scarier views about immigration within right-wing circles.
So… is our public discourse getting worse? I think so. What can we do about it? I think that’s for another time to discuss :)