The Algorithm

By Mark Nuyens
8 min. read · 📱 Technology
TL;DR

Algorithms on social platforms prioritize engagement to maximize profit, often at the cost of users.

When it comes to distributing content, whether automated or manually selected, there’s always an algorithm at play. We may not realize it, but even when we personally choose what content to share with others through social apps, we’re following a naturally developed algorithm. We consider what others have liked in the past, their personalities, and various other factors to decide what to share with them.

When it comes to large social platforms, the concept is no different: services like X (formerly known as Twitter), Facebook, Instagram, Threads, LinkedIn, TikTok, and YouTube have become very good at predicting which content best suits which audience. Some do this better than others, but as their revenue hinges on engagement, it’s reasonable to assume they all invest heavily in making sure the algorithm is as effective as possible—or put another way, that they don’t serve you the wrong content.

Here’s where it gets complicated: how does a social platform know what to serve you? Well, mostly by checking if new content is similar to content you’ve engaged with in the past. But does this mean you genuinely enjoy that content, or were you simply unaware of better options? In other words, are these platforms encouraging users to step out of their comfort zones? Probably not—after all, this could lead to users leaving the platform sooner than desired.
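To make that mechanism concrete, here is a minimal sketch of similarity-based ranking, the rough idea the paragraph above describes. The feature vectors, the cosine_similarity helper, and the toy items are illustrative assumptions, not any platform’s actual system.

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def user_profile(engaged_items: list[list[float]]) -> list[float]:
    """Average the vectors of items the user engaged with in the past."""
    dims = len(engaged_items[0])
    return [sum(item[d] for item in engaged_items) / len(engaged_items)
            for d in range(dims)]

def rank_feed(candidates: dict[str, list[float]],
              engaged_items: list[list[float]]) -> list[str]:
    """Order candidates by similarity to what the user already liked.
    Nothing here rewards novelty, so the feed naturally narrows over time."""
    profile = user_profile(engaged_items)
    return sorted(candidates,
                  key=lambda cid: cosine_similarity(candidates[cid], profile),
                  reverse=True)

# Toy example: vectors might encode a topic mix, e.g. [sports, politics, humor].
past = [[0.9, 0.1, 0.2], [0.8, 0.0, 0.3]]          # mostly sports content
feed = {"sports_clip": [0.95, 0.05, 0.1],
        "news_story":  [0.1, 0.9, 0.0],
        "comedy_bit":  [0.2, 0.1, 0.9]}
print(rank_feed(feed, past))  # sports_clip first: more of the same
```

Because the only signal is similarity to past engagement, the sketch has no notion of “better options the user hasn’t seen yet”, which is exactly the comfort-zone problem described above.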

In fact, platforms likely have calculations lying around indicating how long users need to stay engaged for the service to turn a profit. This means that as platform engagement declines, they’re incentivized to refine the algorithm to boost user activity. This adjustment isn’t abrupt; it’s likely a gradual calibration based on demand. That means a growing or declining platform might surface different, more engaging content to maintain consistent ad revenue.
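As a back-of-the-envelope illustration of that break-even logic, the figures below are invented assumptions (revenue per ad impression, ads shown per minute, cost per daily user) chosen only to show the shape of the calculation, not any platform’s real economics.

```python
# Hypothetical break-even calculation: every figure is an invented assumption.
revenue_per_impression = 0.002   # dollars earned per ad shown (assumed)
ads_per_minute = 1.5             # ads a user sees per minute in the feed (assumed)
cost_per_user_per_day = 0.05     # infrastructure + content cost per daily user (assumed)

revenue_per_minute = revenue_per_impression * ads_per_minute
break_even_minutes = cost_per_user_per_day / revenue_per_minute

print(f"Break-even engagement: {break_even_minutes:.1f} minutes per user per day")
# If average time on the app drops below this threshold, the incentive is to
# tune the algorithm toward stickier content until the number recovers.
```

With these made-up numbers the break-even point lands around seventeen minutes per user per day, which is the kind of target that would drive the gradual recalibration described above.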

While one might argue that it’s in the platform’s best interest to continuously employ this strategy, it probably comes at a cost to long-term user satisfaction. Simply put, there’s likely a “sweet spot” these platforms aim for: content that’s engaging but not so addictive that it crosses a line. They’re trying to get right up to that edge without going over. But other than that, anything goes.

If users become more active when exposed to others' achievements or gravitate towards certain types of media, then that content will be prioritized. However, this raises the question: does high engagement truly equate to quality, or is it merely another metric, one potentially linked to addictive behavior? For these platforms, this question is unlikely to come up in corporate meetings. After all, as long as the numbers are up, does it really matter why?
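To illustrate the distinction, here is a small hedged sketch: a ranker whose only objective is predicted engagement has no term for quality at all, so an item that hooks people for the wrong reasons can outrank a better one. The items and scores are made up for illustration.

```python
# Illustrative only: items and predicted-engagement scores are invented.
feed_items = [
    {"id": "thoughtful_essay", "predicted_engagement": 0.31, "editorial_quality": 0.9},
    {"id": "outrage_thread",   "predicted_engagement": 0.87, "editorial_quality": 0.2},
    {"id": "friends_vacation", "predicted_engagement": 0.64, "editorial_quality": 0.5},
]

# The only signal the ranker optimizes is predicted engagement; quality never
# enters the objective, so it cannot influence the ordering.
ranked = sorted(feed_items, key=lambda item: item["predicted_engagement"], reverse=True)

for item in ranked:
    print(item["id"], item["predicted_engagement"])
# outrage_thread ranks first, regardless of why people linger on it.
```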

Meanwhile, influential figures like Mark Zuckerberg have openly argued that they shouldn’t be held solely accountable for certain issues on their platforms. Instead, they often suggest that poor parenting or other companies, like Apple, bear responsibility. While they attempt to shift the blame, and these other organizations likely do the same, the result is a circular debate that ultimately absolves tech companies and leaves users to suffer the actual consequences.

What’s also worth noting is the type of content being promoted. Threads, for example, recently decided not to promote political or polarizing content, likely in an effort to create a more comfortable environment for their users (and probably themselves, in terms of content moderation). Intriguingly, users aren’t given a choice in this matter. While Threads (or rather, Meta) could allow users to tailor content by choosing what they’d like to see, they clearly prefer to make these choices on their users’ behalf. Some may view this as belittling, or even condescending.

This becomes problematic when people rely on social platforms to understand what’s happening in the world. Beyond creating a potentially monotonous feed, or “toxic positivity”, it also risks selecting content that simply aligns with popular opinion. Studies show that young people in particular turn to social media for news and facts, meaning the algorithm carries an ethical responsibility to present content that is both relevant and reflective of reality. If it fails to do so, it risks obscuring or even censoring vital information in order to maintain user engagement by any means necessary.

In a way, the algorithm exerts control over narratives, influencing what is viewed and discussed. This impacts democracy and our collective ability to make informed decisions. Complicating matters further is platform moderation, which may skew the overall impression of what qualifies as trustworthy and accurate content. With the rapid development of AI, there may come a point where we can no longer tell what is real and what is not. Moderating at such a scale and with such accuracy will be nearly impossible, placing even greater reliance on the algorithm and intensifying content filtering.

We might eventually reach a stage where only authorized or “verified” creators can publish, limiting the diversity of content and stifling creativity. Addressing the underlying issues may require a more foundational approach, potentially giving rise to new and exciting startups and ideas. In an increasingly consolidated landscape of content platforms, such a change would probably be welcomed by many.