Algorithmic feeds expose the worst of human nature

It’s not your feed that’s toxic, it’s your friends.


‘Social media is toxic’, ‘Social media has made society worse’, ‘Social media has created more divide’: how many times have you read a viral post like this from an average person trying to sound deep?

Screenshot of a tweet from BIA saying ‘Social media is too toxic now’ on April 2nd, 2021
Sorry BIA, I’ve no clue who you are but you’re the first person who came up when I searched ‘Social media is toxic’ on social media.

‘But it’s true’, you say, before opening Instagram to see people in expensive clothes that they will return tomorrow.

Maybe. Maybe social media is toxic in the sense that the 50% of the world who use it have fostered aggressiveness or hostility on the feed of whichever platform you’ve chosen. You can’t say anything on there without being subject to attacks from all corners of the internet. Don’t get me wrong, some of the claims of toxicity are completely valid; the sad reality is that some people are just terrible individuals. But I don’t think it’s Facebook’s, Twitter’s or anyone else’s fault entirely (the keyword here is entirely).


Humans have terrible attention spans

Before we jump into why humans are terrible, we need to understand the ultimate goal of the majority of the services we use.

Tech companies are harvesting your time. Actually, it’s harder than that: they decided to go after probably the most unreliable resource they could, your attention. They all do it for various reasons, but ultimately, that’s what they’re competing for. In the grand scheme of things, humans don’t have the best attention spans. Depending on what you read, they can span from a couple of seconds to around 20 minutes at best. We get distracted all the time. To tame such a fickle, unreliable characteristic, these companies pull every trick in the book to make sure you are paying attention and on their platform at all times, even at the expense of sleep. Literally.

The currency of attention

The definition of attention span is the length of time a person can concentrate on a given activity or subject. This can change depending on the activity: some can sit through a four-hour director’s cut of a mediocre movie, yet struggle to study for an upcoming exam without taking 60-minute breaks every 2 minutes. Everyone has experienced the procrastination Olympics when exam season comes around.

Tech giants know this. There are a few obvious ways a tech giant can hold on to your attention:

  1. Show you content that will keep you on the platform (more on this later)
  2. Send you push notifications (which are known to hook you to an app)
  3. Reduce the competition

In her book ‘No Filter’, Sarah Frier talks about how Mark Zuckerberg is constantly paranoid that Facebook will one day become irrelevant. Facebook needs to be ‘relevant’ for two main reasons:

  1. So that users spend time on Facebook, leading to more user data being mined
  2. So advertisers believe that most of the users are on Facebook and therefore will pay money to Facebook to advertise there

Facebook needs your attention, so for a while it tracked what other apps you had installed on your phone. It saw that when people weren’t on Facebook, they were on Instagram, Twitter, etc. So what does Mark Zuckerberg do? He buys Instagram, the next most used app. Facebook wasn’t going to let another app become more relevant. Now if someone left Facebook to go to Instagram, they were still in Facebook’s ecosystem, so from Facebook’s point of view, it still had your attention. Facebook went on to do the same with WhatsApp, and tried to do the same with Snapchat (and good on Snapchat for coming out of that one well).

Mark Zuckerberg’s goal here is to hold your attention. You go on Facebook, then you get bored, so you jump on Instagram, and then you get some messages on WhatsApp. Throughout this whole process, Facebook has your attention. You are actively opening and using its applications. Facebook isn’t alone in this; it was just an easy target. Why do you think you get random push notifications from Twitter telling you about a random tweet you don’t care about? Or Netflix telling you that a new show is on its app? Or take this quote from an interview with the CEO of Spotify:

My point by telling that story is that what we found so many times before is that the more people engaged, the more likely they are to pay.

Spotify didn’t like that you listened to music and then went to Apple or Google Podcasts to listen to podcasts, so it bought out a bunch of the biggest podcasters out there. This way, you don’t leave the Spotify ecosystem.

Also, one final point before we move on: poor Clubhouse. There was no way the big players would let a small start-up take away attention like that. It never had a chance.

This video will make you angry

Back in 2015, CGP Grey made this brilliant video titled ‘This video will make you angry’. I highly recommend you watch the video, but I will also summarise it here if you can’t watch it right now.

A screenshot from the video ‘This video will make you angry’ showing a stick man wearing glasses and drinking coffee, with a projector showing an image of a network of brains with angry red flames jumping between them
Screenshot of ‘This video will make you angry’

CGP Grey starts off by saying that we have limited space in our brains for thoughts (or ‘thought germs’), and when a new thought comes in, an old one is overwritten. In other words, we don’t have the best attention spans. So when we jump on social media, we see a stream of random thoughts from random people. When a thought exploits a specific emotion (like a cute cat photo making you feel happy), you are more likely to interact with it by sharing or commenting. The same goes for posts that exploit the part of the brain that makes you angry.
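To make that dynamic concrete, here is a toy simulation of emotion-driven sharing. The posts, intensity scores and sharing rules below are all invented for illustration; they are not taken from the video or from any real platform data.

```python
import random

# Toy model of the 'thought germ' idea: each post has an emotional
# intensity in [0, 1], and the chance a viewer re-shares it grows
# with that intensity. We simulate a few rounds of sharing and
# count each post's total reach. All numbers here are made up.
random.seed(42)

posts = {
    "cute cat photo": 0.6,     # pleasant, fairly shareable
    "neutral news item": 0.2,  # mild, rarely shared
    "outrage bait": 0.9,       # anger, highly shareable
}

reach = {name: 10 for name in posts}  # each post starts with 10 viewers

for _ in range(5):  # five rounds of sharing
    for name, intensity in posts.items():
        # every current viewer re-shares with probability = intensity,
        # and each share reaches 3 new people
        shares = sum(1 for _ in range(reach[name]) if random.random() < intensity)
        reach[name] += shares * 3

for name, viewers in sorted(reach.items(), key=lambda kv: -kv[1]):
    print(f"{name}: reached {viewers} people")
```

Because growth compounds each round, even a modest edge in emotional intensity ends up dominating the feed after only a handful of sharing rounds, which is exactly the amplification the video describes.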

When someone sees a post they are shocked by or disagree with, they are more likely to share it. Now an interesting phenomenon happens. For a post to stay relevant, it needs to be talked about, and a post that creates an ‘us vs them’ story is more likely to be shared and interacted with. But the two parties generally break off into their own camps and continuously talk about the other group.

Take the example of a video of an iceberg melting. One person will shout on Twitter that it’s time for climate change reform in the world. Multiple users will interact with that post, by retweeting it, commenting or liking it (or even all three). Someone who is against climate change reform will respond to the tweet arguing against it, and the followers and supporters of that ideology will echo their beliefs too. Now there’s a turf war between those against climate change reform and those for it. Although there will be some back and forth between the two sides, most of the conversation will be those for climate change reform raging and angrily talking about the group who are against it, and vice versa. And this goes on and on.

Now in this situation, staying neutral is hard. You get bombarded from both sides and you naturally end up picking one. As soon as the two sides agree on something, though, the conversation stops. There’s nothing to shout about anymore, so people move on. But this rarely happens, mainly because of human ego, but also because it’s not in the best interest of the social media companies to end it there.

Social media giants love this stuff. Why? With every outrage there is interaction, and lots of it too, which means you stay on the app. When you jump on the algorithmically determined feed, it will show you the posts that will spark an outrage in you and grab your attention. You can see where I am going with this.

Humans love a tragedy

Have you ever heard the saying ‘It’s like a car crash, you can’t look away’? Well, there is a reason for that. Humans like a tragedy. When we see danger, our brain instinctively assesses whether the danger is a threat to us, invoking the fight-or-flight response. Once the brain has assessed that it is not a threat, it still wants to face its fears without risking immediate harm, hence you can’t look away.

Humans are also prone to being drawn to negativity. It comes from our survival instincts. If you get negative feedback from something, like touching a hot pan, you need to remember it for next time, so your brain holds on to the negative feeling. But although society has evolved beyond being surrounded by the basic stimuli that gave us negative feedback, like a prickly plant or a hot fire, our brain hasn’t. So when you see a post that makes you angry online, you are drawn to it. Even though you know you will be angry, upset or shocked, your brain can’t help but engage.

Back in the days when social media feeds were chronological, social media companies realised that people left their platform once they had seen all the updates on their feed. So they changed the feed to be sorted algorithmically. But what was the aim of that algorithm? To drive engagement and therefore grab your attention. And what pushes people to engage? Tragedy, anger and division. The algorithm isn’t a human. It doesn’t care what content it promotes. It just wants you to stay on the platform and will do anything it needs to keep you there, even if that means showing you a bunch of conspiracy theory videos. (This has since been changed; if you feel like your YouTube recommendation quality has dropped, this is why. Great recommendations also mean knowing whether you are likely to engage with conspiracy theory videos and recommending those to you.) If you’ve used social media, you’ll know that you can monetise your following: the more viral posts or followers you have, the more valuable you are. So what’s an easy way to get attention? Spew out controversial stuff.
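The shift from chronological to algorithmic feeds can be sketched in a few lines. Everything here, the posts, the scores and the field names, is invented for illustration; real ranking systems are vastly more complex, but the core change is simply swapping the sort key from time to predicted engagement.

```python
from dataclasses import dataclass

# Minimal sketch of the two feed orderings. The engagement scores
# are made up, standing in for the output of a real ranking model.
@dataclass
class Post:
    author: str
    text: str
    timestamp: int               # minutes since some epoch
    predicted_engagement: float  # hypothetical model score in [0, 1]

feed = [
    Post("alice", "Holiday photos!", timestamp=100, predicted_engagement=0.30),
    Post("bob", "You won't BELIEVE what they did...", timestamp=10, predicted_engagement=0.95),
    Post("carol", "New blog post is up", timestamp=90, predicted_engagement=0.15),
]

# Chronological feed: newest first, regardless of content.
chronological = sorted(feed, key=lambda p: p.timestamp, reverse=True)

# Algorithmic feed: highest predicted engagement first, so the most
# provocative post floats to the top even though it is the oldest.
algorithmic = sorted(feed, key=lambda p: p.predicted_engagement, reverse=True)

print("Chronological:", [p.author for p in chronological])  # alice, carol, bob
print("Algorithmic: ", [p.author for p in algorithmic])     # bob, alice, carol
```

Note that the outrage-bait post is the oldest item in the feed; under a chronological sort it would be buried, but under an engagement sort it leads.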

We can hate social media platforms and call them ‘toxic’, but what we are really saying is that we don’t like the interactions we have on social media. Those interactions are driven by tapping into and exploiting people’s inherent draw to negativity.

So are platforms like Facebook and Twitter absolved of all guilt? No, of course not. Should they have thought about this? Yes, they 100% should have put some thought into it before implementing such algorithms. They’ve created a problem that they should very well invest time and money to fix.

But, unintentionally, these social media sites have created a machine that shows the worst of humankind.

And it’s not going to get better any time soon.

I have a lot more to say about this topic, but I didn’t want to turn this post into a novel! So maybe expect a part 2 in the future — let me know if that’s something of interest to you.

If you have a better idea than I do, if I’ve missed anything, or if you think I am talking absolute rubbish, whether the feedback is positive or negative, feel free to reach out either by commenting on the post, or by emailing me on


If you enjoyed this post, subscribe to Tanvir Talks, where I publish a podcast twice a month and a newsletter once a month, breaking down the big questions in tech into digestible chunks for you, the average consumer, to consume.

I like to write things about technology and video games.