The Algos, Narratives, and Polarization: You keep feeding your own biases.
--
And there is no end to it.
Extremism and paranoia reign supreme.
But then again, in a world plagued with polarization and extremism, only the paranoid will survive.
The algorithms, trained to give us what we like, keep giving us what we like.
If you like apples, the algos keep feeding you apple-related content. If you like bananas, that's the content you get. If you have a bias against bananas, the algos will not present banana content to you, unless it's the type of content that vilifies bananas.
They keep feeding you the apple content until you are pushed to the extreme. At the same time, the algos keep pushing the banana guy to the extreme by feeding his own biases and preferences.
That is how the algorithms are polarizing us. They are dividing us. For the avoidance of doubt: divisions, differences of opinion, and differences in tastes and preferences have always existed and always will. However, the scale at which the algos magnify these divisions and differences is humongous.
The “Feed” is literally food for your eyes, mind, and soul. It's literally feeding you. What appears in your feed is personalized. It is determined by the algorithms.
In Reinforcement Learning (a branch of Machine Learning, which is itself a branch of artificial intelligence), an algorithm learns by receiving a reward or not receiving one. The goal is to maximize rewards and avoid punishments (negative rewards).
With regard to your feed, the reward is your positive engagement, measured by metrics such as clicks, shares, likes, emoji reactions, et cetera. If you like the apple presented in your feed, the algo is rewarded for predicting the right content for you. Next time, it will show you more apples, bigger apples, various apples, or close cousins of apples (pears).
Negative engagement is penalized. An angry reaction to a post penalizes the algo. It won't do it again (recommend you the banana again? No, it won't dare). A dislike is a big penalty. Click the dislike button and you have just punished the algo (ouch, that hurts; the algo won't do it again).
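The reward-and-penalty idea above can be sketched in a few lines of Python. This is a toy, bandit-style illustration, not how any real platform's feed works: the class, topic names, and reward values are all hypothetical, chosen only to show engagement signals nudging content scores up or down.

```python
import random

# Engagement signals mapped to rewards (assumed values, purely illustrative).
REWARDS = {"like": 1.0, "share": 2.0, "click": 0.5, "angry": -1.0, "dislike": -2.0}

class FeedRecommender:
    """Toy recommender: one running score per content topic."""

    def __init__(self, topics):
        # Every topic starts out neutral.
        self.scores = {t: 0.0 for t in topics}

    def recommend(self, epsilon=0.1):
        # Mostly exploit the highest-scoring topic; occasionally explore.
        if random.random() < epsilon:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def feedback(self, topic, signal, lr=0.5):
        # Positive engagement raises a topic's score; negative lowers it.
        self.scores[topic] += lr * REWARDS[signal]

feed = FeedRecommender(["apples", "bananas"])
feed.feedback("apples", "like")      # you liked the apple post
feed.feedback("bananas", "dislike")  # you punished the banana post
print(feed.recommend(epsilon=0.0))   # -> apples; the bias is now baked in
```

After a single like and a single dislike, the greedy recommendation is already locked onto apples, which is the whole point: the algo learns your preference fast and keeps serving it.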
You are feeding the algo, and the algo is feeding you more. What you put in is what you get, and more. It's like a content pyramid scheme that never collapses. You put in $1 and get $10 back. You put in $10 and get $100 back. It's never-ending.
The algos, optimizing for user experience (i.e. pleasing you, mister), learn to do their job well (echoing your voice back to you so that you are happy) in a manner that is subtle and insidious. The reinforcement loop means that whatever your bias is, the algo will surely magnify it. You keep feeding your own biases. And there is no end to it.
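The reinforcement loop can be simulated in a few lines. This is a purely illustrative toy (the function, the `nudge` parameter, and the update rule are all assumptions, not a model of any real system): the feed mirrors the user's current bias, and each mirrored item nudges that bias a little further in the same direction.

```python
def simulate_loop(bias=0.6, nudge=0.05, steps=50):
    """bias: probability the user prefers apples over bananas (0..1)."""
    for _ in range(steps):
        # The algo serves whichever topic the user currently favors...
        served_apples = bias > 0.5
        # ...and consuming it reinforces the existing preference.
        if served_apples:
            bias = min(1.0, bias + nudge * (1 - bias))
        else:
            bias = max(0.0, bias - nudge * bias)
    return bias

print(simulate_loop(bias=0.6))  # a mild preference drifts toward 1.0
print(simulate_loop(bias=0.4))  # a mild aversion drifts toward 0.0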
It gets worse. The algos not only learn from you; they also learn from other people who are just like you. People who like what you like. Yes, that other moron who thinks apples are the only fruit worthy of being called a fruit likely has online behavior similar to yours, so the algos suggest the same stuff to you.
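The "people like you" mechanism is what recommender systems call collaborative filtering. Here is a minimal user-based sketch, with all users, items, and data invented for illustration: find your most similar user by cosine similarity, then suggest what they liked that you haven't seen.

```python
from math import sqrt

# Hypothetical engagement matrix: user -> {item: 1 if liked}.
LIKES = {
    "you":        {"apples": 1, "anti-banana memes": 0},
    "apple_fan":  {"apples": 1, "anti-banana memes": 1},
    "banana_fan": {"apples": 0, "anti-banana memes": 0, "bananas": 1},
}

def cosine(a, b):
    """Cosine similarity between two sparse like-vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend_for(user):
    # Find the most similar other user...
    others = [u for u in LIKES if u != user]
    neighbor = max(others, key=lambda u: cosine(LIKES[user], LIKES[u]))
    # ...and suggest what they liked that `user` hasn't engaged with yet.
    return [item for item, liked in LIKES[neighbor].items()
            if liked and not LIKES[user].get(item)]

print(recommend_for("you"))  # -> ['anti-banana memes']
```

Because you and the apple fan overlap on apples, you inherit the anti-banana content too, even though you never asked for it.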
“Hey Johnny, other people who love apples like to read about how disgusting bananas are,” says the algo to you. “How do you rate this banana joke: when you visit a single lady, never eat a banana in her house; you could be eating her husband.” Gross, isn't it? Bananas are so disgusting.
Feeding your biases based on the biases of people with biases that are similar to yours can only drive your biases to the extreme, never to the center. Polarization.
These recommendation engines are great because they personalize everything and give us content curated specifically for us. It's awesome. But it also stops us from tolerating dissenting views.
It makes us view any opinion that differs from ours as extreme, because we ourselves have been pushed to the extreme. We are so far from the middle that even the middle looks extreme. “How dare you not like apples? There must be something extremely wrong with you,” says the extreme apple guy, who is not self-aware enough to see that his mindset has been pushed to the extreme.
The apple and banana example is a simple one based on simple tastes and preferences. You bite an apple and decide you like it. You bite a banana and decide you also like it but not more than the apple.
In real life, there are complex issues that emerge as a result of the complexity embedded into modern life. You cannot simply bite and then develop a preference. This is where a narrative comes in.
Consider the two statements below (say, as headlines for a newspaper):
- An HIV-positive man slept with 15 women.
- 15 women slept with an HIV-positive man.
The two statements contain the same facts but they are narrated differently. The first statement, by making the man the subject of the matter, paints a picture of an irresponsible and dangerous man who went around sleeping with women.
The second statement, by making the women the subject of the matter, paints a picture of loose and irresponsible women who ended up falling prey to an HIV-positive man.
A narrative gives us a direction we ought to take in digesting content. It makes subtle suggestions. A narrative says something without actually saying it. It parametrizes our thought processes and guides us towards developing a worldview.
You buy into a narrative that’s offered and then develop a worldview, which then becomes your bias that you feed using the algos.
As with most things tech, the first plausible, buyable narrative on offer (think of it as a minimum viable product) usually scales first and takes the greater share of the market. Sometimes challenger narratives gain market share and win at the expense of older, established narratives.
The originator of the narrative also matters. Not all narratives are honest. Some narratives are designed to trick people into holding a specific worldview, even though that worldview might actually be detrimental to the holder of the view.
Before buying into a narrative, it is important to fully digest the content objectively. It can also help to test the narrative offered against an opposing narrative. This habit will protect you from buying into a manipulated narrative.
This is important because once you buy into a narrative, it becomes your worldview, which then becomes the default bias that you feed endlessly using the algos, driving you and others to the extremes, polarizing society, and making the world a bad place.
These social media platforms are designed to play on the joy we get from confirmation bias. You cannot easily win against the algos, so you have to win at the narrative phase.
Ciao!