Feed Algorithms Should Encourage Learning, Not Subvert It

June 14, 2021

When the internet appeared on the horizon, it was touted as an avenue for unlimited and open access to information that would benefit us all. As Paul Theroux reminds us in his book, The Last Train to Zona Verde, “if the internet were everything it was cracked up to be, we would all stay home and be brilliantly witty and insightful.”

It has not panned out that way. In the online world, money does more than talk; it also influences what you see and hear. The result is that the internet has evolved into something far less open than what was originally promised. The pattern of evolution was perhaps predictable, but its implications were less so.

Online platforms decide what information (and ads) you get to see. Given the sheer volume of postings, some control is necessary so we don’t all drown in the onslaught. But how that control is exercised has consequences, and as an educator I find some of them quite worrisome. Despite lofty promises of open access, the reality has evolved into something quite different. Let me explain what is happening and why you should worry about it as well.

The simple fact is that we get to see what the platforms judge to be relevant to us; i.e., we are fed views and opinions that align with our own. In that sense, what might appear open is skewed towards whatever reinforces the beliefs, opinions, and attitudes we already hold. Research in psychology confirms that we tend to accept opinions that support our own beliefs and discount those that do not; this is the so-called confirmation bias. Since online platforms want us to pay attention and engage, it makes sense for them to build that bias into their feed algorithms. As a consequence, we rarely, if ever, get exposed to contrarian views or opinions that might make us question the beliefs we hold.

My view is that the current feed algorithms condemn many of us to a slow process of intellectual petrification. On one hand, whether or not they are justified, our beliefs become stronger (and perhaps more extreme) because of their persistent reinforcement. On the other, repeated reinforcement gives us a false sense of being in the right and, hence, breeds intellectual arrogance and stubbornness.

Taking a step back, we should all recognize that this is the polar opposite of what education tries to achieve. The stated objective of higher education, for example, is to promote and support critical thinking; i.e., the ability to think independently and critically. As Martha Gellhorn put it many years ago: “the purpose of all education is to enable and ensure the private duties of conscience, judgment, and action.” Judging by the daily news, we still have some way to go.

My view is that the current feed algorithms, which online platforms rely on to regulate what we get to see, do not support thinking, let alone critical thinking. In fact, they might well kill that as an attainable objective in many of us. They appear to promote an uncritical, unthinking culture instead. That can only lead to what I call education poverty, which can have dire consequences.

Reality

With the ubiquitous presence of digital devices, we are all plugged in 24/7 and spend quite a bit of time online. Today, about one third of the entire world’s population are monthly active users (MAU) of Facebook alone (a share computed over a world population that includes the most populous nation, China, where Facebook is banned!). Without many of us realizing it, we are all slowly becoming digital natives. More worrisome is that many of us are becoming “digital naives” in the process.

For many of us, the online world is our only source of information. A surprisingly high number get their information from Facebook, Instagram, and the like. These social media platforms are their only source of “factual” news and backup. That is worrisome given what gets posted on these platforms. Most people forget that Facebook and platforms like it are just hosting sites, and that what is posted there is largely unedited and unverified. As we are all learning every day, in that world, fact and opinion are largely indistinguishable, and are becoming increasingly so. It requires quite a bit of information literacy to keep one’s bearings in that quicksand. As I write in my book, Rough Diamonds (https://geni.us/RoughDiamonds), we have arrived at the bizarre reality that we need a heavy dose of critical thinking to develop any kind of critical thinking. The way current feed algorithms work offers no help in this.

A closely-held secret

Feed algorithms decide what pops up on your screen when you log into your online accounts. There is no human intervention. Don’t kid yourself: Mark Zuckerberg is not sitting in his kitchen deciding what appears on your Facebook page! Computer algorithms do that, following protocols set by humans.

How exactly feed algorithms work is a closely-held secret. Some general characteristics have been revealed, but most of us have no idea how they actually operate or what drives them. I do not know how the Facebook feed algorithm works. I have, however, a hunch. That hunch comes from these algorithms being shadow images of the business model that drives these platforms. Perhaps no surprise, but in the end it is money that decides what you get to see.

How do social media platforms like Facebook make money? None of us pay to use them, so we are not the source of their revenues. That source is mostly advertising; i.e., the increasingly indistinguishable “promoted” posts. Social media platforms essentially sell (or auction) your profile to advertisers who are eager to target you (or, more specifically, your wallet). The better they can profile you for the advertisers, the more they can charge. Hence, in the background, these platforms are busy getting to know you as well as they can.
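To make the auction mechanics concrete, here is a minimal sketch of a second-price auction, the flavor commonly used in online advertising. It is purely illustrative: the bidders and numbers are made up, and real ad auctions add quality scores, budgets, and far more machinery.

```python
# Minimal second-price auction, to illustrate why sharper user profiles
# command more advertiser money. All bidders and bids below are
# hypothetical; real ad auctions are far more elaborate.

def run_auction(bids):
    """bids: list of (advertiser, bid) pairs. The highest bidder wins but
    pays the second-highest bid (the 'second-price' rule)."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# A vague profile attracts cautious bids; a sharp profile attracts high ones.
print(run_auction([("brand_a", 0.40), ("brand_b", 0.35)]))  # ('brand_a', 0.35)
print(run_auction([("brand_a", 2.10), ("brand_b", 1.95)]))  # ('brand_a', 1.95)
```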

Based on what they know about you, algorithms calculate how relevant a post might be to you. If that score is high, you are likely to find it in your feed. The same principle drives what ads they feed you and, hence, my argument that feed algorithms mimic business models. There should be no surprise in this as these platforms, no matter what they claim, are in the business of making money, and lots of it.
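In sketch form, such relevance scoring might look like the following. To be clear, this is not Facebook’s actual algorithm (that is the closely-held secret); the features and weights are hypothetical, chosen only to show how “relevance to you” can be reduced to a number that gets ranked.

```python
# Toy relevance-scored feed. NOT any platform's real algorithm (those are
# secret); the features and weights are hypothetical and exist only to show
# how "relevance to you" becomes a ranked number.

def relevance_score(profile, post):
    """Higher when the post matches the interests recorded in the profile."""
    topic_match = profile["topic_affinity"].get(post["topic"], 0.0)
    predicted_engagement = post["engagement_rate"]
    return 0.7 * topic_match + 0.3 * predicted_engagement

def build_feed(profile, candidates, k=10):
    """Return the k posts scored as most 'relevant' to this user."""
    return sorted(candidates,
                  key=lambda p: relevance_score(profile, p),
                  reverse=True)[:k]

profile = {"topic_affinity": {"politics": 0.9, "gardening": 0.1}}
posts = [{"topic": "politics", "engagement_rate": 0.2},
         {"topic": "gardening", "engagement_rate": 0.8}]
print(build_feed(profile, posts, k=1))  # the politics post wins: 0.69 vs 0.31
```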

With this in mind, we should not be surprised that these platforms know quite a bit about us. In fact, most of us would be amazed at how much (and what) they actually know. A good indication of what they know (and how accurate it is) can be inferred from the ads you see. Sometimes, we are surprised when a timely and appropriate ad pops up in our feed. Our typical reaction is: “how do they know that?” Well, they do know, and they have a financial incentive to. The more accurate a bullseye they can paint on your forehead, the more they can charge advertisers to take a shot.

How much they know and how accurate it is depends on how active you are online. Everything you do online leaves digital footprints from which they create a digital twin of you. In their world, that twin becomes you and is the one put on the auction block for the advertisers. The twin’s profile also determines what these platforms think might be relevant to you. Their feed algorithms sift through the online posts for whatever they think might be relevant to each digital twin, and that is what you get to see when you log into your account. In social media, you see who you are and you are what you see.
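A toy version of that footprint-to-twin pipeline might look as follows. The event format and action weights are assumptions made for illustration; real profiling systems are vastly more elaborate.

```python
from collections import Counter

# Toy footprint-to-twin pipeline: every recorded action nudges a profile of
# inferred interests. The event format and weights are assumptions.

ACTION_WEIGHT = {"view": 1, "like": 3, "share": 5}  # assumed weights

def update_twin(twin, events):
    """twin: Counter mapping topic -> interest weight; events: (action, topic)."""
    for action, topic in events:
        twin[topic] += ACTION_WEIGHT.get(action, 0)
    return twin

twin = Counter()
update_twin(twin, [("view", "politics"), ("like", "politics"), ("share", "gardening")])
print(twin.most_common())  # [('gardening', 5), ('politics', 4)]
```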

Implications

In feeding you what they think might be relevant to you, social media platforms create an echo chamber around you. They erect a bubble in which you only get to see and hear what you already believe in. This aligns with the confirmation bias we are all susceptible to, but it also leads to a strengthening of those beliefs over time. Slowly, question marks disappear and beliefs become imprinted in mindsets in larger and bolder print.

There is also a real concern that the extreme might crowd out the more moderate. The reason is that algorithms are more likely to identify an extreme viewpoint correctly, because extreme viewpoints are easier to catch and label. The potential danger in this is that beliefs, besides being solidified over time, might become more extreme. In other words, because algorithms pick up extremes more accurately, we might get to see them more often, and this might slowly push our own beliefs towards the extreme. This is indeed a worrisome thought.

As an educator, my main concern is that persistent reinforcement of certain beliefs leaves them more fortified and, hence, more difficult to change. Put differently, a slow process of petrification might be at work in which one becomes a prisoner of one’s own beliefs. When that happens, it becomes virtually impossible to question beliefs, open one’s mind to opposing viewpoints, or recalibrate beliefs based on new facts. In other words, I worry that the way feed algorithms filter posts subverts learning.

Stimulating thought, reflection, and learning

From a pure educational perspective, we should rethink feed algorithms. They could be redesigned to promote learning, but that would require a perspective quite different from the short game social media platforms are playing now.

Critical thinking is developed through exposure to, and reflection on, opposing or contrarian viewpoints. In fact, as I argue in my book, Rough Diamonds, variance is the source of all learning. To enable and stimulate learning, feed algorithms should do exactly the opposite of what they are doing now. They should feed opposing viewpoints in an inquisitive manner, especially when the beliefs held are harmful or are not supported by any factual evidence.
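To make that alternative concrete, here is a minimal sketch of a ranking rule that rewards credible disagreement instead of agreement. Everything in it is hypothetical: the stance scale, the credibility measure, and the weighting are assumptions meant only to show that “optimize for challenge” is as implementable as “optimize for engagement”.

```python
# Sketch of the opposite design: rank posts by credible disagreement rather
# than by agreement. The stance scale, credibility measure, and weighting
# are all assumptions made purely for illustration.

def learning_score(user_stance, post, diversity_weight=0.6):
    """user_stance and post['stance'] lie in [-1, 1] on some topic axis."""
    disagreement = abs(user_stance - post["stance"]) / 2.0  # 0 = echo, 1 = opposite
    credibility = post["credibility"]                       # 0..1, assumed known
    # Reward well-sourced posts that challenge, not echo, the user.
    return diversity_weight * disagreement + (1 - diversity_weight) * credibility

posts = [{"stance": 0.9, "credibility": 0.6},   # echoes the user
         {"stance": -0.7, "credibility": 0.9}]  # credible contrarian view
feed = sorted(posts, key=lambda p: learning_score(0.8, p), reverse=True)
print(feed[0])  # the contrarian post ranks first (0.81 vs 0.27)
```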

I am not going to hold my breath on this one. There is just too much money in the short game. And too many of us are happy to be kept on a leash. We should all – at least those who still can – recognize the far-reaching implications of what is playing out in front of our eyes.
