Media Literacy When the Platforms Are Complicit


In June of 2016, Twitter pitched RT -- a propaganda arm of the Russian government -- on a paid package to increase RT's visibility on Twitter during the US elections. The image below is included in the full article on BuzzFeed:

Email from Twitter Sales to RT

As you can see from the email, Twitter's offer to a propaganda arm of a foreign government included an elections specialist and early access to new features.

Let's just pause here: for a million dollars, a US tech company was willing to provide consulting support to a propaganda arm controlled by a foreign government advertising in our elections.

This is how our democracy is sold. And given the amount of money spent on both politics and tech, the price isn't even very high. RT appears not to have taken Twitter up on its offer, probably because it's cheaper to staff convincing troll accounts to manipulate Twitter users -- thanks in large part to Twitter's pathetic efforts at addressing bots and trolls on its site.

And, of course, Twitter is far from alone in pursuing advertising revenue from questionable sources. Facebook and Google eagerly accepted money from, and provided consulting support to, racist campaigns. This is how data collection in the pursuit of targeted advertising works: platforms collect data on as many people as possible and sell access to anyone who can pay. The process of online advertising is so opaque and technical that it allows companies to evade scrutiny.

Here is my question to adults working with K12 and college students on information literacy: how do you make students aware that corporate social media platforms and search engines are part of the structure that makes misinformation thrive?

How do we reconcile the fact that when we use Google to search for verification of a story, we are providing Google with data about us that can, in turn, be used to serve us misinformation -- and that if the client pays enough, Google will provide a consultant to help the misinformation merchants do it better?

How do we help students (and, let's be honest here, other adults) understand that when Facebook or Twitter or other services "recommend" something to us, the recommendation is filtered through who has paid to access our attention?

As adults who care about helping people understand our information environment, what steps do we take to ask meaningful questions about the information we read, and believe, and share?

A lot of conversations about media literacy focus on the need to teach youth the skills to disentangle truth -- or a reliable version of it -- from misinformation. While this is important, it is incomplete. Misinformation is an over-18 problem: in the US, the vast majority of K12 students didn't vote in the 2016 election. Adults need this training as much as -- if not more than -- kids, and we can't teach it well without at least a rudimentary understanding of the subject ourselves.

So: how are we teaching ourselves, and our peers, to do better?

What does it mean to do informal professional development, in the form of Twitter chats, on a platform that actively tried to sell our attention to the propaganda arm of a foreign government?

More importantly, how do we reconcile or explain this conflict?

I don't have solid or satisfying answers to any of these questions, but the answers start with acknowledging the depth of the problem, and the shallowness of our understanding.