We are all aware of the Mueller Report and the findings of our federal intelligence agencies: a Russian company, the Internet Research Agency (IRA), was hired to use social media advertising on Facebook, Instagram and Twitter to engage likely Trump supporters during the 2016 presidential election. The IRA targeted people whose interests matched those of existing Trump supporters and showed them sponsored content within their feeds.
From 10,000 feet, this approach is vanilla. It is the approach every legitimate advertiser takes to cost-effectively reach its target audience. If I’m engaging with mommy blogs, it’s appropriate to target me with content about children’s clothing. If I’m watching extreme sports videos, it’s appropriate to target me with content about GoPro cameras. As consumers, we have learned to expect this, and we may sometimes even appreciate learning about relevant products and services we might not otherwise have heard about.
This ad revenue is how Facebook (which also owns Instagram) and Twitter make their money. And the more specific and unique the targeting desired, the more expensive it is for the advertiser. When advertisers are able to get users to engage with their sponsored content (like/share), it significantly increases the reach and impact of their message, and their return on investment.
For me, it is when the sponsored content on social media veers away from consumer or business products and services into informational content that it stops being advertising and slides into “propaganda”. And the slope is especially slippery when the informational content looks and feels like news or opinion from a credible source.
The reason informational content changes the calculus is the “sleeper effect”. This is a concept from social psychology: when people see a message many times, over time they remember the content but lose track of the source. So even if they knew the first three times they saw a message that it was coming from a less credible source, over time the repetition “wins” and the message remains in their brain.
Social media is so perfect a medium for the sleeper effect that it’s almost diabolical. As humans, we are most influenced and persuaded by the people we perceive to be most like us. And, at its core, every social media user experience is built on encouraging interactions among people who are “like us”: our close, loose and aspirational connections. Because the majority of our feed comes from and through these people “like us”, we do not have our guard up when engaging with the content they like and share.
When we are scrolling through our feeds, we are far more likely to be “processing spontaneously”: reacting quickly and intuitively rather than thoughtfully evaluating what we see. This is harmless when we are reacting to the baby, pet and food images shared by family and friends. But when the same feed also contains political, opinion or “news” headlines, it means social media users are not thinking critically about the source of that information.
An audience in a social media experience is primed to be highly influenced: they are interacting with people like themselves, and they are processing what they see spontaneously. When informational content is shared, content that is not directly about anyone in the friend group, it is consumed with a highly uncritical eye. And that makes people far more susceptible to the sleeper effect when they see it shared again and again across their friend group.
The media and lawmakers are pressuring Facebook and Twitter to regulate their sponsored political content more tightly in this election cycle. Both companies earn their revenue from advertising, and neither wants to voluntarily throttle potential ad revenue. Just as importantly, neither wants to be the arbiter of the “goodness” or “truthfulness” of the informational content shared on its platform.
Social media firms do staff teams that review reported content against their established standards, but these standards generally cover taboo material like exploitation of children, violence and hate speech. They do not address content that presents facts out of context or carries misleading headlines about a particular political party or candidate.
The “I am so-and-so and I approve this message” disclaimers on radio and TV make those ads easy to recognize as political. But unlike regulated media, social media allows any content from any source to be promoted. Anyone can pay to promote any blog post, news article or video they want, to whatever audience they select. This informational content doesn’t “look” like a political ad, nor does it trip any of the platforms’ established standards; it looks like just another piece of content that people share with each other. Because it hides in plain sight, there is no reasonable way for Facebook or Twitter to review and assess the truthfulness or intent of every piece of sponsored content on their platforms. It’s just too big a job.
The social media firms appear to want to take more of a “Public Service Announcement” approach: educating their users to be wary of the sources of the informational content shared on Facebook, Twitter or Instagram. However, this strategy of forewarning and inoculation is likely to backfire, because people never want to believe that they were, and still may be, actively manipulated. No one wants to admit, to themselves or to others, that they were fooled.
Net net, social media is the perfect place to share misinformation that “looks like” credible information. The people using social media aren’t in a mindset to critically evaluate the source every time they see a message. And the social media firms aren’t equipped to review every piece of sponsored content for its truthfulness, context and lack of bias. With that in mind, there just isn’t a clear and easy answer for preventing this same kind of “interference” from happening again in the current election cycle.