One of the greatest threats to our modern societies has gotten worse during the Coronavirus pandemic – fake news. As Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization (WHO), said at a February gathering of foreign policy and security experts, “we’re not just fighting an epidemic; we’re fighting an infodemic.”
Anyone who values trustworthy information knows that when Dr. Tedros says that fake news “spreads faster and more easily than this virus,” he is, unfortunately, right. Over the course of the COVID-19 pandemic, we’ve seen all kinds of misinformation circulating, from claims that drinking alcohol protects against the virus to the belief that the new 5G mobile network is spreading the disease.
The so-called first social media pandemic has fake news firing on all cylinders, especially around three major topics: conspiracy theories about the virus’s origins, fraudulent cures, and minimization of the outbreak’s severity. And it’s using fake news’ favorite channel to reach more people – social media. The battlefield is well known to experts: social media platforms have been used for years to spread misinformation with malicious intent.
However, given the dire nature of today’s crisis, tech companies seem more committed than ever to stopping the infodemic. From Facebook proactively reaching out to users who have interacted with harmful Coronavirus misinformation to Google linking to relevant WHO information in search results and removing videos promising false cures, companies are already playing their part.
Here’s how those big tech companies are leading the battle against misinformation.
The Unlikely Arbiters of Truth
If you were to ask Mark Zuckerberg, Jack Dorsey, or Sundar Pichai whether they ever wanted their platforms to be in a position where their actions could lead to harm, chances are they would strongly disagree. Yet that is exactly where they are right now: their platforms have to arbitrate the content shared on them in search of the truth. And they are doing a pretty good job, especially given their recent history with fake news.
Tech enterprises are investing their vast resources in three main “weapons” to stop the COVID-19 infodemic: promoting trusted information, removing unreliable information, and preventing misinformation from being shared in the first place. It’s a collective effort unlike anything we’ve seen before, and it has translated into concrete measures that aim to curb the spread of misinformation. These include:
- Google is promoting trusted results for Coronavirus-related searches that link to public health organizations’ portals. Additionally, the results page offers links to the World Health Organization and the Centers for Disease Control and Prevention, highlighted in bright red badges.
- Google has also modified YouTube’s results around COVID-19, making changes to its monetization strategy and adding new features centered on the pandemic. The company is also giving away ad space for trusted organizations to provide relevant information in prominent spots (a measure also taken by Amazon, Facebook, and Twitter).
- Facebook is working with independent fact-checkers such as the Associated Press and Reuters to analyze Coronavirus articles for false claims. Articles deemed fake are flagged by Facebook, which attaches a warning about the veracity of the information they contain. Additionally, it limits the spread of flagged articles in news feeds and groups.
- Facebook has limited WhatsApp’s ability to mass-forward messages, in an attempt to stop the rampant spread of misinformation ailing the popular messenger.
- Both Amazon and Facebook are targeting sellers who want to profit from the crisis, taking down products that promise false cures and treatments, as well as those advertised as preventing infection.
- Google, Facebook, and Twitter have all announced policies against ads capitalizing on products essential to fighting the pandemic, such as face masks and hand sanitizer.
While all of these measures might seem pretty basic, they constitute novel efforts in the fight against misleading information. Naturally, we’re still far from a conclusive assessment of how effective they truly are, but the early results look promising. Does that mean tech is winning the battle against fake news? Sadly, no.
Tech’s Limitation in the Infodemic Battleground
It’d be naive to think that the solution to one of the internet’s biggest problems would be so easy to achieve. Even with these measures in place, the limitations of the tech companies’ efforts are easy to see. Facebook and Twitter still show wildly misleading posts, Amazon still sells scam products promising miraculous cures, and Google can still be used to reach content based on fake claims.
It isn’t hard to imagine why. Even with all the warnings, the reduced sharing capabilities, and the collaboration with external partners, these platforms simply can’t cope with the vast amount of content being posted about the Coronavirus. Since the whole world is talking about it, a significant portion of that content will inevitably fall through the cracks.
Even if these companies took a harder stance on COVID-19 information and started cracking down on articles and sites that seem even remotely false, the whole thing could backfire: the people behind the fake news and conspiracy theories would then point to the tech firms as censors preventing the public from knowing the truth.
It’s impossible to imagine a scenario where tech companies could escape those accusations. In times of flat-earthers, climate-change deniers, anti-vaxxers, and conspiracy theorists of pretty much everything under the sun, the best we can hope for is that these companies perfect their detection algorithms to catch as much false information as possible before it even enters circulation.
The tech industry as a whole is looking to the vast potential of artificial intelligence to rapidly process an ever-increasing amount of content and flag it appropriately. Unfortunately, a system sophisticated enough to do that takes a considerable amount of time to train. This means we can’t expect AI to carry the full weight of the fight against the infodemic, because its algorithms aren’t mature enough yet.
AI’s current (and future) limitations prove that there’s only so much it can do. As always, technology, as a tool, is only as good as the people using it. It’s up to us, humans, to provide the missing piece and keep misinformation from reaching our forward-prone grandparents.
The Human Touch
All of that points toward the same strategy for people fighting misinformation during the Coronavirus pandemic – we need to exercise our critical thinking to the fullest before we hit the share button. The SIFT technique, developed by digital literacy expert Mike Caulfield, is a good starting point. The method’s name is an acronym for Stop, Investigate the source, Find better coverage, and Trace information to its original context – all crucial steps everyone should take when consuming sensitive information.
Of course, the battle doesn’t end there. Digitally literate people may find it easier to employ the SIFT technique, but what about those who take everything they read online at face value? We’re not talking about people who maliciously spread misinformation for their own purposes, but rather about all those who unsuspectingly trust anything they find online. They need us to be patient enough to teach them how to avoid being scammed, while also staying vigilant to keep them from falling into the misinformation trap.
Additionally, we need to take the misinformation issue more seriously and attack it at a more general level. That includes many things, from incorporating online literacy programs into our schools to holding online platforms accountable for what they share beyond the Coronavirus. Some of the biggest social media sites are clamping down on misinformation about the pandemic, but they didn’t do as much when asked to in the face of similar threats (such as political interference in elections, climate change denial, and anti-vaccination movements).
Given that misinformation rarely stays in the online realm (in fact, most of the time it spills offline with harsh consequences), it’s time to stop turning a blind eye to a problem we all know existed long before this pandemic. Technology can be a fantastic aid in that fight, and we should keep working toward more sophisticated tools to tackle misinformation in the coming years. But it will all be for nothing if we can’t develop strategies to handle the human factor as well.