Google And Facebook Are Inadvertently Funding The Global COVID-19 Misinformation Pandemic
Joseph Brookes reports in Which-50 on how Big Tech’s advertising technology is being used to spread COVID-19 misinformation, and quotes Sally Hubbard, Director of Enforcement Strategy at the Open Markets Institute.
Advertising technology belonging to tech giants Google and Facebook is fuelling the global spread of COVID-19 misinformation, despite both companies’ efforts to stem the widespread fraudulent conduct.
And the problem again exposes the great weakness of self-regulation — the platforms themselves are among the beneficiaries of commercial malfeasance, albeit inadvertently, due to the way their algorithms optimise for engagement.
On 11 March the World Health Organisation declared COVID-19 a pandemic, “a crisis that will touch every sector”. As of 5 April, there were 1.2 million confirmed cases of coronavirus and more than 64,000 deaths worldwide, although Australia has so far avoided the worst-case outcomes emerging in the US, UK, Italy and Spain.
More than a month before it declared a pandemic, WHO had already warned that information associated with the virus would cause an “infodemic”.
Since then, the volume of COVID-19-related information spreading online has eclipsed that of any other event.
“I’ve never seen anything like it,” says Sydney University Associate Professor Dr Adam Dunn, an expert in biomedical informatics and digital health.
Dunn and his colleagues use social media data and machine learning to monitor what people are exposed to online and how that information affects their behaviour. His recent work has focused on the effect of the online “anti-vax” movement, in which groups have sought to cast doubt on the safety and importance of vaccinations.
“When we do that, when we collect those data on Twitter, we have no problem at all. We can do that very easily within the free API endpoints that we have to collect the tweets.”
But the COVID-19 crisis is different. Dunn says neither he nor his global peers have been able to keep up with the sheer volume of tweets about the virus.
Dunn’s analysis of COVID-19 keywords last week showed between 11 and 13 million related tweets every day. But he stresses that’s a conservative estimate based on select keywords on one platform.
“It’s probably ten times what we’re catching now,” Dunn told Which-50.
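As an illustration of the kind of keyword filtering and daily counting described above, here is a minimal Python sketch. It is not Dunn’s actual pipeline: the keyword list, the tweet fields and the inline sample records are hypothetical stand-ins for data collected from a platform API.

```python
from collections import Counter
from datetime import datetime

# Hypothetical stand-ins for tweets collected from a platform API;
# a real pipeline would stream millions of records per day.
tweets = [
    {"created_at": "2020-04-01T08:15:00Z", "text": "New COVID-19 case numbers released today"},
    {"created_at": "2020-04-01T09:02:00Z", "text": "This miracle coronavirus cure is spreading fast"},
    {"created_at": "2020-04-02T11:47:00Z", "text": "Stay home and wash your hands"},
]

# A small, conservative keyword list: matching only selected terms
# will undercount the true volume, as the article notes.
KEYWORDS = ("covid-19", "covid19", "coronavirus", "sars-cov-2")

def matches(text: str) -> bool:
    """Case-insensitive substring match against the keyword list."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

# Count keyword-matching tweets per calendar day.
daily_counts = Counter(
    datetime.fromisoformat(t["created_at"].replace("Z", "+00:00")).date()
    for t in tweets
    if matches(t["text"])
)

for day, count in sorted(daily_counts.items()):
    print(day, count)
```

At the volumes the article describes (upwards of 11 million matching tweets a day), the same filtering and counting logic would run over a streaming collection rather than an in-memory list, but the principle is the same.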
Facebook is even harder to monitor, according to Dunn, because it is a much more balkanised platform and the company has been wary of sharing data with researchers since the Cambridge Analytica scandal.
Dunn says it could take years for researchers to get a handle on the flow of information online, including the proportion that is misinformation. But misinformation is already at dangerous levels.
“We’ve seen misinformation spreading constantly about alternative cures [and] all sorts of things that can affect people’s health behaviours that are just wrong.”
The misinformation is coming from the usual places. “You won’t be surprised to know that Alex Jones was trying to sell toothpaste that was going to cure or prevent coronavirus,” Dunn says.
But an increasingly connected and concerned global population is helping it spread like wildfire, and there is no shortage of groups taking advantage.
“People are opportunistic,” Dunn says, “And if the public health crisis has shown us anything, it’s that plenty of people are very agile in how quickly they can repurpose what they’re doing to make some more money or to gain reputation or to advance their careers in some form or another.”
And others can be forgiven for taking the bait.
“It’s much harder [to ignore] if you’re currently fearful, anxious — you feel like a loss of control. And you see an advert is about something that you care about, which is COVID-19. And you’re much more likely to click on it.”
Read the full article on Which-50 here.