Facebook recently put bots in charge of picking articles for its Trending section – a decision that was met with backlash after the algorithms surfaced a fake report about Fox News’ Megyn Kelly within days of being implemented.
Several weeks later, it seems the problem still hasn’t been fixed: The Washington Post has been following Facebook’s Trending topics from August 31 to September 22 and found five fake stories, as well as three grossly inaccurate reports highlighted by the social network’s automated system.
WP noted that Facebook personalizes its trends to each user and that it only tracked articles that came up during work hours; as such, it’s possible that Facebook may have displayed even more fake posts across its networks.
Wondering why Facebook doesn’t just go ahead and hire human editors to tackle this? It’s already tried that. The company had a team of editors in place prior to installing its bots, but it fired them after it was called out in May for censoring conservative news and links to Wikileaks’ DNC email dump.
So while the solution might necessitate the involvement of human oversight, it may be a while before Facebook can revisit that idea.
Ultimately, it seems the social network’s algorithms just aren’t up to scratch when it comes to identifying credible sources of information, and its system for redress is presently weak.
The trouble, computer scientist Walter Quattrociocchi told WP, is that, “When Facebook selectively injects fake news into those highly personalized news diets, it risks further polarizing and alienating its more conspiracy-minded users.”
With more people turning to social networks to get their news fix, that could spell danger for societies at large.
This article was written by Abhimanyu Ghoshal from The Next Web and was legally licensed through the NewsCred publisher network.