
Facebook assesses trustworthiness of users based on fake news feedback

File photo of Facebook's logo reflected in a pair of glasses. Photo credit: Dominic Lipinski/PA Wire

Facebook has confirmed that it measures how trustworthy users are when they report fake news posts, as it attempts to tackle misinformation across the social network.

Users can flag posts on Facebook that they believe may be fake, and these reports are sent to a team of fact-checkers.

In a bid to help them sift through these reports more effectively, the tech giant has implemented a process that weighs how reliable individual users are based on the feedback they have previously provided.

This means that someone who correctly reports fake news may be deemed more trustworthy than someone who has been found to be falsely flagging up posts as fake news.
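To illustrate the kind of weighting described above, here is a purely hypothetical sketch in Python. It is not Facebook's undisclosed process and does not represent a site-wide reputation score; it simply shows one way a fact-checking queue could prioritise reports according to each reporter's track record. All names (ReportQueue, record_outcome, prioritise) and the smoothing approach are assumptions made for the example.

# Illustrative only -- not Facebook's actual system.
from collections import defaultdict

class ReportQueue:
    def __init__(self):
        # Per-reporter history: reports confirmed as fake vs. rejected by fact-checkers.
        self.confirmed = defaultdict(int)
        self.rejected = defaultdict(int)

    def reliability(self, reporter_id):
        # Fraction of a reporter's past reports that checked out.
        # New reporters start at a neutral 0.5 via simple smoothing.
        c = self.confirmed[reporter_id]
        r = self.rejected[reporter_id]
        return (c + 1) / (c + r + 2)

    def record_outcome(self, reporter_id, was_fake):
        # Update a reporter's history once fact-checkers rule on their report.
        if was_fake:
            self.confirmed[reporter_id] += 1
        else:
            self.rejected[reporter_id] += 1

    def prioritise(self, reports):
        # Order (reporter_id, post_id) pairs so reports from historically
        # accurate reporters reach fact-checkers first.
        return sorted(reports, key=lambda rep: self.reliability(rep[0]), reverse=True)

if __name__ == "__main__":
    q = ReportQueue()
    q.record_outcome("alice", was_fake=True)   # alice's earlier report checked out
    q.record_outcome("bob", was_fake=False)    # bob's did not
    print(q.prioritise([("bob", "post-1"), ("alice", "post-2")]))
    # [('alice', 'post-2'), ('bob', 'post-1')]

In this sketch the weighting only reorders the fact-checking queue, which is consistent with the limited use Facebook describes below.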

The social network has not specified how it carries out these calculations, but it disputed claims, first published in the Washington Post, that it uses a zero-to-one rating scale to determine a member's level of trust, saying there is no unified score.

"The idea that we have a centralised 'reputation' score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading," a spokesman said.

"What we're actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system.

"The reason we do this is to make sure that our fight against misinformation is as effective as possible."

The process is used only by the team that works on misinformation within Facebook and is not applied elsewhere across the site.

Facebook set out plans to improve its handling of hoaxes and fake news reports back in 2016 by working with third-party fact-checking organisations.

It is not clear whether Facebook has used the system on members within the EU, where the recently introduced General Data Protection Regulation (GDPR) requires companies to be transparent about the way they process personal data.
