The social media giant Facebook has begun assigning its users a reputation score that predicts their trustworthiness on a scale from zero to one.
The previously unreported rating system, which Facebook has developed over the past year, shows that the company's fight against the gaming of its systems has expanded to include measuring the credibility of users in order to help identify malicious actors.
The reputation assessments were developed as part of Facebook's efforts against fake news, Tessa Lyons, the Facebook product manager in charge of fighting misinformation, said in an interview. The company, like others in tech, has long relied on its users to report problematic content, but as it gave people more options, some users began abusing them by falsely reporting items as untrue. Facebook now has to account for that risk before building any new tool to root out fake news.
Lyons also said it is common for people to report something as false simply because they disagree with the premise of a story, or because they are intentionally trying to target a particular publisher.
A user's trustworthiness score, between zero and one, isn't meant to be an absolute indicator of a person's credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of behavioral clues that Facebook now takes into account as it seeks to understand risk.
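Facebook has not disclosed how the score is computed or applied, so the following is a purely hypothetical sketch of one way a 0-to-1 reporter trust score could be used: weighting "false news" reports by the trust of the user who filed them, so that a few high-trust reports can outrank many low-trust ones. The function name and the weighting scheme are assumptions for illustration, not Facebook's actual method.

```python
# Hypothetical illustration only: Facebook has not published its scoring
# or weighting logic. Here we assume each report on a story carries the
# reporting user's trust score (0.0 to 1.0), and a story's review
# priority is the sum of those weights.

def report_priority(reporter_trust_scores):
    """Return a review priority for a story, where each report counts
    in proportion to the reporting user's assumed 0-to-1 trust score."""
    return sum(reporter_trust_scores)

# Three reports from low-trust users vs. one report from a high-trust user:
# the single credible report can outweigh the coordinated low-trust ones.
low_trust = report_priority([0.1, 0.15, 0.1])
high_trust = report_priority([0.9])
print(high_trust > low_trust)
```

Under this (assumed) scheme, mass false-flagging by users who habitually misreport loses influence over time, which matches the article's point that the score helps filter out people who report items as untrue merely because they disagree with them.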
It is not clear what other criteria Facebook uses to determine a user's trustworthiness score, whether all users have a score, or in what ways the scores are used. The company is also monitoring which users tend to flag content published by others as problematic, and which publishers users consider trustworthy.
The trustworthiness assessments come as Silicon Valley, faced with Russian interference, fake news and ideologically motivated actors who abuse company policies, is recalibrating its approach to risk – and its algorithmically driven ways of determining who poses a threat.