In recent months, the social networking giant has beefed up scrutiny of what is posted on its site, looking for fake accounts, misinformation and hate speech, while encouraging people to go on Facebook to express their views.
“A lot of the work of content moderation for us begins with our company mission, which is to build community and bring the world closer together,” Peter Stern, who works on product policy stakeholder engagement at Facebook, said at a recent event at St. John’s University in New York City.
Facebook wants people to feel safe when they visit the site, Stern said. To that end, it is on track to hire 20,000 people to tackle safety and security on the platform.
As part of its stepped-up effort, Facebook works with third-party fact-checkers and takes down misinformation that contributes to violence, according to a blog post by Mark Zuckerberg, Facebook’s CEO.
But the most popular content, often dubbed “viral,” is frequently also the most extreme. Facebook demotes posts it deems false, cutting their future views by 80 percent, Zuckerberg said.
Facebook recently removed accounts followed by more than 1 million people that it said were linked to Iran but were made to look as if they had been created by people in the U.S. Some focused on the upcoming midterm elections.
The firm also removed hundreds of American accounts that it said were spamming political misinformation.
Still, Facebook has been criticized for what at times appear to be flaws in its processes.
Vice News recently posed as all 100 U.S. senators and bought fake political ads on the site. Facebook approved them all, then said it had made a mistake.
Politicians in Britain and Canada have asked Zuckerberg to testify on Facebook’s role in spreading disinformation.
“I think they are really struggling and that’s not surprising, because it’s a very hard problem,” said Daphne Keller, who used to be on Google’s legal team and is now with Stanford University.
“If you think about it, they get millions, billions of new posts a day, most of them some factual claim or sentiment that nobody has ever posted before, so to go through these and figure out which are misinformation, which are false, which are intending to affect an electoral outcome, that is a huge challenge,” Keller said. “There isn’t a human team that can do that in the world, there isn’t a machine that can do that in the world.”
While it has been purging its site of accounts that violate its policies, the company has also revealed more about how decisions are made in removing posts. In a 27-page document, Facebook described in detail what content it removes and why, and updated its appeals process.
Stern, of Facebook, supports the company’s efforts at transparency.
“Having a system that people view as legitimate and basically fair even when they don’t agree with any individual decision that we’ve made is extremely important,” he said.
The stepped-up efforts to give users more clarity about the rules and the steps to challenge decisions are signs Facebook is moving in the right direction, Stanford’s Keller said.
“We need to understand that it is built into the system that there will be a fair amount of failure, and there needs to be an appeals process and transparency to address that,” she said.