Upset by something you see on Facebook? The social media giant really, really wants you to report it.
The social network on Sunday updated its community standards guidelines, clarifying which types of content are inappropriate for the platform while emphasizing the need for users to report offensive posts. The guidelines underscore the difficulty of policing a network of more than 1 billion people, and some experts say Facebook needs to do more to keep objectionable content from slipping through its existing monitoring system.
“If people believe Pages, profiles or individual pieces of content violate our Community Standards, they can report it to us by clicking the ‘Report’ link at the top, right-hand corner,” Facebook representatives Monika Bickert and Chris Sonderby wrote in a blog post about the update.
Simple enough.
The update to Facebook’s community standards doesn’t introduce new rules, but it does add detail in specific areas, including hate speech, nudity and violent content. There’s no one-size-fits-all approach, as text from the updated hate speech guidelines shows:
Organizations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook. As with all of our standards, we rely on our community to report this content to us.
People can use Facebook to challenge ideas, institutions, and practices. Such discussion can promote debate and greater understanding. Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others about that hate speech. When this is the case, we expect people to clearly indicate their purpose, which helps us better understand why they shared that content.
Facebook itself acknowledges the challenge in determining what kinds of posts need to be taken down.
“We know that our policies won’t perfectly address every piece of content, especially where we have limited context, but we evaluate reported content seriously and do our best to get it right,” the blog post from Sunday reads.
Reached via email by The Huffington Post, a representative for Facebook declined to offer comment beyond the blog entry and an additional post about the new guidelines by Facebook CEO Mark Zuckerberg.
The reliance on self-policing may not be surprising, given that Facebook has 1.39 billion monthly users to patrol. The company does reportedly employ a workforce of laborers to scrub out the truly bad stuff, like beheadings and child pornography, though that work takes a heavy toll on the people doing it.
“Workers quit because they feel desensitized by the hours of pornography they watch each day and no longer want to be intimate with their spouses. … Every day they see proof of the infinite variety of human depravity,” Adrian Chen wrote for Wired. He reported that over 100,000 people do such work for social media companies worldwide.
While content moderation has been an issue since the dawn of consumer Internet connections, the topic has loomed particularly large in recent months. Just last week, Twitter formally banned revenge porn, explicit content that’s spread without a subject’s consent. In February, French President François Hollande asked “major Internet firms” to crack down on hate speech following the January attacks by Islamic militants on a satirical newspaper and a kosher supermarket in Paris. Facebook and Twitter have also been working to block accounts linked to the Islamic State group.
And there’s even more from last year, when #GamerGate supporters attacked feminist voices in the gaming community and online trolls harassed the woman at the center of Rolling Stone’s controversial story about campus sexual assault.
“Facebook has the poor bastards that are just watching this endless tide,” Finn Brunton, an assistant professor of media, culture and communication at New York University, told HuffPost in a phone interview regarding content moderation on social media. “You can automate so many different aspects of building a giant community platform on the internet, but it’s hard to automate [moderation].”
Mindless automation probably isn’t the answer, either. Imagine if a group could get someone banned from a social network simply by flooding the report button because they dislike that person’s opinion. It’s happened before.
“The person who can figure out how to scale moderation … is going to be the next super gigantic dot com success,” Brunton told HuffPost.
He expects that “deep learning techniques” will eventually allow computers to analyze the sentiment behind posts and remove offending content without the aid of a human moderator. Facebook is already developing such techniques, including one that can identify whether a user looks drunk in a picture they post online. A similar approach could be applied to many other kinds of content.
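To make the idea concrete, here is a minimal sketch in Python of the kind of automated text classifier Brunton is describing. It is purely illustrative: the training posts, labels and review threshold are invented for this article, and it bears no relation to any system Facebook actually runs. A real moderation pipeline would train on millions of examples and still route borderline cases to human reviewers.

```python
# A toy sketch of automated content flagging via text classification.
# The training data and threshold below are hypothetical; this is not
# Facebook's system, just an illustration of the general technique.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples: 1 = violates policy, 0 = acceptable.
posts = [
    "I hope you have a great day",
    "Check out photos from our trip",
    "People like you don't deserve to live",
    "Get out of this country or else",
]
labels = [0, 0, 1, 1]

# Turn raw text into word and word-pair features, then fit a classifier.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(posts)
model = LogisticRegression()
model.fit(features, labels)

def flag_for_review(post, threshold=0.5):
    """Return True if the post should be queued for human review."""
    score = model.predict_proba(vectorizer.transform([post]))[0][1]
    return score >= threshold

# Likely flags this post, given the toy training data above.
print(flag_for_review("You don't deserve to live here"))
```

Note the design choice in the sketch: the classifier only queues posts for review rather than deleting them outright, precisely because of the false-positive problem that makes mindless automation risky.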
In any case, it’s time for something to change, according to Brunton.
“We’ve passed through a period where it’s acceptable for companies to rely on their users,” he told HuffPost. “The free speech of one group leads to muteness and withdrawal of threatened groups.”