Facebook released its internal moderation guidelines this week, a 27-page document explaining the criteria used to decide which user content may be published and which will be removed for violating its standards. An earlier version of the rules gave only a general overview, with the details kept secret from the vast majority of its 2.2 billion users.
The rules cover posts inciting hatred and terrorism, content that sexually exploits or endangers children, ads promoting escort services and other red-light activities, as well as commercial spam, fake news and intellectual property infringement.
Following the Cambridge Analytica scandal, in which the data of 87 million profiles was compromised (214,134 Italian users were potentially involved), Mark Zuckerberg’s platform has committed itself to restricting the sharing of “commercial spam” and preventing “misleading advertising, fraud and security violations” from going online. What’s new is that users whose posts are removed will be able to appeal to a review body set up by Facebook itself, which will issue a final decision within 24 hours. Although the measure was already being implemented, it has now been extended to pictures, videos and text. In the coming months, it will cover other kinds of violations, too.
To detect forbidden content, Facebook will rely on a combination of artificial intelligence and user reports, said Monika Bickert, vice president of Global Policy Management. Flagged content is reviewed by the Community Operations team, which will work around the clock in more than 40 languages. Facebook has increased its number of content reviewers by 40 percent since last year, to 7,500 people. In his testimony to Congress, Zuckerberg said 20,000 digital inspectors will be hired. As for privacy, Facebook claims it does not disclose its users’ identities to clients and does not sell their data.
The reform comes with an eye to May 25, the day the new European rules on data protection (the General Data Protection Regulation, or GDPR) take effect. Most of the norms Facebook clarified were specifically required by the EU Commission: the terms of service, the prohibition on amending them without 15 days’ notice, and the obligation to set up a system for handling complaints. Platforms must also provide a list of independent mediators in case of disputes. Kids aged 13 to 15 will need parental authorization to use social media, while the age limit in Italy is 16. Zuckerberg called the GDPR “very positive,” and European Commissioner for Justice Vera Jourová said she was pleased with the way Facebook has reported its changes.
But not everything is as smooth. Reuters has revealed that Facebook transferred the accounts of 1.5 billion users out of European jurisdiction, thereby shielding itself from potential sanctions. Moreover, according to The New York Times, the new GDPR rules will further strengthen the Google-Facebook duopoly in the online advertising market. The GDPR requires consent for the processing of personal data, with fines of up to €20 million for violations. Since most companies reach users through social platforms, the new regulation will reinforce a duopoly expected to draw 49 percent of global digital advertising spending in 2018.
Italian privacy guarantor Antonello Soro met this week with a Facebook delegation led by head of data protection and privacy Yvonne Cunnane. The company committed to revealing which political marketing firms have access to users’ data. “We’re expecting full cooperation,” said Soro. “If we don’t see an appropriate safeguard of personal data, we can impose prescriptive measures and heavy fines” established by the GDPR. Soro will coordinate with other EU authorities in the “Social Media Working Group” within the “Article 29” Working Party.