Last week, Facebook announced it had removed 32 Pages and accounts from its platform and Instagram for “coordinated inauthentic behavior” — the term Facebook uses for networks of accounts that work together to mislead people about who they are and to spread malicious content. The removals included eight Facebook Pages, 17 Facebook accounts and seven Instagram accounts.

“This kind of behavior is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing,” wrote Facebook in its July 31 announcement that the accounts had been taken down.

One week later, Facebook took down four more Pages belonging to conspiracy theorist and Infowars founder Alex Jones for repeatedly posting content that violated the company’s Community Standards. (Spotify, Apple, YouTube and others have also restricted or removed Jones’ content on their platforms.)

Facebook’s decisions to take down content, and the accounts attached to it, are a direct result of the fallout after the company failed to identify a surge in misinformation campaigns plaguing the platform during the 2016 US election cycle. Since admitting it did not do enough to police malicious content and bad actors, Facebook has pledged to prioritize its content review process.

How do these efforts affect marketers? While Facebook’s actions are aimed at people and organizations with malicious intent, marketers looking to build and foster brands on Facebook need to be aware of Facebook’s rules around content — especially since the content review policies and systems apply to Facebook ad policies as well. We’ve put together a rundown on Facebook’s content review process, the teams involved and how it’s working so far.

Removing content vs. limiting distribution

In April, Facebook released its first-ever public Community Standards guidelines — a rule book outlining the company’s content policies, broken down into six categories: violence and criminal behavior, safety, objectionable content, integrity and authenticity, respecting intellectual property, and content-related requests. At the time, Facebook said it was using a combination of artificial intelligence and reports from people who flag posts for potential abuse. Posts reported for violating content policies are reviewed by an operations team made up of more than 7,500 content reviewers.

“Here’s how we think about this: if you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook.”

Regarding the review process, Facebook says its content review team members are assigned a queue of reported posts to evaluate one by one. Facebook says the reviewers are not required to evaluate any set number of posts — there is no quota they must meet when it comes to the amount of content being reviewed.

In a July 24 Q&A on Election Integrity, Facebook’s News Feed product manager, Tessa Lyons, said the company removes any content that violates its Community Standards, but only reduces the distribution of problematic content that may be false without violating those standards. According to Lyons, Facebook will still show stories rated false by fact-checkers, but ranks them lower in the News Feed so that dramatically fewer people see them. (According to Facebook’s data, demoting a story in the News Feed cuts its future views by more than 80 percent.)

Lyons addressed criticism around Facebook’s policy to limit the distribution of content identified as false versus removing it, explaining it’s not Facebook’s policy to censor content that doesn’t violate their rules.

“Here’s how we think about this: if you are who you say you are and you’re not violating our Community Standards, we don’t believe we should stop you from posting on Facebook. This approach means that there will be information posted on Facebook that is false and that many people, myself included, find offensive,” said Lyons.

More recently, Facebook offered a deeper dive into why it would remove a Page.

“If a Page posts content that violates our Community Standards, the Page and the Page admin responsible for posting the content receive a strike. When a Page surpasses a certain threshold of strikes, the whole Page is unpublished.”

Facebook says the effects of a strike vary depending on the severity of the content violation, and that it doesn’t give specific numbers in terms of how many strikes a Page may receive before being removed.

“We don’t want people to game the system, so we do not share the specific number of strikes that leads to a temporary block or permanent suspension.” Facebook says multiple content violations will result in an account being temporarily blocked or a Page being unpublished. If an appeal is not made to reinstate the Page — or if an appeal is made, but denied — the Page is then removed.

Announced in April, the appeal process is a new addition to Facebook’s content review system.

Facebook’s content review teams & technology

In recent months, Facebook has said multiple times that it would hire 20,000 safety and security employees over the course of this year. As of July 24, the company confirmed it had hired 15,000 of those 20,000 planned recruits.

The content review teams include a combination of full-time employees, contractors and partner companies located around the world, along with 27 third-party fact-checking partnerships in 17 countries. In addition to human review, Facebook uses AI and machine learning technology to identify harmful content.

“We’re also investing heavily in new technology to help deal with problematic content on Facebook more effectively. For example, we now use technology to assist in sending reports to reviewers with the right expertise, to cut out duplicate reports, and to help detect and remove terrorist propaganda and child sexual abuse images before they’ve even been reported,” wrote Facebook’s VP of global policy management, Monika Bickert, on July 17.

Facebook’s content review employees undergo pre-training, hands-on learning and ongoing coaching during their employment. The company says it also has four clinical psychologists on staff, spread across three regions, to design and evaluate resiliency programs for employees tasked with reviewing graphic and objectionable content.

What we know about recently removed content

Regarding the 32 Pages and accounts removed last week, Facebook said it could not identify the group (or groups) responsible, but that more than 290,000 Facebook accounts had followed at least one of the Pages. In total, the removed Pages and accounts had published more than 9,500 organic posts on Facebook and one piece of content on Instagram, run approximately 150 ads (at a total cost of $11,000) and created about 30 Events dating back to May 2017 — the largest of which had 4,700 people interested in attending and 1,400 users who said they would attend.

The Alex Jones Pages were taken down for violating Facebook’s graphic violence and hate speech policies. Before the takedown, Facebook had removed videos posted to the Pages for violating its hate speech and bullying policies and had placed the Page admin, Alex Jones, on a 30-day block for posting the violating content. Within a week, after receiving more reports of content violations, Facebook decided to remove all of the Pages.

Looking beyond these two specific actions, Facebook says it is currently stopping more than a million accounts per day at the point of creation using machine learning technology. The company’s first transparency report, released in May, showed Facebook had taken action against 1.4 billion pieces of violating content, including 837 million counts of spam and 583 million fake accounts. In nearly all categories (including spam, nudity and sexual activity, graphic violence and terrorist propaganda), with hate speech the notable exception, Facebook says more than 90 percent of the violating content was removed before anyone reported it.

In the Q&A on Election Integrity issues, Facebook said it took down tens of thousands of fake likes from the Pages of candidates during Mexico’s recent presidential election, along with fake Pages, groups and accounts that violated its policies or impersonated politicians running for office. (Ahead of the November US midterm elections, Facebook has launched a verification process for any person or group wanting to run political ads, as well as a searchable archive of political ad content, retained for seven years, that lists each ad’s creative, budget and the number of users who viewed it.)

But is it working?

While Facebook’s transparency report offered insight into just how many spam posts, fake accounts and other malicious content the company has identified since last October, there is still work left to do.

Last month, advertisers discovered that Facebook ads containing words like “Bush” and “Clinton” were being removed after being flagged as political ads from advertisers that had not completed verification. A barbecue restaurant ad that listed the business’s location on “President Clinton Avenue” and a Walmart ad for “Bush” baked beans were both removed — most likely the result of Facebook’s automated systems incorrectly identifying them as political ads.

More concerning, a report from UK broadcaster Channel 4’s news program “Dispatches” showed a Dublin-based content review company contracted by Facebook failed to act on numerous pieces of content that violated the platform’s Community Standards. The report also accused Facebook of practicing a “shielded review” process, allowing Pages that repeatedly posted violating content to remain up because of their high follower counts.

Facebook responded to the charge by confirming it does perform “Cross Check” reviews (its term for what the report called shielded reviews), but said the practice is part of a process that gives certain Pages or Profiles a “second layer” of review to make sure policies are applied correctly.

“To be clear, Cross Checking something on Facebook does not protect the profile, Page or content from being removed. It is simply done to make sure our decision is correct,” wrote Bickert, in response to the Channel 4 report.

Ever since admitting Facebook was slow to identify Russian interference on the platform during the 2016 elections, CEO Mark Zuckerberg has said repeatedly that security is not a problem that can ever be fully solved. Lyons spoke to the complicated intersection of security and censorship on the platform during the company’s Q&A on Election Integrity: “We believe we are working to strike a balance between expression and the safety of our community. And we think it’s a hard balance to strike, and it’s an area that we’re continuing to work on and get feedback on — and to increase our transparency around.”

From the Q1 transparency report to its latest takedowns of malicious content, Facebook continues to show it is working to rid its platform of bad actors. The real test of whether the company has made any progress since 2016 may well be this year’s midterm elections in November. As Facebook puts more focus on content and its review process, marketers and advertisers need to understand how these systems may affect their visibility on the platform.


About The Author

Amy Gesenhues is Third Door Media’s General Assignment Reporter, covering the latest news and updates for Marketing Land and Search Engine Land. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs.com, SoftwareCEO.com, and Sales and Marketing Management Magazine.
