Thursday, 29 March 2018

Here's What Facebook Says It's Doing to Protect Election Security

Earlier today, members of Facebook's staff held a small press event with a status update on efforts to prevent its platform from being weaponized to influence major national events like elections.

Last year, Facebook came under fire when it was revealed that it had been weaponized by foreign actors to spread misinformation and divisive content in hopes of influencing the 2016 U.S. presidential election.

Facebook published a transcript of today's remarks, in which VP of Product Management Guy Rosen indicated the network would be focusing on four core areas of election protection:

  1. "Combating foreign interference"
  2. "Removing fake accounts"
  3. "Increasing ads transparency"
  4. "Reducing the spread of false news"

Here's a look at the work Facebook says it's doing in each area.

1. "Removing Fake Accounts"

This might be the most complex and far-reaching area where Facebook will be putting new efforts into place. To remove fake accounts, Facebook's Chief Security Officer Alex Stamos explained, the network has to identify fake identities and fake audiences, alongside false facts and false narratives.

Doing so begins with identifying motives, which boil down to three main areas: influencing public debate, money, and what Stamos called the "classic internet troll."

Fake accounts motivated by the first item on the list range from what Stamos called "ideologically motivated groups" to state intelligence agencies, whose target audiences could exist within their own countries or others.

The second motivator, money, is the most common one. These bad actors often stand to profit financially by driving traffic to their sites -- even if that means linking to false or divisive content.

Countering that, Stamos said, will require decreasing the account's profits by increasing its operational costs -- which is how Facebook has previously curbed activity from spammers. Facebook has made similar efforts in the past to penalize content with "clickbait" link titles that don't necessarily lead to quality or genuine content.

These motivations can vary or even combine depending on the event the actor is trying to influence. That's why Stamos said Facebook will be enlisting the help of external experts who are familiar with the geographical or cultural factors that could shape what different actors are trying to accomplish.

2. "Combating Foreign Interference"

Samidh Chakrabarti, a product manager at Facebook, spoke about how proactive measures against foreign bad actors relate to the efforts to combat fake accounts -- which, he said, are one of the most common ways such bad actors "hide."

At this point, Chakrabarti explained, Facebook blocks "millions" of fake accounts daily as they're being created, which can stop them before they can create and distribute content. Machine learning is said to play a major role here: models have been trained to identify suspect activity without having to scan actual content.
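Facebook didn't detail how these models work, but the broad idea -- classifying a signup as suspect from behavioral metadata alone, without reading any content -- can be illustrated with a minimal sketch. The features, training data, and threshold below are hypothetical stand-ins, not Facebook's actual signals:

```python
# Hypothetical sketch: flagging new accounts as suspect from signup
# metadata alone, with no content inspection. Features, data, and the
# threshold are illustrative stand-ins, not Facebook's actual signals.
from sklearn.ensemble import RandomForestClassifier

# Each row: [signups_from_same_ip_last_hour, seconds_to_complete_signup,
#            email_domain_reputation (0.0-1.0), profile_fields_filled]
X_train = [
    [1, 95, 0.9, 6],   # typical human signup
    [2, 120, 0.8, 5],
    [40, 4, 0.1, 1],   # burst of scripted signups
    [55, 3, 0.1, 0],
]
y_train = [0, 0, 1, 1]  # 0 = legitimate, 1 = fake

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

def should_block(signup_features, threshold=0.8):
    """Block the signup when the model's fake probability is high."""
    prob_fake = model.predict_proba([signup_features])[0][1]
    return prob_fake >= threshold

print(should_block([48, 5, 0.1, 0]))  # likely True: looks scripted
```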

Previously, members of the Facebook community were responsible for reporting suspicious-looking activity, especially anything that might pertain to an election. Now, Chakrabarti said, Facebook will deploy an "investigative tool" that proactively looks for this kind of activity, such as the creation of Pages of foreign origin that are sharing misinformation. Once these Pages are identified, they're sent to Facebook's security team for a manual review to determine whether Community Standards or Terms of Service have been violated.

The efforts here appear to be twofold: machine learning capabilities that stop the creation of these Pages before they can distribute content, and technology that seeks out existing Pages engaged in such activity. For the latter, Chakrabarti said the manual review process is quick, though he didn't provide a specific average time frame.

These tools were used as recently as last December, during the special Senate election in Alabama, when efforts to identify foreign interference led to the discovery of politically focused bad actors based in Macedonia who appeared to be spreading misinformation in the lead-up to that election. They were later blocked from Facebook.

3. "Increasing Ads Transparency"

Part of the effort to remove fake accounts also feeds into measures that will verify the authenticity of the ads those accounts want to run.

That will include a new feature called View Ads, which has been tested in Canada and will roll out globally this summer. As the name suggests, it lets users see, from a Page's "About" section, any ads that Page is currently running.

The summer rollout will come in the months leading up to the 2018 U.S. midterm elections -- and prior to that, said Product Management Director Rob Leathern, a new ad review and verification process will begin, which will require all Page admins to submit government-issued IDs and provide a physical mailing address before they can publish any promoted content. 

That way, Facebook can confirm the physical location and identification of advertisers, in part by physically mailing a letter to the address provided with an access code that can only be used by that specific admin for that particular Page. In addition to this process, advertisers must declare which, if any, candidate, organization, or business they represent.
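Facebook didn't describe the mechanics of these mailed codes, but a common way to make a code usable only by one admin for one particular Page is to store it keyed to that exact pair and invalidate it on first use. Here's a minimal sketch of that idea; the function names and storage are assumptions, not Facebook's implementation:

```python
# Hypothetical sketch of a mailed access code bound to one admin/Page
# pair and valid for a single use. Storage and flow are illustrative.
import secrets

_pending_codes = {}  # (admin_id, page_id) -> code

def issue_code(admin_id: str, page_id: str) -> str:
    """Generate the code printed in the letter mailed to the advertiser."""
    code = secrets.token_hex(4).upper()  # e.g. '9F3A1C0B'
    _pending_codes[(admin_id, page_id)] = code
    return code

def redeem_code(admin_id: str, page_id: str, code: str) -> bool:
    """Verify the code for exactly this admin and Page, then burn it."""
    if _pending_codes.get((admin_id, page_id)) == code:
        del _pending_codes[(admin_id, page_id)]  # single use only
        return True
    return False

letter_code = issue_code("admin_42", "page_7")
assert redeem_code("admin_42", "page_7", letter_code)      # succeeds once
assert not redeem_code("admin_42", "page_7", letter_code)  # already used
```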

And once the verification process is complete, Leathern explained, ads pertaining to an election will be clearly labeled as such in both Facebook and Instagram feeds, along with the individual, business, or organization that paid for them.

And this summer, he said, Facebook will unveil a public ad history archive that contains any such content with a political label. Each entry will include details like the amount spent on the ad, as well as the number of impressions it received and demographic information about the audience it reached. The archive will keep this information for up to four years after the ad ran.
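Facebook hasn't published the archive's schema, so the record below is only a sketch of the fields described above -- spend, impressions, audience demographics, and a retention window of up to four years after the ad ran. All names are hypothetical:

```python
# Hypothetical sketch of one entry in the public ad archive, with the
# fields described above and a four-year retention window after the ad
# last ran. Names are illustrative, not Facebook's actual schema.
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION = timedelta(days=4 * 365)  # "up to four years"

@dataclass
class ArchivedAd:
    paid_for_by: str             # declared individual, business, or organization
    amount_spent_usd: float
    impressions: int
    audience_demographics: dict  # e.g. {"age_25_34": 0.4, "women": 0.55}
    last_ran: date

    def still_retained(self, today: date) -> bool:
        """Entries remain in the archive up to four years after running."""
        return today - self.last_ran <= RETENTION

ad = ArchivedAd("Example Campaign Group", 1200.0, 85_000,
                {"age_25_34": 0.4}, date(2018, 6, 1))
print(ad.still_retained(date(2020, 6, 1)))  # True: within four years
```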

4. "Reducing the Spread of False News"

Finally, Product Manager Tessa Lyons spoke to Facebook's efforts to curb the spread of misinformation such as false news, which will be powered largely by partnerships with fact-checkers.

To determine which content needs to be fact-checked, Lyons said, the platform will use various "signals," including reports from Facebook users themselves. From there, fact-checkers can rate a story as false -- and if they do, its News Feed ranking will be lowered, which leads to an average of 80% fewer views.
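Lyons didn't explain how the demotion is applied under the hood, but the basic mechanic -- down-weighting a flagged story's score so it loses roughly 80% of its distribution -- can be sketched as a simple multiplicative penalty. The numbers and field names here are assumptions for illustration:

```python
# Hypothetical sketch: demoting a fact-checked-false story in a ranked
# feed by scaling its relevance score. The 0.2 multiplier mirrors the
# "~80% fewer views" figure; it is an illustration, not Facebook's code.
FALSE_RATING_PENALTY = 0.2  # keep ~20% of the story's original reach

def rank_feed(stories):
    """Order candidate stories, demoting any rated false by fact-checkers."""
    def effective_score(story):
        score = story["relevance"]
        if story.get("rated_false"):
            score *= FALSE_RATING_PENALTY
        return score
    return sorted(stories, key=effective_score, reverse=True)

feed = rank_feed([
    {"id": "a", "relevance": 0.9, "rated_false": True},
    {"id": "b", "relevance": 0.5},
])
print([s["id"] for s in feed])  # ['b', 'a']: the flagged story drops
```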

Anyone who has shared the story in the past will be warned about this fact-check, as will anyone who tries to share it in the future -- and if it does appear in someone's News Feed, it will be displayed alongside information from those who fact-checked it. That information will also help train a machine learning model to identify these stories more quickly, without human intervention.

These efforts will apply to text, photos, and videos -- and will also play into efforts to increase ad transparency and combat fake accounts. Any Page that habitually shares false news, Lyons said, will face reduced distribution, and lose its advertising and monetization privileges, "stopping them from reaching, growing, or profiting from their audience."

Currently, Facebook has fact-checking partners in six countries, including the U.S., where the platform has joined forces with Associated Press reporters to identify misinformation and false news relating to the country's upcoming elections, whether local, state, or federal. These reporters will also be tasked with disproving false claims made by such stories.

Lyons noted that these efforts are "a place to start."

"Like any company that’s had a PR crisis, Facebook is trying to take control of and own the misinformation narrative," said HubSpot Social Campaign Strategy Associate Henry Franco. "It looks like the company is taking some pretty serious steps to address it, too, both in terms of identifying and prohibiting bad actors."

But as Lyons remarked -- it's a start. "What remains to be seen," Franco said, "is whether it's enough to earn back the trust of users."



from Marketing http://bit.ly/2GBziex