After Las Vegas Shooting, Fake News Regains Its Megaphone

October 3, 2017

A Facebook spokesman said, “We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused.”

But this was no one-off incident. Over the past few years, extremists, conspiracy theorists and government-backed propagandists have made a habit of swarming major news events, using search-optimized “keyword bombs” and algorithm-friendly headlines. These organizations are skilled at reverse-engineering the ways that tech platforms parse information, and they benefit from a vast real-time amplification network that includes 4chan and Reddit as well as Facebook, Twitter and Google. Even when these campaigns are thwarted, they often last hours or days, long enough to spread misleading information to millions of people.

The latest fake news flare-up came at an inconvenient time for companies like Facebook, Google and Twitter, which are already defending themselves from accusations that they have let malicious actors run rampant on their platforms.

On Monday, Facebook handed congressional investigators 3,000 ads that had been purchased by Russian government affiliates during the 2016 campaign season, and it vowed to hire 1,000 more human moderators to review ads for improper content. (The company would not say how many moderators currently screen its ads.) Twitter faces tough questions about harassment and violent threats on its platform, and is still struggling to live down a reputation as a safe haven for neo-Nazis and other poisonous groups. And Google also faces questions about its role in the misinformation economy.


Part of the problem is that these companies have largely abdicated the responsibility of moderating the content that appears on their platforms, instead relying on rule-based algorithms to determine who sees what. Facebook, for instance, previously had a team of trained news editors who chose which stories appeared in its trending topics section, a huge driver of traffic to news stories. But it disbanded the group and instituted an automated process last year, after reports surfaced that the editors were suppressing conservative news sites. The change seems to have made the problem worse: earlier this year, Facebook redesigned the trending topics section again, after complaints that hoaxes and fake news stories were showing up in users’ feeds.

There is also a labeling issue. A Facebook user looking for news about the Las Vegas shooting on Monday morning, or a Google user searching for information about the wrongfully accused shooter, would have found posts from 4chan and Sputnik alongside articles by established news organizations like CNN and NBC News, with no obvious cues to indicate which ones came from reliable sources.

More thoughtful design could help solve this problem, and Facebook has already begun to label some disputed stories with the help of professional fact checkers. But fixes that require identifying “reputable” news organizations are inherently risky because they open companies up to accusations of favoritism. (After Facebook announced its fact-checking effort, which included working with The Associated Press and Snopes, several right-wing activists complained of left-wing censorship.)

The automation of editorial judgment, combined with tech companies’ reluctance to appear partisan, has created a lopsided battle between those who want to spread misinformation and those tasked with policing it. Posting a malicious rumor on Facebook, or writing a false news story that is indexed by Google, is a nearly instantaneous process; removing such posts often requires human intervention. This imbalance gives an advantage to rule-breakers, and makes it impossible for even an army of well-trained referees to keep up.

But just because the war against misinformation may be unwinnable doesn’t mean it should be avoided. Roughly two-thirds of American adults get news from social media, which makes the methods these platforms use to vet and present information a matter of national importance.

Facebook, Twitter and Google are some of the world’s richest and most ambitious companies, but they still have not shown that they’re willing to bear the costs — or the political risks — of fixing the way misinformation spreads on their platforms. (Some executives appear resolute in avoiding the discussion. In a recent Facebook post, Mark Zuckerberg reasserted the platform’s neutrality, saying that being accused of partisan bias by both sides is “what running a platform for all ideas looks like.”)

The investigations into Russia’s exploitation of social media during the 2016 presidential election will almost certainly continue for months. But dozens of less splashy online misinformation campaigns are happening every day, and they deserve attention, too. Tech companies should act decisively to prevent hoaxes and misinformation from spreading on their platforms, even if it means hiring thousands more moderators or angering some partisan organizations.

Facebook and Google have spent billions of dollars developing virtual reality systems. They can spare a billion or two to protect actual reality.



Facebook says 10 million US users saw Russia-linked ads

October 3, 2017

NEW YORK (Reuters) – Some 10 million people in the United States saw politically divisive ads on Facebook that were purchased in Russia in the months before and after last year’s U.S. presidential election, the company said on Monday.

Facebook, which had not previously given such an estimate, said in a statement that it used modeling to estimate how many people saw at least one of the 3,000 ads. It also said that 44 percent of the ads were seen before the November 2016 election and 56 percent were seen afterward.
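
The “at least one” framing matters: simply summing per-ad impressions would double-count people who saw several of the ads, so a reach estimate has to deduplicate viewers across all 3,000 ads. A minimal sketch of that distinction in Python, using invented impression logs (Facebook’s actual reach model is internal and not public):

    # Hypothetical illustration only: why "saw at least one ad" (unique
    # reach) differs from total impressions. User IDs are made up.
    impressions = {
        "ad_001": ["user_a", "user_b", "user_c"],
        "ad_002": ["user_b", "user_c", "user_d"],
        "ad_003": ["user_a", "user_e"],
    }

    total_impressions = sum(len(viewers) for viewers in impressions.values())
    unique_reach = len(set().union(*impressions.values()))

    print(total_impressions)  # 8 impressions in total
    print(unique_reach)       # but only 5 distinct people reached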

The ads have sparked anger toward Facebook and, within the United States, toward Russia since the world’s largest social network disclosed their existence last month. Moscow has denied involvement with the ads.

Facebook has faced calls from U.S. authorities for increased regulation. Chief Executive Mark Zuckerberg has outlined steps that the company plans to take to deter governments from abusing the social media network.

Earlier on Monday, Facebook said in a separate statement that it planned to hire 1,000 more people to review ads and ensure they meet its terms, as part of an effort to deter Russia and other countries from using the platform to interfere in elections.

The latest company statement said that about 25 percent of the ads were never shown to anyone.

“That’s because advertising auctions are designed so that ads reach people based on relevance, and certain ads may not reach anyone as a result,” Elliot Schrage, Facebook’s vice president of policy and communications, said in the statement.
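
Schrage’s point is that ad auctions rank competing ads by a score that weights the advertiser’s bid by predicted relevance, so a low-relevance ad can lose every auction it enters and receive zero impressions. A simplified sketch of that ranking logic, with invented numbers (Facebook’s real auction scoring is proprietary):

    # Hypothetical sketch of a relevance-weighted ad auction. The exact
    # formula is proprietary; the point is only that low predicted
    # relevance can make an ad lose every auction and never be shown.
    ads = [
        {"name": "ad_high_relevance", "bid": 0.50, "relevance": 0.90},
        {"name": "ad_low_relevance",  "bid": 0.80, "relevance": 0.05},
    ]

    def auction_score(ad):
        # Effective value = advertiser bid weighted by predicted relevance.
        return ad["bid"] * ad["relevance"]

    winner = max(ads, key=auction_score)
    print(winner["name"])  # ad_high_relevance wins despite its lower bid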

For 99 percent of the ads, less than $1,000 was spent, he said. The total ad spend was $100,000, the company has said.

Still, he said it was possible Facebook would find more Russia-linked U.S. ads as it continues to investigate.

Schrage, while criticizing the ad buyers for using fake accounts, also said many of the ads otherwise “did not violate our content policies” and could have remained if bought using real accounts.

“While we may not always agree with the positions of those who would speak on issues here, we believe in their right to do so – just as we believe in the right of Americans to express opinions on issues in other countries,” he wrote.

Facebook is working with others in the tech sector, including Twitter Inc and Alphabet Inc’s Google, on investigating alleged Russian election meddling, Schrage added.

The 1,000 new workers represent the second time this year that Facebook has responded to a crisis by announcing a hiring spree. In May, it said it would hire 3,000 more people to speed up the removal of videos showing murder, suicide and other violent acts that shocked users.

Like other companies that sell advertising space, Facebook publishes policies for what it allows, prohibiting ads that are violent, discriminate based on race or promote the sale of illegal drugs.

With more than 5 million paying advertisers, however, Facebook has difficulty enforcing all of its policies.

The company said on Monday that it would adjust its policies further “to prevent ads that use even more subtle expressions of violence.” It did not elaborate on what kind of material that would cover.

Facebook also said it would begin to require more thorough documentation from people who want to run ads about U.S. federal elections, demanding that they confirm their businesses or organizations.

Reporting by David Ingram in New York; Editing by Lisa Von Ahn and Cynthia Osterman
