World Leaders, Tech Bosses Work on Stemming Online Violence

Livestreaming terrorist attacks. Using social media to spread deadly ideas. Manipulating banned videos to keep sharing them online.

World leaders and tech bosses are meeting Wednesday in Paris to find ways to stop all this. They’re working all day on the “Christchurch Appeal,” named after the New Zealand city where 51 people were killed in a March attack on mosques.

The attacker streamed the killing live on Facebook, which announced tougher livestreaming policies on the eve of the meetings “to limit our services from being used to cause harm or spread hate.”

New Zealand Prime Minister Jacinda Ardern welcomed Facebook’s pledge to restrict some users from Facebook Live and invest in research to stay ahead of users’ attempts to avoid detection.

She said she herself inadvertently saw the Christchurch attacker’s video when it played automatically in her Facebook feed.

“There is a lot more work to do, but I am pleased Facebook has taken additional steps today… and look forward to a long-term collaboration to make social media safer,” she said in a statement.

Facebook said it’s tightening up the rules for its livestreaming service with a “one strike” policy applied to a broader range of offenses. Any activity on Facebook that violates the social network’s most serious policies, such as sharing a terrorist group’s statement without providing context, will result in the user immediately being blocked from Facebook Live for as long as 30 days.

Previously, the company took down posts that breached its community standards but only blocked users after repeated offenses.

The tougher restrictions will be gradually extended to other areas of the platform, starting with preventing users from creating Facebook ads.

Facebook said it’s also investing $7.5 million in new research partnerships to improve image and video analysis technology, with the goal of finding content that has been edited to slip past its automated detection systems, a problem the company encountered following the Christchurch shooting.

“Tackling these threats also requires technical innovation to stay ahead of the type of adversarial media manipulation we saw after Christchurch,” Facebook’s vice president of integrity, Guy Rosen, said in a blog post.

Ardern is playing a central role in the Paris meetings, which she called a significant “starting point” for changes in government and tech industry policy.

Twitter, Google, Microsoft and several other companies are also taking part, along with the leaders of Britain, France, Canada, Ireland, Senegal, Indonesia, Jordan and the European Union.

Officials at Facebook said they support the idea of the Christchurch Appeal, but that details acceptable to all parties still need to be worked out. Free speech advocates and some in the tech industry bristle at new restrictions and argue that violent extremism is a societal problem the tech world can’t solve.

Ardern and the host, French President Emmanuel Macron, insist that any solution must involve joint efforts between governments and tech giants. France has been hit by repeated Islamic extremist attacks carried out by groups that recruited followers and shared violent images on social networks.

Speaking to reporters ahead of the meetings, Ardern said, “There will be of course those who will be pushing to make sure that they maintain the commercial sensitivity. We don’t need to know their trade secrets, but we do need to know what the impacts might be on our societies around algorithm use.”

She stressed the importance of tackling “coded language” that extremists use to avoid detection.

Before the Christchurch attack, she said, governments took a “traditional approach to terrorism that would not necessarily have picked up the form of terrorism that New Zealand experienced on the 15th of March, and that was white supremacy.”
