Facebook Using Artificial Intelligence To Fight Terrorism On Its Social Media Platform

By Shawn Rice


In a blog post on June 15, 2017, Facebook said it is using artificial intelligence to help combat terrorists’ use of its platform, a new approach to stemming terrorist activity on the Internet.

In the wake of the London attacks, British Prime Minister Theresa May has accused Facebook and other companies of not doing enough to crack down on terrorist activity. This week May said she and French President Emmanuel Macron were working on a plan that would make Internet companies legally liable for extremist materials on their services.

The company’s announcement comes as it faces growing pressure from government leaders to identify and prevent the spread of content from terrorist groups on its massive social network. It marks a departure from Facebook’s usual practice of reviewing suspect content only after users report it.

The company further said that when it receives reports of potential “terrorism posts,” it reviews those reports urgently. In addition, it says that in the rare cases when it uncovers evidence of imminent harm, it promptly informs authorities.

“Just as terrorist propaganda has changed over the years so have our enforcement efforts. We are now really focused on using technology to find this content so that we can remove it before people are seeing it,” said Monika Bickert, a former federal prosecutor who runs global policy management, the team that decides what can be posted on Facebook. “We want Facebook to be a very hostile environment for terrorists and we are doing everything we can to keep terror propaganda off Facebook.”

Facebook has approximately two billion monthly users, and its platform is routinely linked to deadly and terrorism-related events. For instance, the shooter behind Wednesday’s attack on a congressional softball game had previously posted “vitriolic anti-Republican and anti-Trump viewpoints” on Facebook, according to the SITE Intelligence Group, which tracks extremists. However, the posts that have come to light stopped short of threatening specific acts of violence.

One aspect of the technology Facebook is discussing for the first time is image matching. If someone tries to upload a terrorist photo or video, the system checks whether it matches previously identified extremist content and, if so, blocks the upload before it ever appears on the platform.
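
Facebook has not published the details of its matching system. As a rough illustration of the general idea, the sketch below uses open-source perceptual hashing (the `imagehash` and `Pillow` Python packages, which are not mentioned in the article) to compare a new upload against hashes of previously removed images. The file names and the distance threshold are placeholders.

```python
# Illustrative sketch only; Facebook's actual implementation is not public.
# A perceptual hash lets near-duplicate images match even after resizing
# or light editing, unlike an exact cryptographic hash.
import imagehash
from PIL import Image

# Hypothetical database of hashes computed from images already removed
# as terrorist propaganda (file names are placeholders).
KNOWN_BAD_HASHES = {
    imagehash.phash(Image.open(path))
    for path in ["removed_image_1.jpg", "removed_image_2.jpg"]
}

# Maximum Hamming distance at which two hashes count as the same image.
MATCH_THRESHOLD = 5

def should_block_upload(upload_path: str) -> bool:
    """Return True if the uploaded image matches known extremist content."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects returns their Hamming distance.
    return any(upload_hash - bad < MATCH_THRESHOLD for bad in KNOWN_BAD_HASHES)

if should_block_upload("new_upload.jpg"):
    print("Upload blocked: matches previously removed content.")
```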

A second area is experimenting with AI to understand text that might be advocating terrorism. Facebook is analysing text it has previously removed for praising or supporting a group such as IS, in order to learn text-based signals that new content may be terrorist propaganda.
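
Again, Facebook has not described its models. One plausible, much-simplified version of learning such text-based signals is a classifier trained on previously removed posts versus ordinary posts, sketched below with scikit-learn. The example posts, labels, and scoring function are placeholders, not real training data.

```python
# Illustrative sketch only: a toy text classifier trained on posts that were
# previously removed for praising or supporting a banned group (label 1)
# versus benign posts (label 0). All examples here are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "join us and support the fighters",    # previously removed (placeholder)
    "great recipe for dinner tonight",     # benign (placeholder)
    "pledge allegiance to the group",      # previously removed (placeholder)
    "match highlights from the weekend",   # benign (placeholder)
]
labels = [1, 0, 1, 0]

# Turn each post into TF-IDF features and fit a logistic regression model.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(posts)
model = LogisticRegression().fit(features, labels)

def propaganda_score(text: str) -> float:
    """Return the model's estimated probability that `text` is propaganda."""
    return model.predict_proba(vectorizer.transform([text]))[0, 1]

# Posts scoring above some review threshold could be flagged for human review.
print(propaganda_score("support the fighters and join us"))
```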


Source: Business 2 Community
