Historically, illicit actors have utilized media sources to gain supporters and new recruits, engage in psychological warfare, and spread propaganda. The perpetrators of the Rwanda genocide, for example, used the radio to vilify a target population, incite violence, and spread hate propaganda.
They, like other propagandists who exploit media platforms, wanted to polarize society by classifying the targets of their hatred as enemy “others.”
While the aims haven’t changed, the platforms have. And so must the measures used to combat their abuse.
At the time of the Rwanda genocide, proposals to jam the radio signals of those spreading hate propaganda and inciting violence were dismissed as being too difficult and too costly to implement.
This inaction was devastating. Hate messages spread. Calls on the radio for the extermination of an entire ethnic group influenced listeners to viciously attack and kill. An estimated 70% of Rwanda’s Tutsi population—between 500,000 and one million people—were murdered during a 100-day period in 1994.
Today, terrorists use social media platforms to spread messages of hate. They use the Web to call supporters to take up arms, to kill or harm their purported enemies (a category very broadly defined), and to terrorize target populations.
Meanwhile, arguments against limiting terrorists’ access to such platforms are similar to those made during the Rwanda genocide. Once again, opponents say blocking terrorist sites is both too technically difficult and too costly.
The terrorist groups have taken advantage of the free pass they have effectively been given.
If there’s any doubt about how important social media is to their strategy, consider that the Islamic State in Iraq and Syria (ISIS) made death threats against Twitter employees who attempted to take down terrorists’ accounts.
Most social media companies acknowledge that terrorists and those who support and encourage terrorism have no place on their platforms. Yet terrorism-related content is proliferating, giving terrorists unfettered and unprecedented access to millions of users around the globe.
Let’s not forget that terrorism is a form of theater. Each act is designed to provoke an emotion—fear in the population—and a desired response: an over-reaction by the public, security professionals, and government agencies, taking the form of discriminatory practices against a misidentified target population and/or expansive police and surveillance powers.
The over-reactions in turn are used in terrorists’ propaganda campaigns to legitimize their cause and actions.
But quashing the use of social media by violent extremist groups should not be considered an “over-reaction.” It has become essential to the fight against terrorism.
Social media platforms must proactively take down accounts that support, encourage, and promote terrorism. Currently, most of these platforms react only when threats or dangers are brought to their attention, and the reaction is usually selective. With few exceptions, the major Web service companies place the onus of identifying content that violates their terms of service on the public.
That’s wrong.
Some private groups, such as Anonymous, have acted on their own. A case in point is their takedown of Twitter accounts belonging to ISIS and its supporters following the 2015 terrorist attacks in France.
It is important to remember that social media platforms are private, not public, which means they can regulate conduct as they see fit. By using them, people effectively agree to the terms of service that set appropriate rules of behavior online.
Such rules of behavior include barring certain types of conduct, such as nudity, abuse, and hate speech. More recently, the rules have explicitly included bans on the support, encouragement and promotion of terrorism.
So a new law or regulation isn’t really needed. Nor do we need an expansion of police powers to monitor and search social media content.
The platforms should merely live up to their terms of service, by enforcing them proactively. While many suggest the complexities of the Web make this unfeasible, social media platforms have already developed programs to monitor illicit activities ranging from child pornography and bullying to the theft of copyrighted works.
Surely, such programs can be modified to block and take down terrorism-related content.
The tools for doing so are already there. Social media platforms can “shadow-ban” a user—that is, they can make the offending user’s posts invisible to all but the person who is posting. Why can’t this practice be applied to terrorists and terrorist supporters?
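The mechanics of a shadow-ban are simple: flagged accounts keep posting as usual, but their posts are filtered out of everyone else’s feeds. A minimal sketch of the idea follows; the names here (`Post`, `feed_for`, `SHADOW_BANNED`) are illustrative stand-ins, not any platform’s actual API.

```python
from dataclasses import dataclass

# Accounts flagged by moderators (hypothetical example account name).
SHADOW_BANNED = {"propaganda_account"}

@dataclass
class Post:
    author: str
    text: str

def feed_for(viewer: str, posts: list) -> list:
    """Return the posts visible to `viewer`. A shadow-banned author
    still sees their own posts, so the ban is hard to detect, but
    no other viewer receives them."""
    return [p for p in posts
            if p.author not in SHADOW_BANNED or p.author == viewer]

posts = [Post("alice", "hello"), Post("propaganda_account", "recruit now")]
# An ordinary viewer never sees the banned account's post...
assert [p.author for p in feed_for("bob", posts)] == ["alice"]
# ...while the banned account sees everything, including its own post.
assert len(feed_for("propaganda_account", posts)) == 2
```

The point of filtering at read time rather than deleting the post is that the banned account gets no signal that anything has changed, which slows the cycle of creating replacement accounts.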
Another type of program, which can hash and detect questionable images on both visible and deep Web sites, can readily be adapted to identify and block images of terrorists and terrorist propaganda.
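Systems of this kind (PhotoDNA is the best-known example) reduce each known bad image to a compact fingerprint and check every upload against the fingerprint list, tolerating small edits such as recompression or brightness changes. The sketch below is a toy version only: the 8×8 pixel grids stand in for real images, and the simple average-hash scheme stands in for a production robust-hashing algorithm.

```python
def average_hash(pixels):
    """Fingerprint a grayscale image (a list of rows of 0-255 ints):
    one bit per pixel, set when the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

BLOCKLIST = set()  # fingerprints of known banned images

def register_banned(pixels):
    BLOCKLIST.add(average_hash(pixels))

def is_banned(pixels, max_distance=5):
    """Flag an upload whose fingerprint is within `max_distance`
    bits of any banned fingerprint (a near-duplicate match)."""
    h = average_hash(pixels)
    return any(hamming(h, bad) <= max_distance for bad in BLOCKLIST)

# A banned 8x8 "image": bright top half, dark bottom half.
banned = [[200] * 8 if r < 4 else [20] * 8 for r in range(8)]
register_banned(banned)

# A slightly brightened re-upload of the same image still matches...
variant = [[min(255, p + 10) for p in row] for row in banned]
assert is_banned(variant)
# ...while an unrelated image does not.
assert not is_banned([[i * 4 for i in range(8)] for _ in range(8)])
```

Because the comparison is on fingerprints rather than raw files, the blocklist can be shared across platforms without redistributing the offending images themselves, which is how industry hash-sharing consortia operate.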
Still another can fingerprint copyrighted images and videos to prevent them from being re-uploaded. An example of this type of program is YouTube’s Content ID, which lets rights holders submit copyrighted videos to a reference database; the program then searches YouTube for matching content.
Why can’t such technology be used to take down the videos of leading jihadist propagandists on YouTube, such as those of Anwar al-Awlaki, which many law enforcement specialists say have inspired recent terrorist attacks in London?
As most people know, ISIS has posted videos on YouTube depicting violent acts such as beheadings, as well as “Mujatweets,” brief videos that portray ISIS as a generous organization and a positive presence in its territories.
Why can’t a program like Content ID be leveraged to remove these videos from YouTube—videos which clearly violate the site’s terms of service?
The limiting factor in taking a proactive approach to blocking and removing terrorism-related content is not technical difficulty but cost. Social media platforms apparently don’t want to invest the time, staff, and money this practice requires. In view of that, they should be offered incentives to offset those costs.
The reality is that there are solutions; they just take time and money.
There’s no reason why the media mega-giants who dominate the Internet today can’t take a proactive approach to dealing with terrorists’ use of social media. In fact, Facebook recently announced that it would use artificial intelligence to remove terrorist content from its platform. This illustrates that social media platforms can do more.
They just need to be persuaded that it is in their interests, as well as ours, to do so.
Marie-Helen Maras is a former U.S. Navy law enforcement and security specialist and author of “Cybercriminology.” She is currently an associate professor at John Jay College of Criminal Justice. She welcomes comments from readers.