By Shruti Banerjee
Recent terrorist attacks in San Bernardino, Brussels and South Carolina have led government officials to increase pressure on major tech companies to take greater measures to help security agencies monitor terrorist activities. This has sparked a vigorous debate over corporate responsibility, individual privacy rights and the government’s ability to monitor terrorist activities. This is the second post in a two-part series about technology, terrorism and human rights. It will analyze the various positions in this debate, and consider how the government and tech companies can work together to effectively combat the root causes of radicalization and terrorism while still upholding fundamental human rights.
THE ENCRYPTION DEBATE
With the rise of internet communications, terrorist groups have been using email, messaging applications, online forums and other internet tools to recruit members and plan attacks. Many government officials have firmly argued that tech companies must take greater measures to provide security agencies with data that will help them monitor this extremist activity.
But even if many companies wanted to comply, private messaging systems such as WhatsApp and iMessage automatically encrypt messages, meaning that they are encoded and cannot be read without a key. Thus, companies cannot turn over messages to law enforcement because they have no mechanism for retrieving them. This predicament has sparked a fierce debate over how to monitor and combat terrorist activities on the internet. Many officials argue that tech companies should create a backdoor—a way for a “secure” system to be accessed through coding or other vulnerabilities—in various applications for law enforcement officials to use when investigating criminal activity.
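The core property at stake can be illustrated with a toy symmetric cipher. This is a deliberately simplified stand-in, not the far stronger end-to-end encryption WhatsApp and iMessage actually use, but it shows the key principle: without the key, the ciphertext is unreadable.

```python
# Toy XOR cipher: an illustrative sketch of symmetric encryption only.
# Real messaging apps use vastly stronger protocols; the principle shown
# here is simply that ciphertext is unreadable without the key.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypts (or decrypts) by XOR-ing each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet at the station at noon"
key = b"secret-key"

ciphertext = xor_cipher(message, key)
assert ciphertext != message                     # scrambled without the key
assert xor_cipher(ciphertext, key) == message    # only the key recovers it
```

In true end-to-end systems, the keys exist only on users’ devices, which is precisely why the service provider itself cannot decrypt messages on law enforcement’s behalf.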
However, critics, including many leading figures in the tech industry, caution that creating a backdoor to encrypted applications may open a whole new can of worms. Tech companies point out that if any backdoor exists, hackers will eventually find it and reduce data security for all individuals. The BBC notes that if major companies were required by law to introduce back doors, terrorists would simply utilize other platforms, such as free add-on applications that automatically encrypt messages. This would make gathering information even more difficult for security agencies. A backdoor would leave services for innocent individuals far less secure, while dangerous people would be operating on systems that are even harder to access. Furthermore, tech companies such as Microsoft, Google, Apple and Yahoo vowed to protect the privacy of their users from government surveillance by making encryption a default option after the NSA surveillance scandal in the US and UK. Opening up a backdoor would break that promise and endanger individuals’ right to privacy on the internet.
Aside from privacy and security concerns, the efficient use of these back doors would also be a challenge. While a system that scans every online message for extremist or terrorist keywords and hate speech is technically feasible, with approximately 1.3 billion internet users around the world, the number of cases flagged as potential threats would be overwhelmingly high. This type of wide-scale reporting to authorities would be an immense undertaking for tech companies like Facebook, according to the BBC.
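The false-positive problem can be sketched in a few lines. The keywords and messages below are invented for illustration; real systems would be far more sophisticated, but the underlying difficulty of distinguishing threats from ordinary speech at scale remains.

```python
# Naive keyword-scan sketch showing why message-level filtering floods
# reviewers with false positives. Keywords and messages are invented.
KEYWORDS = {"attack", "bomb"}

messages = [
    "the movie was the bomb",          # slang, harmless
    "heart attack risk factors",       # medical, harmless
    "plans for the attack tomorrow",   # would warrant review
]

# Flag any message containing a watched keyword.
flagged = [m for m in messages
           if any(k in m.lower().split() for k in KEYWORDS)]

# All three messages are flagged, though only one is a plausible threat.
```

Multiplied across billions of messages a day, even a tiny false-positive rate produces a review queue no company or agency could realistically clear.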
CURBING TERRORISM IN OTHER WAYS
Despite the impasse between the government and tech companies on the encryption debate, there are still a myriad of ways tech companies can and do cooperate with the government to help tackle terrorism. For example, Alan Woodward, a cybercrime consultant, says that encrypted messages can be useful in combating terrorist attacks because they still reveal metadata, such as information about who talked to whom and for how long. He explained that metadata was used to arrest the attackers who carried out the attacks in Paris in November 2015. Security agencies can use link analysis to figure out communication patterns and identify potential threats or sources of information.
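The link-analysis idea Woodward describes can be sketched simply: even without message content, metadata about who contacted whom lets analysts rank people by their connections to known suspects. The records and names below are entirely hypothetical, and real agencies use far richer data and techniques.

```python
# Sketch of simple link analysis over communication metadata.
# All records and names are hypothetical, for illustration only.
from collections import Counter

# Each record: (sender, recipient, duration_seconds) -- metadata, no content.
metadata = [
    ("alice", "bob", 120),
    ("bob", "carol", 45),
    ("carol", "bob", 300),
    ("dave", "bob", 15),
]

known_suspects = {"bob"}

# Count how often each other person communicates with a known suspect.
links_to_suspects = Counter()
for sender, recipient, _duration in metadata:
    if recipient in known_suspects and sender not in known_suspects:
        links_to_suspects[sender] += 1
    if sender in known_suspects and recipient not in known_suspects:
        links_to_suspects[recipient] += 1

# Rank people by number of contacts with suspects.
ranked = links_to_suspects.most_common()
# "carol" ranks first with two contacts
```

The point is that communication patterns alone, with no message content, can surface leads, which is one reason encrypted traffic is not a dead end for investigators.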
Internet Protocol addresses (IP addresses), unique identifying numbers assigned to any devices connected to the internet, are also important in the fight against extremism and terrorism. Tracing the IP addresses of recruitment messages and their followers can help intelligence agencies determine the identities of supporters and potential recruits. Tech companies such as Google have complied with the government’s requests for IP address information and if tech companies continue to help track encrypted messages and IP addresses, they could contribute immensely to the fight for security.
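Tracing of this kind starts with checking whether logged addresses fall inside a network range of interest, something Python’s standard `ipaddress` module can illustrate. The addresses below come from reserved documentation ranges, not real traffic.

```python
# Illustrative IP-range check using Python's stdlib ipaddress module.
# Addresses are from RFC 5737 documentation ranges, not real traffic.
import ipaddress

network_of_interest = ipaddress.ip_network("198.51.100.0/24")

logged_addresses = ["198.51.100.23", "203.0.113.7", "198.51.100.200"]

# Keep only addresses that fall inside the network of interest.
matches = [ip for ip in logged_addresses
           if ipaddress.ip_address(ip) in network_of_interest]
# matches -> ["198.51.100.23", "198.51.100.200"]
```

In practice, attribution is much harder, since addresses are shared, reassigned and masked by proxies, but range matching of this sort is a basic building block of the tracing described above.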
When it comes to website content, tech companies could work to block, delete or monitor extremist and hateful content. Companies such as Google do not allow hateful content that incites violence or extremely “graphic or gratuitous” violent content on their platforms. This is usually taken to include violent videos of beheadings used by the Islamic State of Iraq and Syria (ISIS) as scare tactics and recruitment messages. However, since there is no algorithm that can prevent the uploading of violent or extremist content, companies are largely dependent on users to flag inappropriate content, which is typically removed within a few hours. Similarly, other tech companies do not have a mechanism to stop the creation of new extremist websites. Europe is developing a police team specializing in monitoring ISIS terrorist activities and blocking jihadi sites online; developing ways to quickly delete these sites or prevent their creation could be similarly helpful. In instances when leaving websites or forums up may be useful, tech companies and the government could work to monitor or infiltrate extremist groups to gather intelligence, as has already been done by Ghost Security Group, a hacker group committed to the fight against extremism.
HUMAN RIGHTS IMPLICATIONS
As human rights advocates have pointed out, if tech companies are pressured to lessen encryption, create backdoors for the government to investigate terrorist activities and hand over user data, it could be problematic for the basic privacy and free speech rights of many individuals. Many notable factors make it hard for people to believe that turning over more data to the government will actually make society safer. Firstly, there has been significant mistrust of the government since the U.S. National Security Agency surveillance scandal was exposed by Edward Snowden. After this leak, governments around the world considered and passed legislation allowing for widespread surveillance of their populace. To address this issue, the United Nations General Assembly adopted resolution 68/167 in December 2013, which reaffirmed internet and technology users’ right to privacy in the digital age.
Secondly, even if the government is given users’ data, there is a good chance it will go unused. Daryl Johnson, a former analyst at the U.S. Department of Homeland Security, pointed out that right-wing extremist groups were not being monitored effectively (or at all) in the U.S. despite a sharp increase in domestic terrorism carried out by right-wing groups. Right-wing terrorists, such as Dylann Roof and Timothy McVeigh, are known for leaving hateful online manifestos and plans of action. This information was public and the government had full access to it. Instances like these indicate that even if the government is granted access to the personal information of individuals, there is no guarantee this data will be analyzed effectively and accurately.
Moreover, several national laws, such as the U.S. Patriot Act, already offer the government significant access to the online activities of individuals and have been criticized for their overreach and lack of privacy protection. After the Charlie Hebdo attacks, France passed a sweeping surveillance bill, similar to the U.S. Patriot Act; the U.N. Human Rights Council voiced serious concern over the bill’s lack of oversight. Prime Minister Manuel Valls responded to the passage of this bill by saying, “France now has a secure framework against terrorism.” The most recent attacks in Paris, which took place after this law went into effect, suggest that sweeping surveillance powers do not function as a “secure framework against terrorism.” Rather, tech companies and the government need to work together to create a safer system that helps monitor hate speech and terrorist recruitment methods while protecting individual privacy rights.
WHAT WE CAN DO
It is clear that reactionary measures will not prevent future terrorist attacks. U.S. forces killed Osama bin Laden, but the country now has to contend with ISIS. Hundreds of jihadist sites and accounts have been shut down, only for more to open in their place. The U.S. and France passed bills granting their governments sweeping surveillance powers, which did not prevent the most recent attacks in Paris and San Bernardino. While we focus on foreign terrorist threats, right-wing extremist groups are allowed to organize with almost no oversight or consequences. Effectively combating terrorism will require a two-pronged approach: (1) the government must attack the root of the problem by understanding the socioeconomic conditions which create terrorist breeding grounds, promote recruitment and allow certain threats to go overlooked; and (2) the government and tech companies must find a way to work together to enhance security and stop hateful speech while simultaneously protecting privacy and free speech rights.
David Mair, a cyber-terrorism researcher at Swansea University, told the BBC that poverty, social exclusion and a lack of positive role models for young Muslim men all drive radicalization. Tackling these core issues will help the West overcome credibility issues with potential extremist recruits and engage individuals in more meaningful ways. He explained that extremist groups are reaching out to alienated young men in the West and offering them an opportunity to join a brotherhood in Syria where they can fit in. Mair argued that this propaganda can be countered by demonstrating why life under ISIS is not utopian and how the religious arguments made by these extremist groups are false. The government must also act to counter the drastic increase in hate crimes against Muslims after the Paris attacks. These bias crimes further exacerbate racial and religious tensions, and promote further radicalization instead of combating the root of the problem.
In line with these actions, spreading truthful facts and thwarting hate speech is also necessary in combating terrorism. After a recent attack on a Planned Parenthood clinic, the Governor of Colorado noted that it was time to tone down the rhetoric that “is inflaming people to the point where they can’t stand it, and they go out and they lose connection with reality in some way and commit these acts of unthinkable violence.” We must do more to monitor and stop right-wing extremism and hate speech that incites violence.
Responding with force after lives have been lost is a reactionary measure that will not eradicate the root of the problem. Our methods to combat terrorism have been failing, and we need to start attacking terrorism comprehensively, from implementing new ways to track terrorist activity online to preventing radicalization and addressing the socioeconomic conditions that foster terrorist breeding grounds. Tech companies and the government can also work together to implement creative mechanisms that monitor important data and thwart hateful or extremist speech. If tech companies keep moving in a socially responsible direction and the government begins to effectively and accurately analyze the data it has, then the internet can become a powerful tool in preventing future terrorist attacks in a rights-respecting way. This type of public-private partnership, coupled with policies promoting education, health care, economic stability and human rights, will be the only effective way to prevent terrorism.
Shruti Banerjee is a 2L at Fordham Law School.
The views expressed in this post remain those of the individual author and are not reflective of the official position of the Leitner Center for International Law and Justice, Fordham Law School, Fordham University or any other organization.
Photo Credit: Yuri Samoilov/Creative Commons