Once more, terrorism is front and center. With images in mind of a mangled pickup truck and victims lying on a bike path in Lower Manhattan, we’re again wondering about our ability to respond to terrorist threats in cities across America: How do we protect soft targets, like the sidewalk just outside a high school? How do we prevent individuals from being radicalized by terrorist groups like ISIS, particularly when their hateful propaganda is readily accessible across social-media platforms? How do we even know when this sort of radicalization is taking place?
This last question remains pressing after Tuesday’s attack. Sayfullo Saipov, the alleged attacker, had been on the radar of federal agents since at least 2015. Though he was never the main focus of an investigation, reports suggest that law enforcement had searched his online activity, interested in his social-media connections to individuals who were subjects of terror investigations. We now find ourselves asking: What did they miss?
Though hindsight is 20/20 at times like this, it nonetheless pushes us to think critically and act strategically on domestic counterterrorism. After 9/11, we learned to share intelligence more effectively among federal agencies. After the Boston Marathon bombing, we better understood the importance of local law enforcement and community-based efforts in preventing and responding to terrorist incidents. After this Manhattan truck attack, we should consider how to leverage technology more effectively to thwart attacks — to stop the spread of terrorist propaganda and to better identify individuals being radicalized, before they can commit acts of terror.
In part, this is about enhancing efforts underway, in government and in Silicon Valley, to pinpoint and monitor pathways to radicalization that unfold online — posts and videos on social-media sites, discussions in open and closed chat rooms, and more. It’s a particularly difficult problem given the sheer amount of data and the uncertainty surrounding what you’re looking at. Is a photo of ISIS meant to inspire followers or is it imagery for a news story? Is an online conversation about al-Qaeda intended to recruit supporters or is it, perhaps, part of academic research and public debate?
This is where emerging technology, paired with human expertise, has the potential for major impact. For example, Facebook recently began using artificial intelligence to stop the spread of ISIS and al-Qaeda content on its platforms. Algorithms and self-learning systems flag and block propaganda, analyze text to learn which phrases and sentence structures are used to incite violence, and develop language-based signals that detect when such activity is spreading. These systems can also identify clusters of terrorist supporters connected via social media, helping disrupt dangerous online networks.
Such artificial intelligence will be increasingly helpful to counterterrorism efforts as the technology is further refined, leveraging automation to effectively sift through the mountains of online data available in order to recognize trends and analyze networks. The key is to improve mechanisms for computer-human interfaces in parallel, so that this information can be screened by experts who understand the context and nuance of whatever the system flags. This will help ensure that social media and online sites aren’t so easily abused by bad actors but remain places of open dialogue for the rest of us.
Since much terrorist recruitment and radicalization occurs offline, we must also consider how technology can connect what’s happening online with what’s going on in the real world. Again, this is where a robust human interface is key and where partnerships among the private sector, governments, civil society, and academia become critical. It’s not about more monitoring or increased data collection but about using available information smartly: letting self-learning computer systems cull data, find patterns, and forecast outcomes, then drawing on human experts across sectors, with access to multiple strands of information, to understand the wider context and potential real-world implications.
With this in mind, the recently formed Global Internet Forum to Counter Terrorism may prove impactful. Launched in June, this collaboration between Microsoft, Twitter, YouTube, and Facebook aims to support technological solutions, research, and information-sharing that combat the spread of extremism online. While it remains to be seen whether this initiative will live up to its promise, even the public commitment to jointly tackle the problem is a step in the right direction.
Ultimately, all of these ideas mean continued education, at multiple levels, in the technological skills and analytical capabilities that future counterterrorism efforts will require. For the moment, we largely focus on preparing the current workforce — in law enforcement and in Silicon Valley — for this strategy. But we must also better prepare college and secondary-school students, our next generation of computer scientists, counterterrorism officials, and scholars, to understand and develop the interdisciplinary skills critical to our future national security.
There is no silver bullet for the ongoing terrorist threat. Tuesday’s attack underscored that point. But it also reminded us that it’s time, once again, to step up our counterterrorism game. Evolving technological tools, and the education that underpins them, are the place to start.
Marisa Porges is head of school at the Baldwin School, an all-girls independent school in Bryn Mawr, and a former counterterrorism policy adviser in the Bush and Obama administrations.