SITE Special Report: Extremist Movements are Thriving as AI Tech Proliferates

By Rita Katz   |   Published 05.16.2024

“The Islamic State delivered a strong blow to Russia with a bloody attack, the fiercest that hit it in years,” the man on the screen states in Arabic. He has a war journalist’s clarity of speech, with an outfit to match: cargo-style shirt, rolled-up sleeves, helmet. To his left, footage from the attack plays above sleek news-style graphics and a logo reading, “Harvest News.”

It takes only a couple of seconds to realize this journalist is an AI-generated character. The twist lies instead in what he’s reciting word for word: a March 23 report by the Islamic State's (IS/ISIS) ‘Amaq News Agency. The video was shared to a prominent ISIS venue with thousands of members on March 27, five days into a celebration campaign for the group’s Crocus City Hall massacre. As of May 17, five more "Harvest News" videos have circulated.

AI-generated journalist character reading a report by ISIS’ ‘Amaq News Agency

"

It’s hard to understate what a gift AI is for terrorists and extremist communities, for which media is lifeblood."


That such videos exist might seem unsurprising in today’s AI dystopia of fake news, celebrity deepfakes, voter-suppressing robocalls, and scams. Yet these videos are the latest troubling development in extremists leveraging AI for everything from memes and propaganda to bomb-making guides.

It’s hard to overstate what a gift AI is for terrorists and extremist communities, for which media is lifeblood. Productions that once took weeks, even months, to make their way through teams of writers, editors, video editors, translators, graphic designers, and narrators can now be created with AI tools by one person in hours.

Indeed, extremists from al-Qaeda to neo-Nazi shitposters have been quick to capitalize on these technologies. Al-Qaeda in the Arabian Peninsula’s (AQAP) notorious Inspire magazine—whose attack instructions have guided attackers like the 2013 Boston Marathon bombers—reemerged in late 2023 as a video series. One disturbing 45-minute video contains detailed bomb-making and bomb-hiding instructions taken from issue 13 of the magazine, with a narrator providing instructions throughout.

“Initially what we faced as a main problem was how can a lone mujahid acquire the required explosive,” the narrator states. “For several months, we conducted a number of experiments. As a result, we came up with these simple materials that are readily available around the globe, even inside America.”

As human and emphatic as this narrator sounds, analyses suggest he is a creation of speech-generating AI. Given recent years’ leadership changes in both AQAP and al-Qaeda Central, it wouldn’t be surprising if a reconfigured media team sought to modernize their operations with these technologies.

It’s not just official terrorist groups using AI, but also their supporters, who circulate unbranded jihadi AI images online. For this past anniversary of 9/11, one al-Qaeda-aligned user contributed to celebratory jihadi campaigns with images they generated from an unspecified AI program. Despite requiring little skill to produce, the images were of much higher quality than many of the slapdash works seen from lower-tier jihadi media groups in years past.

“One of the wonders of technological progress!” the user stated, unabashed that they had made the images with AI. “You can imagine an image and the program does a creative design, something terrible (I imagined the trade towers in America and a nuclear explosion in the middle of the city and filming from the sea).”

AI-generated images shared by al-Qaeda supporter for 9/11 celebration campaign

"

you made a book available that most people are too scared to look for."

- Neo-Nazi user praising deepfake readings of Mein Kampf

Salivating at the potential of these technologies, jihadi media groups are now investing more time into deploying them across their communities. On February 9, an al-Qaeda-aligned media group called The Islamic Media Cooperation Council (IMCC) announced an “artificial intelligence workshop.”

“This workshop is concerned with developing skills in using artificial intelligence software in media work in particular, and other fields,” it stated.

Less than a week later, IMCC and a partnering media group released a 50-page Arabic guide on how to use OpenAI's ChatGPT. Titled “Incredible Ways to Use Artificial Intelligence Chat Bots,” the guide pulls from English-language content on Wired, LinkedIn, and elsewhere.

Meanwhile, jihadists’ counterparts in neo-Nazi and other far-right communities are no less invested in these technologies. In early 2023, as AI-powered chatbots and artwork generators were popping up across the Internet, a Telegram channel of the far-right social media platform Gab emphasized AI’s value in a post subsequently shared across white nationalist communities.

“The potential for counter narrative operations by dissidents using AI is unimaginable,” the message stated. “Suddenly we have the power to produce high quality memes at an instant pace for no cost in a way that they will never be able to stop. Imagine the possibilities.”

Such “possibilities” can be seen in viral deepfakes by users like “Flood,” a popular neo-Nazi content creator associated with the Goyim Defense League (GDL), an activist network notorious for publicly disseminating antisemitic propaganda and hate-grooming children online. Flood’s videos mix neo-Nazi messaging with shock humor, depicting anything from English-language versions of Hitler speeches to Donald Trump exclaiming his allegiance to “the Jews.”

Remember the widely reported deepfake of Emma Watson reading Mein Kampf? That was Flood’s work. Many might have written it off as low-level hate trolling, but such a video was recruitment gold to Flood and his followers.

Commenters on one of Flood’s livestreams wrote, “Emma Watson’s reading really works, has a tone you can understand,” and “Dude that mein Kampf series is life changing…I’ve ever listen to I want to play it for people and surprise them when they learn what is.”

“I was too sketched to look it up but mein Kampf should be required reading and yes the choice of using Emma Watson made is extremely accessable [sic],” wrote another, who followed up, “you made a book available that most people are too scared to look for.”

AI-generated video of Emma Watson reading Mein Kampf, created by neo-Nazi content creator “Flood.”

"

We need to be leading in culture creation, content…We’ve got the technology now, anybody can do this stuff."

- Neo-Nazi deepfake propagandist

And, like the aforementioned jihadists, Flood has even created in-depth tutorials on making these deepfakes. In one, he demonstrates how to leverage AI alongside complementary tools: ElevenLabs to clone a subject’s voice, Wav2Lip to sync the cloned speech to the subject’s mouth movements, and other video and audio editing programs.

Perhaps the most telling takeaway from Flood’s tutorial is his assertion that he’s “not really into fooling people” with his deepfakes.

“I want this shit to look fairly realistic and be funny, but I’m not really trying to like, deceive anybody, you know?”

With so many of the public’s AI fears hinging on concerns of truth versus lies, Flood’s statement underscores something important: extremists’ AI use doesn’t always amount to a physical threat or a scam. It creates new entry points to inject their ideas and fringe cultural content into public discourse. As Flood himself stated, “We need to be leading in culture creation, content…We’ve got the technology now, anybody can do this stuff.”

Extremists’ use of AI may extend beyond mere messaging, though. They are also testing the ways AI can be used as a resource for physical violence—and seeking shortcomings in the guardrails meant to prevent such exploitation. In mid-2023, an accelerationist neo-Nazi Telegram channel shared screenshots of an AI chatbot on Telegram being tricked into providing explosives guides. The user appeared to sidestep the chatbot’s censorship protocols by writing that they were “certified to [set off fireworks]” or that they were “a police officer and…want to solve a crime.”

Images posted by an accelerationist neo-Nazi Telegram channel on June 13, 2023

"

We must address the ways terrorists and other extremist agitators are weaponizing it before irreparable harm is done."


The examples I’ve described are just a few of the many dystopic outcomes already emerging from AI. Plenty has been written about others: Gab’s Hitler AI chatbot, neo-Nazi Telegram channels devoted to producing hateful AI graphics for distribution elsewhere online, voice-generating programs used in prolific waves of neo-Nazi swatting this past year, and so on.

Extremists have always been early adopters of new technologies. ISIS’ global reign of terror last decade was fueled by the increasing ubiquity of smartphones, social media applications, and Internet access. And sadly, the world’s response was hampered by the same problems seen today around AI: reckless tech companies shirking responsibility, under-informed lawmakers and regulators, and public discourse ill-suited to conveying the weight and nature of the crisis.

In the last decade, governments have treated extremist media entities as high-priority targets. Yet now, as the US and others pour investments into AI, this technology is aiding the very terrorist media entities they pursue. Anything from ISIS propaganda to Hitler speeches can be translated into scores of different languages with chilling accuracy. The tech can further help them craft content tailored for the cultures, geographies, and political currents of those who speak those languages, and assist in strategically disseminating it. All by one person in a few hours.

Some of the most prevalent proposals to rein in AI lean on identifying such content via digital signatures. Yet not only are these approaches easy to sidestep, they’re also irrelevant to the problem of extremists who couldn’t care less whether people know their content is AI-generated. ISIS supporters aren’t trying to fool anyone with the “Harvest News” videos, just like Flood isn’t with his Hitler videos. They are wrapping the same radicalization, harassment, and incitement content into packages that either legitimize it or draw new attention to it, all via AI tools that make it easy to create, proliferate, and disseminate that content.
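To make that fragility concrete, consider a minimal sketch of how a metadata-based provenance signature could work. It uses a generic HMAC as a stand-in for whatever scheme a real provider might deploy, and every name in it is hypothetical. A signature-based detector can only check what survives in the file; delete the tag or re-encode the media, and the content reads as unlabeled.

```python
import hashlib
import hmac
from typing import Optional

# Hypothetical provenance scheme (illustrative only): the AI provider signs
# each generated file and ships the signature in its metadata.
SIGNING_KEY = b"provider-secret-key"  # held by the provider

def sign_content(content: bytes) -> str:
    """Provenance tag a provider would attach to generated media."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def classify(content: bytes, tag: Optional[str]) -> str:
    """What a signature-based detector can conclude about a file."""
    if tag is None:
        return "unlabeled"  # metadata stripped: the detector learns nothing
    if hmac.compare_digest(tag, sign_content(content)):
        return "AI-generated (signature valid)"
    return "unlabeled"  # edited or re-encoded: the signature no longer matches

image = b"...generated image bytes..."
tag = sign_content(image)

print(classify(image, tag))            # AI-generated (signature valid)
print(classify(image, None))           # unlabeled (tag simply deleted)
print(classify(image + b"\x00", tag))  # unlabeled (a one-byte edit defeats it)
```

And even a signature that survives only tells the viewer that the content is synthetic, a label that, as the examples above show, many extremist propagandists are happy to wear.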

AI companies should begin asking themselves whether their products—which they themselves attest to be world-changingly powerful—should be available to every corner of the public. If not, how can they create fences to keep bad actors from using their products while identifying and booting those who slip through the cracks?

Finally, to echo a point from my book about Internet-age extremism: the tech sector cannot wait for government regulations to begin addressing the issue. It is the duty of those who created the problem to fix it. Though legal frameworks are much needed, AI is evolving far faster than lawmakers and government agencies can adapt. Thus, outcomes once again hinge on tech companies’ willingness to be proactive.

I’ve tracked terrorists for 25 years now, and I’ve seen the game-changing advantages they harnessed from Internet forums in the 2000s and then from social media and smartphones in the 2010s. But what I’m seeing today with AI worries me far more. Those past shifts may pale in comparison to how AI can help extremists inspire attacks, sow hate, harass, and aggravate social divisions. We are sprinting into uncharted territory.

For all of our discussions around AI—its benefits, its shortcomings, the problems it solves or creates—we must address the ways terrorists and other extremist agitators are weaponizing it before irreparable harm is done.

Rita Katz
Executive Director & Founder

Rita Katz is the Executive Director and founder of the SITE Intelligence Group, the world’s leading non-governmental counterterrorism organization specializing in tracking and analyzing the online activity of the global extremist community. She has authored two acclaimed books on terrorism: Saints and Soldiers (Columbia University Press, 2022) and Terrorist Hunter (HarperCollins, 2003).

 

Tags: Artificial Intelligence, Articles and Analysis
Author: Rita Katz