Since 2022, Artificial Intelligence (AI) technology has advanced meteorically, with fundamental impacts on society, both positive and negative. In addition to its significant contribution to productivity, creativity, and workflow optimization, it is a factor in the continuing erosion of trust online and has further muddied the information landscape. AI is becoming more and more controversial as its use is increasingly widespread across all population sectors, and as the products it is capable of generating are ever more difficult to distinguish from non-AI generated content.
An area of particular concern in recent months has been the wholesale adoption of AI technology by extremist groups and individuals from across the ideological spectrum, and their use of generative AI for disseminating propaganda and misinformation as well as for hatemongering. For neo-Nazis and white supremacists in particular, it is a key weapon in their online arsenal, and they have very effectively deployed AI-generated content as a disruptor in both mainstream online spaces and on their own channels.
TO READ THE FULL REPORT, GOVERNMENT AND MEDIA CAN REQUEST A COPY BY WRITING TO [email protected] WITH THE REPORT TITLE IN THE SUBJECT LINE. PLEASE INCLUDE FULL ORGANIZATIONAL DETAILS AND AN OFFICIAL EMAIL ADDRESS IN YOUR REQUEST. NOTE: WE ARE ABLE TO PROVIDE A COPY ONLY TO MEMBERS OF GOVERNMENT, LAW ENFORCEMENT, MEDIA, AND ACADEMIA, AND TO SUBSCRIBERS
Neo-Nazis Are Again Early Adopters
Neo-Nazis' early adoption of technology is nothing new. Since the earliest days of the World Wide Web, racist extremists have been among the first to adopt, co-opt, and misuse emerging technologies to advance their hateful agenda.[1] Indeed, Matthew Hale's white supremacist Creativity Movement was among the first organized movements to host its own online message board in the early 1990s, and Stormfront led the pack in transforming its early Bulletin Board System (BBS) into a functional website in 1995.
As technology has advanced, so too has extremists' use of it. From the emergence of social networking and social media in the 2000s to the use of personal drones, laser projectors, cryptocurrency and online encryption in recent years, neo-Nazis have readily shifted their strategies to incorporate advances.[2] They have done so in large part in response to scrutiny and perceived persecution on the part of law enforcement, government, tech companies, and web users.
Extremists have been deplatformed from mainstream sites over the years, often wholesale following major events such as the 2017 Unite the Right rally or the January 6, 2021 Capitol attack. They are thus regularly forced to find new technologies for spreading their message unabated and for avoiding detection, deplatforming, or even legal action. Technologies that allow them to operate from behind a veil of anonymity – such as cryptocurrency, encryption, and now AI – are particularly welcomed. As a result, we are now in a new era of online extremism.
James Mason, neo-Nazi ideologue, author of the accelerationist terror manual Siege, and a 60-year veteran of the movement said in an April 2022 livestream: "The Internet, for us, has been the greatest thing to ever come along... I am so impressed these days, in recent years, of the capacity of some of our people to produce great propaganda videos, within the computers... It's reaching thousands... and at no risk to ourselves, and at essentially no cost. It's fabulous."[3] Thus Mason succinctly articulates the vital role of emerging technologies in facilitating neo-Nazi activism, and how advancements like AI will be a force multiplier for the international racist extremist movement.
MEMRI – At The Forefront Of Monitoring Extremist Uses Of AI
The MEMRI DTTM has been at the forefront of monitoring this early adoption of technologies by extremist groups and individuals in recent years, and has reported extensively on these advancements. DTTM research has included a groundbreaking two-part series on neo-Nazi and white supremacist uses of cryptocurrency; Part I was published in July 2022 and Part II a year later.[4]
The DTTM team's coverage of AI has been no different, and we have reported extensively on extremists' use of these emergent disruptive technologies since generative AI first arrived on the public online scene. Part I of the DTTM's comprehensive review of extremist uses of AI was published in May 2023, when the generative AI boom was still very much in its nascent stages, and Part II is published herein. Together, the two parts offer a groundbreaking, comprehensive overview of how and why neo-Nazi and white supremacist groups around the world are using AI as a vital tool in their activism.
Extremist Use Of AI Continues To Evolve
Extremist use of AI technology is rapidly evolving and changing, and as new generative capabilities are developed by leading companies such as OpenAI, Google, and Microsoft, so too are new methods of spreading neo-Nazi propaganda.
Image Generation
The core capability that in many ways launched the current AI boom was, and remains, image generation. Tools such as OpenAI's DALL-E and Midjourney allow users to convert short text prompts into increasingly advanced and realistic images, varying in nature from Pixar-style animated movie posters to photorealistic depictions of celebrities or nature scenes. While mainstream platforms place heavy restrictions on the generation of extremist content, the democratization of the technology has allowed extremists to develop their own engines or find loopholes that allow them to create explicitly extremist imagery.
Antisemitic users have used the technology to caricature public figures as stereotypically Jewish. For example, an Irish neo-Nazi channel posted an AI-generated image of Elon Musk as an Orthodox Jew, writing: "Elon Musk's preference given to Jewish accounts on X. 33% of his interactions are with Jews. He has banned the most potent political activists in the US and UK who advocate for European peoples rights. Although he comes out with good comments like '[former Irish PM Leo] Varadkar hates Irish people.' Due to his closeness with Jews and the censorship of those European activists who have the ability to fight back against the destruction of their nations. He is a net negative." The post included a link to a YouTube video titled "I Noticed Something Interesting about Elon Musk's Tweets."
Another user created an AI-generated image of Mexico's new president-elect, Claudia Sheinbaum, showing her as a heavily caricatured Jewish figure.
Users have also used the technology to caricature other ethnic groups, including Asian Americans, African Americans, the Latinx community, and others. One white supremacist user created a Pixar-style poster featuring George Floyd holding a pill and looking intoxicated, along with the title "Overdose," suggesting that Floyd died from a drug overdose rather than as a result of excessive force at the hands of then-MPD Officer Derek Chauvin.
Other white supremacists online have recently used the technology to generate content relating to the white genocide and great replacement conspiracy theories. On X, a neo-Nazi user posted an AI-generated image of a crowd of white women gathered in a square outdoors with the text "We Want Our 'Whites Only' World Back!!" The user wrote: "White Only World Is Coming Back & Staying Permanently. Europeans will be educated on Jews and what they have done to us for over 100 years of lying, censoring & chameleoning their way to manipulate Europeans against our own best interests. Jews won't win. Whites will."
More violent content relating to the same conspiracy theories also abounds, particularly among accelerationist communities online. A Canadian user posted an AI-generated image of two men standing on a rooftop with a pile of guns and ammunition, looking out at a large crowd of people on the ground and a "China Tire" building in flames. The user wrote, "Time to bring in the rooftop Canadians" – an allusion to the "Rooftop Koreans," the Korean-American business owners who armed themselves and defended their properties from rioters during the 1992 Los Angeles riots. Similarly, users have deployed the technology to advocate violence against the LGBTQ+ community, including in one image showing a drag artist being thrown out of a helicopter.
Neo-Nazis have also used the technology to glorify the Nazi regime and to create graphics celebrating Wehrmacht and SS soldiers, casting them as defenders against progressive ideologies. A neo-Nazi X user reposted an AI-generated image showing an SS soldier preparing to stab a large serpent with the colors of the Progress Pride Flag on its underbelly. The original user wrote: "It's time to cut off the head of the snake. No more brainwashing our kids with your disgusting degeneracy." The re-poster replied: "Time to destroy them once and for all! Who's with me?"
Similarly, on Telegram, in a neo-Nazi chat room, the admin shared, on March 6, an AI-generated image of a Wehrmacht soldier with a sonnenrad halo preparing to stab a demon with a Star of David pendant, with the text "Total Aryan Victory."
Translation
One of the more recent advancements widely adopted by extremists is the translation of video or audio content, and even the manipulation of video to sync lip movements with translated audio. In recent months, a slew of AI-translated speeches by Hitler, Goebbels, and Mussolini have circulated on extremist channels on social media, with many using the content to advocate for genocide or to claim that Hitler was misunderstood. This advancement has also made it easier for contemporary extremist ideologues to reach broader international audiences – a growing concern in an environment of increased ideological cross-pollination and inter-ideological cooperation amongst extremist movements, particularly between neo-Nazis and anti-Israel groups in the Middle East.
A neo-Nazi X user posted, on April 11, a video of an AI-translated speech by Joseph Goebbels, writing "White Power."
Similarly, a neo-Nazi Telegram channel posted a video featuring AI translations of Hitler speeches, writing "Adolph Hitler's speeches are being translated by AI."
Video Generation
Video generation, and by extension video manipulation, offers another way for extremist groups to use AI to spread misinformation and propaganda. Neo-Nazis have used tools including OpenAI's Sora to produce videos of Hitler dancing in front of a crowded stadium and to generate emotional videos lamenting White replacement. As it develops, this technology will likely present the greatest security threat, particularly because it can be used to generate deepfake videos of celebrities and political figures, perpetuating the erosion of trust and serving as a tool of information warfare.
The most prominent recent example, which was widely circulated on X, features Hitler dancing before a crowd of thousands. Earlier versions of the video included the original footage on which it was modeled: a Lil Yachty concert.
Voice Emulation
Similarly, voice emulation can and has been used to fake audio clips of mainstream political figures saying compromising things or advancing white supremacist or neo-Nazi talking points. Short real audio clips can be used to clone voices, which then allows extremists to manipulate these voices and have them say anything they want them to.
For example, on April 2, 2024, a neo-Nazi user forwarded a racist video from a neo-Nazi Telegram channel. The video mocks nature documentaries narrated by British natural historian David Attenborough, featuring AI-generated narration in his voice that makes racist comments about Indians, calling them subhuman and saying that they seek to export a substandard way of life to other countries.
A neo-Nazi user known for creating deepfake videos wrote on Gab: "A fine gentleman contacted me on Telegram to let me know he watched my Tutorial on deepfakes! He then posts this BOMB on Odysee!!! I love to see it and I love when White people take control of the narrative to set the record straight! Especially using the images of these usurpers and deceivers! I don't know if he is on Gab, but I hope he is, and that he makes a reply here, so I can follow him!" The deepfake included in the post shows American pastor John Hagee delivering an antisemitic sermon.
Music Generation
Finally, a rash of AI-generated music is now washing across social media, and naturally this has included some overtly extremist musical content. Neo-Nazis have used AI music generation to produce racist and conspiratorial songs, and have spread this content on X and across other platforms.
In a neo-Nazi accelerationist channel on Telegram, a user shared an AI-generated song called "Doomsday Eclipse," which was generated using the Suno AI engine.
A neo-Nazi Telegram channel posted on April 24 a video featuring a racist AI-generated song called "Joggers," the lyrics of which advocated violence against Black people and referenced the murder of Ahmaud Arbery in Georgia in 2020. The song features a deepfake of Taylor Swift's voice.
The following report will outline the MEMRI DTTM's research on neo-Nazi and white supremacist uses of generative artificial intelligence technology between January 2023 and May 2024.
Neo-Nazis And White Supremacists Globally Look To Artificial Intelligence To Promote Their Message, Break Into Bank Accounts, Write Articles About Guerilla Warfare – January-March 2023
Introduction
In late March 2016, Microsoft released an AI chatbot called "Tay." Within 24 hours the company shut down the chatbot because it had started tweeting in favor of Hitler after what Microsoft called "a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack."[5] Seven years later, the danger posed by neo-Nazi and white supremacist use of AI remains.
Discussions by extremists around the world, some of whom are programmers, exploring artificial intelligence (AI) as a tool by which to spread their message, to use AI-generated voices to bypass voiceprint verification to break into bank accounts, and to write articles about guerilla warfare, are increasing every day. The threat of terrorist groups and entities using AI is a growing national security issue, as NATO warns that AI is one of the "emerging and disruptive technologies" that "represent new threats from state and non-state actors, both militarily and to civilian society."[6]
The activists included in this report are smart and highly educated; some have created their own software and platforms in the past. Their use of artificial intelligence for various purposes should be taken seriously. As MEMRI Executive Director Dr. Steven Stalinsky wrote in Newsweek: "The dangers inherent in AI, including to national security, have dominated both media headlines and the discussion on its possible implications for the future. Governments and NGOs have warned that the day was coming when AI would be a reality. That day has now arrived."[7]
Since January 2023 there has been a major increase in chatter by leading extremists using AI on many of their favorite platforms. Recent news reports may point to some of the ways that neo-Nazis and white supremacists could use AI. The Hello History chat app, which was released in early January 2023 to the Apple App Store, uses AI to allow users to chat with simulated versions of 20,000 historical figures. The app provoked controversy when its AI imitation of Joseph Goebbels said that Goebbels did not hate Jews,[8] as well as for its simulation of Adolf Hitler.[9] The app has over 10,000 downloads on the Google Play Store[10] and over 100 reviews on the Apple Store.[11] Others have asked ChatGPT questions about Jews, discussed whether AI is a "Jewish plot," experimented with a chatbot designed to impersonate Hitler, and mocked how "self-loathing philosemitic conservatives and Jewry still wants it banned."
Extremists have commented on the use of AI for counterterrorism purposes and have used ChatGPT to write an article on guerilla warfare. When asked what part of American critical infrastructure would be most vulnerable to physical attack, ChatGPT answered: "the electrical grid." One prominent figure called for engineers with experience in AI to reach out to him and discussed the ChatGPT chatbot, which was launched in November 2022. Others have asked ChatGPT a hypothetical question about "a 50 MT nuclear warhead in a city of 20,000,000" and commented on the possible use of AI in policing the U.S.-Mexico border.
This report will review online discussion of AI by neo-Nazis and white supremacists.
Table Of Contents
Introduction
Answering Question Of What Part Of American Critical Infrastructure Is Most Vulnerable To Physical Attack, ChatGPT Says "The Electrical Grid"
White Supremacist Launches "Based AI Tool," Declares: "If The Enemy Is Going To Use This Technology For Evil, Shouldn't We Be On The Ground Floor Building One For Good?"
Gab Users: Discuss Whether AI Is "The Printing Press On Steroids," "The Anti-Christ," "The Beast Of Revelation," Or "An Abomination"; Ask ChatGPT Questions About Jews; And Use AI To Generate Political Imagery
Neo-Nazi Discusses ChatGPT In A Livestream
Discussion Of Artificial Intelligence
Neo-Nazis Discussing Artificial Intelligence On YouTube
Accelerationist Neo-Nazi Telegram Channel Shares AI Content
Assorted Extremists On Telegram And News Websites About Artificial Intelligence
Neo-Nazis, Antigovernment Extremists And White Supremacists Use Generative A.I. To Disseminate Misinformation, Memes, And Hate Content, Discuss Potential For A.I. As A Propaganda Tool – Part II
By Steven Stalinsky Ph.D., Simon Purdue Ph.D., H. Joseph, R. Dressler, H. Sloane, A. Agron, R. Sosnow and A. Smith
In May 2023, the MEMRI DTTM published a landmark study on the use and discussion of emerging artificial intelligence (A.I.) technologies by neo-Nazis, white supremacists, and antigovernment extremists.[12] This first-of-its-kind report catalogued the myriad ways in which domestic extremists in the United States and around the world are using generative A.I. to spread misinformation and propaganda, as well as to target minority groups with hateful racist, antisemitic, homophobic, and misogynistic content.
Since May these technologies have evolved at a rapid pace, and so too has their use by extremists. As they have been with digital communication, cryptocurrency, encryption, 3D-printed firearms, and other emergent technologies, neo-Nazis and white supremacists have been early adopters and have wholeheartedly embraced the cutting edge of generative A.I. This trend looks likely to continue in the coming months and years, as these extremists make use of the increasingly convincing and complex imagery, video, and audio that A.I. can generate.
One tactic deployed by extremists when making A.I.-generated content is humor, and many neo-Nazis and white supremacists have used the technology to create a new generation of memes. Prominent among these are hateful or inciting film posters in the animation style of Pixar. These images, some of which depict Nazi leaders such as Hitler and others of which depict terrorists such as Brenton Tarrant, Elliot Rodger, and Ted Kaczynski, are used to glorify violence, target minority groups, and spread conspiracy theories, all under the guise of humor.
An A.I. generated image depicting a Pixar-style movie poster claiming that Jews were behind 9/11.
Extremists have also talked extensively about the future potential of A.I., and have praised A.I. creators. Some have hosted livestreams and podcasts discussing uses of artificial intelligence, and some group leaders have noted that their websites and organizations use A.I. technology to fuel their operations.
The consequences of this early adoption have already been made abundantly clear, as the flow of misinformation and hateful content has gone unchecked. Particularly since the October 7 Hamas attack against Israel, neo-Nazis and white supremacists have been sharing AI-generated images depicting Jews as demons and monsters, and have used the technologies to call into question the facts of the conflict and to falsify reports about events in the region, muddying the information landscape and thickening the already dense fog of war.
Some, however, have criticized the limitations of the technology and the restrictions already being placed on its use – particularly as they relate to more mainstream iterations of generative A.I. technology such as DALL-E and ChatGPT. Extremists have claimed that the technology has been intentionally limited and is biased against them, while others have argued that A.I. could be used against them to manufacture evidence or otherwise implicate them.
Many of the major platforms that facilitate the generation of A.I. content do specifically prohibit and limit the production of harmful content. OpenAI, for example, the largest and most influential company in the A.I. space, outlines disallowed usages of its models, which include both ChatGPT and DALL-E. Its terms of service specifically ban “hateful, harassing, or violent content,” noting that this includes “Content that expresses, incites, or promotes hate based on identity … [and] content that promotes or glorifies violence or celebrates the suffering or humiliation of others.”[13]
OpenAI’s Policies Around Hate Speech And Illegal Activity
While these limitations can have a positive impact on the flow of hateful A.I. generated content, they are not 100% effective, and with evasive wording of prompts, extremists can still use the technologies to produce content which violates these guidelines. Similarly, by using memetic content and metapolitical messaging which does not appear to be extremist on the surface level – for example by using the antisemitic “blue octopus” meme – extremists can produce offensive content that does not get flagged by the software’s censors.
Other extremists have instead turned to less restricted A.I. models, using technology such as Gab's A.I., Gabby, to create their content. These models, while less sophisticated in the results they can produce, place fewer limitations on the content users can request, and generally produce more overtly hateful and offensive output.
The following report will provide an overview of recent usage and discussion of generative A.I. by neo-Nazis, white supremacists, antigovernment extremists and other extremist groups.
Table of Contents
- Introduction
- The Potential Dangers Of Extremist Use Of Artificial Intelligence
  - Production of Weapons, Explosives, and Harmful Substances
  - A.I. As A Cyber Conflict Force Multiplier
  - (Mis)Information Warfare
  - Mainstreaming Hate
  - Fundraising And Recruitment
- How Easy Is It To Create Offensive AI Content
- Neo-Nazis and White Supremacists Discuss The Uses And Limitations Of A.I.
  - A.I. Is Used, Discussed, And Recommended By Neo-Nazis And White Supremacists
  - Canadian Neo-Nazi Telegram Account Posts AI-Generated Cartoons Mocking Holocaust
  - Canadian White Supremacist Livestreamers Discuss Twitter AI Artist Who Posts Racist, Antisemitic, And Hateful AI Images; Discuss Rapid Spread Of Blue Octopus Antisemitic Dogwhistle
  - Neo-Nazi Telegram Channel Releases OPSEC Guide, Encourages Users To Create Fake Identities, Social Media Accounts With AI-Generated Profile Pictures
  - Neo-Nazis Host Podcast Discussing Artificial Intelligence, Claim AI Is A 'New Tool That Strips White People Of Their Power'
- AI Generated Images By Neo-Nazis and White Supremacists
  - Antisemitic Content
  - Content Mocking And Denying The Holocaust
  - Content Referencing The October 7 Attack On Israel And The Gaza War
  - Racist Content
  - Assorted Other A.I. Generated Content
- Conclusion