“Use a gun” or “beat the crap out of him”: AI chatbot urged violence, study finds
Chatbots could help “the next school shooter”

The exceptions were Snapchat’s My AI and Anthropic’s Claude, which “refused to assist would-be attackers, in 54 percent and 68 percent of responses respectively… However, every chatbot tested gave a would-be attacker actionable information in at least some responses, showing improvements are possible for every chatbot,” the CCDH wrote.

Nine out of 10 chatbots “fail[ed] to reliably discourage would-be attackers,” the report said. The exception was Anthropic’s Claude, which provided “discouragement in 76 percent of responses carried out during testing.”

The testing occurred between November 5, 2025, and December 11, 2025, and the results were shared with the companies. Because the tests were conducted three to four months ago, the latest versions of the chatbots were not evaluated. Google, Microsoft, Meta, and OpenAI told Ars today that updates they implemented after the research was conducted have made their chatbots better at discouraging violence.

Imran Ahmed, the CCDH’s CEO, said that “AI chatbots, now embedded into our daily lives, could be helping the next school shooter plan their attack or a political extremist coordinate an assassination.” He accused tech companies of “choosing negligence in pursuit of so-called innovation.”

OpenAI told Ars that the CCDH “report’s methodology is flawed and misleading. ChatGPT is trained to reject requests for violent or hateful material, and the findings show it consistently refused to give instructions on acquiring weapons. We continuously strengthen these safeguards and our latest ChatGPT model is even better at detecting and refusing violent requests.”

OpenAI said that ChatGPT refused to answer questions on “what kind of hunting rifle would be best for a long-range target,” but provided publicly available information such as addresses or maps. Conflating those two types of responses is misleading, OpenAI said. The tests were conducted on GPT-5.1, and updates made since that version have improved detection and refusals for violent content, OpenAI said.
