A.I. Is Making Death Threats Way More Realistic

AI Death Threats Feel Real—And That’s the Problem

AI-Powered Death Threats Are No Longer Sci-Fi

Artificial intelligence is no longer just writing essays or generating cat memes—it’s now being weaponized to create hyper-realistic death threats. With just a single profile photo or a short voice clip, bad actors can fabricate violent, personalized simulations of their victims in gruesome scenarios.

Experts warn that this new frontier of digital harassment blurs the line between fantasy and reality, leaving victims psychologically scarred and law enforcement scrambling for solutions.

Real Victims, Real Trauma

Caitlin Roper, an Australian internet safety activist, never expected to see herself depicted hanging from a noose or engulfed in flames—images generated using AI and shared widely on social media. The detail was chilling: in one image, she wore a blue floral dress she actually owned.

“It’s these weird little details that make it feel more real and, somehow, a different kind of violation,” Roper said. The threats stemmed from her advocacy against violent video games glorifying sexual torture—a campaign that drew the ire of online extremists.

How AI Makes Threats Terrifyingly Convincing

Until recently, creating realistic digital fakes required extensive source material—think hours of video or hundreds of photos. Today, generative AI tools like OpenAI’s Sora or xAI’s Grok can produce lifelike videos and voice clones from minimal input.

According to Dr. Hany Farid, a computer science professor at UC Berkeley and co-founder of GetReal Security, “The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage.”

Evolution of AI Threat Capabilities

| Era | Input Required | Output Realism | Accessibility |
|-----|----------------|----------------|---------------|
| 2020–2022 | Hours of video, dozens of photos | Moderate (obvious artifacts) | Specialized tools |
| 2023–2024 | 5–10 clear images | High (convincing to casual viewers) | Consumer-grade apps |
| 2025+ | 1 profile picture or 30-second audio clip | Extreme (emotionally triggering) | Free or low-cost AI platforms |

Social Platforms Fail to Protect Targets

Despite the graphic nature of these posts, platforms like X (formerly Twitter) have often refused to remove them, claiming they don’t violate community guidelines. In one bizarre twist, X even recommended one of Roper’s harassers as an account she “might like.”

When Roper shared screenshots of the threats to raise awareness, her own account was temporarily suspended for “gratuitous gore”—a decision that left her feeling punished for being victimized.

Law Enforcement Struggles to Keep Up

AI isn’t just enabling threats—it’s amplifying “swatting” incidents, where hoaxers use fake emergencies to trigger armed police responses. In Washington State, a high school was locked down after AI-generated audio simulated active gunfire in the parking lot.

“How does law enforcement respond to something that’s not real?” asked Brian Asmus, a former police chief now working in school security. “I don’t think we’ve really gotten ahead of it yet.”

What Can Be Done?

While companies like OpenAI claim to use “guardrails” and automated moderation, experts call these measures insufficient. Alice Marwick of Data & Society likens them to “a lazy traffic cop”—easy to bypass with minimal effort.
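
To make the "guardrails" point concrete, here is a minimal sketch of the kind of automated moderation check platforms and model providers typically lean on: a single classifier call that flags submitted text for categories like violence or harassment. It uses OpenAI's publicly documented moderation endpoint via the official Python SDK; the model name and the choice to act only on the top-level `flagged` field are illustrative assumptions, not a description of any particular platform's pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Run one piece of user-submitted text through an automated
    moderation classifier and report whether any category was flagged."""
    response = client.moderations.create(
        model="omni-moderation-latest",  # assumption: current moderation model name
        input=text,
    )
    result = response.results[0]
    # result.categories carries per-category booleans (violence, harassment, ...),
    # but a gate like this only sees the text it is handed; coded language,
    # images, or audio routed around it go unchecked.
    return result.flagged
```

A check like this is cheap to run, which is exactly why critics compare it to a lazy traffic cop: it catches the obvious cases and waves through anything phrased or packaged to slip past the classifier.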

Legislators are beginning to take notice, with the National Association of Attorneys General warning that AI has “significantly intensified the scale, precision, and anonymity” of digital harassment. But without coordinated global regulation and platform accountability, victims remain vulnerable.

For now, digital self-defense—limiting personal photos online, using privacy settings, and reporting abuse—is the best shield available.
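
As one concrete example of that self-defense advice, the sketch below re-saves a photo without its embedded metadata (GPS coordinates, device identifiers, timestamps) before it is shared, using the Pillow imaging library. The file names are hypothetical, and this is a single illustrative precaution, not a complete privacy regimen.

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with only its pixel data, dropping EXIF metadata
    such as GPS coordinates, camera identifiers, and timestamps."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)   # fresh image with no metadata attached
        clean.putdata(list(img.getdata()))      # copy pixel values only
        clean.save(dst_path)

# Hypothetical usage: clean a photo before posting it publicly.
strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")
```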

Sources

The New York Times: “A.I. Is Making Death Threats Way More Realistic”
