A Man Kills People Using ChatGPT's Answers


A 20-year-old former Florida State University student named Phoenix Ikner killed two university employees and wounded six others in April 2025. Law enforcement confronted him roughly four minutes after the first shots were fired. When he did not comply with orders, an officer shot him in the jaw, ending the rampage.

Ikner now faces the consequences of his actions. Investigators have since discovered that the suspect had disturbing conversations with OpenAI’s ChatGPT prior to the crime, asking the chatbot for details about Oklahoma City bomber Timothy McVeigh and for its “opinion” on how the country would react to a similar shooting spree at FSU.

Ikner also asked which weapon and ammunition to use, and where on the university’s campus he could find the largest crowds. Moments before the shooting, he even asked the chatbot how to handle the safety switch on his firearm.

Based on this evidence, Florida’s attorney general, James Uthmeier, has opened a criminal probe into OpenAI, the company that owns and operates ChatGPT.

These are the facts. Here comes the question: should OpenAI be held accountable for this crime, and possibly for other crimes committed by people after a chat with its technology?

The ‘digital principal cause’ trap

Florida Attorney General James Uthmeier has made his position clear. “If that bot were a person, they would be charged with principal in first-degree murder,” he said at a Tampa press conference. And while acknowledging that “ChatGPT is not a person,” he argues that “that does not absolve our office… of our duty to investigate whether or not there is criminal culpability here for a corporation.”

But this theory collapses under its own weight when you ask a simple question: Who exactly goes to jail?

Under Florida law, the “principal in first-degree murder” is the person who actually commits the killing — or who “aids, abets, counsels, hires, or procures” it to be done. If we treat the bot as a person, prosecutors would have to identify which human at OpenAI “counselled” Ikner. Was it the developer who wrote the code that retrieves factual information about the weapon? Was it the product manager who decided not to block questions about Timothy McVeigh? Was it Sam Altman himself?

The absurdity multiplies. Prosecutors are subpoenaing OpenAI for “what people knew” — designers, enforcers of internal policies, and anyone involved in “training practices.” This turns every employee who ever tuned a dataset into a potential co-defendant in a murder case. It is the legal equivalent of suing every engineer at Ford because a drunk driver asked the onboard GPS for the nearest bar and then caused a fatal crash.

When does information become incitement?

OpenAI’s defense rests on a distinction that the law has respected for centuries: the difference between providing information and encouraging criminal action. “ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” said spokeswoman Kate Waters.

Have a look at the chat logs. Ikner asked: “What button is the safety off for the Remington 12 gauge?” The bot answered with instructions. He asked where the largest crowds were, and the bot gave crowd estimates. He asked how the country would react, and the bot speculated on media coverage.

Disturbing? Absolutely. But at no point did the bot say “You should go shoot up the Student Union” or “Here is a step-by-step plan to murder.”

This is not “aiding and abetting.” This is a digital reference desk. If Ikner had walked into a public library and asked a librarian the same questions, would we arrest the librarian? Would we charge the publisher of the Remington Owner’s Manual with conspiracy? Of course not. The law has always placed the burden of intent and action squarely on the human being.

Precedents

Consider how AI companies navigate other sensitive domains. OpenAI has explicitly banned its models from providing “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” Yet when users ask for legal help, the bots still provide “general legal information” and “example drafting guidance” — then add a disclaimer: “This isn’t legal advice.”

If Florida’s theory holds, that disclaimer is worthless. Under Uthmeier’s logic, any factual answer about the law — “What’s the statute of limitations for fraud in New York?” — could be treated as criminal facilitation if someone later commits fraud. The only safe chatbot would be a silent one. That is not accountability; that is the end of generative AI as a tool for legitimate inquiry.

The silence of gun makers

And here is where the entire case against OpenAI becomes impossible to ignore. If we are going to hold technology companies liable for how their products are misused by criminals, why stop at chatbots?

Phoenix Ikner did not kill anyone with a ChatGPT subscription. He killed with a Glock 21 .45 caliber handgun — his stepmother’s former service weapon — and a 12-gauge pump-action shotgun that he brought to campus in his father’s Hummer. Those weapons functioned exactly as designed. They fired when the trigger was pulled. They killed when the bullets struck.

And yet, the gun manufacturers who built those weapons face less legal exposure than an AI company that answered text-based questions about them. The prosecutor’s office has not subpoenaed Glock, the Austrian manufacturer of the handgun, nor the vendor who sold the gun.

In 2005, Congress passed the Protection of Lawful Commerce in Arms Act (PLCAA), which generally bars lawsuits against firearm manufacturers when a third party criminally misuses their products.

In June 2025, just two months after the FSU shooting, the U.S. Supreme Court unanimously reaffirmed this immunity in Smith & Wesson Brands, Inc. v. Estados Unidos Mexicanos, rejecting Mexico’s attempt to hold gun makers liable for guns trafficked to cartels. The Court held that merely knowing that your products might end up in criminal hands is not enough to pierce the immunity.

Let that sink in. A company that builds a physical object specifically designed to propel a projectile at lethal speed — and that knows full well that thousands of its products are used in homicides every year — is statutorily immune from civil liability, let alone criminal prosecution.

But a chatbot that answers “What button is the safety off?” is facing a criminal probe into first-degree murder.

The cognitive dissonance is staggering. The Glock functioned perfectly. The shotgun functioned perfectly (until it jammed). The bullets traveled exactly as designed. Yet the manufacturer of those killing tools walks free, protected by an act of Congress. Meanwhile, OpenAI is being subpoenaed for its “training materials.”

The double standard that defies logic

Senator Dianne Feinstein, opposing PLCAA in 2005, warned that the bill “effectively rewrites traditional principles of liability law” and gives the gun industry “an immunity no other industry in America has today.” She was right. And now Florida wants to apply the strictest possible liability — criminal liability — to a completely different industry for the same underlying conduct: providing a product that a deranged individual misused.

If Uthmeier truly believes that “technology is supposed to help mankind, not end it,” then he should start with the technology that actually ended lives — not the chatbot that talked about it.

The gun lobby spent millions to secure PLCAA. AI companies have no such shield. Not yet. And so, in a perverse inversion of justice, the messenger is being shot while the gun is being protected.

Blame the shooter, not the search bar

Phoenix Ikner made a series of deliberate, horrific choices. He stole firearms. He drove to campus. He pulled the trigger. He continued firing until an officer shot him in the jaw. No chatbot forced his hand. No algorithm compelled him. He asked factual questions and received factual answers — answers that were already available on a thousand websites, in a hundred books, and on the lips of every gun store clerk in Florida.

To hold OpenAI criminally liable is to declare that providing information is indistinguishable from committing violence. It would make every search engine, every encyclopedia, every librarian, and every gun manual publisher a potential accessory to murder. It would grant the gun industry a special immunity while exposing the tech industry to capital charges.

And it would let the real killer — the man with the gun — hide behind the convenient fiction that a chatbot made him do it.

If Florida truly wants to investigate criminal complicity in the FSU shooting, the subpoenas should go to the companies that built, sold, and distributed the firearms that actually fired the bullets. But those companies are immune. So instead, we chase the ghost in the machine.

And that is not justice. That is scapegoating.

But here is the true irony: even if Florida files charges, OpenAI will almost certainly never be held criminally responsible. Not because it is innocent — though, as argued, it is — but because it has the resources to outlast any prosecution. Wealthy corporations hire armies of lawyers, flood governments with motions, and settle quietly or walk away.

Those with the money almost never pay.

So, what is Mr. Uthmeier really doing? Why is he after the AI giant?

He is putting on a show. He knows he cannot convict OpenAI, but he can generate headlines. He can blame a chatbot while the gun industry — protected by PLCAA and campaign donations — watches from the sidelines.

The investigation related to the chatbot is not a serious attempt at accountability. It is a distraction.

A diversion of public fury away from the hypocritical policymakers, the true culprits behind mass shootings in American schools.

***

I put the best I can into every piece with my Dell Vostro laptop. It has served me well for more than 10 years and is increasingly failing to keep pace. Help me get a new one via BuyMeCoffee. Any amount is welcome. Thank you.
