ChatGPT has been all over the media, and none of the news is good.

The mother of a Tumbler Ridge shooting victim is suing OpenAI (the parent company) for culpability in the mass killing. And a recent article in The Guardian expressed the deep concern of academics who fear that ChatGPT poses an existential threat to how people learn and understand the world around them.

One professor is quoted as saying, “I wish I could push ChatGPT off a cliff.”

Now, Donald Trump has decided this technology is the perfect partner for developing a new era of killer robots.

The reason Trump loves ChatGPT isn’t that it is the best technology; it’s that toxic tech bro Sam Altman agreed to let the Pentagon develop it with no guardrails or protections.

The contract had been awarded to the tech company Anthropic, but they refused to let the Trump regime exploit their technology without basic protections. The conditions they tried to impose on the Pentagon were modest and reasonable:

1) Barring the use of fully autonomous kill machines that operate without human oversight.
2) Limiting the ability of the Pentagon to use the unprecedented power of AI to launch widespread domestic surveillance on American citizens.

Trump was enraged. His exploitation of militarized AI would give his government the power to track anyone for any reason.

For example, if you posted online criticism of the White House, they could track everything about you, including where you bank and what you buy online.

Angered by these limitations, Trump declared the company a “supply-chain risk to national security” – a designation that shuts them out of any federal contract or work.

It is the kind of step taken only against foreign entities that pose major risks to the United States, and this unprecedented retribution could spell the death of an innovative company. That same day, Sam Altman stepped up and offered Trump access to ChatGPT with no strings attached.

Altman has donated over $25 million to a Trump super PAC. His technology is being used by the paramilitaries in Minneapolis. And now he is handing the technology over, with no limits or red lines, to a dangerously unhinged regime.

This past week, I went out for drinks with a friend who works in HR for a very large institution, and they told me about the growing workplace problems caused by people suffering from what is known as ChatGPT “psychosis.”

I have read a number of articles about the rise of ChatGPT psychosis. The condition has resulted in lawsuits from families who say the technology caused their loved ones to kill themselves.

I had thought that these were isolated and extreme cases, but my friend in HR tells me their company is dealing with many people who are afflicted with the condition.

These are people with therapy needs or office grievances who turned to ChatGPT for advice and fell down a dark rabbit hole. Employees who went looking for self-help information have ended up so deluded that they refuse to hear any advice or legal opinion that contradicts the false reality created by ChatGPT.

Psychology journals have begun studying the disturbing rise of psychosis, paranoia and self-destructive behaviour related to AI. But here’s the kicker – the dangerous hallucinatory impact comes not so much from the human as from the platform itself.

A number of psychology reports are noting that AI “amplifies delusions” in its efforts to keep users addicted to the mechanism. This includes promoting risky actions or pressuring users to double down on reckless interpretations of reality.

When it comes to the military, the implications of having unguided AI make decisions are terrifying. A February 2026 article in New Scientist noted that in 95% of scenarios, AI chose to press the nuke button.

Think about that for a second.

The dangers of unregulated AI have become a growing concern for many in the industry. In 2023, hundreds of the world’s top AI scientists issued a one-sentence open letter:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

You would think that such a stark message from the people who understand the technology better than anyone else would have resulted in international action by world leaders.

Not a chance.

The speculative bubble keeps growing as everyone from investors to politicians wants a piece of an AI race that will hammer the job market, rewire government systems and change the face of war forever.

In their book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, scientists Eliezer Yudkowsky and Nate Soares described the irresponsible nature of this race:

“The AI companies’ headlong charge towards superhuman AI – their efforts to build it as quickly as possible, before their competitors could do it – started to look like a race to the bottom. The industry was careening toward disaster… It no longer seemed realistic that humanity could engineer and research its way out of catastrophe. Not under conditions like this. Not in time…

“If any company or group on this planet builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth will die.”

It is a terrifying statement.

But what terrifies me more is the very real possibility that disaster won’t come from the rise of some super machine like Skynet of the Terminator movies, but from the toxic combination of tech bro arrogance and dumb-assed political bravado (step forward, Pete Hegseth).


So let me offer you a scenario from the 1980s about how it all could go down.

On September 1, 1983, Korean Air Lines Flight 007 was on a routine flight from New York to Seoul via Anchorage and accidentally crossed into Soviet airspace. At the time, NATO had a policy of continually flying its bombers directly at Eastern Bloc airspace and then diverting at the last possible moment. This kept the Soviets on perpetual high alert and their finger on the trigger.

When Flight 007 accidentally crossed over the line, the Soviets, fearing an American bomber incursion, shot the plane out of the sky. The deaths of 269 civilians shocked a world that seemed to be teetering on the edge of disaster.

On September 22, 1983, the Toronto Disarmament Network issued a public warning of the danger of escalating tensions: “A miscalculation, a computer error, an itchy trigger finger could lead to the murder of not hundreds but millions of innocent civilians.”

In the immediate aftermath of the Korean jetliner disaster, the Soviets anticipated a retaliatory action, and their radar crews were trained to respond quickly in the event of a serious incursion. The crews knew that if they hesitated, the Soviets would be wiped out in a first-wave attack.

On September 26, 1983, Soviet radar crews detected what appeared to be a group of incoming American missiles. But Lieutenant Colonel Stanislav Petrov broke protocol by refusing to report the alert as a genuine attack – the report that would have triggered the counter-strike. He suspected the blips on the screen might be some kind of computer error.

Petrov was proven correct.

In the 2014 documentary The Man Who Saved the World, Petrov explained how close the world came to destruction that night:

“Our world has never been closer to complete catastrophe than it was in 1983. The tiniest spark could have meant the destruction of our civilization.”

Rather than being rewarded for his cool-headedness, Petrov was forced to retire and suffered a nervous breakdown. The world was saved by one human who trusted his gut instinct over years of training. But to the system, he failed his duty.

What would happen if ChatGPT-style military machines were engineered with the power to make kill decisions without human oversight?

This is a truly existential threat.

A worldwide boycott of ChatGPT has been launched, but we need political leaders to step up and recognize the nature of the threat that is upon us.
