There's a 90% chance that a highly advanced AI could pose its own threat to humanity.
AI is an acronym I'm hearing multiple times a day lately, and only about 30% of the time is it actually being used to mean the right thing. LLMs like ChatGPT and DeepSeek are constantly in the news while we talk about putting AI in everything from our gaming chips to our schools. It's easy to dismiss all this as a pop-culture phase, much like the uranium fever that once gripped the globe alongside nuclear anxiety.
The comparison between launching an A-bomb and launching an AI might seem hyperbolic, but the Guardian has reported that AI experts are calling for a safety test akin to the one carried out ahead of Trinity, the first detonation of a nuclear weapon.
Max Tegmark, a professor of physics and AI researcher at MIT, along with three of his students, has published a paper recommending a similar approach. In it, they call for a mandatory calculation of whether any sufficiently advanced AI might slip out of human control. The test is being compared to the one carried out by Arthur Compton, who assessed the likelihood of a nuclear bomb igniting the atmosphere before Trinity was allowed to take place.
In those calculations, Compton gave Trinity the go-ahead after putting the likelihood of such a runaway reaction at slightly less than one in three million. Tegmark, carrying out similar calculations, has found it 90% likely that a highly advanced AI could pose its own threat to humanity (as opposed to, say, just another Windows bug). This level of currently theoretical AI has been dubbed an Artificial Super Intelligence, or ASI.
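For a sense of scale, here's a quick back-of-the-envelope comparison of those two figures. This is just arithmetic on the numbers quoted above, not anything taken from Tegmark's paper:

```python
# Rough comparison of the two probabilities mentioned in the article.
compton_threshold = 1 / 3_000_000   # Compton's "slightly less than one in three million"
tegmark_estimate = 0.90             # Tegmark's 90% estimate for losing control of an ASI

ratio = tegmark_estimate / compton_threshold
print(f"Tegmark's estimate is roughly {ratio:,.0f} times higher than Compton's threshold")
# -> roughly 2,700,000 times higher
```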
The calculations have left Tegmark convinced that safety measures are needed, and that companies have a responsibility to check for these potential threats. He also believes a standardised approach, agreed upon and calculated by multiple companies, is required to create the political pressure for them to comply.
“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” he said. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”
This isn’t Tegmark’s first push for more regulation and careful thought around the development of new AIs. He’s also a co-founder of the Future of Life Institute, a non-profit dedicated to the safe development of AI. The institute published an open letter in 2023 calling for a pause on developing powerful AIs, which gained the attention and signatures of folks like Elon Musk and Steve Wozniak.
Tegmark also worked with world-leading computer scientist Yoshua Bengio, as well as researchers at OpenAI, Google, and DeepMind, on The Singapore Consensus on Global AI Safety Research Priorities report. It seems if we ever do release an ASI onto the world, we’ll at least know the exact percentage chance it has of ending us all.