In a provocative new paper, an English scientist asks whether so-called "artificial intelligence" (AI) is one reason why we've never found other intelligent beings in the universe, Popular Mechanics reports. Michael Garrett is a radio astronomer at the University of Manchester and the director of the Jodrell Bank Centre for Astrophysics, with extensive involvement in the Search for Extraterrestrial Intelligence (SETI). In the paper, which was peer reviewed and published in Acta Astronautica, the journal of the International Academy of Astronautics, he weighs theories about artificial superintelligence against concrete observations from radio astronomy.

Today's AI is not capable of anything close to human intelligence. But, Garrett writes, it is already doing work that people previously didn't believe computers could do. If this trajectory leads to artificial general intelligence (AGI), a key distinction meaning an algorithm that can reason and synthesize ideas in a genuinely human way while wielding enormous computing power, we could really be in trouble.

In the paper, Garrett follows a chain of hypothetical ideas to one possible conclusion: how long would it take for a civilization to be wiped out by its own unregulated AGI? Unfortunately, in Garrett's scenario, the answer is just 100 to 200 years. He assumes there is life elsewhere in the Milky Way and that AI and AGI are "natural developments" of those civilizations.
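Why does that number matter so much? One rough way to see it is through the Drake equation, the standard SETI bookkeeping for how many detectable civilizations the galaxy should host at any moment (the equation itself isn't quoted in the article, and the figures below are purely illustrative):

\[ N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L \]

Here N is the number of civilizations in the Milky Way whose signals we could detect, R_* is the galactic star-formation rate, the f and n factors cover planets, life, intelligence, and detectable technology, and L is how long a civilization keeps transmitting. Every term except L is set by astrophysics and biology, so N scales linearly with L:

\[ \frac{N(L = 200\ \text{yr})}{N(L = 10^{6}\ \text{yr})} = \frac{200}{10^{6}} = 2 \times 10^{-4} \]

If civilizations fall silent within two centuries rather than persisting for a million years, the expected number of detectable neighbors at any given time shrinks by a factor of five thousand, which is consistent with the silence Garrett is trying to explain.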