Artificial intelligence may be responsible for our inability to communicate with alien civilizations

By Michael Garrett | May 14, 2024

This article was originally published at The Conversation. The publication contributed the article to Space.com's Expert Voices: Commentaries and Insights.

Michael Garrett is Sir Bernard Lovell chair of Astrophysics and Director of the Jodrell Bank Centre for Astrophysics at the University of Manchester.

Artificial intelligence (AI) has advanced at an astonishing pace in the last few years. Some scientists are now looking toward the development of artificial superintelligence (ASI), a form of AI that would not only surpass human intelligence but would also not be bound by the speed at which humans learn.

But what if this milestone isn't just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?

Related: Can AI find alien life faster than humans and tell us?

This idea is at the heart of a research paper I recently published in Acta Astronautica. Could artificial intelligence be the universe's "great filter"? Could it be a threshold so hard to cross that it prevents most life from evolving into space-faring civilizations?

It’s a concept that may explain why the search for extraterrestrial intelligence (SETI) has yet to detect signatures of advanced technical civilizations elsewhere in the galaxy.

The great filter hypothesis is ultimately a proposed solution to the Fermi Paradox. It asks why we haven't detected any signs of alien civilizations in a universe vast and ancient enough to host billions of potentially habitable planets. The hypothesis suggests there are insurmountable hurdles in the evolutionary timeline of civilizations that prevent them from developing into space-faring beings.

I believe the emergence of ASI could be such a filter. AI's rapid advance, which has the potential to lead to ASI, may coincide with a critical phase in a civilization's development: the transition from a single-planet species to a multi-planetary one.

(Image: a silver cylinder flies towards a reddish-orange planet.)

This is where many civilizations may falter, with AI advancing far faster than our ability either to control it or to sustainably explore and populate our solar system.

The problem with artificial intelligence, and ASI in particular, lies in its autonomous, self-amplifying and self-improving nature. It has the potential to enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI.

The potential for something to go badly wrong is enormous, and it could lead to the downfall of both biological and AI civilizations before they ever get the chance to become multi-planetary. For example, if nations increasingly rely on and cede power to autonomous AI systems that compete with each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

In this scenario, I estimate that the typical lifespan of a technological civilization could be less than 100 years. That is roughly the span between when we became able to receive and broadcast signals between the stars (1960) and the estimated emergence of ASI on Earth (2040). Compared to the cosmic timescale of billions of years, this period is alarmingly short.

This estimate, when combined with optimistic versions of the Drake equation (which attempts to estimate the number of active, communicating extraterrestrial civilizations in the Milky Way), suggests that only a handful of intelligent civilizations exist out there at any given time. Moreover, like ours, their relatively modest technological activities could make them very difficult to detect.
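To make the scale of that conclusion concrete, here is a minimal back-of-the-envelope sketch of the Drake equation in Python. The parameter values below are illustrative, optimistic-leaning assumptions chosen for demonstration, not values taken from the paper; only the communicative lifetime of roughly 100 years comes from the estimate above.

```python
# Back-of-the-envelope Drake equation:
#   N = R* x fp x ne x fl x fi x fc x L
# All parameter values below are illustrative assumptions,
# except L, which uses the ~100-year lifetime estimated in the text.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Return N, the expected number of detectable, communicating
    civilizations in the Milky Way at any given time."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

n = drake(
    r_star=1.5,          # average star-formation rate (stars per year)
    f_p=1.0,             # fraction of stars that host planets
    n_e=0.2,             # habitable planets per star with planets
    f_l=1.0,             # fraction of habitable planets that develop life
    f_i=1.0,             # fraction of life-bearing planets that evolve intelligence
    f_c=0.2,             # fraction of intelligent species that become detectable
    lifetime_years=100,  # L: communicative lifetime, per the estimate above
)
print(f"N = {n:.0f} communicating civilizations")  # prints N = 6
```

Even with every biological factor set to a generous value, capping L near 100 years keeps N in the single digits, which is the "only a handful" result described above.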

(Image: radio telescopes point to the sky at sunset.)

A wake-up call

This research isn’t just a cautionary tale of potential disaster. It serves as a wake-up call to humanity to create robust regulatory frameworks to guide the development of artificial intelligence, including military systems.

This isn't just about preventing the malicious use of AI on Earth; it's also about ensuring that the evolution of AI is compatible with the long-term survival of our species. It suggests we need to devote more resources to becoming a multi-planetary society as quickly as possible, a goal that has lain dormant since the heady days of the Apollo project and has only recently been reignited by advances made by private companies.

As historian Yuval Noah Harari notes, nothing in history has prepared us for the impact of introducing non-conscious, superintelligent entities to our planet. Recently, the implications of autonomous AI decision-making have led prominent leaders in the field to call for a moratorium on AI development until a responsible form of control and regulation can be introduced.

But even if every country agrees to abide by strict rules and regulations, it will be difficult to rein in rogue organizations.

The integration of autonomous AI into military defense systems should be a matter of particular concern. There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems, because these systems can carry out useful tasks far more quickly and effectively without human intervention. Governments are consequently reluctant to regulate in this area, given the strategic advantages AI offers, as was recently and devastatingly demonstrated in Gaza.

RELATED STORIES:

— Should we be looking to artificial intelligence in the search for alien life?

— Machine learning could help track down alien technology. Here’s how

— Fermi Paradox: Where are the aliens?

This means that we are already dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and evade international law. In such a world, surrendering power to AI systems to gain a tactical advantage could unwittingly trigger a chain of rapidly escalating and highly destructive events. In the blink of an eye, the collective intelligence of our planet could be destroyed.

Humanity is at a crucial point in its technological trajectory. Our actions now may determine whether we become a lasting interstellar civilization or succumb to challenges of our own creation.

Using SETI as a lens through which we can examine our future development adds a new dimension to the debate about the future of artificial intelligence. It is our responsibility to ensure that when we reach the stars, we do so not as a cautionary tale for other civilizations, but as a beacon of hope, a species learning to co-evolve with artificial intelligence.

Originally published in The Conversation.
