Artificial intelligence may be responsible for our inability to communicate with alien civilizations

By Michael Garrett | May 8, 2024


Artificial intelligence (AI) has advanced at an astonishing pace in the last few years. Some scientists are now looking toward the development of artificial superintelligence (ASI) — a form of AI that would not only surpass human intelligence but would also not be bound by the speed at which humans learn.

But what if this milestone is not just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?

This idea is at the heart of a research paper I recently published in Acta Astronautica. Could artificial intelligence be the “great filter” of the universe? Could this be a threshold so difficult to cross that it prevents most life from evolving into space-faring civilizations?

This is a concept that may explain why signatures of advanced technical civilizations elsewhere in the galaxy have not yet been detected in the search for extraterrestrial intelligence (Seti).

The great filter hypothesis is ultimately a proposed solution to the Fermi paradox. This asks why we have not detected any signs of alien civilizations in a universe so vast and ancient that it hosts billions of potentially habitable planets. The hypothesis suggests that there are insurmountable hurdles in the evolutionary timeline of civilizations that prevent them from developing into space-faring beings.

I believe the emergence of ASI could be such a filter. The rapid advance of AI, potentially leading to ASI, may coincide with a critical phase in a civilization's development: the transition from a single-planet to a multiplanetary species.

This is where many civilizations may falter, as AI advances much faster than our ability to control it, or to sustainably explore and populate our Solar System.

The challenge with AI, and specifically ASI, lies in its autonomous, self-amplifying and self-improving nature. It has the potential to enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI.

The potential for something to go badly wrong is enormous, leading to the downfall of both biological and AI civilizations before they ever get the chance to become multiplanetary. For example, if nations increasingly rely on, and cede power to, autonomous AI systems that compete against each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

In this scenario, I estimate that the typical lifespan of a technological civilization could be less than 100 years. This is roughly the time between being able to receive and broadcast signals between stars (1960) and the estimated emergence of ASI on Earth (2040). Compared to the cosmic time scale of billions of years, this period is alarmingly short.

[Image: the star-studded cluster NGC 6440. There are an incredible number of planets out there. NASA/James Webb Space Telescope]

This estimate, when combined with optimistic versions of the Drake equation — which attempts to estimate the number of active, communicating extraterrestrial civilizations in the Milky Way — suggests that only a handful of intelligent civilizations exist out there at any given time. Moreover, the relatively modest technological activity of a civilization like ours may make it very difficult to detect.

Wake-up call

This research isn’t just a cautionary tale of potential disaster. It serves as a wake-up call to humanity to create robust regulatory frameworks to guide the development of artificial intelligence, including military systems.

This isn't just about preventing the malicious use of AI on Earth; it is also about ensuring that the evolution of AI is compatible with the long-term survival of our species. It suggests we need to devote more resources to becoming a multiplanetary society as quickly as possible — a goal that lay dormant since the heady days of the Apollo project but has lately been reignited by advances made by private companies.

As historian Yuval Noah Harari notes, nothing in history has prepared us for the impact of the introduction of unconscious, superintelligent beings to our planet. Recently, the implications of autonomous AI decision-making have led prominent leaders in the field to call for a moratorium on the development of AI until a responsible form of control and regulation is introduced.

But even if every country agrees to abide by strict rules and regulations, it will be difficult to rein in rogue organizations.

The integration of autonomous AI into military defense systems should be a matter of particular concern. There is already evidence that humans will voluntarily delegate significant amounts of power to increasingly capable systems, because such systems can carry out useful tasks much more rapidly and effectively without human intervention. Given the strategic advantages AI offers — as recently and devastatingly demonstrated in Gaza — governments are therefore reluctant to regulate this area.

This means that we are already dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and evade international law. In such a world, surrendering power to AI systems to gain a tactical advantage could unwittingly trigger a chain of rapidly escalating and highly destructive events. In the blink of an eye, the collective intelligence of our planet could be destroyed.

Humanity is at a crucial point in its technological trajectory. Our actions now may determine whether we become an enduring interstellar civilization or succumb to challenges of our own creation.

Using Seti as a lens through which we can examine our future development adds a new dimension to the debate about the future of artificial intelligence. It is our responsibility to ensure that when we reach the stars, we do so not as a cautionary tale for other civilizations, but as a beacon of hope, a species learning to evolve alongside artificial intelligence.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Michael Garrett does not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic duties.
